Member since: 11-07-2017
Posts: 13
Kudos Received: 4
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
| 1229 | 01-17-2019 03:05 PM
| 2888 | 01-14-2019 08:37 PM
| 6181 | 12-17-2018 08:55 PM
02-06-2019
10:30 AM
1 Kudo
@Ken Herring
I'm glad it worked; maybe your cluster isn't secured. Could you please accept the answer if it worked, as it will help others.
01-21-2019
10:37 AM
@Ken Herring Did you try the approach suggested above? Let me know if you have any issues.
01-18-2019
02:17 PM
1 Kudo
@Ken Herring
The Hive CLI shell has security issues and is deprecated in later versions of HDP, so please avoid it. Opening a hive/beeline shell for every table will be slow because it has to spawn a JVM each time, so avoid the looping approach. Instead, prepare a file with the table list like below:

cat show_partitions_tables.hql
show partitions table1;
show partitions table2;
show partitions table3;

Then use the -f flag of beeline to pass that file, e.g.:

beeline --silent=true --showHeader=false --outputformat=csv2 -u "jdbc:hive2://$hive_server2:10000/$hive_db;principal=$user_principal" -n '<username>' -p '<password>' -f show_partitions_tables.hql > show_partitions_tables.txt

Note: Please accept the answer if it solved your issue.
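If the table names already exist as a plain text list, a small shell sketch like the one below can build the .hql file and run everything through a single beeline session. This is only a sketch; tables.txt (one table name per line) is an assumed input file, and the credentials/connection variables are the same placeholders as above.

```sh
# Hypothetical helper: tables.txt is an assumption (one table name per line).
# Build the batch file of SHOW PARTITIONS statements...
while read -r tbl; do
  echo "show partitions ${tbl};"
done < tables.txt > show_partitions_tables.hql

# ...then run them all in one beeline session instead of one JVM per table.
beeline --silent=true --showHeader=false --outputformat=csv2 \
  -u "jdbc:hive2://$hive_server2:10000/$hive_db;principal=$user_principal" \
  -n '<username>' -p '<password>' \
  -f show_partitions_tables.hql > show_partitions_tables.txt
```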
01-17-2019
03:05 PM
@takuma What version of HDP are you on? Docker support for YARN was added in HDP 3.0; below are links with the steps to configure it.
https://hortonworks.com/blog/part-5-of-data-lake-3-0-yarn-and-containerization-supporting-docker-and-beyond/
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/data-operating-system/content/run_docker_containers_on_yarn.html
https://community.hortonworks.com/content/kbentry/226331/dockerized-yarn-services-quickstart.html
Cheers,
Naveen
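Once the Docker runtime is enabled on the NodeManagers per the docs above, a quick sanity check is a distributed-shell job that requests a Docker container. This is only a sketch under the assumption of a standard HDP 3.0 client install; the jar path, image name, and command are illustrative placeholders.

```sh
# Assumes the Docker runtime is already enabled per the linked docs;
# jar path and image name below are placeholders, not prescriptive values.
yarn jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar \
  -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar \
  -shell_env YARN_CONTAINER_RUNTIME_TYPE=docker \
  -shell_env YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=library/centos:7 \
  -shell_command "cat /etc/os-release" \
  -num_containers 1
```

If the container log shows the CentOS release info rather than the host OS, the job ran inside the Docker image.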
01-17-2019
10:18 AM
Although this is an old thread, I thought this might be useful for someone: we can use both flags on the same line (I'm not sure whether older versions of beeline supported this). Anyway, the syntax below works for me.

beeline -u "jdbc:hive2://$hive_server2:10000/$hive_db;principal=$user_principal" -e "SHOW DATABASES"
01-14-2019
08:37 PM
1 Kudo
@Fernando Lopez Bello Based on the configuration file above for the Zeppelin queue: even though user A submits a job first and initially utilizes all the resources, because the queue's minimum-user-limit-percent is set to 20 the queue resources will be shared among subsequent users. Below is a link that explains this with an example.
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_yarn-resource-management/content/setting_user_limits.html
If you don't want all of the resources to be taken by user A even when there are no other users, you can use user-limit-factor; below is a link to a nice article about it. I can see that the user-limit-factor is 3 for the Zeppelin queue, which means each user can utilize 3 times the queue capacity if resources are available and elasticity permits.
https://community.hortonworks.com/content/supportkb/49640/what-does-the-user-limit-factor-do-when-used-in-ya.html
In a nutshell, minimum-user-limit-percent is a soft limit and user-limit-factor is a hard limit.
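For reference, both settings live in capacity-scheduler.xml. The snippet below is only a sketch: the queue path root.zeppelin is an assumption, and the values simply mirror the numbers discussed above; adjust them to match the actual queue hierarchy.

```xml
<!-- Sketch only: "root.zeppelin" is an assumed queue path. -->
<property>
  <!-- Soft limit: each active user is guaranteed at least 20% of the queue,
       so resources get redistributed as more users submit jobs. -->
  <name>yarn.scheduler.capacity.root.zeppelin.minimum-user-limit-percent</name>
  <value>20</value>
</property>
<property>
  <!-- Hard limit: a single user may grow up to 3x the queue capacity
       when idle resources and elasticity allow it. -->
  <name>yarn.scheduler.capacity.root.zeppelin.user-limit-factor</name>
  <value>3</value>
</property>
```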
12-19-2018
10:11 AM
In HBase you have a rowkey and one column, so you mapped the rowkey as key and the id column as id. In your case the value is the same for both the rowkey and id, which is why it looks as though there is only one column. Try inserting one more row in HBase with different values and you will see the difference.
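A quick way to see this from the HBase shell; the table and column names below are made up purely for illustration.

```
# Hypothetical table 't1' with column family 'cf'; names are placeholders.
put 't1', 'row-001', 'cf:id', 'row-001'         # row key and id happen to match
put 't1', 'row-002', 'cf:id', 'something-else'  # row key and id differ
scan 't1'   # the key and the id column now clearly hold different values
```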
12-19-2018
10:08 AM
Hi @n c, since you know the column names from Flume, map them in the line below while creating the Hive table. For example, if the column names in HBase are col1, col2, etc.:

WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,me_data:col1,me_data:col2...")
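For context, that SERDEPROPERTIES line sits inside a Hive table definition backed by the HBase storage handler. The sketch below is only illustrative: the Hive and HBase table names and the exact column list are assumptions to be replaced with the real ones.

```sql
-- Sketch only: table names and the column list are placeholders.
CREATE EXTERNAL TABLE my_hive_table (
  rowkey STRING,
  col1   STRING,
  col2   STRING
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,me_data:col1,me_data:col2")
TBLPROPERTIES ("hbase.table.name" = "my_hbase_table");
```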
12-17-2018
08:55 PM
1 Kudo
Hi @n c, the column name in the HBase table is id, but while mapping the HBase column in the Hive table you have used idate. Please correct the column name and you will be able to see the data. Cheers, Naveen
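In other words, the qualifier in hbase.columns.mapping has to match the actual HBase qualifier. A rough sketch of the corrected mapping, assuming the same me_data column family as in the earlier reply:

```sql
-- Sketch only: "me_data" is an assumed column family; the key point is that
-- the qualifier must be the real HBase qualifier (id), not idate.
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,me_data:id")
```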