Member since: 09-18-2015
Posts: 3274
Kudos Received: 1159
Solutions: 426
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2689 | 11-01-2016 05:43 PM |
| | 9116 | 11-01-2016 05:36 PM |
| | 5027 | 07-01-2016 03:20 PM |
| | 8432 | 05-25-2016 11:36 AM |
| | 4604 | 05-24-2016 05:27 PM |
02-23-2016
10:34 AM
@Ujwala Sawant 1) See this official article: https://community.hortonworks.com/articles/4321/hive-acid-current-state.html. As you can see, we don't recommend Hive ACID at this point. 2) Based on 1), there is a chance that the "Caused by: java.lang.ArrayIndexOutOfBoundsException: -1" error is a bug.
02-23-2016
10:26 AM
@Shivaji See this: https://streever.atlassian.net/wiki/pages/viewpage.action?pageId=4390918 Option #1 - Through scripts run in the established connection. Now that you have a pool of AMs running, possibly for multiple queues, how do you get your JDBC session into HS2 to use the desired queue? You need to specify the queue you want in your JDBC session. Specify Queue in Script:
-- (Preferred) Less known and not well documented, but required for HDP 2.1.1.
set tez.queue.name=alt;
-- For HDP 2.1.3 and above, this property will work.
set mapreduce.job.queuename=alt;
Option #2 - Modify the JDBC URL string to include the queue. Specify a Queue in the JDBC Connection String:
jdbc:hive2://localhost:10000?tez.queue.name=alt
Option #3 - Set the Default Queue for Hive Server2. On the server that's running Hive Server2, modify hive-site.xml to include the following: hive-site.xml
<property>
  <name>mapreduce.job.queuename</name>
  <value>alt</value>
</property>
OR in tez-site.xml:
<property>
  <name>tez.queue.name</name>
  <value>alt</value>
</property>
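To see these options in action, here is a minimal sketch of a beeline invocation (the host, port, and table name are placeholders, not from the original answer):
# Option #1 style: set the queue inside the session
beeline -u "jdbc:hive2://localhost:10000" -e "set tez.queue.name=alt; select count(*) from sample_table;"
# Option #2 style: pass the queue in the connection string
beeline -u "jdbc:hive2://localhost:10000?tez.queue.name=alt" -e "select count(*) from sample_table;"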
02-23-2016
10:19 AM
@Prakash Punj Hi Prakash, this feature is not there, but you can fulfill your requirement by looking at the RM UI: the list of active jobs gives you the users running jobs at that time.
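If the command line works for you, a small sketch using the YARN CLI (assuming the YARN client is installed; this is an addition, not part of the original answer):
# Lists applications along with the user who submitted each one
yarn application -list
# Restrict the listing to currently running applications
yarn application -list -appStates RUNNING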
02-23-2016
10:06 AM
1 Kudo
@nejm hadj For the admin user, see http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.0.0/bk_ambari_views_guide/content/_setup_HDFS_user_directory.html
Connect to a host in the cluster that includes the HDFS client. Switch to the hdfs system account user: su - hdfs
Using the HDFS client, make an HDFS directory for the user. For example, if your username is admin, create the following directory: hadoop fs -mkdir /user/admin
Set the ownership on the newly created directory. For example, if your username is admin, make that user the directory owner: hadoop fs -chown admin:hadoop /user/admin
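Put together, the whole sequence looks like this (using admin as the example username from the steps above):
su - hdfs
hadoop fs -mkdir /user/admin
hadoop fs -chown admin:hadoop /user/admin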
02-23-2016
10:05 AM
1 Kudo
@nejm hadj That's good progress! Now, please do this: log in as the root user on your machine and run
su - hdfs
hdfs dfs -chown -R root:hdfs /nifi
exit
This should fix the permission issue.
02-23-2016
10:00 AM
@Margus Kiting I have accepted this answer. Thanks for the final feedback!
02-23-2016
09:47 AM
@jzhang That's exactly my point: the REST access will be from the client, so you should not worry about the Kerberos ticket issue.
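For illustration only, a minimal sketch of a Kerberos-authenticated REST call from the client (this assumes WebHDFS as the endpoint and that kinit has already been run; host, port, and path are placeholders):
# --negotiate with an empty user (-u :) makes curl use the client's existing Kerberos ticket via SPNEGO
curl --negotiate -u : "http://namenode-host:50070/webhdfs/v1/user?op=LISTSTATUS"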
02-23-2016
09:41 AM
@Roberto Sancho See this thread http://stackoverflow.com/questions/11074385/multilevel-json-in-pig
02-23-2016
06:20 AM
@jzhang You will be running curl from the client node and the hdfs commands from an edge or master node.