Member since: 07-31-2013
Posts: 98
Kudos Received: 54
Solutions: 19

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2987 | 11-23-2016 07:37 AM |
| | 3107 | 05-18-2015 02:04 PM |
| | 5263 | 05-13-2015 07:33 AM |
| | 4029 | 05-12-2015 05:36 AM |
| | 4364 | 04-06-2015 06:05 AM |
04-06-2015
06:05 AM
1 Kudo
There are two options here:

1. If you are using Cloudera Manager, add a Hive gateway role to every NodeManager/TaskTracker host in the cluster. If you are not using CM, put the hive-site.xml in /etc/hive/conf on every NodeManager/TaskTracker host.
2. Add "export HIVE_CONF_DIR=`pwd`" to the top of your shell script; Sqoop should then pick up the hive-site.xml from the local directory (see the sketch below).
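A minimal sketch of option 2, assuming you have staged a copy of hive-site.xml alongside the script (for example via a <file> tag in the workflow); the connection string, credentials, and table name below are placeholders for your own database:

  #!/bin/bash
  # Point Sqoop's Hive import at the hive-site.xml staged in the action's
  # working directory instead of relying on /etc/hive/conf on the node.
  export HIVE_CONF_DIR=`pwd`

  # Placeholder connection details -- substitute your own database and table.
  sqoop import \
    --connect jdbc:mysql://db.example.com/mydb \
    --username myuser \
    --password-file hdfs:///user/myuser/mydb.password \
    --table mytable \
    --hive-import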
04-01-2015
06:21 AM
1 Kudo
There is an issue in Hive where HiveServer2 has problems renewing delegation tokens after the renewal period, and it throws the error you are seeing. If you restart the Hive service, Hue will probably start working again for a day or two. You can alleviate this by adding the following to the HMS and HS2 safety valves:

  <property>
    <name>hive.cluster.delegation.key.update-interval</name>
    <value>31536000000</value>
  </property>
  <property>
    <name>hive.cluster.delegation.token.renew-interval</name>
    <value>31536000000</value>
  </property>
  <property>
    <name>hive.cluster.delegation.token.max-lifetime</name>
    <value>31536000000</value>
  </property>

This sets the token renewal interval to one year (31536000000 ms), so the problem should only recur once a year. Hope this helps.
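If you want to confirm the overrides took effect after restarting, a quick check on the HS2 host is sketched below; the process directory path is typical for a CM-managed cluster and is an assumption, so adjust it to your deployment:

  # Assumes a CM-managed cluster, where the generated hive-site.xml lives
  # under the agent's process directory; adjust the glob to your layout.
  grep -A1 'hive.cluster.delegation' \
    /var/run/cloudera-scm-agent/process/*HIVESERVER2*/hive-site.xml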
03-16-2015
06:47 AM
2 Kudos
Hey, to share your workflows in Hue, do the following:

1. Go to your home page in Hue by clicking on the little house icon in the upper left, to the right of "HUE".
2. Select the project that contains the workflow, probably "default".
3. Here you can share anything with others: workflows, queries, etc. Find what you want to share and click the "Sharing" icon on the far right of the page (it looks like a few people standing together).
4. Enter the user/group you want to share with and the permissions you want to give them.
03-16-2015
06:12 AM
2 Kudos
Hey, you can't have a kill node send an email. However, you can point your other actions' error transitions at an email action that sends the notification, and then have the email action go to the kill node. Like:

  <workflow-app name="shell-wf" xmlns="uri:oozie:workflow:0.4">
    <start to="shell"/>
    <action name="shell">
      <shell xmlns="uri:oozie:shell-action:0.1">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
          <property>
            <name>mapred.job.queue.name</name>
            <value>${queueName}</value>
          </property>
        </configuration>
        <exec>shell.sh</exec>
        <file>shell.sh#shell.sh</file>
      </shell>
      <ok to="end"/>
      <error to="kill-email"/>
    </action>
    <action name="kill-email">
      <email xmlns="uri:oozie:email-action:0.1">
        <to>cconner@test.com</to>
        <cc>other-cconner@test.com</cc>
        <subject>WF ${wf:id()} failed</subject>
        <body>Shell action failed.</body>
      </email>
      <ok to="kill"/>
      <error to="kill"/>
    </action>
    <kill name="kill">
      <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
  </workflow-app>
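If it helps, here is a sketch of submitting a workflow like this from the command line; the Oozie server URL and job.properties file are placeholders for your environment:

  # Placeholder Oozie URL and properties file -- substitute your own values.
  oozie job -oozie http://oozie-host.example.com:11000/oozie \
    -config job.properties -run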
03-06-2015
06:08 AM
With a kerberized cluster, your connect string needs to include the HS2 server principal, for example:

  jdbc:hive2://cdh53-1.qa.test.com:10000/default;principal=hive/cdh53-1.qa.test.com@TEST.COM

Replace "cdh53-1.qa.test.com" with your fully qualified host and domain name, and replace TEST.COM with the correct realm. Any time Kerberos is in place, you must use the fully qualified domain name instead of localhost or the short hostname, because the Kerberos checks depend on the FQDN. The same is true of the Sentry server in your safety valve configuration: use the FQDN instead of localhost.
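For example, a beeline connection from a host where you can get a ticket would look roughly like this; the user principal is just an illustration, and the hostname and realm are the placeholders from above:

  # Get a Kerberos ticket, then connect with the HS2 principal in the JDBC URL.
  kinit cconner@TEST.COM
  beeline -u "jdbc:hive2://cdh53-1.qa.test.com:10000/default;principal=hive/cdh53-1.qa.test.com@TEST.COM"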
03-02-2015
11:59 AM
1 Kudo
In your shell action, go to "Files", click "Add path", and browse to your shell script in HDFS. Then save and run it again to see if that helps. If it does not, try removing the "#!/...." line at the top of the script and see if that makes a difference.
02-13-2015
05:59 AM
1 Kudo
This right here works:

  <workflow-app name="shell-impala-invalidate-wf" xmlns="uri:oozie:workflow:0.4">
    <start to="shell-impala-invalidate"/>
    <action name="shell-impala-invalidate">
      <shell xmlns="uri:oozie:shell-action:0.1">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
          <property>
            <name>mapred.job.queue.name</name>
            <value>${queueName}</value>
          </property>
        </configuration>
        <exec>shell-impala-invalidate.sh</exec>
        <file>shell-impala-invalidate.sh#shell-impala-invalidate.sh</file>
        <file>shell-impala-invalidate.sql#shell-impala-invalidate.sql</file>
        <file>cconner.keytab#cconner.keytab</file>
      </shell>
      <ok to="end"/>
      <error to="kill"/>
    </action>
    <kill name="kill">
      <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
  </workflow-app>

Script:

  #!/bin/bash
  LOG=/tmp/shell-impala-invalidate-$USER.log
  ls -alrt > $LOG
  export PYTHON_EGG_CACHE=./myeggs
  /usr/bin/kinit -kt cconner.keytab -V cconner
  /usr/bin/klist -e >> $LOG
  impala-shell -f shell-impala-invalidate.sql

NOTE: the <file> tag puts that file on the local file system where the impala-shell is going to run, so the file is indeed local for the -f flag, and it ends up in "PWD/<whatever is after #>". For example:

  <file>test.sql#/test1/test.sql</file>

Then test.sql will be found in: PWD/test1/test.sql

And:

  <file>test.sql#test.sql</file>

Then test.sql will be found in: PWD/test.sql

The shell script and the keytab are also in "PWD" because of their file tags.

I would do the following in your shell script to get some more insight:

  #!/bin/bash
  LOG=/tmp/shell-impala-invalidate-$USER.log
  ls -alrtR > $LOG   # shows all the files in the directory and their relative paths
  export PYTHON_EGG_CACHE=./myeggs
  /usr/bin/kinit -kt cconner.keytab -V cconner
  /usr/bin/klist -e >> $LOG
  hadoop fs -put $LOG /tmp   # put the log file in HDFS to find it easily
  impala-shell -f shell-impala-invalidate.sql

NOTICE the "ls -alrtR" and the "hadoop fs" command; this way you can easily grab the log file from HDFS and see what files are actually there.
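Once the workflow has run, you can pull the debug log back out of HDFS with something like the following; the path matches the $LOG variable in the script above, and the wildcard covers whichever user the container ran as:

  # Read the debug log that the shell action pushed into HDFS.
  hadoop fs -cat /tmp/shell-impala-invalidate-*.log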
02-12-2015
06:47 AM
The method you used was correct and should have worked. Did you try a command like:

  impala-shell -f ./file.sql
02-11-2015
10:34 AM
1 Kudo
Glad to hear that worked! Hue doesn't currently support framed Thrift transport to HBase. It's on the roadmap, but I'm not sure when.
02-11-2015
05:45 AM
1 Kudo
Go to http://<hueserver>:<port>/desktop/dump_config and, in the "Configuration Sections and Variables" section, go to hbase and confirm that "hbase_clusters" points to the correct Thrift server. If it does not, confirm in the Hue service-wide configuration that you have the HBase service selected. If it is correct, go to the HBase service configuration and check whether "Enable HBase Thrift Server Framed Transport" is enabled. If it is, try unchecking it, restarting the HBase Thrift server, and seeing if that works.
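As a quick sanity check from the Hue host, you can also verify that the Thrift server is reachable at all; the hostname below is a placeholder and 9090 is only the default HBase Thrift port, so adjust both to whatever dump_config shows:

  # Simple TCP connectivity check to the HBase Thrift server from the Hue host.
  nc -vz hbase-thrift-host.example.com 9090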