Member since: 07-31-2013
Posts: 98
Kudos Received: 54
Solutions: 19

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 2986 | 11-23-2016 07:37 AM |
|  | 3106 | 05-18-2015 02:04 PM |
|  | 5263 | 05-13-2015 07:33 AM |
|  | 4028 | 05-12-2015 05:36 AM |
|  | 4361 | 04-06-2015 06:05 AM |
01-21-2015
10:55 AM
2 Kudos
Hey,

Currently there is no Impala action in Oozie, so you must use a shell action that calls impala-shell. The shell script that calls impala-shell must also set the Python eggs cache location. Here is an example shell script:

#!/bin/bash
export PYTHON_EGG_CACHE=./myeggs
/usr/bin/kinit -kt cconner.keytab -V cconner
impala-shell -q "invalidate metadata"

Notice the PYTHON_EGG_CACHE setting: you must set this location or the job will fail. The script also does a kinit for the case of a kerberized cluster. Here is the workflow that goes with that script:

<workflow-app name="shell-impala-invalidate-wf" xmlns="uri:oozie:workflow:0.4">
    <start to="shell-impala-invalidate"/>
    <action name="shell-impala-invalidate">
        <shell xmlns="uri:oozie:shell-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <exec>shell-impala-invalidate.sh</exec>
            <file>shell-impala-invalidate.sh#shell-impala-invalidate.sh</file>
            <file>cconner.keytab#cconner.keytab</file>
        </shell>
        <ok to="end"/>
        <error to="kill"/>
    </action>
    <kill name="kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>

You must include the <file> tag for the shell script; the keytab <file> entry is only needed if you are using Kerberos.

Hope this helps.

Thanks
Chris
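For reference, a minimal job.properties and submit command that could accompany this workflow might look like the following sketch. The host names, port, and HDFS path are placeholders, not values from the post above:

# job.properties -- example values only, adjust to your cluster
nameNode=hdfs://namenode-host:8020
jobTracker=jobtracker-host:8021
queueName=default
oozie.wf.application.path=${nameNode}/user/cconner/shell-impala-invalidate

# submit the workflow with the Oozie CLI
oozie job -oozie http://oozie-host:11000/oozie -config job.properties -run

The shell script (and the keytab, if used) would sit next to workflow.xml under that application path so the relative <file> entries resolve.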
04-10-2014
09:54 AM
Hey Murthy,

I think the issue is that HBase does not allow Hive to impersonate users, so you'll need to set up hive as a proxy user in HBase. Can you try the following:

- Go to the HDFS service configuration in CM.
- Go to Service-Wide->Advanced and add the following to "Cluster-wide Configuration Safety Valve for core-site.xml":

<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>

- Then restart HBase.

By default, CM does not add the hive proxyuser config to HBase, which is why you are seeing those errors. Let me know if you have any questions.

Thanks
Chris
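As a quick sanity check after the restart, you can confirm the proxyuser entries made it into the core-site.xml that HBase actually runs with. The path below assumes a CM-managed cluster, where each role's generated configuration lives under the agent's process directory; adjust it for your install:

# show the hive proxyuser properties in the config deployed to the HBase Master
# (the directory layout is an assumption for a CM-managed install)
grep -A 1 "hadoop.proxyuser.hive" /var/run/cloudera-scm-agent/process/*-hbase-MASTER/core-site.xml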
04-04-2014
01:34 PM
That's right, you'll have to put the following in your Hue Service->Configuration->Hue Server->Advanced->Hue Server Configuration Safety Valve for hue_safety_valve_server.ini:

[hadoop]
  [[mapred_clusters]]
    [[[default]]]
      jobtracker_host=cdh45-2.qa.test.com
      thrift_port=9290
      jobtracker_port=8021
      submit_to=true
      hadoop_mapred_home={{HADOOP_MR1_HOME}}
      hadoop_bin={{HADOOP_BIN}}
      hadoop_conf_dir={{HADOOP_CONF_DIR}}
      security_enabled=true
      logical_name=logicaljt
    [[[jtha]]]
      jobtracker_host=cdh45-1.qa.test.com
      thrift_port=9290
      jobtracker_port=8021
      submit_to=true
      hadoop_mapred_home={{HADOOP_MR1_HOME}}
      hadoop_bin={{HADOOP_BIN}}
      hadoop_conf_dir={{HADOOP_CONF_DIR}}
      security_enabled=true
      logical_name=logicaljt

Leave off "security_enabled=true" if you're not using Kerberos.
04-04-2014
01:14 PM
This is also fixed in CDH 4.6, and there is a patched version of CDH 4.5 that resolves it as well; you can open a support ticket to get the patched version if you are running CDH 4.5. You would have to add a new parameter, "logical_name", for example:

[hadoop]
  [[mapred_clusters]]
    [[[default]]]
      jobtracker_host=cdh45-2.qa.test.com
      thrift_port=9290
      jobtracker_port=8021
      submit_to=true
      hadoop_mapred_home={{HADOOP_MR1_HOME}}
      hadoop_bin={{HADOOP_BIN}}
      hadoop_conf_dir={{HADOOP_CONF_DIR}}
      security_enabled=true
      logical_name=logicaljt
    [[[jtha]]]
      jobtracker_host=cdh45-1.qa.test.com
      thrift_port=9290
      jobtracker_port=8021
      submit_to=true
      hadoop_mapred_home={{HADOOP_MR1_HOME}}
      hadoop_bin={{HADOOP_BIN}}
      hadoop_conf_dir={{HADOOP_CONF_DIR}}
      security_enabled=true
      logical_name=logicaljt

Hope this helps.
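If you want to confirm which JobTracker is active while testing failover, CDH 4's MRv1 JobTracker HA also provides an admin command along these lines (I'm going from memory on the tool name, so double-check the CDH 4 JobTracker HA documentation for your release; jt1 and jt2 stand in for whatever JobTracker IDs you defined in mapred-site.xml):

# query the HA state of each JobTracker (IDs are placeholders)
sudo -u mapred hadoop mrhaadmin -getServiceState jt1
sudo -u mapred hadoop mrhaadmin -getServiceState jt2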
03-21-2014
11:41 AM
1 Kudo
Hey Andrew, Can you try adding "oozie.libpath=/application/lib" to your job.properties and see if that helps? Thanks Chris
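For context, here is a sketch of how that line could sit in a job.properties; everything other than oozie.libpath is a placeholder rather than a value from your job:

nameNode=hdfs://namenode-host:8020
jobTracker=jobtracker-host:8021
oozie.wf.application.path=${nameNode}/user/andrew/app
# extra HDFS directory of jars to put on the action classpath
oozie.libpath=/application/lib
# optionally also pull in the Oozie sharelib
oozie.use.system.libpath=true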
03-21-2014
09:21 AM
1 Kudo
Hey Andrew, Where are you storing the jars that you need in the distributed cache? Are they in the "${oozie.wf.application.path}/lib" or another location in HDFS? Thanks Chris
03-04-2014
05:04 AM
2 Kudos
Hey Gerd, The reason you can't get to the NameNode UI is that you now need a Kerberos ticket to bring it up. As for the failed canary: did you restart the MGMT services? Once you enable Kerberos, you have to restart the MGMT services so they can get a proper keytab and run the canary tests. Can you try restarting the MGMT services and let me know if that helps? Thanks! Chris
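If it helps to test from a shell, here is a quick sketch of hitting the NameNode UI with a Kerberos ticket; the principal, realm, and host are placeholders, and 50070 is the usual NameNode web port:

# obtain a ticket first (principal and realm are placeholders)
kinit your_principal@YOUR.REALM
# fetch the NameNode status page using SPNEGO authentication
curl --negotiate -u : "http://namenode-host:50070/dfshealth.jsp"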
02-18-2014
08:23 AM
No problem! Glad it worked!