Member since: 07-31-2013
Posts: 98
Kudos Received: 54
Solutions: 19
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2423 | 11-23-2016 07:37 AM |
 | 2106 | 05-18-2015 02:04 PM |
 | 4454 | 05-13-2015 07:33 AM |
 | 3150 | 05-12-2015 05:36 AM |
 | 3653 | 04-06-2015 06:05 AM |
04-17-2019
09:43 AM
It's a bit confusing and we should fix the docs. You need the MySQL 5.1 libraries available, but you can still have MySQL 5.7 installed. There is a compat RPM you can install instead. Search for something like this in yum or your installed RPMs: mysql-community-libs-compat-5.7.16-1.el7.x86_64.rpm
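As a rough sketch (the exact package name and version will vary by repo), you could check for and install the compat package with something like:
rpm -qa | grep -i mysql-community-libs-compat    # is the compat library already installed?
sudo yum install mysql-community-libs-compat     # install it from the MySQL community repo if it is missing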
... View more
11-23-2016
07:37 AM
Unfortunately there is no way to provide superuser access to an entire group today. It must be done one user at a time. There is a feature request to add this in a future release. You could do it programmatically to make it a little easier; see the section "How to make a certain user a Hue admin" at http://gethue.com/password-management-in-hue/ You could create a list of users and iterate through them in the Hue shell (a sketch is below). Make sure to set HUE_CONF_DIR=/var/run/cloudera-scm-agent/process/id-hue-HUE_SERVER where id is the most recent process id. On CDH 5.5 and above you also have to set:
export HUE_IGNORE_PASSWORD_SCRIPT_ERRORS=1
export HUE_DATABASE_PASSWORD=huedatabasepassword
Hope this helps.
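For example, a rough sketch of looping over a list of users from the Hue shell on a parcel install; the user names are placeholders, and the HUE_CONF_DIR lookup assumes a CM-managed host:
export HUE_CONF_DIR="/var/run/cloudera-scm-agent/process/`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'`"
export HUE_IGNORE_PASSWORD_SCRIPT_ERRORS=1
export HUE_DATABASE_PASSWORD=huedatabasepassword
cd /opt/cloudera/parcels/CDH/lib/hue
./build/env/bin/hue shell <<'EOF'
from django.contrib.auth.models import User
# placeholder list of users to promote; replace with your own
for name in ['alice', 'bob', 'carol']:
    u = User.objects.get(username=name)
    u.is_staff = True
    u.is_superuser = True
    u.save()
EOF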
... View more
12-03-2015
09:22 AM
Hue sends queries to HiveServer2, so you would want to implement it in the CM configuration for HS2, which is this safety valve in the Hive service: "HiveServer2 Advanced Configuration Snippet (Safety Valve) for hive-site.xml" (example format below).
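Purely as an illustration of the format (the property shown is an arbitrary example, not necessarily the setting you need), an entry in that safety valve looks like:
<property>
  <name>hive.exec.parallel</name>
  <value>true</value>
</property>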
... View more
05-18-2015
02:14 PM
1 Kudo
Great, just wanted to make sure we took care of any problems you might have been running into :-). Did that answer your question?
... View more
05-18-2015
02:04 PM
Hey, The way it works is that the Hue launcher job acts like the Hive CLI and then spawns the Hive MR job. So you will never see an actual instance of Hive spin up; you'll just see a launcher MR job and then a Hive MR job. Are you having an issue, or just curious?
... View more
05-18-2015
07:01 AM
Jira filed :-) https://issues.cloudera.org/browse/HUE-2749 Right now Hue requires both Impala and HS2 to be configured for LDAP, so ldap_username and ldap_password are global. There is also a feature request out there to make those settings specific to Impala and HS2, so this would be less of an issue in the future.
... View more
05-18-2015
06:46 AM
Ahhh, that is why :-). These properties:
[desktop]
ldap_username
ldap_password
tell Hue to use LDAP when authenticating against Impala and HS2. So if you're not using LDAP for Impala and HS2, you can remove them, and you don't need LDAP for Impala.
... View more
05-18-2015
06:42 AM
Oh, sorry, I meant which LDAP properties you have set for Hue. Mainly, did you have:
[desktop]
ldap_username=
ldap_password=
... View more
05-18-2015
06:29 AM
Just curious, which LDAP properties did you have set for Hue? Hue being configured for LDAP should not force Impala to require LDAP.
... View more
05-13-2015
07:33 AM
1 Kudo
Did you include:
HADOOP_OPTS="-Djava.security.auth.login.config=/path/to/jaas.conf"
before your "hadoop jar" command? For example:
HADOOP_OPTS="-Djava.security.auth.login.config=/path/to/jaas.conf" hadoop jar /opt/cloudera/parcels/CDH/lib/solr/contrib/mr/search-mr-*-job.jar org.apache.solr.hadoop.MapReduceIndexerTool -D 'mapred.child.java.opts=-Xmx2048m' -Djava.security.auth.login.config=/home/user/clouderaSearch/test_collection/jaas.conf --log4j /opt/cloudera/parcels/CDH/share/doc/search-1.0.0+cdh5.4.0+0/examples/solr-nrt/log4j.properties --morphline-file /home/user/clouderaSearch/test_collection/test_morphline.conf --output-dir hdfs://nameservice1:8020/data/test/output_solr --verbose --go-live --zk-host zookeeper_host:2181/solr --collection test_collection hdfs://nameservice1:8020/data/test/incoming
Notice how "HADOOP_OPTS" and "hadoop jar" are on the same line; that is how they need to be.
... View more
05-12-2015
05:36 AM
You can do the following:
1. Spawn the Hue shell:
export HUE_CONF_DIR="/var/run/cloudera-scm-agent/process/`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'`"
cd /opt/cloudera/parcels/CDH/lib/hue    (or /usr/lib/hue if using packages)
./build/env/bin/hue shell
2. Paste the following Python in the shell, replacing <username> with the user you want to make a superuser:
from django.contrib.auth.models import User
a = User.objects.get(username='<username>')
a.is_staff = True
a.is_superuser = True
a.save()
... View more
04-06-2015
06:05 AM
1 Kudo
There are two options here:
1. If using CM, put a Hive gateway on every NodeManager/TaskTracker in the cluster; if not using CM, put the hive-site.xml in /etc/hive/conf on every NodeManager/TaskTracker.
2. Add "export HIVE_CONF_DIR=`pwd`" to the top of your shell script, and then Sqoop should check the local directory for the hive-site.xml (a sketch is below).
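A minimal sketch of option 2, assuming the hive-site.xml sits next to the script; the connection string, credentials, and table are placeholders:
#!/bin/bash
# point Sqoop's Hive import at the hive-site.xml in the current directory
export HIVE_CONF_DIR=`pwd`
sqoop import \
  --connect jdbc:mysql://dbhost/exampledb \
  --username dbuser -P \
  --table example_table \
  --hive-import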
... View more
04-01-2015
06:21 AM
1 Kudo
There is an issue with Hive where HS2 has problems renewing tokens after the renewal period, and it throws the error you are seeing. If you restart the Hive service, Hue will probably start working again for a day or two. You can alleviate this by adding the following to the HMS and HS2 safety valves:
<property>
  <name>hive.cluster.delegation.key.update-interval</name>
  <value>31536000000</value>
</property>
<property>
  <name>hive.cluster.delegation.token.renew-interval</name>
  <value>31536000000</value>
</property>
<property>
  <name>hive.cluster.delegation.token.max-lifetime</name>
  <value>31536000000</value>
</property>
This sets the token renewal to one year, so it will only happen once a year. Hope this helps.
... View more
03-16-2015
06:47 AM
2 Kudos
Hey, To share your workflows in Hue, do the following:
1. Go to your home page in Hue by clicking on the little house icon in the upper left, to the right of "HUE".
2. Select the project that contains the workflow, probably "default".
3. Here you can share anything with others: workflows, queries, etc. Find what you want to share and click on the "Sharing" icon on the far right of the page; it looks like a few people standing together.
4. Then enter the user/group you want to share with and the permissions you want to give them.
... View more
03-16-2015
06:12 AM
2 Kudos
Hey, You can't have a kill action send an email. However, you can change your other actions' error transitions to go to an email action that sends the email, and then have the email action go to the kill. Like:
<workflow-app name="shell-wf" xmlns="uri:oozie:workflow:0.4">
    <start to="shell"/>
    <action name="shell">
        <shell xmlns="uri:oozie:shell-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <exec>shell.sh</exec>
            <file>shell.sh#shell.sh</file>
        </shell>
        <ok to="end"/>
        <error to="kill-email"/>
    </action>
    <action name="kill-email">
        <email xmlns="uri:oozie:email-action:0.1">
            <to>cconner@test.com</to>
            <cc>other-cconner@test.com</cc>
            <subject>WF ${wf:id()} failed</subject>
            <body>Shell action failed.</body>
        </email>
        <ok to="kill"/>
        <error to="kill"/>
    </action>
    <kill name="kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
... View more
03-06-2015
06:08 AM
With a kerberized cluster, your connect string needs to include the HS2 server principal, so:
jdbc:hive2://cdh53-1.qa.test.com:10000/default;principal=hive/cdh53-1.qa.test.com@TEST.COM
Replace "cdh53-1.qa.test.com" with your fully qualified host and domain name, and replace TEST.COM with the correct realm. Any time Kerberos is in place, you must use the fully qualified domain name instead of localhost or a short hostname, because the Kerberos checks depend on the FQDN. The same is true of the Sentry server in your safety valve configuration: use the FQDN instead of localhost.
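For example, to sanity-check the connect string from the command line with Beeline (using the same sample host and realm as above):
kinit your_user@TEST.COM    # obtain a Kerberos ticket first
beeline -u "jdbc:hive2://cdh53-1.qa.test.com:10000/default;principal=hive/cdh53-1.qa.test.com@TEST.COM"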
... View more
03-02-2015
11:59 AM
1 Kudo
In your shell action, go to "Files", click "Add path", and browse to your shell script in HDFS. Then save and run it again and see if that helps. If it does not, try removing the "#!/...." line at the top of the script.
... View more
02-13-2015
05:59 AM
1 Kudo
This right here works:
<workflow-app name="shell-impala-invalidate-wf" xmlns="uri:oozie:workflow:0.4">
    <start to="shell-impala-invalidate"/>
    <action name="shell-impala-invalidate">
        <shell xmlns="uri:oozie:shell-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <exec>shell-impala-invalidate.sh</exec>
            <file>shell-impala-invalidate.sh#shell-impala-invalidate.sh</file>
            <file>shell-impala-invalidate.sql#shell-impala-invalidate.sql</file>
            <file>cconner.keytab#cconner.keytab</file>
        </shell>
        <ok to="end"/>
        <error to="kill"/>
    </action>
    <kill name="kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
Script:
#!/bin/bash
LOG=/tmp/shell-impala-invalidate-$USER.log
ls -alrt > $LOG
export PYTHON_EGG_CACHE=./myeggs
/usr/bin/kinit -kt cconner.keytab -V cconner
/usr/bin/klist -e >> $LOG
impala-shell -f shell-impala-invalidate.sql
NOTE: the <file> tag puts that file on the local file system where impala-shell is going to run, so the file is indeed local for the -f flag, and it goes in "PWD/<whatever is after #>". For example:
<file>test.sql#/test1/test.sql</file>
Then test.sql will be found in: PWD/test1/test.sql
And:
<file>test.sql#test.sql</file>
Then test.sql will be found in: PWD/test.sql
And the shell script and the keytab are also in "PWD" because of the file tags. I would do the following in your shell script to get some more insight:
#!/bin/bash
LOG=/tmp/shell-impala-invalidate-$USER.log
ls -alrtR > $LOG    # This will show you all the files in the directory and their relative paths
export PYTHON_EGG_CACHE=./myeggs
/usr/bin/kinit -kt cconner.keytab -V cconner
/usr/bin/klist -e >> $LOG
hadoop fs -put $LOG /tmp    # put the log file in HDFS to find it easily
impala-shell -f shell-impala-invalidate.sql
NOTICE the "ls -alrtR" and the "hadoop fs" command; this way you can easily grab the log file from HDFS and see what files are actually there.
... View more
02-12-2015
06:47 AM
The method you used was correct and should have worked. Did you try a command like:
impala-shell -f ./file.sql
... View more
02-11-2015
10:34 AM
1 Kudo
Glad to hear that worked! Hue doesn't currently support Framed Thrift to HBase. It's on the roadmap, but I'm not sure when.
... View more
02-11-2015
05:45 AM
1 Kudo
Go to http://<hueserver>:<port>/desktop/dump_config, and in the "Configuration Sections and Variables" section go to hbase and confirm that "hbase_clusters" points to the correct Thrift server. If that's not correct, confirm in the Hue service-wide config that you have the HBase service selected. If it is correct, then go to the HBase service configuration and check whether the property "Enable HBase Thrift Server Framed Transport" is enabled. If it is, try unchecking it, restart the HBase Thrift server, and see if that works.
... View more
01-21-2015
10:55 AM
2 Kudos
Hey, Currently there is not an Impala action, so you must use a shell action that calls impala-shell. The shell script that calls impala-shell must also include an entry to set the Python eggs location. Here is an example shell script:
#!/bin/bash
export PYTHON_EGG_CACHE=./myeggs
/usr/bin/kinit -kt cconner.keytab -V cconner
impala-shell -q "invalidate metadata"
NOTICE the PYTHON_EGG_CACHE; this is the location you must set or the job will fail. This also does a kinit in the case of a kerberized cluster. Here is the workflow that goes with that script:
<workflow-app name="shell-impala-invalidate-wf" xmlns="uri:oozie:workflow:0.4">
    <start to="shell-impala-invalidate"/>
    <action name="shell-impala-invalidate">
        <shell xmlns="uri:oozie:shell-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <exec>shell-impala-invalidate.sh</exec>
            <file>shell-impala-invalidate.sh#shell-impala-invalidate.sh</file>
            <file>cconner.keytab#cconner.keytab</file>
        </shell>
        <ok to="end"/>
        <error to="kill"/>
    </action>
    <kill name="kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
You must include the <file> tag with the shell script, but not the keytab line unless you are using Kerberos. Hope this helps. Thanks Chris
... View more
04-10-2014
09:54 AM
Hey Murthy, I think the issue is that HBase does not allow hive to impersonate users, so you'll need to set up hive as a proxy user in HBase. Can you try the following:
- Go to the HDFS service configuration in CM.
- Go to Service-Wide->Advanced and add the following to "Cluster-wide Configuration Safety Valve for core-site.xml":
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
- Then restart HBase.
By default, CM does not add the hive proxyuser config to HBase; that's why you see the errors you are seeing. Let me know if you have any questions. Thanks Chris
... View more
04-04-2014
01:34 PM
That's right, you'll have to put the following in your Hue Service->Configuration->Hue Server->Advanced->Hue Server Configuration Safety Valve for hue_safety_valve_server.ini:
[hadoop]
  [[mapred_clusters]]
    [[[default]]]
      jobtracker_host=cdh45-2.qa.test.com
      thrift_port=9290
      jobtracker_port=8021
      submit_to=true
      hadoop_mapred_home={{HADOOP_MR1_HOME}}
      hadoop_bin={{HADOOP_BIN}}
      hadoop_conf_dir={{HADOOP_CONF_DIR}}
      security_enabled=true
      logical_name=logicaljt
    [[[jtha]]]
      jobtracker_host=cdh45-1.qa.test.com
      thrift_port=9290
      jobtracker_port=8021
      submit_to=true
      hadoop_mapred_home={{HADOOP_MR1_HOME}}
      hadoop_bin={{HADOOP_BIN}}
      hadoop_conf_dir={{HADOOP_CONF_DIR}}
      security_enabled=true
      logical_name=logicaljt
Leave off "security_enabled=true" if you're not using Kerberos.
... View more
04-04-2014
01:14 PM
This is also fixed in CDH 4.6, and there is a patched version of CDH 4.5 that resolves this as well. You can open a support ticket to get the patched version of 4.5 if you are running CDH 4.5. You would have to add a new parameter, "logical_name", for example:
[hadoop]
  [[mapred_clusters]]
    [[[default]]]
      jobtracker_host=cdh45-2.qa.test.com
      thrift_port=9290
      jobtracker_port=8021
      submit_to=true
      hadoop_mapred_home={{HADOOP_MR1_HOME}}
      hadoop_bin={{HADOOP_BIN}}
      hadoop_conf_dir={{HADOOP_CONF_DIR}}
      security_enabled=true
      logical_name=logicaljt
    [[[jtha]]]
      jobtracker_host=cdh45-1.qa.test.com
      thrift_port=9290
      jobtracker_port=8021
      submit_to=true
      hadoop_mapred_home={{HADOOP_MR1_HOME}}
      hadoop_bin={{HADOOP_BIN}}
      hadoop_conf_dir={{HADOOP_CONF_DIR}}
      security_enabled=true
      logical_name=logicaljt
Hope this helps.
... View more
03-21-2014
11:41 AM
1 Kudo
Hey Andrew, Can you try adding "oozie.libpath=/application/lib" to your job.properties and see if that helps? Thanks Chris
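For reference, a rough sketch of where that line sits in a job.properties; the other values here are placeholders for your cluster:
# placeholder cluster endpoints; use your own NameNode and JobTracker
nameNode=hdfs://nameservice1:8020
jobTracker=jobtrackerhost:8021
queueName=default
oozie.wf.application.path=${nameNode}/application
oozie.libpath=/application/lib
oozie.use.system.libpath=true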
... View more
03-21-2014
09:21 AM
1 Kudo
Hey Andrew, Where are you storing the jars that you need in the distributed cache? Are they in the "${oozie.wf.application.path}/lib" or another location in HDFS? Thanks Chris
... View more
03-04-2014
05:04 AM
2 Kudos
Hey Gerd, The reason you can't get to the NameNode UI is that you now need a Kerberos ticket to bring it up. As for the failed canary, did you restart the MGMT (management) services? Once you enable Kerberos, you have to restart the MGMT services so they can get a proper keytab; that way they can run the canary tests. Can you try restarting the MGMT services and let me know if that helps? Thanks! Chris
... View more