Member since: 09-24-2014
Posts: 29
Kudos Received: 2
Solutions: 3
My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
| | 13286 | 04-25-2015 09:56 AM |
| | 8512 | 03-07-2015 03:03 PM |
| | 4003 | 03-01-2015 10:25 AM |
07-30-2015 08:39 AM
Find the current process number for the Hue service. As the root user, navigate to the directory /var/run/cloudera-scm-agent/process/ and find the XX-hue-HUE_SERVER subdirectory with the highest number, e.g. "61-hue-HUE_SERVER". Inside that XX-hue-HUE_SERVER directory, confirm that the TTL value has been added to the hue_safety_valve.ini file:

```
# grep ttl /var/run/cloudera-scm-agent/process/{PROCESS_NUMBER}-hue-HUE_SERVER/hue_safety_valve.ini
ttl=900
```
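The lookup above can be sketched as a small shell function. The name `hue_ttl` and the overridable root argument are mine, and GNU `sort -V` is assumed for ordering the numbered process directories:

```shell
# hue_ttl [ROOT]: print the ttl line from the newest (highest-numbered)
# Hue Server process directory under ROOT. ROOT defaults to the standard
# Cloudera Manager agent path; the function name is illustrative.
hue_ttl() {
  root="${1:-/var/run/cloudera-scm-agent/process}"
  # sort -V orders 9-hue-HUE_SERVER before 61-hue-HUE_SERVER numerically
  dir=$(ls -d "$root"/*-hue-HUE_SERVER 2>/dev/null | sort -V | tail -n 1)
  [ -n "$dir" ] && grep ttl "$dir/hue_safety_valve.ini"
}
```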
07-14-2015 01:01 PM
Great, thank you for the clarification. I didn't realize it logs me out even if I remain active. Good to know!
07-14-2015 12:24 PM
This applies to periods of inactivity; if you are actively using HUE, you won't be logged off. In various scenarios - such as compliance-driven and/or secure clusters - it can be necessary to set up an automated timeout for idle users.
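As a sketch of the configuration behind this: the idle timeout discussed here is set through Hue's safety valve. Assuming the ttl key lives under the `[desktop]` section (an assumption on my part; the thread only shows the bare `ttl=900` line), the hue_safety_valve.ini fragment would look like:

```ini
[desktop]
# Expire idle sessions after 900 seconds (15 minutes).
# Section placement is an assumption; the post only shows ttl=900.
ttl=900
```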
04-25-2015 09:56 AM
I was looking into this further, and it appears the problem comes up when I first try loading data into a newly created table; it throws:

```
Fetching results ran into the following error(s): java.io.IOException: java.io.IOException: HTTP status [500], message [Internal Server Error]
```
04-25-2015 08:56 AM
Connected with beeline:

```
1: jdbc:hive2://{hostname_of_hive_server}:10> select * from students;
+-----------------+---------------+---------------+--+
| students.sname  | students.age  | students.gpa  |
+-----------------+---------------+---------------+--+
+-----------------+---------------+---------------+--+
No rows selected (0.802 seconds)
1: jdbc:hive2://{hostname_of_hive_server}:10> select * from sales limit 5;
Error: java.io.IOException: java.io.IOException: HTTP status [500], message [Internal Server Error] (state=,code=0)
```
04-25-2015 08:21 AM
Using CDH 5.3 with Kerberos and TLS enabled, when we got to testing data loading we noticed that the connection to the Hive Metastore fails. Cloudera Manager is not indicating any issues with the principals or their keytabs. What may I be missing here?

```
2015-04-25 11:02:26,197 ERROR org.apache.thrift.server.TThreadPoolServer: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge20S.java:724)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge20S.java:721)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:356)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge20S.java:721)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:227)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
        at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:199)
        at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:262)
        at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
        ... 10 more
2015-04-25 11:02:27,200 ERROR org.apache.thrift.server.TThreadPoolServer: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
        [identical stack trace repeats]
```
04-17-2015 04:45 PM
Thank you! We finally got this to work. Now we just need to wire it to LDAP. It looks like the keystore passed by following the instructions on Cloudera's site wasn't used for some reason. We were following the instructions here: http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cm_sg_create_key_trust.html#concept_u35_w2m_l4_unique_1

The upshot is that the keystore at /usr/java/jdk1.7.0_67-cloudera/jre/lib/security/jssecacerts (or ${JAVA_HOME}/lib/security/jssecacerts, where $JAVA_HOME is the home of the version of Java used by Cloudera Navigator; we used "ps" to find out where) should contain the root certificate. We then restarted both Cloudera Navigator services and were able to navigate to https:// Integration with FreeIPA was somewhat confusing initially. Thanks for your help in understanding the mechanisms of this functionality.
04-16-2015 03:40 PM
The CA certificate that issued the certificate, once imported into the truststores I've discussed already, establishes inherent trust within Java services for SSL certificates created by that CA. We had already created a truststore on the Namenode (the node where Cloudera Manager is installed) when TLS was set up for all agents; that's what was used - /etc/cloudera-scm-server/keystore - and this file was copied to all nodes in the Cloudera Hadoop cluster, including Navigator.
04-16-2015 12:21 PM
"Trust" is established differently between the two implementations; Navigator, being Java based, will derive trust through the default JDK mechanisms I pointed out. We are looking into whether we need to use the keytool utility to generate those or use our FreeIPA server to generate certs for Navigator...
04-15-2015 10:45 PM
So far it appears that only Navigator is unhappy with the keystore, so I believe TLS/SSL was otherwise set up correctly. We are using FreeIPA as the certificate authority, and below is a quick overview of the steps taken to set it up, since it deviates somewhat from the standard procedure.

On the Namenode (the same host Cloudera Manager lives on), I generated a certificate and key to be used by Cloudera Manager:

```
# kinit -kt /etc/krb5.keytab
# ipa-getcert request -f cmhost.pem -k cmhost.key -r
# chmod 600 cmhost*
```

Then I copied the newly created cm-keys directory to each host:

```
$ for x in {LIST_OF_CDH_HOSTS}; do scp -r cm-keys $x:; done
$ for x in {LIST_OF_CDH_HOSTS}; do ssh -tty $x sudo bash -c "'mkdir -p /opt/cloudera/security/x509; mv cm-keys/* /opt/cloudera/security/x509; chown cloudera /opt/cloudera/security/x509/*'"; done
```

Next, I set up Puppet to configure Cloudera to use TLS.
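The copy loop above can be sketched as a reusable function. The names `distribute_cm_keys` and `run` are mine, and `DRY_RUN=1` prints the commands instead of executing them - a hedged sketch of the idea, not the exact procedure used:

```shell
# run CMD...: execute CMD, or just print it when DRY_RUN=1.
# Helper name is illustrative.
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi
}

# distribute_cm_keys HOST...: push the cm-keys directory to each host and
# move it into /opt/cloudera/security/x509, mirroring the loop in the post.
distribute_cm_keys() {
  for host in "$@"; do
    run scp -r cm-keys "$host:"
    run ssh -tty "$host" sudo bash -c "'mkdir -p /opt/cloudera/security/x509; mv cm-keys/* /opt/cloudera/security/x509; chown cloudera /opt/cloudera/security/x509/*'"
  done
}
```

Running with `DRY_RUN=1 distribute_cm_keys node1 node2` lets you verify the command expansion before touching real hosts.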