Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 26253 | 03-03-2020 08:12 AM |
| | 16406 | 02-28-2020 10:43 AM |
| | 4718 | 12-16-2019 12:59 PM |
| | 4473 | 11-12-2019 03:28 PM |
| | 6664 | 11-01-2019 09:01 AM |
02-01-2017
04:40 PM
@ChrisEns I searched for that error in Google and there are quite a few hits for that exact error, with several different tips and solutions. Usually the issue is due to a malformed /etc/hosts file, so that would be a good place to check first. Make sure you have at least "127.0.0.1 localhost" and nothing else odd. Regards, Ben
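As a rough illustration (the second entry and its IP/hostname are placeholders, not from this thread), a healthy /etc/hosts usually looks something like:

    127.0.0.1   localhost
    10.0.0.11   node1.example.com   node1

The host's own FQDN should resolve consistently, and there should be no stray or duplicate entries for 127.0.0.1.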
01-28-2017
02:58 PM
@Kshitij Shrivastava, The "Timed out after 600 secs Container killed by the ApplicationMaster" message indicates that the ApplicationMaster did not see any progress in the task for 10 minutes (the default timeout), so it killed the task. The question is what the task was doing such that no progress was detected. I'd recommend looking at the application logs for clues about what the task was doing when it was killed. Use the Resource Manager UI, or the command line like this, to get the logs:

    yarn logs -applicationId <application ID> <options>

Regards, Ben
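For example (the application ID below is a placeholder; use the ID shown in the Resource Manager UI or by the list command):

    yarn application -list -appStates ALL
    yarn logs -applicationId application_1484567890123_0042 > app_0042.log

The aggregated log usually shows the last thing each attempt printed before the container was killed, which is a good hint about where it was stuck.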
01-20-2017
01:12 PM
Hi @mmgraph, not sure, but the following might help:

https://docs.continuum.io/anaconda/cloudera
http://blog.cloudera.com/blog/2016/02/making-python-on-apache-hadoop-easier-with-anaconda-and-cdh/

Ben
01-18-2017
04:46 PM
@Vjarry, Sorry you hit this issue. I am not certain what happened to Cloudera Manager 5.7.3, but it was likely pulled due to a critical issue. In your situation there is a way to work around this, as long as you are OK with the running commands being killed without completing. If so, you can run the following with CM 5.9:

    # service cloudera-scm-server force_start

This tells Cloudera Manager to forcibly terminate any running commands and then start. If you have any questions or concerns about the commands that are still running, you can post them here, as that information should be visible in the error. Note that the error also mentions using force_start (you should see it in /var/log/cloudera-scm-server/cloudera-scm-server.log). Most of the time it is OK to kill the commands since your cluster is down anyway, but proceed with caution; we can't tell whether it is safe without seeing the commands that are running. Regards, Ben
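A rough sequence on the Cloudera Manager host might look like this (sudo access and the default log path are assumptions about your setup):

    sudo service cloudera-scm-server force_start
    sudo tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log

Watch the log to confirm the server comes up cleanly and to see which commands were aborted.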
01-06-2017
09:58 AM
Thanks for the background. When you say "All of the examples I have seen of using Hue...", what examples are those? I have never seen those instructions, so perhaps they are specific to some sort of network or VM configuration. An SSH tunnel or reverse proxy is only needed if your network requires one; otherwise, Hue just listens on a port. As long as the host:port is accessible on the network, you don't need anything fancy. Cheers, Ben
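For completeness, if you ever do need to reach Hue through a gateway host, a plain SSH tunnel is enough (the hostnames below are placeholders, and 8888 is a common default Hue port that may differ in your setup):

    ssh -L 8888:huehost.example.com:8888 user@gateway.example.com

Then browse to http://localhost:8888 on your local machine.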
01-06-2017
09:11 AM
Hi @tonypiazza, Could you clarify a bit more regarding the need for a proxy? Hue does not require a proxy, so are you talking about something else on your network, or about accessing it in a particular type of network configuration? Regards, Ben
01-04-2017
12:50 PM
1 Kudo
Indeed, you are correct about the columns. Please see my recent posts in the following thread for some more information: Mismatched CDH versions: host has NONE but role expect 5. I outlined a proposed (experimental) fix there if you really need the JDK. Cloudera engineering is reviewing it to decide how we will address this in future releases. Cheers, Ben
01-04-2017
09:41 AM
Note on my last message: the keytab file referenced in the kadmin command issued by Cloudera Manager is removed by CM after Generate Credentials runs. So, you'll need to remove the "-k -t /var/run/cloudera-scm-server/cmf2028852611455413307.keytab" part of the kadmin command so that you are prompted for the password instead.
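In other words, the manual test would look something like this (the principal, realm, and solr host are placeholders based on the earlier example; kadmin will prompt for the admin password):

    kadmin -p root/admin@REALM.COM -r REALM.COM -q 'addprinc -maxrenewlife "432000 sec" -randkey solr/solrhost.example.com@REALM.COM'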
01-04-2017
09:39 AM
No problem... The attempt to manually generate credentials fails when the "kadmin" command fails:

    kadmin -k -t /var/run/cloudera-scm-server/cmf2028852611455413307.keytab -p root/admin@REALM.COM -r REALM.COM -q 'addprinc -maxrenewlife "432000 sec" -randkey solr/<<my_ipaddress>>@REALM.COM'
    add_principal: Operation requires ``add'' privilege while creating "solr/<<my_ipaddress>>@REALM.COM".

This means that the principal Cloudera Manager used to execute the kadmin command did not have the privilege to add the principal. You can try running the same kadmin command from the command line on the Cloudera Manager host to see whether you get the same error. If you do, make sure that kadm5.acl is correct. Also note that the Generate Credentials process must create the principals itself; it cannot import existing principals created in the KDC. -Ben
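A quick way to confirm what privileges the admin principal actually has, from the Cloudera Manager host, is kadmin's getprivs query (you will be prompted for the admin password; the principal and realm mirror the example above):

    kadmin -p root/admin@REALM.COM -r REALM.COM -q getprivs

If the reported privileges do not include ADD (or ALL), the kadm5.acl entry is what needs fixing.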
01-04-2017
09:08 AM
1 Kudo
Cloudera Manager will create the necessary keytabs automatically when adding a service to a Kerberos-enabled cluster. Based on the exception when you attempt to manually generate the credentials, the user you have configured as your Cloudera Manager Principal is not an admin (it does not have the privileges needed to create principals). Please see the information here:

http://www.cloudera.com/documentation/enterprise/latest/topics/cm_sg_s3_cm_principal.html

Note that when using MIT KDC, admin access is defined in /var/kerberos/krb5kdc/kadm5.acl. See this for more info:

https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/kadm5_acl.html

To give any principal with "/admin" all privileges, you could use the following:

    */admin@REALM.COM *

After you have made sure you have an admin user created in the KDC (cloudera-scm/admin@REALM.COM, for example), you can import those credentials as described here:

http://www.cloudera.com/documentation/enterprise/latest/topics/cm_sg_s4_kerb_wizard.html#concept_ann_x5y_l4

Lastly, hosts always need a valid fully qualified domain name (FQDN). In the redacted principal you mention, "solr/<<my_ipaddress>>@REALM.COM", the host portion is an IP address. All principals in your CDH cluster should have the format "name/FQDN@REALM", for instance:

    solr/solrhost.example.com@EXAMPLE.COM

Make sure all of your cluster hosts resolve their FQDNs via forward and reverse DNS. I hope that helps. Ben
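A quick sanity check on each cluster host might look like this (the hostname and IP below are placeholders for your environment):

    hostname -f                          # should print the host's FQDN, e.g. solrhost.example.com
    getent hosts solrhost.example.com    # forward lookup
    getent hosts 10.0.0.21               # reverse lookup should return the same FQDN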