Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 19036 | 03-03-2020 08:12 AM |
| | 10225 | 02-28-2020 10:43 AM |
| | 3077 | 12-16-2019 12:59 PM |
| | 2360 | 11-12-2019 03:28 PM |
| | 4146 | 11-01-2019 09:01 AM |
07-16-2018
09:35 AM
@tjford, Check the Cloudera Manager logs for any issues. Also, in Cloudera Manager, navigate to:

http://cm_host:7180/cmf/commands/commands

Find the failed add-hosts command there and see whether it reveals any details about the problem.

NOTE: The "cm_host_install failed" message comes from the example "cluster_set_up.py" and occurs when the host-add command fails here:

cm.host_install(host_username, host_list, password=host_password, cm_repo_url=cm_repo_url)

Make sure you have configured your hosts' username and password. The defaults are "root:cloudera":

host_username = "root"
host_password = "cloudera"

If you haven't changed those variable values, that is the likely cause, since the user/password would not be correct for your hosts.
07-16-2018
09:05 AM
1 Kudo
@t5, With CDH you get a base version of Hadoop, plus fixes and features on top of that. Upgrading just one or two components is not supported and is likely to cause problems. My suggestion would be to try your app against the latest version of CDH and see if it works. If it doesn't, identify whether the "break" is related to something specific to 2.7 that you need. The short answer is "no," you cannot upgrade to Hadoop 2.7 in CDH; however, it is likely that you already have what you need in CDH 5.14 or 5.15.
07-13-2018
03:50 PM
@DataMike, That should be OK. You can consider what to do with the other roles afterward. Just note that if the impalad or RegionServer roles are used for anything, they will need to read/write to a remote DataNode, and they will also be competing with the movement of blocks from the decommissioned DataNode to a new one.
07-13-2018
03:21 PM
@DataMike, You don't have to remove any of the roles you mentioned to decommission the DataNode on this host. However, the Impala Daemon and HBase RegionServer roles do not perform as well if they do not have a local DataNode, so you may want to remove those roles at some point. I am not certain about Solr Server. The ResourceManager does not need to be moved.
07-13-2018
09:29 AM
Thanks, @dougspadottoemc! I am just glad you tracked down the problem and can get back to having fun with Hadoop 😉
07-13-2018
09:24 AM
1 Kudo
@lizard, You were super close. In Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini, use the following:

[beeswax]
hive_server_host=
hive_server_port=

NOTE that the old Beeswax server is no longer part of CDH 5, so you don't need "server_interface" either.

Ben
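For illustration, a filled-in version of that snippet might look like the following. The host name is a placeholder, and 10000 is the stock HiveServer2 port; substitute your own values.

```ini
[beeswax]
hive_server_host=hs2-host.example.com
hive_server_port=10000
```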
07-13-2018
08:48 AM
@t5, We need some background about how you are using CDH in order to best answer your question:

- Why are you trying to upgrade the hadoop package to 2.7?
- Are you using CDH? If so, which version?
- Are you using Cloudera Manager? If so, which version?

As for your request to upgrade, I assume there is some feature or bug fix that you want to obtain. Cloudera currently uses a base of Hadoop 2.6, but we maintain many fixes and features that have arrived since 2.6 in our CDH releases. In that sense, there is no real "hadoop 2.6" version in CDH; rather, it is Hadoop 2.6 plus a number of additional fixes and features. Because of the nature of these fixes, you need to keep your CDH versions consistent across all components. If you have an issue and want to know in which CDH release it is fixed, please let us know.

Ben
07-12-2018
04:38 PM
1 Kudo
@balusu A couple of things:

(1) Your 'kinit' test shows that your krb5.conf is not configured for Hadoop; you have the default Linux krb5.conf there. Edit it and comment out the line starting with default_ccache_name. Java does not support the keyring credentials cache at this time, so Java processes will not have access to credentials created with MIT kinit and will fail.

(2) "ICMP Port Unreachable" is a clear indicator that the port being requested cannot be reached on the server side. In this case, that should be port 88. Make sure your host's /etc/krb5.conf has the realm configured correctly in the [realms] section. Your realm should have at least one "kdc" line like:

kdc = myadkdc.example.com:88

If that is configured, try running telnet to that port:

# telnet myadkdc.example.com 88

You could also use wireshark or tcpdump to debug what is going on.
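Putting both points together, a minimal /etc/krb5.conf sketch might look like the following. The realm and admin_server host are placeholders (the kdc line matches the example above), and the keyring line is shown commented out, as recommended.

```ini
[libdefaults]
    default_realm = EXAMPLE.COM
    # Comment this out so Java processes can find the credentials cache:
    # default_ccache_name = KEYRING:persistent:%{uid}

[realms]
    EXAMPLE.COM = {
        kdc = myadkdc.example.com:88
        admin_server = myadkdc.example.com
    }
```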
07-12-2018
12:47 PM
1 Kudo
@dougspadottoemc, Thank you for supplying all that information. A couple of possible causes I can think of:

(1) You are using AES-256 encryption for Kerberos, but the JDK you are using is not configured to support AES-256.

- See if you have encryption types configured in your /etc/krb5.conf.
- Use "klist -kte zookeeper.keytab" to view the encryption types listed in the keytab.

To check this, try the following in Cloudera Manager:

- Navigate to Administration --> Security.
- Click the Security Inspector button.
- When the check completes, review the results and make sure you see: "All hosts have Java configured with unlimited-strength encryption."

If you do not see that and your JDK has limited-strength encryption, that can explain why "kinit" works but Java can't read the keytab properly: MIT Kerberos does not have AES-256 restrictions.

(2) Since you redacted your domain and realm information, I can't tell for sure, but I do recall this type of issue happening in the past when Cloudera Manager had a different REALM configured than the one in the keytabs. I don't think this is it for you, but thought I'd mention it just in case.