Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 26253 | 03-03-2020 08:12 AM |
| | 16406 | 02-28-2020 10:43 AM |
| | 4718 | 12-16-2019 12:59 PM |
| | 4473 | 11-12-2019 03:28 PM |
| | 6664 | 11-01-2019 09:01 AM |
01-01-2017
10:34 AM
3 Kudos
Hello, You mention that you are installing 5.6, but based on the fact that the error occurs in code that does not exist in version 5.6 and the exception shows "cm5.8.1", it seems you are actually installing with Cloudera Manager 5.8.1.

The error you are seeing:

[25/Jul/2016 16:22:00 +0000] 7453 MainThread downloader ERROR Failed rack peer update: [Errno 111] Connection refused

means that when the agent attempts to connect to its peers on port 7191, it cannot connect. This could be due to various reasons, including a firewall, routing, or perhaps a transient failure in one of the agents. The error occurs while the agent processes the Cloudera Manager heartbeat response, which includes a list of "peers" that the agent can use to download parcels from.

If you have a small number of hosts and cannot resolve the port/connection issue, you can revert to the old method of parcel download, which downloads only from Cloudera Manager (rather than leveraging the peer download feature). To do so:

* In Cloudera Manager, choose "All Hosts" from the "Hosts" tab.
* Click the Configuration button on the right of the page.
* Search for "P2P Parcel Distribution Port".
* Set "P2P Parcel Distribution Port" to "0".
* Save.

I believe you need to restart the agents with "service cloudera-scm-agent restart" in order for them to pick up the change. After doing that, you should be able to proceed.
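If you want to confirm the connectivity problem before changing anything, a minimal check from one of the affected hosts might look like this (the peer hostname is a placeholder; 7191 is the default P2P port):

```
# Check whether a peer agent is reachable on the P2P parcel port.
# peer-host.example.com is a placeholder for one of your other agents.
nc -zv peer-host.example.com 7191

# After setting "P2P Parcel Distribution Port" to 0 in Cloudera Manager,
# restart the agent on every host so the change takes effect:
sudo service cloudera-scm-agent restart
```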
01-01-2017
09:16 AM
When you mention you have the same problem, what is the exact error you are getting?

As for the original issue in this post, we see two items that can cause issues for Kerberos in Hadoop: (1) hosts with no domains (even .local would do) and (2) capital letters in hostnames. You have configured the hostname in all capitals: impala/CLOUDERAVALM1@REALM.COM

In order to have the best chance of getting Kerberos to work, I would recommend verifying the following:

(1) All hosts have fully-qualified domain names. For instance, "hostname" should return the hostname and "hostname -f" should return the FQDN.
(2) If relying on the hosts file for resolution, make sure that you are using the format "IP FQDN HOSTNAME". For example: 10.0.0.2 myhost.example.com myhost
(3) Make sure you use only lowercase host names. Hadoop is sensitive to this at the moment; though uppercase names are technically valid, they will cause problems for sure.
(4) Ensure all hosts can resolve each other with forward and reverse DNS (using the FQDN).

I think the main problem you are facing is the uppercase hostnames without a domain. It will work fine without Kerberos involved, but introducing Kerberos changes the rules a bit to support that method of authentication. After you make the network changes, make sure to regenerate credentials for all roles so that the correct principals are created.

I hope this is a good start. Regards, Ben
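As a rough sketch, here is the kind of check I would run on each host, using the example names from above:

```
# Run on each cluster host; the names and IP below are just the examples above.
hostname        # expect the short name, e.g. myhost
hostname -f     # expect the FQDN, e.g. myhost.example.com

# Forward and reverse lookups should agree on every host:
host myhost.example.com    # FQDN -> IP
host 10.0.0.2              # IP -> FQDN
```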
12-29-2016
10:22 AM
@zhuw.bigdata, I opened two internal Cloudera Jiras to make sure we specify that the fully-qualified domain name be used if Kerberos is enabled in the cluster. One Jira targets the description in the HA wizard; the other focuses on the steps listed in our documentation. Thanks for bringing this up! Cheers, Ben
12-29-2016
09:09 AM
@saranvisa, You provided the right information, but I wanted to clarify that the correct step to update the Account Manager credentials is to import the credentials again. Thanks for providing the solution! Ben
12-29-2016
09:06 AM
1 Kudo
In older versions of Cloudera Manager (4.x I believe), the keytab file used to be stored in /etc/cloudera-scm-server as "cmf.keytab". Now, it is stored in Cloudera Manager's database. To create or update the KDC account manager in Cloudera Manager, you can reference this documentation: http://www.cloudera.com/documentation/enterprise/latest/topics/cm_sg_deploy_keytab_s5.html
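If you are still on a 4.x server and want to inspect that legacy keytab, klist can list its principals (on 5.x the file typically will not exist, since the keytab lives in the database):

```
# Legacy CM 4.x location; on CM 5.x this file normally does not exist.
klist -kt /etc/cloudera-scm-server/cmf.keytab
```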
12-29-2016
08:47 AM
1 Kudo
Actually, this message may be showing us the cause. The interesting bit was off the page when I first looked at your ZooKeeper snippet, so I couldn't see it:

2016-12-26 20:13:48,322 INFO org.apache.zookeeper.server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x1593c8f09d800f6 type:create cxid:0x2 zxid:0x1b2ea txntype:-1 reqpath:n/a Error Path:/solr Error:KeeperErrorCode = NoNode for /solr

ZooKeeper appears to have no znode for /solr. To create one, go to Solr in Cloudera Manager, click the Actions button (drop-down) on the far right, and choose "Initialize Solr". This will create the /solr znode. After that, try starting Solr again. Regards, Ben
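If you want to confirm the missing znode from the command line first, the ZooKeeper CLI shipped with CDH can list the root (the host and port here are placeholders for your quorum):

```
# List the root znodes; /solr should appear after "Initialize Solr" runs.
# zk-host.example.com:2181 is a placeholder for your ZooKeeper server.
zookeeper-client -server zk-host.example.com:2181 ls /
```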
12-29-2016
08:36 AM
I would verify resolution of the host to an IP and make sure you can make a connection to ZooKeeper from the host on which you are trying to start Solr. If you are not seeing any error messages in the ZooKeeper logs, that is a basic indicator that a connection to ZooKeeper was never established. The first thing to check is whether you can connect from the same host, using the same IP/port, to the server.
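A quick way to test that from the Solr host is ZooKeeper's "ruok" four-letter command (the host and port are placeholders):

```
# A healthy ZooKeeper server replies "imok".
echo ruok | nc zk-host.example.com 2181
```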
12-29-2016
07:37 AM
For the impyla issue, I believe GitHub is a good place to look for assistance too. I see there is already a discussion there: https://github.com/cloudera/impyla/issues/233
12-28-2016
11:36 PM
2 Kudos
A0: While using LDAP as a "unified account system", Cloudera recommends against leveraging LDAP Group Mapping. I'll repost the note from the page you mentioned:

Important: Cloudera strongly recommends against using Hadoop's LdapGroupsMapping provider. LdapGroupsMapping should only be used in cases where OS-level integration is not possible. Production clusters require an identity provider that works well with all applications, not just Hadoop. Hence, often the preferred mechanism is to use tools such as SSSD, VAS or Centrify to replicate LDAP groups.

The idea is to rely on tools that were designed for Unix account integration with LDAP/Active Directory, etc. You could enable LDAP Group Mapping for HDFS, but then only HDFS would know about those users/groups; the OS would not.

A1: Yes, each host should have the same set of users. Two common methods of managing this (without having to manually update every host's passwd and group files) are:

- Tools such as SSSD, VAS, and Centrify allow hosts to retrieve user information from one location. As long as each host in the cluster is configured to use the tool, each host can find a single entry in LDAP (the hdfs user, for instance).
- Puppet, Chef, or other automation tools can be used to push out passwd/group changes to all hosts.

A2: No. There is no "syncing" for LDAP Group Mapping; rather, there is one LDAP entry that services will reference.

A3: By default, Cloudera Manager has "Create Users and Groups, and Apply File Permissions for Parcels" enabled. When the parcel is activated, the agents on each host managed by that Cloudera Manager will create local users and groups if that setting is enabled. It won't create them in LDAP, though.

A4: I'm afraid I don't understand the question completely, so I'll answer generally. As long as your client has the proper configuration and credentials to authenticate, it should be able to work.

I hope that all helps. Regards, Ben
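As a quick sanity check that the OS-level integration is in place, something like the following should resolve the hdfs user on any host (Hadoop's default shell-based group mapping simply reflects what the OS reports):

```
# If SSSD/VAS/Centrify is wired up, the OS resolves the user even
# without a local /etc/passwd entry:
getent passwd hdfs
id hdfs

# Hadoop's view of the same user's groups (the default shell-based
# mapping mirrors the OS answer):
hdfs groups hdfs
```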
12-24-2016
09:34 AM
The problem occurs when trying to set the scheme to http for the Solr znode in ZooKeeper: name-node-1:2181/solr. Make sure that ZooKeeper is running and that "name-node-1" can be resolved to an IP. You might check the ZooKeeper log to see if there were any errors; if not, it is more likely that a connection could not be made to ZooKeeper to update the scheme property. Regards, Ben
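To verify both conditions from the host running the command, something like this should do (the hostname matches your example):

```
# Does name-node-1 resolve to an IP?
getent hosts name-node-1

# Does ZooKeeper answer on that address? "srvr" prints server stats.
echo srvr | nc name-node-1 2181
```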