Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 19088 | 03-03-2020 08:12 AM
 | 10272 | 02-28-2020 10:43 AM
 | 3087 | 12-16-2019 12:59 PM
 | 2382 | 11-12-2019 03:28 PM
 | 4175 | 11-01-2019 09:01 AM
06-27-2018
08:48 PM
Hi @balusu,

Actually, the relevant message in your log snippet is:

18/06/28 02:20:56 INFO util.KerberosName: No auth_to_local rules applied to exampleuser@example.com

This message appears when no rule in the "hadoop.security.auth_to_local" property in the server's core-site.xml matched the principal "exampleuser@example.com". It is not a Kerberos error; rather, it is logged by hadoop code when hadoop tries to map your principal to a unix user name.

Generally, if you are attempting to act on a hadoop service with a user who is not in the hadoop cluster's Kerberos realm, you need to make sure that the hadoop.security.auth_to_local property includes rules that match the principal and convert the string to just a username. Cloudera Manager will create such rules for you if you add the other realm to the "Trusted Realms" (or "Trusted Kerberos Realms") configuration. See: https://www.cloudera.com/documentation/enterprise/5-14-x/topics/cm_sg_kerbprin_to_sn.html

Note that you will need to deploy client configuration and restart the cluster after making this change.

-Ben
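As an illustration only (the realm EXAMPLE.COM here is a stand-in, not taken from your cluster), a rule that strips the realm from one-component principals of a trusted realm could look like this in core-site.xml:

```xml
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[1:$1@$0](.*@EXAMPLE\.COM)s/@.*//
    DEFAULT
  </value>
</property>
```

The `[1:$1@$0]` part rewrites a one-component principal as user@realm, the parenthesized regex selects which principals the rule applies to, and the trailing `s/@.*//` strips everything from the "@" onward, leaving just the username.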
06-21-2018
12:33 PM
@PeterLuo,

The "config.zip" error can be ignored. It is expected due to a cosmetic bug that is fixed in Cloudera Manager 5.13 and later.

The first thing we need to know is how you determined the region server did not start. What are you seeing when you try to start HBase? Also, is it just one Region Server, or are other HBase roles also not starting?

The best place to start troubleshooting a Cloudera Manager initiated start of a role is to review:

- the agent logs on the host where the region server failed to start
- the stderr.log and stdout.log files for the process, which will give clues about any issues the supervisor is having starting the process

Here is the general process of how a service starts:

- You click Start in CM
- CM tells the agent to heartbeat
- The agent sends a heartbeat to CM
- CM replies with a heartbeat response
- The agent compares what it has running with what CM says should be running (and decides what to do to match what CM says)
- The agent retrieves the files necessary to start the process from CM and lays down the files
- The agent signals the supervisor process
- The supervisor checks to see if processes need to stop or start
- If starting, the supervisor executes CM shell scripts to start the process
- Once the shell script completes, the process runs as a child process of the supervisor

Hopefully that clarifies the process so you can start troubleshooting. The process's stdout.log file (in the process directory's logs directory) is a good place to start. You can view the logs in Cloudera Manager by going to the role's status page and clicking the "Log Files" drop-down.
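If you prefer to inspect those files from a shell on the affected host, something like the following may help. This is a sketch: the paths are Cloudera Manager's default install locations, and "1234-hbase-REGIONSERVER" is a placeholder directory name you should replace with the real one on your host.

```shell
# Cloudera Manager default locations (assumptions; adjust for your install)
AGENT_LOG=/var/log/cloudera-scm-agent/cloudera-scm-agent.log
PROC_DIR=/var/run/cloudera-scm-agent/process

# The agent log records heartbeats and process launch attempts
sudo tail -n 100 "$AGENT_LOG"

# One process directory is created per role start attempt; newest first
sudo ls -t "$PROC_DIR" | head -5

# "1234-hbase-REGIONSERVER" is a placeholder; use the real directory name
sudo tail -n 50 "$PROC_DIR/1234-hbase-REGIONSERVER/logs/stderr.log"
```

The stderr.log in the process directory usually contains the shell-level failure (bad classpath, permission error, immediate JVM exit) when a role dies before it can write its own role log.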
06-20-2018
10:19 AM
1 Kudo
@proxim,

Cloudera Manager is a graphical user interface intended for use in a browser, so I recommend you try connecting in a browser. Alternatively, you can use an API call to determine whether the server is responding:

# curl -u admin:admin http://<cm_host>:<cm_port>/api/version

This should return the version number of the CM API supported by CM.
06-20-2018
10:06 AM
@prabhat10,

I'm sorry that I couldn't reply sooner. The DnsTest command is what Cloudera Manager runs to check your host and canonical host names. The problem is that your hostname and canonical name are not the same:

"hostname": "instance-1",
"canonicalname": "instance-1.c.sacred-evening-197206.internal"

My guess is this has to do with your /etc/hosts file or general DNS resolution. Have you defined the host in /etc/hosts? If so, make sure it is in the following format:

IP FQDN

The FQDN must come first.
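For example, an /etc/hosts entry for your host could look like the line below. The IP address is a placeholder (substitute your instance's real address); the names are the ones from your DnsTest output, with the FQDN listed first and the short name as an alias:

```
192.0.2.10   instance-1.c.sacred-evening-197206.internal   instance-1
```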
06-20-2018
09:42 AM
@sorabhj412,

As described here, the "replications" API call is only available starting with API v11:

https://cloudera.github.io/cm_api/apidocs/v19/path__clusters_-clusterName-_services_-serviceName-_replications.html

You should be able to get this to work if you change your URL to v11, or to as high a version as your CM supports:

https://hostname:7183/api/v11/clusters/cluster/services/hdfs/replications/

To see the maximum API version your Cloudera Manager supports, run:

curl -su <user> https://hostname:7183/api/version

NOTE: you also seem to have a space character after your "cluster" cluster name: "clusters/cluster%20/". I believe that may also cause you problems. To see what cluster name you actually have, check with:

curl -su <user> https://hostname:7183/api/v11/clusters
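To illustrate why that "%20" matters (a sketch; the cluster name below is just the one quoted from your URL), note how a trailing space in a cluster name percent-encodes into the request path, producing a path that no longer matches a cluster named plain "cluster":

```shell
# A cluster named "cluster " (with a trailing space) percent-encodes
# the space as %20 when placed into a URL path.
NAME="cluster "
ENCODED=$(printf '%s' "$NAME" | sed 's/ /%20/g')
echo "/api/v11/clusters/${ENCODED}/services/hdfs/replications"
# prints: /api/v11/clusters/cluster%20/services/hdfs/replications
```

If the cluster listing shows the name really does end in a space, either include the %20 consistently or rename the cluster to drop the trailing space.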
06-13-2018
09:15 AM
@don1123,

API calls cannot use SAML for authentication, so a "local" database login must occur for your authentication to succeed. This means you need a user/password created in Cloudera Manager in order to use the API.

You can test your "local" (non-SAML) authentication in Cloudera Manager by navigating to the following URL:

http://cm_host:cm_port/cmf/localLogin

This will bypass SAML authentication and allow you to log in as a user who exists in the Cloudera Manager database.
06-12-2018
08:54 AM
@don1123,

The most direct explanation is that your user and password strings fail authentication. Can you check that you can log in with "admin" as your username and password via the Cloudera Manager user interface? An HTTP 401 ("Unauthorized") means authentication is required, and that is what you would get if the user/password were incorrect.
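For reference, what "curl -u" sends is just an HTTP Basic Authorization header. This sketch (using the example admin/admin pair, not real credentials) shows the header the server validates before deciding to return a 401:

```shell
# curl -u admin:admin base64-encodes "user:password" and sends it as a
# Basic Authorization header; the server checks this value and returns
# HTTP 401 when it does not match a known user.
CREDS=$(printf '%s' 'admin:admin' | base64)
echo "Authorization: Basic ${CREDS}"
# prints: Authorization: Basic YWRtaW46YWRtaW4=
```

Because the credentials travel base64-encoded (not encrypted), a typo anywhere in the user:password string produces a different header and therefore a 401, which is why verifying the exact strings in the browser login is the quickest check.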
05-30-2018
09:59 AM
@jquevedo,

No problem. The only answer we can give without knowing the details of the proposed use cases is "it depends". It depends on how the hadoop cluster realm and user realms are configured, and on what client OS your users are on when they access hadoop resources.
05-29-2018
12:55 PM
Hi @jquevedo,

On linux/unix, the configuration is the same. On Windows, it depends on your setup.

What operating systems are your users on, and how are they accessing hadoop (web browser, command line, third-party tool)? Are your users logging into a domain (if they are on Windows)? If so, is that domain's realm the same one as your hadoop realm?

In order for us to answer your questions more precisely, we'll need to understand your intended use case. Basically, though, there is no difference in how a client is configured and what the requirements are. Kerberos is a protocol, so, regardless of the type of server, clients' access to the KDC should be relatively the same.
05-21-2018
10:09 AM
Hi @prabhat10,

Since this is an old thread, we should make sure you are indeed seeing the same issue that is described. Please let us know what you are doing when you see the problem and what the problem is.

If you are seeing the message that is in the initial comment of this thread, then to get some more insight, I recommend running the DnsTest manually like this on a cluster host (make sure java is in your path, or specify the full path to the java binary):

# java -classpath /usr/share/cmf/lib/agent-5.*.jar com.cloudera.cmon.agent.DnsTest

The output will be JSON like this:

{"status": "0", "ip": "122.168.100.211", "hostname": "host.example.com", "canonicalname": "host.example.com", "localhostDuration": "4", "canonicalnameDuration": "0"}

This might shed some light on what is going on. I'd check your hosts file or DNS, depending on how host resolution is configured on that host.

Regards,
Ben