Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 23045 | 03-03-2020 08:12 AM
 | 13398 | 02-28-2020 10:43 AM
 | 3875 | 12-16-2019 12:59 PM
 | 3495 | 11-12-2019 03:28 PM
 | 5293 | 11-01-2019 09:01 AM
02-05-2016
03:12 PM
What happens or what error occurs when you run "show databases" in the Hue Impala App?
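If it helps to compare, the same statement can be run directly against an Impala daemon with impala-shell (the hostname below is a placeholder):

    impala-shell -i impalad-host.example.com -q 'show databases'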
12-23-2015
11:07 PM
The cause of the failure may not be an OutOfMemoryError exception. I believe you solved the issue in another post by opening up permissions in /var/, as the Event Server files could not be created. If not, let us know.
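A quick way to verify, assuming the Event Server uses its default data directory (yours may differ if it was reconfigured):

    ls -ld /var/lib/cloudera-scm-eventserver
    # The directory should be owned and writable by the cloudera-scm user, e.g.:
    # chown -R cloudera-scm:cloudera-scm /var/lib/cloudera-scm-eventserver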
12-21-2015
09:42 AM
1 Kudo
When you use Cloudera Manager to start the daemons, Cloudera Manager creates a command to be run, and the agent on the host where the service will start picks it up when it heartbeats into Cloudera Manager. The heartbeat response from Cloudera Manager tells the agent that a service needs to start; it is then up to the agent to start the process. If you see no /var/run/... directory being created, something likely went wrong before that was supposed to occur. Some places to check:
- The command in Cloudera Manager: if the command failed to be processed due to a timeout, you should see an error in the command itself. What do you see there?
- The Cloudera Manager log (/var/log/cloudera-scm-server/cloudera-scm-server.log): what happens in the log when you try to start the daemon?
- The agent log (/var/log/cloudera-scm-agent): on the host where the Impala daemon should be running, check the agent log for clues. Did the agent even know that it should start the service? Are there errors or exceptions during heartbeat attempts?
That's a good place to start.
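A quick way to scan the agent log for recent problems (the paths below are the defaults; adjust them if your installation logs elsewhere):

    # On the host where the Impala daemon should run
    tail -n 200 /var/log/cloudera-scm-agent/cloudera-scm-agent.log
    grep -iE 'error|exception|heartbeat' /var/log/cloudera-scm-agent/cloudera-scm-agent.log | tail -n 50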
12-16-2015
09:02 PM
You need to allow Cloudera Manager to detect and download the parcel from a repository. Please review the instructions here if you cannot use the public repository to download the parcels: http://www.cloudera.com/content/www/en-us/documentation/enterprise/latest/topics/cm_ig_create_local_parcel_repo.html

Cheers,
Ben
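As a rough sketch of what the linked instructions set up (the directory and port below are placeholders; follow the documentation for the authoritative steps):

    # Serve a directory containing the .parcel, .parcel.sha and manifest.json files
    cd /opt/cloudera/parcel-repo-local
    python -m SimpleHTTPServer 8900        # or: python3 -m http.server 8900
    # Then add http://<this-host>:8900/ to the "Remote Parcel Repository URLs"
    # setting in Cloudera Manager's parcel configuration and check for new parcels again.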
09-29-2015
04:13 PM
1 Kudo
Hi Andy,

The problem is that your browser will attempt an HTTP GET, while the "deactivate" command requires a POST. I recommend using curl or some other tool to issue the command as a POST, for example:

curl -X POST ...

Ben
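A fuller, hedged example (the credentials, hostname, API version, cluster name, product, and parcel version below are all placeholders; adjust them to match your deployment):

    curl -u admin:admin -X POST \
      'http://cm-host.example.com:7180/api/v10/clusters/Cluster1/parcels/products/CDH/versions/5.4.7-1.cdh5.4.7.p0.3/commands/deactivate'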
09-01-2015
09:22 AM
Can you clarify what problem you are seeing?

If CM cannot communicate with the agent, verify that you can use curl or a similar tool to connect to port 9000 on the remote node, using the hostname for that host as shown in Cloudera Manager. Cloudera Manager must be able to make a heartbeat request to that agent's host on port 9000 in order for the host to be considered healthy.

If the agent cannot heartbeat to Cloudera Manager, verify that the "server_host" setting in /etc/cloudera-scm-agent/config.ini shows Cloudera Manager's hostname, and that you can connect from the agent's host to Cloudera Manager's host on port 7182 (this is the port the agent uses to send heartbeats to CM).

In general, each entry in the hosts file should appear as:

IP FQDN hostname

If you make any changes to the agent configuration or the hosts file, restart the agent:

service cloudera-scm-agent restart

If the above checks out and the hostnames reported by your agents are accessible to Cloudera Manager, then the hostnames should be less relevant at this stage (until you get to Kerberos, SSL, etc.).

If you are having a problem, please make sure to include exactly what you are seeing and, if possible, a snippet from the log files:

CM: /var/log/cloudera-scm-server/cloudera-scm-server.log
Agent: /var/log/cloudera-scm-agent/cloudera-scm-agent.log

Two very common factors that can block communication between CM and the agent are the firewall (iptables) and SELinux. As far as I know, SELinux is not installed by default on Ubuntu.

Ben
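A quick set of checks, with placeholder hostnames (substitute your own):

    # From the Cloudera Manager host: can we reach the agent on port 9000?
    curl http://agent-host.example.com:9000/

    # From the agent host: can we reach Cloudera Manager on port 7182?
    nc -vz cm-host.example.com 7182

    # On the agent host: confirm it points at the right server
    grep server_host /etc/cloudera-scm-agent/config.ini

    # Example /etc/hosts entry (IP, FQDN, then short hostname):
    # 192.168.1.10   agent-host.example.com   agent-host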
08-28-2015
09:23 AM
1 Kudo
Hello. The clock/NTP health check is executed by each agent running on the nodes in your cluster. The command executed is:

ntpdc -np

A timeout of 2 seconds is used, so if the NTP client does not return within 2 seconds, the health check will fail. If there is a result, the agent script parses the result text and returns a metric that includes the clock offset. This is sent to the Host Monitor management service for processing.

You have two options here. If you are convinced there are no problems, you can turn off the Clock Offset Thresholds health check or adjust it as necessary in the Cloudera Manager management services. Or, if you wish to troubleshoot, check the /var/log/cloudera-scm-agent/cloudera-scm-agent.log file for clues. Search in that file for "ntpdc". If there are any errors running the command, a stack trace will be provided.

The agent merely parses the ntpdc output, so assuming your output looks something like this:

ntpdc -np
remote          local          st poll reach delay    offset   disp
=======================================================================
*132.163.4.101  10.17.81.194   1  1024 377   0.02972  0.001681 0.13664
=198.55.111.5   10.17.81.194   2  1024 377   0.01395  0.002177 0.13667
=50.116.55.65   10.17.81.194   2  1024 377   0.07263  0.001220 0.12172

the script will look for a line that starts with an "*" character. In our example that is:

*132.163.4.101  10.17.81.194   1  1024 377   0.02972  0.001681 0.13664

Then it will take the 'offset' column. This value is returned to the Host Monitor, which will filter the metric through your health check configuration to decide whether it warrants an alert.

Lastly, I am not aware of anything that changed in the offset health check between CM 5.3 and 5.4, so I would recommend troubleshooting this to figure out why the clock is offset. Timing is important in Hadoop, so it is worth a look.

Regards,
Ben
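To reproduce roughly what the agent does, you can pull the offset of the synced peer (the line marked with "*") yourself; this is just a sketch, not the agent's actual parser:

    timeout 2 ntpdc -np | awk '/^\*/ {print $7}'    # prints the 'offset' column of the "*" line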
08-23-2015
09:22 AM
If this solution did not work for you, what have you tried to do to fix the issue? The "Exhausted available authentication methods" exception indicates that there is a misconfiguration on the host whereby the specified user cannot authenticate via SSH. Try using SSH from the command line of another host, with the same user you are entering in the installation wizard, to authenticate to one of the nodes you are attempting to add to the cluster.
- If you are using "root": ssh as "root" is disabled by default on some OSes, so you might check that.
- If you are using a non-root user: verify that passwordless sudo works for that user; you might need to configure your sudoers with something like "userid ALL=(ALL) NOPASSWD:ALL".
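A quick hedged check (the user and hostname below are placeholders):

    # From another host, try to authenticate the same way the wizard will,
    # and confirm passwordless sudo in the same step:
    ssh someuser@new-node.example.com 'sudo -n true && echo "passwordless sudo OK"'
    # If needed, grant passwordless sudo via visudo rather than editing /etc/sudoers directly.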
08-23-2015
09:14 AM
If the solution did not work for you, we'll need more information before we can help. What are you trying to do, what goes wrong (include specific log messages if possible), and what have you tried to do to fix the issue? The "Exhausted available authentication methods" exception indicates that there is a misconfiguration on the host whereby the specified user cannot authenticate via SSH.

-Ben
08-05-2015
08:56 AM
Hadoop in general expects that your hostnames and domain names are all lowercase. When Kerberos is introduced, this becomes important. While it is possible to override this behavior (of expecting lowercase) with manual configuration, I recommend ensuring via /etc/hosts or DNS that your host and domain names are lowercase. After that is corrected, regenerate credentials and that should correct the problem.

Regards,
Ben
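A quick way to check what each host reports (the /etc/hosts entry shown is a placeholder):

    hostname        # short hostname, should be all lowercase
    hostname -f     # fully qualified name, should be all lowercase

    # Example /etc/hosts entry:
    # 192.168.1.20   datanode01.example.com   datanode01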