Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 19680 | 03-03-2020 08:12 AM |
| | 10674 | 02-28-2020 10:43 AM |
| | 3212 | 12-16-2019 12:59 PM |
| | 2554 | 11-12-2019 03:28 PM |
| | 4346 | 11-01-2019 09:01 AM |
08-24-2018
06:10 PM
1 Kudo
@DaveO, The exception means that the agent cannot log in to the supervisor; for some reason, the agent's password does not match that of the running supervisor. I wonder if this host already had an agent running when you tried to install, or something like that. To correct this, check for running supervisor processes, for example with "ps aux | grep supervisor | grep agent", and kill them if there are any. When no supervisor processes are running, try starting the agent again and see whether it starts and connects to a new supervisor.
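The check-and-kill steps above can be sketched as a short shell sequence (the agent service name is an assumption based on a typical Cloudera Manager install; adjust for your environment):

```shell
# List any supervisor processes tied to the agent; '|| true' keeps the
# pipeline from failing when nothing matches.
ps aux | grep supervisor | grep agent | grep -v grep || true

# If any were listed, stop them by PID, e.g.:
# kill <PID>

# With no supervisor processes left, restart the agent
# (service name assumed; typical for Cloudera Manager installs):
# service cloudera-scm-agent restart
```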
08-24-2018
05:53 PM
2 Kudos
@balaganAtomi, The link is good now. There may have been a brief issue... Let us know if it is still not working for you.
08-22-2018
11:43 PM
@vijithv, First, firewalls can easily block UDP while allowing TCP; I mentioned that as a possible cause. Also, depending on how your /etc/krb5.conf is configured, a different KDC could have been contacted. You can see clearly in the UDP failure that there is a socket timeout for each attempt to connect to the KDC. This is a failure on the networking side, where a client cannot connect to a server. Since no connection was ever made via UDP, there was no chance for the client to know to try TCP. I believe that switching is triggered by a KRB5KRB_ERR_RESPONSE_TOO_BIG response from the KDC, so if no response arrives, no switch to TCP will occur.

If you really want to get to the bottom of this, reproduce the problem while capturing packets via tcpdump, like this:

# tcpdump -i any -w ~/kerberos_broken.pcap port 88

Then, with the problem fixed, reproduce again while capturing packets:

# tcpdump -i any -w ~/kerberos_fixed.pcap port 88

Use Wireshark (it does a great job of decoding Kerberos packets) and you will be able to see the entire interaction. This will show us information to help determine the cause. Wireshark is here: https://www.wireshark.org/
08-22-2018
11:28 PM
2 Kudos
@pollard, You cannot use the CM API to delete Hue users, and I am not certain whether cm_api is compatible with Python 3. If Python is not working for you, you can try using the REST API: https://cloudera.github.io/cm_api/apidocs/v5.15.0/path__users_-userName-.html

Here is an example of how to delete a CM user named "deleteme":

# curl -u admin:admin -H "Content-Type: application/json" -X DELETE http://cm_host.example.com:7180/api/v19/users/deleteme

For Hue users, there is no out-of-the-box way to delete them. You could probably figure out how to do it with curl, but here is one way that I know of that could be scripted:

# export JAVA_HOME=/usr/java/jdk1.8.0_152
# export HUE_CONF_DIR=/var/run/cloudera-scm-agent/process/`ls -lrt /var/run/cloudera-scm-agent/process/ | awk '{print $9}' | grep HUE_SERVER | tail -1`
# export HADOOP_CREDSTORE_PASSWORD=`grep environment $HUE_CONF_DIR/supervisor.conf | sed "s/.*HADOOP_CREDSTORE_PASSWORD='\([^']*\).*/\1/"`
# /opt/cloudera/parcels/CDH/lib/hue/build/env/bin/hue shell << EOF
> from django.contrib.auth.models import User
> user = User.objects.get(username="wgnmaxgcuj")
> user.delete()
> EOF

In the above, use your own Java location, and pass in (or substitute) the username of the user you want to delete in place of "wgnmaxgcuj". The above also assumes that you are using Cloudera Manager and parcels.
08-22-2018
10:48 PM
1 Kudo
@sbpothineni, You can find more information about the Cluster Utilization Report here: https://www.cloudera.com/documentation/enterprise/latest/topics/admin_cluster_util_report.html and about Reports Manager reports here: https://www.cloudera.com/documentation/enterprise/5-15-x/topics/cm_dg_reports.html. The Cluster Utilization Report shows metrics for YARN and Impala jobs/queries; the reports from Reports Manager show information about HDFS.
08-22-2018
10:30 PM
@DanielWhite, Also, if a YARN job has launched, you can review the progress of its containers as with any YARN job; they may provide some insight into what is slow. To help pinpoint where the problem is occurring, you can try creating a new replication schedule that replicates files to/from the same cluster. This will help you see whether the problem involves the other cluster.
08-22-2018
10:17 PM
@vijithv, Hard to say, but the timeout indicates that the client could not reach the KDC via UDP from that host. It could be a firewall, DNS, or something similar. UDP has packet-size restrictions that often do not permit Active Directory tickets to be issued; generally, the KDC will tell the client this and the client will retry over TCP, but on your one host it seems a connection to the KDC cannot even be made. Firewall rules are certainly suspect, but a number of things could cause this. Always using TCP is fine.
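If you do want clients to use TCP from the start, MIT Kerberos supports the udp_preference_limit setting in /etc/krb5.conf; a minimal sketch (a standard MIT krb5 option, but verify the behavior against your krb5 version's documentation):

```
[libdefaults]
    # Setting the limit to 1 makes the client prefer TCP over UDP for
    # KDC traffic, sidestepping UDP packet-size problems with large
    # Active Directory tickets.
    udp_preference_limit = 1
```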
08-22-2018
04:57 PM
@DanielWhite, There is a Cloudera Manager command and a YARN job.
- To abort the command, you can click the Command Details button under the replication schedule with a running command.
- If the YARN job continues to run, you can kill it like any other YARN job.
Aborting/killing is safe: the next time you run replication, any files that were already copied will be skipped, and any that were not copied will be copied.
08-22-2018
04:43 PM
1 Kudo
@AKB, Find what IP address you can use to access a DataNode host. Map that to the hostname of the host that is returned by "hostname -f" on that host. Since the NameNode returns the DataNode hostname, you need to be sure your edge host can resolve that hostname to an IP that is reachable by your client.
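To verify that mapping from the edge host, you can check that the DataNode's hostname (as returned by "hostname -f" on the DataNode) resolves to a reachable IP. A quick sketch, using a placeholder hostname:

```shell
# Placeholder: substitute the value of "hostname -f" from your DataNode.
# Using localhost here only to show the command shape.
DN_HOST="localhost"

# getent consults /etc/hosts as well as DNS, so it reflects exactly
# what the HDFS client on this host will see.
getent hosts "$DN_HOST"
```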
08-22-2018
04:35 PM
@AKB, It looks like your client cannot access the DataNodes to write out blocks. You could edit the /etc/hosts file on the client to map the cluster's private hostnames to publicly reachable IPs (for all hosts in the cluster). That might work. There may be a more elegant solution, but it should get you by if the IPs resolve OK.
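As a sketch, the client's /etc/hosts entries might look like this (the hostnames and IPs below are invented placeholders; use your cluster's reachable IPs and the hostnames reported by "hostname -f" on each node):

```
# Map the cluster's internal hostnames to addresses this client can reach
192.0.2.10   master1.cluster.internal
192.0.2.11   datanode1.cluster.internal
192.0.2.12   datanode2.cluster.internal
```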