Member since: 01-18-2016
Posts: 32
Kudos Received: 8
Solutions: 2
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2866 | 05-11-2016 04:30 AM |
 | 1355 | 04-12-2016 09:10 PM |
05-11-2016
04:30 AM
1 Kudo
@Rich Raposa Sure, thank you. Can you review the exam objectives section? 'Candidates' is missing the letter 'C'.
05-03-2016
05:53 AM
3 Kudos
There are multiple webpages available that contain information about the HDPCD exam; some of them are outdated and need to be removed or updated with a link to the current exam details. The most up-to-date page is the following, I believe. Please add a notice to the following page saying that the information it contains is outdated, and point readers to the updated location. There is also some information available on this page, but the link is not working and needs to be updated as well. Thanks!
04-14-2016
06:22 AM
@Sridhar Bandaru It would be a good idea to create a backup of the Ambari database before proceeding: http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.0/bk_ambari_reference_guide/content/_back_up_current_data.html Please check the response you get when you run the command below.
# curl -u admin:admin -i -H 'X-Requested-By: ambari' -X GET http://sandbox.hortonworks.com:8080/api/v1/clusters/Sandbox/services/FALCON
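In case it helps, here is a minimal backup sketch. It assumes Ambari is using the default embedded PostgreSQL instance with the default 'ambari' database and user; adjust the names if your setup uses MySQL, Oracle, or custom credentials (the linked doc covers the exact steps for each backend).
# pg_dump -U ambari ambari > /tmp/ambari-db-backup.sql
The dump can be restored later with psql if you need to roll back.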
04-13-2016
04:53 AM
@saurabh saurabh In the ResourceManager UI, navigate to the specific application's link. On the application page, you should see links to all of the individual application attempts. If you open the logs for the individual task attempts, you will see both the standard output and the standard error logs.
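If it is easier, the same logs can also be pulled from the command line with the yarn CLI, assuming log aggregation is enabled on the cluster. The application ID below is just a placeholder; use the ID shown in the ResourceManager UI.
# yarn logs -applicationId application_1460000000000_0001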
04-12-2016
09:10 PM
@saurabh saurabh You can review job logs from the ResourceManager UI: Ambari > YARN > Quick Links > ResourceManager UI
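As a side note, the same application list is also exposed through the ResourceManager REST API, which can be handy for scripting. The host and port below are the Sandbox defaults and may differ in your cluster.
# curl http://sandbox.hortonworks.com:8088/ws/v1/cluster/apps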
04-11-2016
05:34 PM
@Abiel Flrs Check the OS-level user/group mapping as well.
# id hue
# grep hue /etc/passwd
# grep hue /etc/group
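If the OS-level mapping looks correct, it may also be worth comparing it with the groups Hadoop resolves for the same user (this assumes the issue is with HDFS/Hadoop group resolution rather than the OS itself).
# hdfs groups hue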
04-11-2016
04:43 PM
@Amit Sharma
1. Confirm from Ambari that the DataNode process is down.
2. Ensure there are no stale DataNode processes running.
# ps -ef | grep datanode | grep -v grep
3. Check that no other program/service is listening on port 50010.
# netstat -anp | grep '0.0.0.0:50010'
4. Check that there is no leftover PID file; remove it if it exists (see the cleanup sketch after this list).
# cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid
5. While starting the DataNode from Ambari, tail the log file to review the startup messages and any related errors/warnings.
# tailf /var/log/hadoop/hdfs/hadoop-hdfs-datanode-[hostname].log
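For steps 2 and 4, a minimal cleanup sketch is below. The PID value is a placeholder; use the one reported by ps, and only kill the process if it really is a stale DataNode.
# kill <pid-from-ps-output>
# rm -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid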