Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 789 | 06-04-2025 11:36 PM |
|  | 1370 | 03-23-2025 05:23 AM |
|  | 681 | 03-17-2025 10:18 AM |
|  | 2465 | 03-05-2025 01:34 PM |
|  | 1607 | 03-03-2025 01:09 PM |
08-20-2017
12:30 PM
@Anup Shirolkar Good progress! Can you paste the screenshot here? Now that the first problem is solved, I would advise you to accept my answer and open a new thread for the YARN UI issue; otherwise this thread will become too long to follow. Thanks
08-20-2017
11:58 AM
@Kishore Kumar I am happy you can smile and make progress with your project! Can you accept my answer? It is also advisable to open a new thread for the SmartSense view issue. If you are using admin as the login for the SmartSense view, make sure you have done the following.
Add these two property settings in core-site.xml (you can find it in the Ambari HDFS config section):
hadoop.proxyuser.admin.hosts=*
hadoop.proxyuser.admin.groups=*
As the root user, switch to hdfs:
# su - hdfs
Create the admin user's directory in HDFS:
$ hdfs dfs -mkdir /user/admin
Set ownership on the admin user's directory:
$ hdfs dfs -chown admin:hdfs /user/admin
Please revert
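As a minimal sketch of the whole sequence (assuming a standard HDP layout and the admin user; adjust names to your environment), the steps above could be run like this:

```bash
# Run as root on a cluster node.
# 1. In Ambari -> HDFS -> Configs -> Custom core-site, add the proxyuser
#    properties recommended above, then restart HDFS:
#    hadoop.proxyuser.admin.hosts=*
#    hadoop.proxyuser.admin.groups=*

# 2. Create the admin user's home directory in HDFS as the hdfs superuser.
su - hdfs -c "hdfs dfs -mkdir -p /user/admin"
su - hdfs -c "hdfs dfs -chown admin:hdfs /user/admin"

# 3. Verify the directory and its ownership.
su - hdfs -c "hdfs dfs -ls /user"
```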
08-20-2017
09:51 AM
@Kishore Kumar Add these two property settings in core-site.xml (you can find it in the Ambari HDFS config section):
hadoop.proxyuser.hdp.hosts=*
hadoop.proxyuser.hdp.groups=*
As the root user, switch to hdfs:
# su - hdfs
Create the hdp user's directory in HDFS:
$ hdfs dfs -mkdir /user/hdp
Set ownership on the hdp user's directory:
$ hdfs dfs -chown hdp:hdfs /user/hdp
For your information, HDFS is a distributed file system, so needless to say, once the directory is created it is accessible from all the cluster hosts using the hdfs user!
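To confirm the result, a quick check could look like the sketch below (it assumes the hdp user exists locally on the node you run it from):

```bash
# As the hdp user, write a test file into the new home directory, list it, then clean up.
su - hdp -c "hdfs dfs -put /etc/hosts /user/hdp/hosts.test"
su - hdp -c "hdfs dfs -ls /user/hdp"
su - hdp -c "hdfs dfs -rm /user/hdp/hosts.test"
```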
08-20-2017
09:41 AM
@Amithesh Merugu It can be local, or you can upload it to HDFS; to do that you may first need to create your home directory under /user.
As root, switch to the hdfs user:
# su - hdfs
Check the existing directories:
$ hdfs dfs -ls /
Make a home directory for your user (toto):
$ hdfs dfs -mkdir /user/toto
Change ownership:
$ hdfs dfs -chown toto:hdfs /user/toto
Copy your jar to HDFS, assuming the jars are in your local home directory as /home/toto/test.jar. As the hdfs user, while in that directory:
$ hdfs dfs -copyFromLocal test.jar /user/toto
Now you can execute it by passing the paths to the input and output directories in HDFS. Hope that helps
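As a sketch of the full run (the jar name, the WordCount main class, and the input/output paths are placeholders; substitute your own), assuming the input data has also been copied into HDFS:

```bash
# Copy the jar and the input data from the local filesystem into HDFS.
hdfs dfs -copyFromLocal /home/toto/test.jar /user/toto/
hdfs dfs -mkdir -p /user/toto/input
hdfs dfs -copyFromLocal /home/toto/data.txt /user/toto/input/

# Run the job with a local copy of the jar, passing HDFS paths for input and output.
# (hadoop jar expects the jar on the local filesystem; the data lives in HDFS.)
hadoop jar /home/toto/test.jar WordCount /user/toto/input /user/toto/output

# Inspect the result.
hdfs dfs -cat /user/toto/output/part-r-00000
```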
08-20-2017
07:19 AM
@Anup Shirolkar Your /etc/hosts entry looks wrong; it should look like the example below. Never change the first two lines for IPv4 and IPv6:
127.0.0.1 localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.4 hdp25-node1.wulme4ci31tu3lwdofvykqwgkh.bx.internal.cloudapp.net
10.0.0.5 hdp25-node2.wulme4ci31tu3lwdofvykqwgkh.bx.internal.cloudapp.net
10.0.0.6 hdp25-node3.wulme4ci31tu3lwdofvykqwgkh.bx.internal.cloudapp.net
Can you change that on all the hosts and retry? That could be the cause of the lost connection! Why does your repolist output have exclamation marks?
!HDP-2.4
!HDP-UTILS-1.1.0.20
!Updates-ambari-2.2.2.0
Can you copy/paste the contents of the files below?
cat /etc/yum.repos.d/ambari.repo
cat /etc/yum.repos.d/hdp.repo
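For reference, the leading ! in yum repolist output usually just means the cached metadata for that repository has expired. A hedged cleanup sequence, run on each node, would be:

```bash
# Drop stale yum metadata, rebuild the cache, then re-check the enabled repos.
yum clean all
yum makecache
yum repolist enabled
```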
08-19-2017
08:49 PM
@Anup Shirolkar Ambari server log:
2017-08-18 11:17:39,720 [CRITICAL] [HIVE] [hive_server_process] (HiveServer2 Process) Connection failed on host hdp25-node2.wulme4ci31tu3lwdofvykqwgkh.bx.internal.cloudapp.net:10000 (Traceback (most recent call last):
Ensure the ambari-agent is running and the port is free (Ambari Agent Heartbeat) on hdp25-node1.wulme4ci31tu3lwdofvykqwgkh.bx.internal.cloudapp.net.
Copy the Ambari and HDP *.repo files to /etc/yum.repos.d/ on all the other hosts, then confirm the repos are accessible with:
# yum repolist
You should see something like this:
HDP-2.3.2.0 | 2.9 kB 00:00
HDP-UTILS-1.1.0.20 | 2.9 kB 00:00
Updates-ambari-2.1.2.1 | 2.9 kB 00:00
Check that the ambari-agents on these nodes are running; if not, restart them. Ensure the hostname value points to your Ambari server in /etc/ambari-agent/conf/ambari-agent.ini:
[server]
hostname={your-ambari-server}
url_port=8440
secured_url_port=8441
These hosts are not sending heartbeats:
hdp25-node1.wulme4ci31tu3lwdofvykqwgkh.bx.internal.cloudapp.net
hdp25-node2.wulme4ci31tu3lwdofvykqwgkh.bx.internal.cloudapp.net
hdp25-node3.wulme4ci31tu3lwdofvykqwgkh.bx.internal.cloudapp.net
Error:
Caused by: org.apache.ambari.server.HostNotFoundException: Host not found, hostname=
Double check your DNS:
$ hostname -f
The output should be the FQDN. I see a lot of "connection refused" errors in the log; can you ensure the Ambari server can access the other hosts in the cluster?
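A rough checklist for the heartbeat problem, run on each agent node (ambari.server.example is a placeholder for your Ambari server FQDN), might look like:

```bash
# Confirm the node knows its own FQDN (this is what Ambari registers).
hostname -f

# Check which server the agent is pointed at.
grep -A3 '^\[server\]' /etc/ambari-agent/conf/ambari-agent.ini

# Verify the agent can reach the Ambari server registration ports.
nc -vz ambari.server.example 8440
nc -vz ambari.server.example 8441

# Restart the agent and watch its log for registration errors.
ambari-agent restart
tail -f /var/log/ambari-agent/ambari-agent.log
```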
08-19-2017
07:57 PM
@Kishore Kumar Stop HDFS and change the below parameters:
NameNode directories: /var/hadoop/hdfs/namenode
DataNode directories: /opt/mount1/hdp/hadoop/hdfs/data,/opt/mount2/hadoop/hdfs/data
Restart HDFS. To see the user directory, while logged on as root:
# su - hdfs
$ hdfs dfs -ls /user
You should be able to see /user/hdp; the above command should work! Let me know
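If it helps, after the restart you can confirm which directories HDFS actually picked up. A sketch, assuming these Ambari fields map to the standard hdfs-site keys:

```bash
# Show the directories HDFS is configured to use.
hdfs getconf -confKey dfs.namenode.name.dir
hdfs getconf -confKey dfs.datanode.data.dir

# Verify the new paths exist and are owned by hdfs on the relevant hosts.
ls -ld /var/hadoop/hdfs/namenode
ls -ld /opt/mount1/hdp/hadoop/hdfs/data /opt/mount2/hadoop/hdfs/data
```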
08-19-2017
11:28 AM
@Anup Shirolkar There are a couple of things I would like you to clarify.
Check the hosts entry on all the servers; they should be identical:
cat /etc/hosts
Your yum repos: these two files should point to either the public or an internal repo, and they should be available on all the nodes in the cluster:
cat /etc/yum.repos.d/ambari.repo
cat /etc/yum.repos.d/hdp.repo
Make sure that the firewall is disabled on all the hosts and that passwordless SSH is working!
Zip and upload here your ambari-server logs, found in /var/log/ambari-server/
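A small loop that gathers most of this in one pass (a sketch; node1/node2/node3 are placeholder hostnames, and it assumes passwordless SSH as root):

```bash
# Collect the basics from every node for comparison.
for h in node1 node2 node3; do
  echo "===== $h ====="
  ssh -o BatchMode=yes "$h" "cat /etc/hosts;
    ls /etc/yum.repos.d/ambari.repo /etc/yum.repos.d/hdp.repo;
    systemctl is-active firewalld"
done

# Bundle the ambari-server logs for upload.
tar czf ambari-server-logs.tar.gz /var/log/ambari-server/
```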
08-19-2017
09:12 AM
@Anup Shirolkar Can you copy the stuck URL of step 3 (http://xxxxx:8080/step3), open a new window, paste it, and change the step number to 4 (http://xxxxx:8080/step4), then hit continue; the wizard will carry on from there. Please let me know
08-19-2017
04:10 AM
@Kishore Kumar Can you briefly describe the setup of your cluster? Does the Ambari server have the correct IP/host entries in /etc/hosts? Have you disabled the firewall between the cluster nodes? What does your DNS entry in /etc/resolv.conf look like? Does the user hdp have a home directory in HDFS? Run the below command as the hdfs user:
$ hdfs dfs -ls /user
Please revert
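For the DNS side specifically, a quick sanity check on each node (a sketch, nothing cluster-specific assumed) would be:

```bash
# The node's own name should resolve to a fully qualified domain name...
hostname -f

# ...and that FQDN should resolve consistently via /etc/hosts or DNS.
getent hosts "$(hostname -f)"
cat /etc/resolv.conf
```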