Hadoop DataNode JMX metrics pages
Labels: Apache Hadoop
Created 04-03-2017 01:44 PM
Hi,
Are the JMX info pages of the DataNodes deprecated?
I remember there used to be pages at http://<datanode>:50070/.
I also see similar tips for the DataNode port (50075) on this page:
https://www.datadoghq.com/blog/collecting-hadoop-metrics/
On HDP 2.5.3 I can't see them. Am I remembering wrong?
Is it something I can enable or disable?
Created 04-03-2017 05:08 PM
These pages are still present. You can navigate to the jmx servlet on the DataNode web UI.
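The JMX servlet simply returns JSON over HTTP, so you can also fetch and filter it programmatically. A minimal Python sketch, assuming the default non-secure DataNode info port 50075 (adjust host and port for your cluster; the metric value in the offline sample is made up, though the bean name is a real DataNode MBean):

```python
import json
from urllib.request import urlopen

def datanode_beans(payload, prefix="Hadoop:service=DataNode"):
    """Filter a /jmx servlet JSON response down to the DataNode MBeans."""
    return [b for b in json.loads(payload)["beans"]
            if b.get("name", "").startswith(prefix)]

def fetch_jmx(host, port=50075):
    """Fetch the raw JSON from a DataNode's JMX servlet."""
    with urlopen(f"http://{host}:{port}/jmx") as resp:
        return resp.read()

# Offline example with a payload shaped like the servlet's response
# (hypothetical metric value, for illustration only):
sample = json.dumps({"beans": [
    {"name": "Hadoop:service=DataNode,name=JvmMetrics", "MemHeapUsedM": 120.5}
]})
for bean in datanode_beans(sample):
    print(bean["name"])  # prints "Hadoop:service=DataNode,name=JvmMetrics"
```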
Created 04-03-2017 06:45 PM
Hi @Arpit Agarwal,
Thank you for your answer.
You are right: I checked a friend's installation and the page is there.
Unfortunately that URL is not available in my environment:
This site can’t be reached
10.0.109.23 refused to connect.
ERR_CONNECTION_REFUSED
I noticed an abnormal situation in my environment.
I have two DataNode instances running (-Dproc_datanode), one started by root and the other by hdfs.
It should be hdfs, so I killed the one started by root. I expected to see the page, but it wasn't there. I then checked the service in Ambari, and it seems Ambari manages the process started by root: it showed as stopped.
I need to investigate more what is going on 😕
Created 04-03-2017 11:14 PM
You likely have Kerberos enabled. The DataNode process starts as root so it can bind to a privileged port (<1024) for data transfer, and then launches another process as user hdfs. You should not kill either process.
The "refused to connect" error looks like some network connectivity issue in your environment, or you are hitting the wrong port number. See if you can find the correct info port from either configuration or from the DataNodes tab of the NameNode web UI.
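To find the configured info port, you can read `dfs.datanode.http.address` straight out of hdfs-site.xml. A small Python sketch; the default path below is the usual HDP location and an assumption about your setup:

```python
import xml.etree.ElementTree as ET

def datanode_http_address(hdfs_site="/etc/hadoop/conf/hdfs-site.xml"):
    """Return the value of dfs.datanode.http.address, or None if unset."""
    root = ET.parse(hdfs_site).getroot()
    for prop in root.iter("property"):
        if prop.findtext("name") == "dfs.datanode.http.address":
            return prop.findtext("value")
    return None
```

On a node with the Hadoop client installed, `hdfs getconf -confKey dfs.datanode.http.address` gives the same answer without parsing the XML yourself.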
Created 04-04-2017 07:43 AM
The answer appeared in the second comment.
Thanks @Arpit Agarwal!
By the way, the port number is 1022, as set in the /etc/hadoop/conf/hdfs-site.xml file:
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:1022</value>
</property>
Yes, my environment is Kerberos-enabled.
The port was also shown in the NameNode UI > Datanodes tab > "Http Address" column.
