
Serve NameNode UI to localhost only

New Contributor

We need to serve our NameNode UI through a proxy and prevent direct access to the UI. There seem to be two ways to do that, but I'm having difficulty implementing either.

1. Set dfs.namenode.http-address to 127.0.0.1:9870. Unfortunately, it appears that Cloudera's implementation does not allow an address in dfs.namenode.http-address and instead accepts only a port number, which is then combined with the NameNode hostname to build the HTTP address. I attempted to override this via a safety valve configuration, but that appears to be intercepted as well.

2. Configure jetty-web.xml with an IPAccessHandler that only allows connections from the local server. I'm having difficulty finding where in the filesystem to place this so that CDH 6 will pick it up for the NameNode role.
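For reference, the Jetty-side idea in option 2 would look roughly like the sketch below. This is untested and uses class and method names from stock Jetty 9 (IPAccessHandler, setWhite), not anything CDH-specific; whether CDH 6's embedded HttpServer2 loads such a configuration file at all is exactly the open question here.

```xml
<!-- Hypothetical stock-Jetty-9 fragment (not verified against CDH):
     wrap the server's handler chain in an IPAccessHandler that
     whitelists only loopback clients. -->
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <Call name="insertHandler">
    <Arg>
      <New class="org.eclipse.jetty.server.handler.IPAccessHandler">
        <Call name="setWhite">
          <Arg>
            <Array type="java.lang.String">
              <Item>127.0.0.1</Item>
            </Array>
          </Arg>
        </Call>
      </New>
    </Arg>
  </Call>
</Configure>
```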

This is Cloudera Enterprise 6.3.3 on CentOS 7.



1 ACCEPTED SOLUTION

Master Collaborator

Hi @DanHosier,

 

Here is a possible solution to bind the NameNode HTTP server to localhost.

Add the following property to the service-side advanced hdfs-site.xml and restart HDFS:

HDFS Service Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml:

<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>127.0.0.1</value>
</property>

The property then appears in /var/run/cloudera-scm-agent/process/<Latest process of NN>/hdfs-site.xml:

# grep -C2 "dfs.namenode.http-bind-host" hdfs-site.xml
</property>
<property>
<name>dfs.namenode.http-bind-host</name>
<value>127.0.0.1</value>
</property>

Then test with curl:

# curl `hostname -f`:9870
curl: (7) Failed connect to xxxx.xxxx.xxxx.com:9870; Connection refused
# curl localhost:9870
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="REFRESH" content="0;url=dfshealth.html" />
<title>Hadoop Administration</title>
</head>
</html>

Now the web UI is served only on the NameNode's localhost.

However, you will see this alert in Cloudera Manager, because the Service Monitor cannot reach the NameNode web UI:
NameNode summary: xxxx.xxxx.xxxx.com (Availability: Unknown, Health: Bad). This health test is bad because the Service Monitor did not find an active NameNode.

So this solution has a side effect on the Service Monitor, but HDFS itself is running fine.

 

Regards,

Will

If this answer helps, please accept it as the solution and give it a thumbs up.


3 REPLIES


New Contributor

That sounds like a good first step. Is there a way to configure the Service Monitor to use a different port (the port of a proxy)?
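For completeness, the proxy side of the original requirement could look something like the hypothetical nginx fragment below, forwarding an externally reachable port to the loopback-bound UI. nginx itself, the port 8070, and the allowed network are assumptions for illustration, not details from this thread.

```nginx
# Hypothetical reverse-proxy fragment: expose the loopback-bound
# NameNode UI on an external port, where access control can be enforced.
server {
    listen 8070;                        # assumed external proxy port

    location / {
        allow 10.0.0.0/8;               # assumed trusted network
        deny  all;
        proxy_pass http://127.0.0.1:9870;
        proxy_set_header Host $host;
    }
}
```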

New Contributor

Actually, this should work fine regardless. We'll monitor what the NameNode health test would monitor ourselves and simply suppress that health test in Cloudera Manager.
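A self-rolled check along those lines might query the NameNode's JMX servlet on localhost, which stays reachable after the bind change. This is a minimal sketch, not the Service Monitor's actual test; the JMX bean name is from stock Hadoop 3, and the URL/timeout values are assumptions.

```python
import json
import urllib.request

# Assumed local JMX endpoint of the loopback-bound NameNode (Hadoop 3 default port).
JMX_URL = "http://localhost:9870/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"


def namenode_state(jmx_payload: str) -> str:
    """Extract the NameNode state (e.g. 'active'/'standby') from a JMX response."""
    beans = json.loads(jmx_payload)["beans"]
    return beans[0]["State"] if beans else "unknown"


def check_namenode(url: str = JMX_URL, timeout: float = 5.0) -> bool:
    """Return True if the local NameNode responds and reports itself as active."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return namenode_state(resp.read().decode()) == "active"
    except OSError:  # connection refused, timeout, DNS failure, etc.
        return False
```

A cron job or alerting agent on the NameNode host could call check_namenode() and raise its own alarm on False, replacing the suppressed CM health test.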


Thanks Will!