Member since: 07-04-2016
Posts: 40
Kudos Received: 0
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
| 5412 | 09-05-2016 12:05 PM
| 2040 | 09-05-2016 12:03 PM
11-17-2016
08:22 AM
@Mugdha wow... that fixed everything. Can't believe I didn't clue into that. THANK YOU!
11-16-2016
08:45 AM
Hi there, I have a 6-node HDP cluster and am trying to install and use the HttpFS server. I have enabled NameNode HA on my cluster, and I followed this article by @David Streever: HTTPFS - Configure and Run with HDP.

The error I'm getting is below. I have played around with this a ton, and this is actually the third time I've tried installing it on a fresh node. I'm hoping someone can help, as I am stumped.

[root@xxxxxxxxxxxx sbin]# ./httpfs.sh run
/usr/hdp/current/hadoop-httpfs/sbin/httpfs.sh.distro: line 32: /usr/hdp/current/hadoop/libexec/httpfs-config.sh: No such file or directory
/usr/hdp/current/hadoop-httpfs/sbin/httpfs.sh.distro: line 37: print: command not found
/usr/hdp/current/hadoop-httpfs/sbin/httpfs.sh.distro: line 50: print: command not found
Using CATALINA_BASE: /usr/hdp/current/hadoop-httpfs
Using CATALINA_HOME: /etc/hadoop-httpfs/tomcat-deployment
Using CATALINA_TMPDIR: /usr/hdp/current/hadoop-httpfs/temp
Using JRE_HOME: /usr
Using CLASSPATH: /etc/hadoop-httpfs/tomcat-deployment/bin/bootstrap.jar
Nov 16, 2016 8:29:31 AM org.apache.tomcat.util.digester.SetPropertiesRule begin
WARNING: [SetPropertiesRule]{Server} Setting property 'port' to '' did not find a matching property.
Nov 16, 2016 8:29:31 AM org.apache.catalina.core.AprLifecycleListener init
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Nov 16, 2016 8:29:31 AM org.apache.coyote.http11.Http11Protocol init
INFO: Initializing Coyote HTTP/1.1 on http-0
Nov 16, 2016 8:29:31 AM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 511 ms
Nov 16, 2016 8:29:31 AM org.apache.catalina.core.StandardService start
INFO: Starting service Catalina
Nov 16, 2016 8:29:31 AM org.apache.catalina.core.StandardEngine start
INFO: Starting Servlet Engine: Apache Tomcat/6.0.44
Nov 16, 2016 8:29:31 AM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory webhdfs
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Nov 16, 2016 8:29:32 AM org.apache.catalina.core.StandardContext start
SEVERE: Error listenerStart
Nov 16, 2016 8:29:32 AM org.apache.catalina.core.StandardContext start
SEVERE: Context [/webhdfs] startup failed due to previous errors
Nov 16, 2016 8:29:32 AM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory ROOT
Nov 16, 2016 8:29:32 AM org.apache.coyote.http11.Http11Protocol start
INFO: Starting Coyote HTTP/1.1 on http-0
Nov 16, 2016 8:29:32 AM org.apache.catalina.startup.Catalina start
INFO: Server startup in 737 ms
After seeing this, I thought the first problem seemed to be the "Tomcat Native library ... was not found" warning, so I ran yum install tomcat, which did install things. But then I tried to start HttpFS and got all of the same errors, so I may have to undo that now. I'm hoping I'm just missing something simple. Any help is much appreciated.
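Two of the shell errors above ("print: command not found" at lines 37 and 50) come from the distro script calling the ksh built-in `print`, which bash does not have. A minimal sketch of that failure mode and one workaround — note the demo file below is a hypothetical stand-in, not the real httpfs.sh:

```shell
# Hypothetical stand-in script: `print` is a ksh built-in, so running this
# under bash produces "print: command not found" on stderr.
cat > /tmp/demo_httpfs.sh <<'EOF'
echo "Using CATALINA_BASE: $CATALINA_BASE"
print "this line fails under bash"
EOF
bash /tmp/demo_httpfs.sh 2>/dev/null || true   # print line fails, rest runs

# Rewriting `print` to `echo` makes the script bash-compatible:
sed -i 's/^print /echo /' /tmp/demo_httpfs.sh
bash /tmp/demo_httpfs.sh
```

The "httpfs-config.sh: No such file or directory" error on line 32 is a separate problem: the script expects the file under /usr/hdp/current/hadoop/libexec, so it may be worth checking where httpfs-config.sh actually landed under your HDP install before anything else.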
Labels:
- Apache Hadoop
09-12-2016
12:31 PM
It caused such a disaster the last time that I don't want to mess anything up more than it already is... but maybe that's the best option at this point. Thank you for your advice and help @Artem Ervits, I really appreciate it.
09-12-2016
12:23 PM
Oops, thank you! I can repost them with x's instead if you want. I've discovered that part of the issue was actually a disk space issue. Things are much cleaner now that that's been solved, but my secondary NameNode process still says it can't connect.
09-09-2016
08:57 AM
Hi there, all of my hosts and services were working. I then tried to enable NameNode HA, but the wizard failed, so I rolled back using the guide linked here. I'm guessing troubleshooting these errors individually will not get me far, as I'm sure it is not a coincidence that they all happened after this rollback, and only on the secondary NameNode. The other services on this host (Storm, Falcon, YARN, Flume, Hive, etc.) run and aren't crashing. Is there a way to troubleshoot the SNameNode a bit? I'm very new to Hadoop and Ambari.

Service problems are as follows, and they recur exactly the same after clean-up, agent restarts, server restarts, etc.:
- Oozie Server: [Errno 111] Connection refused; startup succeeds
- DataNode: [Errno 111] Connection refused; was up for about 10 minutes, startup fails now
- RegionServer: [Errno 111] Connection refused; startup succeeds, crashed later
- Accumulo TServer: [Errno 111] Connection refused; startup succeeds, but then it fails a few seconds later
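The same [Errno 111] across all four services points one way: nothing is listening on each daemon's port, i.e. the processes are dying (or never binding) rather than there being a network problem. A sketch of what "connection refused" means at the socket level — port 1 here is just an example port assumed to have no listener:

```shell
# Connecting to a port with no listening process yields ECONNREFUSED, which
# Python (and hence the Ambari agent) reports as [Errno 111] Connection refused.
# Port 1 on localhost is assumed closed (hypothetical example port).
if ! timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/1' 2>/dev/null; then
  echo "connection refused: nothing is listening on that port"
fi
```

Given that, the usual next step is each service's own log (to see why the process exited) rather than network troubleshooting.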
09-06-2016
09:24 AM
@lraheja Oh, silly me! That fixed everything and now my namenode is working!!!! Thank you sooo much for your help.
09-06-2016
07:53 AM
Hi @lraheja, thanks for your response. I ran the command you suggested, and the result is the same error. I had restarted all of my instances and stopped all services again. Is there something else I could do in addition to this? The stdout for the operation also shows the following attempt printed ten times; maybe that's the issue? I turned safe mode off using the same command but with "leave" instead of "get".

2016-09-06 07:35:08,376 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://abc.xyz.com -safemode get | grep 'Safe mode is OFF'' returned 1.
And here is the error again: Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 408, in <module>
NameNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 103, in start
upgrade_suspended=params.upgrade_suspended, env=env)
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 212, in namenode
create_hdfs_directories(is_active_namenode_cmd)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 278, in create_hdfs_directories
only_if=check
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 463, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 460, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 246, in action_delayed
main_resource.resource.security_enabled, main_resource.resource.logoutput)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 135, in __init__
security_enabled, run_user)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/namenode_ha_utils.py", line 167, in get_property_for_active_namenode
if INADDR_ANY in value and rpc_key in hdfs_site:
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 81, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'dfs.namenode.http-address' was not found in configurations dictionary!
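The retry line in stdout is Ambari polling the NameNode until it reports that safe mode is off: it runs dfsadmin and greps for the literal string, retrying whenever grep returns 1. A sketch of that check, simulated without a cluster — the `status=` assignment below stands in for the real `hdfs dfsadmin ... -safemode get` call quoted in the post:

```shell
# Stand-in for: /usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin \
#   -fs hdfs://abc.xyz.com -safemode get
status="Safe mode is OFF"

# Ambari's check: grep exits 0 only when the string matches; otherwise the
# wrapper sees return code 1 and retries after 10 seconds.
if echo "$status" | grep -q 'Safe mode is OFF'; then
  echo "check passed: NameNode is out of safe mode"
else
  echo "check failed: still in safe mode, will retry"
fi
```

Since the final traceback fails on a missing configuration key rather than on safe mode, the retries were likely a symptom of the NameNode not coming up cleanly, not the root cause.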
09-05-2016
12:40 PM
Hi there, I have a fresh installation of HDP 2.3.4 on a 5-node cluster. All of my services were running successfully, with statistics displayed in the widgets, and I had not had any NameNode issues until today. Earlier today I started the "Enable NameNode HA" wizard. It failed at the first step of the installation phase (I think it was the NameNode), and retrying didn't work, but I wasn't able to move forward or back in the process, so I left the wizard and followed https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Ambari_Users_Guide/content/_how_to_roll_back_namenode_ha.html. After completing the entire guide (and I've now gone back and done the whole thing over in case I missed something), I started HDFS (step 1.2.13) and the operation failed for the NameNode. I have no idea what to do! Does anyone recognize this error? Here is the output:

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 408, in <module>
NameNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 530, in restart
self.start(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 103, in start
upgrade_suspended=params.upgrade_suspended, env=env)
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 212, in namenode
create_hdfs_directories(is_active_namenode_cmd)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 278, in create_hdfs_directories
only_if=check
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 463, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 460, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 246, in action_delayed
main_resource.resource.security_enabled, main_resource.resource.logoutput)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 133, in __init__
security_enabled, run_user)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/namenode_ha_utils.py", line 167, in get_property_for_active_namenode
if INADDR_ANY in value and rpc_key in hdfs_site:
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 81, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'dfs.namenode.https-address' was not found in configurations dictionary!
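The last two frames show namenode_ha_utils.py reading `dfs.namenode.https-address` out of hdfs-site and raising Fail when the key is absent, which suggests the HA rollback left hdfs-site.xml without the non-HA NameNode address properties. A hedged sketch of the entries involved — the hostname is a placeholder, and the 50070/50470 ports are assumptions based on common HDP 2.x defaults, not values from this cluster:

```xml
<!-- Hypothetical hdfs-site.xml fragment; host and ports are placeholders. -->
<property>
  <name>dfs.namenode.http-address</name>
  <value>namenode.example.com:50070</value>
</property>
<property>
  <name>dfs.namenode.https-address</name>
  <value>namenode.example.com:50470</value>
</property>
```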
Labels:
- Apache Ambari
- Apache Hadoop
09-05-2016
12:05 PM
I ended up just removing the TServer service from the nodes that were failing. Not really a solution, but the other ones still work fine. Thanks for your help!
09-05-2016
12:03 PM
It was a proxy issue; I restarted everything after configuring the proxy properly, and the widgets worked again.