New Contributor
Posts: 3
Registered: ‎04-29-2014
Accepted Solution

Issue with Log4j.properties file on HDFS starting


Hi Folks,

I'm facing an issue starting HDFS.

I installed CDH5 on 3 servers and made 2 of them DataNodes.

When I try to launch the HDFS service on the DataNode servers, I get the error below (stderr):

 

+ '[' -e /var/run/cloudera-scm-agent/process/317-hdfs-DATANODE/topology.py ']'
+ '[' -e /var/run/cloudera-scm-agent/process/317-hdfs-DATANODE/log4j.properties ']'
+ perl -pi -e 's#{{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/317-hdfs-DATANODE#g' /var/run/cloudera-scm-agent/process/317-hdfs-DATANODE/log4j.properties
++ find /var/run/cloudera-scm-agent/process/317-hdfs-DATANODE -maxdepth 1 -name '*.py'
+ OUTPUT=
+ '[' '' '!=' '' ']'
+ '[' DATANODE_MAX_LOCKED_MEMORY '!=' '' ']'
+ ulimit -l
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ acquire_kerberos_tgt hdfs.keytab
+ '[' -z hdfs.keytab ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = datanode ']'
+ '[' file-operation = datanode ']'
+ '[' bootstrap = datanode ']'
+ '[' failover = datanode ']'
+ '[' transition-to-active = datanode ']'
+ '[' initializeSharedEdits = datanode ']'
+ '[' initialize-znode = datanode ']'
+ '[' format-namenode = datanode ']'
+ '[' monitor-decommission = datanode ']'
+ '[' jnSyncWait = datanode ']'
+ '[' nnRpcWait = datanode ']'
+ '[' monitor-upgrade = datanode ']'
+ '[' finalize-upgrade = datanode ']'
+ '[' mkdir = datanode ']'
+ '[' nfs3 = datanode ']'
+ '[' namenode = datanode -o secondarynamenode = datanode -o datanode = datanode ']'
+ HADOOP_OPTS='-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ export 'HADOOP_OPTS=-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ exec /opt/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/hadoop-hdfs/bin/hdfs --config /var/run/cloudera-scm-agent/process/317-hdfs-DATANODE datanode
log4j:ERROR Could not find value for key log4j.appender.EventCounter
log4j:ERROR Could not instantiate appender named "EventCounter".

I checked the HDFS parcel folders to see whether EventCounter is configured there, and it is...

Do you know how to fix this?

 

Edit: http://pastebin.com/jKwY1JXv

 

BR

 

NicolasY.

 

Posts: 1,491
Kudos: 246
Solutions: 226
Registered: ‎07-31-2013

Re: Issue with Log4j.properties file on HDFS starting

This error is harmless and should not be causing your DataNode any issues. Can you check your DN logs instead for an actual error, if it is not starting up?
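A quick way to pull the latest real errors is a sketch like the one below; it assumes the default CDH log directory (/var/log/hadoop-hdfs), so verify that against the DataNode Log Directory setting in Cloudera Manager:

```shell
# Assumes the default CDH DataNode log directory; adjust the path if your
# "DataNode Log Directory" setting points elsewhere.
DN_LOG=$(ls -t /var/log/hadoop-hdfs/*DATANODE*.log* 2>/dev/null | head -1)
if [ -n "$DN_LOG" ]; then
    # Show the most recent genuine errors, not the harmless log4j warning.
    grep -E 'FATAL|ERROR' "$DN_LOG" | tail -20
else
    echo "No DataNode log found under /var/log/hadoop-hdfs"
fi
```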
Backline Customer Operations Engineer
New Contributor
Posts: 3
Registered: ‎04-29-2014

Re: Issue with Log4j.properties file on HDFS starting


Hi Harsh,

 

Thanks for your support.

 

Please find today's log from one of the two DNs concerned. (Log DN)

 

I can see a FATAL message about a BindException: Cannot assign requested address.

 

Thanks again.

 

BR

 

NicolasY

New Contributor
Posts: 3
Registered: ‎04-29-2014

Re: Issue with Log4j.properties file on HDFS starting

It seems to have been caused by the server name. I changed it and the nodes are working normally.
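For anyone hitting the same thing, a sketch of the usual places to change on a RHEL/CentOS 6 host (the typical base for CDH5); the name dn1.cluster.example.com and the IP are hypothetical, so substitute your own:

```shell
# Hypothetical name/IP for illustration: dn1.cluster.example.com / 192.168.1.11.
# On RHEL/CentOS 6 the pieces to change are:
#
#   /etc/sysconfig/network :  HOSTNAME=dn1.cluster.example.com
#   /etc/hosts             :  192.168.1.11  dn1.cluster.example.com  dn1
#
# Then, as root, apply the name and restart the agent so Cloudera Manager
# picks it up:
#
#   hostname dn1.cluster.example.com
#   service cloudera-scm-agent restart
#
# Finally, confirm the name resolves to a real local address, not 127.0.0.1:
hostname -f 2>/dev/null || hostname
```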

BR

NicolasY.
Posts: 1,491
Kudos: 246
Solutions: 226
Registered: ‎07-31-2013

Re: Issue with Log4j.properties file on HDFS starting

Thanks for following up with your solution, Nicolas! Indeed, a bind(…) syscall will fail if the address it is asked to bind is not found among the local interfaces.
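To check that condition, a sketch like the following (standard Linux tools only) compares what the hostname resolves to against the addresses actually configured on local interfaces:

```shell
# The DataNode binds its listen sockets to whatever its hostname resolves to,
# so that address must belong to a local interface.
FQDN=$(hostname -f 2>/dev/null || hostname)
RESOLVED=$(getent hosts "$FQDN" | awk '{print $1}' | head -1)
echo "Hostname resolves to: ${RESOLVED:-<unresolvable>}"
echo "Local addresses:      $(hostname -I 2>/dev/null)"
# A BindException appears when the resolved address is missing from the local
# list (or is 127.0.0.1 while the service needs to bind a real interface).
```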
Backline Customer Operations Engineer
New Contributor
Posts: 3
Registered: ‎05-07-2017

Re: Issue with Log4j.properties file on HDFS starting

I have the same problem! Where did you change the name of the server? Can you please give more details?

