
Issue with log4j.properties file when starting HDFS

New Contributor

Hi Folks,

I'm facing an issue when starting HDFS.

I installed CDH 5 on 3 servers and set up 2 of them as DataNodes.

When I try to launch the HDFS service on the DataNode servers, I get the error below (stderr):

+ '[' -e /var/run/cloudera-scm-agent/process/317-hdfs-DATANODE/topology.py ']'
+ '[' -e /var/run/cloudera-scm-agent/process/317-hdfs-DATANODE/log4j.properties ']'
+ perl -pi -e 's#{{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/317-hdfs-DATANODE#g' /var/run/cloudera-scm-agent/process/317-hdfs-DATANODE/log4j.properties
++ find /var/run/cloudera-scm-agent/process/317-hdfs-DATANODE -maxdepth 1 -name '*.py'
+ OUTPUT=
+ '[' '' '!=' '' ']'
+ '[' DATANODE_MAX_LOCKED_MEMORY '!=' '' ']'
+ ulimit -l
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ acquire_kerberos_tgt hdfs.keytab
+ '[' -z hdfs.keytab ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = datanode ']'
+ '[' file-operation = datanode ']'
+ '[' bootstrap = datanode ']'
+ '[' failover = datanode ']'
+ '[' transition-to-active = datanode ']'
+ '[' initializeSharedEdits = datanode ']'
+ '[' initialize-znode = datanode ']'
+ '[' format-namenode = datanode ']'
+ '[' monitor-decommission = datanode ']'
+ '[' jnSyncWait = datanode ']'
+ '[' nnRpcWait = datanode ']'
+ '[' monitor-upgrade = datanode ']'
+ '[' finalize-upgrade = datanode ']'
+ '[' mkdir = datanode ']'
+ '[' nfs3 = datanode ']'
+ '[' namenode = datanode -o secondarynamenode = datanode -o datanode = datanode ']'
+ HADOOP_OPTS='-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ export 'HADOOP_OPTS=-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ exec /opt/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/hadoop-hdfs/bin/hdfs --config /var/run/cloudera-scm-agent/process/317-hdfs-DATANODE datanode
log4j:ERROR Could not find value for key log4j.appender.EventCounter
log4j:ERROR Could not instantiate appender named "EventCounter".

I checked the HDFS parcel folders to see whether EventCounter is configured there, and it is...
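
As a sanity check, one can grep the config the DataNode actually runs with, i.e. the generated copy under /var/run from the trace above, rather than the parcel's template. A minimal sketch; the commented line is the definition a stock Hadoop 2.x log4j.properties normally ships:

grep EventCounter /var/run/cloudera-scm-agent/process/317-hdfs-DATANODE/log4j.properties
# Expected in a stock Hadoop 2.x log4j.properties:
# log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter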

Do you know how to fix that?

Edit: http://pastebin.com/jKwY1JXv

BR

NicolasY.

6 REPLIES

Mentor
This error is harmless and should not be causing your DataNode any issues. If the DataNode is not starting up, can you check its logs for the actual error instead?
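
On a Cloudera Manager managed host, the DataNode role logs normally land under /var/log/hadoop-hdfs. A quick sketch for pulling the tail of the current log; the file-name glob is an assumption based on CM's usual naming, not something taken from this thread:

ls /var/log/hadoop-hdfs/
# File names look like hadoop-cmf-<service>-DATANODE-<host>.log.out (assumed pattern):
tail -n 200 /var/log/hadoop-hdfs/*DATANODE*.log.out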

New Contributor

Hi Hars,

Thanks for your support.

Please find today's log from one of the two DataNodes concerned. (Log DN)

I can see a FATAL message about a BindException: Cannot assign requested address.

Thanks again.

BR

NicolasY

New Contributor (accepted solution)
It seems to be caused by the server name. I changed it, and the nodes are now working normally.

BR

NicolasY.
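
For readers asking below what "changing the server name" involves, a rough sketch on a typical Linux host; the name dn1.example.com and address 192.0.2.10 are placeholders, not values from this thread:

# Give the host a fully-qualified name (hostnamectl on systemd hosts;
# RHEL 6-era systems set HOSTNAME in /etc/sysconfig/network instead):
hostnamectl set-hostname dn1.example.com
# Map that name to the host's real, routable address in /etc/hosts,
# not to 127.0.0.1:
#   192.0.2.10   dn1.example.com   dn1
# Restart the Cloudera Manager agent so the new name is picked up:
service cloudera-scm-agent restart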

Mentor
Thanks for following up with your solution, Nicolas! Indeed, a bind(…) syscall will fail if the address it is asked to bind is not present on any local interface.
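
A quick way to check for that condition on the affected DataNode, using standard tooling (nothing here is specific to CDH):

hostname -f                      # the fully-qualified name the daemon will use
getent hosts "$(hostname -f)"    # the address that name resolves to
ip addr show                     # the addresses actually present on local interfaces
# If the resolved address does not appear on any interface, bind() fails
# with "Cannot assign requested address".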

New Contributor

I have the same problem! Where did you change the name of the server? Can you please give more details?

New Contributor

I am having the same problem! Where exactly did you change the name of the server? Can you please give more details?

Thanks in advance.