
Error during cluster setup after installation

Expert Contributor

I am stuck in the middle of cluster setup after installing Cloudera Manager. The first three steps completed; however, when it comes to starting HDFS, it successfully formatted the name directories of the current NameNode but then got stuck at Start HDFS. Please find the error below.

 

Cluster Setup
First Run Command
Status Running Aug 15, 9:16:45 AM
There was an error when communicating with the server. See the log file for more information.

Completed 3 of 8 step(s).
Ensuring that the expected software releases are installed on hosts.
Aug 15, 9:16:45 AM 90ms
Deploying Client Configuration
Cluster 1
Aug 15, 9:16:45 AM 16.13s
Start Cloudera Management Service, ZooKeeper
Aug 15, 9:17:01 AM 27.87s
Start HDFS
0/1 steps completed.
Aug 15, 9:17:29 AM
Execute 3 steps in sequence
Waiting for command (Start (77)) to finish
Aug 15, 9:17:29 AM
Formatting the name directories of the current NameNode. If the name directories are not empty, this is expected to fail.
NameNode (namenode1)
Aug 15, 9:17:29 AM 14.86s
Start HDFS
There was an error when communicating with the server. See the log file for more information.

 

I am unable to check the logs through Cloudera Manager as the cluster is not fully set up. Please suggest what the reason could be and how to fix it. I am installing version 5.16.2.

3 REPLIES

Master Guru

@HanzalaShaikh Please check the below locations from the host's terminal to see the logs and debug.

1. /var/log/hadoop-hdfs/   (For HDFS Role Log File)
2. /var/run/cloudera-scm-agent/process/xxxx-hdfs-DATANODE/logs/ (for stderr and stdout log files)

This will help you see the issue.
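
For example, something along these lines from the affected host's terminal should surface the failure (a minimal sketch; the numbered process directory and the exact log file names will differ on your cluster):

# List the HDFS role logs, newest first
sudo ls -lt /var/log/hadoop-hdfs/
# Tail the most recent role log (NameNode or DataNode)
sudo tail -n 200 /var/log/hadoop-hdfs/hadoop-cmf-hdfs-*.log.out
# Search the supervised process stderr for failures (the * stands for the numbered process directory)
sudo grep -iE 'error|fatal|exception' /var/run/cloudera-scm-agent/process/*-hdfs-*/logs/stderr.log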


Cheers!
Was your question answered? Make sure to mark the answer as the accepted solution.
If you find a reply useful, say thanks by clicking on the thumbs up button.

Expert Contributor

@GangWar Thanks for your quick reply. As per your suggestion, I have checked the logs at both locations on the NameNode and DataNodes. On one of the DataNodes, I checked the logs in /var/run/cloudera-scm-agent/process/28-hdfs-DATANODE/logs and found the results below when searching for the keyword "Error":

 

++ replace_pid -Xms521142272 -Xmx521142272 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:+HeapDumpOnOutOfMemoryError '-XX:HeapDumpPath=/tmp/hdfs_hdfs-DATANODE-111b6db5e742dbffe061f0c1d6bc8878_pid{{PID}}.hprof' -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh
++ sed 's#{{PID}}#5409#g'

 

However, I don't see any error message in /var/log/hadoop-hdfs. Please also suggest which log file to check to debug. The directory contains the following files:

audit hadoop-cmf-hdfs-NAMENODE-namenode1.us-east1-b.c.coherent-elf-271314.internal.log.out hdfs-audit.log SecurityAuth-hdfs.audit stacks
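
For reference, the search I ran on the DataNode was roughly the following ("28" is just the process directory number on this host and will differ elsewhere):

cd /var/run/cloudera-scm-agent/process/28-hdfs-DATANODE/logs
sudo grep -i error stderr.log stdout.log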

Master Guru

@HanzalaShaikh You can check the hadoop-cmf-hdfs-NAMENODE-namenode1.us-east1-b.c.coherent-elf-271314.internal.log.out file in /var/log/hadoop-hdfs.

 

Try a clean start of HDFS again and see if that helps.
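
If it fails again, tailing that NameNode log while the Start HDFS step runs usually shows the root cause. A minimal sketch, run on the NameNode host (the file name is taken from your directory listing above):

sudo tail -f /var/log/hadoop-hdfs/hadoop-cmf-hdfs-NAMENODE-namenode1.us-east1-b.c.coherent-elf-271314.internal.log.out
# or pull the most recent failures out of it
sudo grep -iE 'fatal|error' /var/log/hadoop-hdfs/hadoop-cmf-hdfs-NAMENODE-*.log.out | tail -n 50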


Cheers!