datanode not starting

New Contributor

Hi Folks,

I am getting the following error while starting the DataNode. I tried the following troubleshooting steps:

1. Checked the clusterID on the NameNode and the DataNode. Both are the same.

2. Deleted all DataNode local directories and reformatted the NameNode.

But no luck.

HDP version --> HDP-2.6.4.0

OS --> Ubuntu

I can see this ID, BP-567834271-10.0.3.28-1523521056374, in the current directory on all DataNodes.
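For reference, this is roughly how I compared the IDs (the NameNode directory below is the usual HDP default for dfs.namenode.name.dir; the DataNode directory is the one from my dfs.datanode.data.dir):

# On the NameNode host: clusterID and blockpoolID from the name directory
grep -E 'clusterID|blockpoolID' /hadoop/hdfs/namenode/current/VERSION

# On each DataNode host: clusterID from the data directory
grep clusterID /mnt/hbase/data2/current/VERSION

# Block pool directories actually present on the DataNode volume
ls /mnt/hbase/data2/current/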

2018-04-12 12:35:53,362 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(200)) - Caught exception while adding replicas from /mnt/hbase/data2/current. Will throw later.
java.io.IOException: block pool BP-567834271-10.0.3.28-1523521056374 is not found
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getBlockPoolSlice(FsVolumeImpl.java:368)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:813)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:192)
2018-04-12 12:35:53,363 FATAL datanode.DataNode (BPServiceActor.java:run(814)) - Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to xxxx-workbench4/x.x.x.x:8020. Exiting. 
java.io.IOException: block pool BP-567834271-10.0.3.28-1523521056374 is not found
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getBlockPoolSlice(FsVolumeImpl.java:368)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:813)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:192)
2018-04-12 12:35:53,363 WARN  datanode.DataNode (BPServiceActor.java:run(835)) - Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to xxxx-workbench4/x.x.x.x:8020
2018-04-12 12:35:53,466 INFO  datanode.DataNode (BlockPoolManager.java:remove(103)) - Removed Block pool <registering> (Datanode Uuid unassigned)
2018-04-12 12:35:55,466 WARN  datanode.DataNode (DataNode.java:secureMain(2499)) - Exiting Datanode
2018-04-12 12:35:55,467 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 0
2018-04-12 12:35:55,469 INFO  datanode.DataNode (LogAdapter.java:info(45)) - SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at xx-workbench1/x.x.x.x

Re: datanode not starting

Mentor

@Mahesh Sankaran

Has this cluster worked before, or is it a fresh install? Is it HA or not?

Is the NameNode running?

Can you try starting the DataNode manually?

su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-datanode/../hadoop/sbin/hadoop-daemon.sh start datanode"

If your NameNode is also down, start it manually as well:

su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode" 
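After the manual start, confirm the processes are up and check the latest DataNode log for the real error (the log path below is the usual HDP default under /var/log/hadoop/hdfs; adjust if your cluster logs elsewhere):

# Are the daemons running?
ps -ef | grep -i '[d]atanode'
ps -ef | grep -i '[n]amenode'

# Last lines of the DataNode log on this host
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-$(hostname).log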

Please report back.


Re: datanode not starting

New Contributor

@Geoffrey Shelton Okot

It is a fresh install, the NameNode is running, and it is not HA.

I tried to start it manually, but no luck.


Re: datanode not starting

Mentor

@Mahesh Sankaran

Can you remove or move this DataNode mount point, and make sure the equivalent entry on the NameNode is also removed?

/mnt/hbase/data2/current
For example,
/grid1/hbase/data2/current

Save the new config and start the DataNode:

su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-datanode/../hadoop/sbin/hadoop-daemon.sh start datanode"
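If it helps, preparing the new directory would look roughly like this (the path is just the example above; ownership and permissions follow the usual hdfs:hadoop convention on HDP):

# Create the new data directory and hand it to the hdfs user
mkdir -p /grid1/hbase/data2
chown -R hdfs:hadoop /grid1/hbase/data2
chmod 750 /grid1/hbase/data2

# Then point dfs.datanode.data.dir in hdfs-site.xml (or the equivalent
# setting in Ambari, if Ambari manages this cluster) at the new path
# before running the start command above.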

Please report back.

Re: datanode not starting

Rising Star

I have hit a similar problem. I moved the block pool to another location, but the problem still occurs.

How can I fix it?

su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-datanode/../hadoop/sbin/hadoop-daemon.sh start datanode"

Running it outputs the following message:

Java HotSpot 64-Bit Server VM warning: Cannot open file /var/log/hadoop/hdfs/gc.log-2019... due to No such file or directory
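I assume that warning just means the GC log directory is missing or not writable by the hdfs user, so recreating it (with the standard hdfs:hadoop ownership) is probably the first thing to try before starting the DataNode again:

# Recreate the HDFS log directory the JVM GC log points at
mkdir -p /var/log/hadoop/hdfs
chown hdfs:hadoop /var/log/hadoop/hdfs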


Re: datanode not starting

Mentor

@Mahesh Sankaran

Any updates?