
HDP Cluster Issue: HiveServer2, DataNode and RegionServer services keep failing.


Today I suddenly noticed that I am not able to connect to Hive View. On further investigation, I found that my HiveServer2, DataNode, and RegionServer services keep failing. Here are the details of the DataNode exception (datanodelog.txt):

Exception:

2017-01-05 16:54:29,697 INFO common.Storage (Storage.java:tryLock(774)) - Lock on /tmp/hadoop-hdfs/dfs/data/in_use.lock acquired by nodename 8335@mpdemo-118-2-1.field.hortonworks.com
2017-01-05 16:54:29,701 WARN common.Storage (DataStorage.java:loadDataStorage(449)) - Failed to add storage directory [DISK]file:/tmp/hadoop-hdfs/dfs/data/
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-hdfs/dfs/data is in an inconsistent state: Can't format the storage directory because the current/ directory is not empty.
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:480)

1 ACCEPTED SOLUTION

Super Guru

It would appear that your DataNode is failing, which is the cause of the other services failing.

It also appears that you have not changed the default hdfs-site.xml configuration that controls where DataNodes store their data on the local filesystem. It is not uncommon for operating systems to wipe the /tmp directory (on boot). Perhaps you have experienced this and need to re-format your HDFS?

Change dfs.datanode.data.dir, dfs.namenode.name.dir, and dfs.namenode.checkpoint.dir so they point at directories outside /tmp, then format HDFS:

$ hdfs namenode -format
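
If it helps, the hdfs-site.xml change might look something like the sketch below. The /hadoop/hdfs/... paths are only example locations, not values taken from this cluster; substitute directories on a persistent local disk. On an HDP cluster managed by Ambari, make the change through the Ambari HDFS configuration page rather than editing the file by hand, since Ambari is likely to overwrite manual edits.

<!-- hdfs-site.xml sketch: example paths only, use persistent (non-/tmp) directories on each host -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/hadoop/hdfs/data</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/hadoop/hdfs/namenode</value>
</property>
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>/hadoop/hdfs/namesecondary</value>
</property>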

Beware: Formatting HDFS is a destructive operation. Do not perform this operation unless all of the data in HDFS is stored elsewhere or can be generated.
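
After the format and a restart of the HDFS services, you can confirm the DataNode registered again with, for example:

$ hdfs dfsadmin -report

It should list the live DataNodes and their configured capacity.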


2 REPLIES



@Josh Elser Do I need to perform this on all the nodes? For now I am OK if I lose the data, as it's a test cluster.