Support Questions


DataNode can't start because of the root user

Rising Star

Hi everyone,

My data is stored on Azure storage, but the Azure data directory is owned by root and I can't change the owner to hdfs. This causes my DataNode to fail to start.

Is there another way to resolve this conflict?

The error is:

2016-08-16 02:13:05,270 INFO datanode.DataNode (LogAdapter.java:info(47)) - registered UNIX signal handlers for [TERM, HUP, INT]
2016-08-16 02:13:07,256 WARN datanode.DataNode (DataNode.java:checkStorageLocations(2439)) - Invalid dfs.datanode.data.dir /mnt/data : EPERM: Operation not permitted
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:230)
        at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:727)
        at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:502)
        at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:140)
        at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
        at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2394)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2436)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2418)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2310)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2357)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2538)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2562)
2016-08-16 02:13:07,267 ERROR datanode.DataNode (DataNode.java:secureMain(2545)) - Exception in secureMain
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/mnt/data/"
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2445)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2418)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2310)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2357)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2538)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2562)
2016-08-16 02:13:07,269 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2016-08-16 02:13:07,278 INFO datanode.DataNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
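The EPERM comes from the DataNode trying to chmod the storage directory. A quick way to see what it is up against (a sketch, assuming /mnt/data is the configured dfs.datanode.data.dir):

  # Inspect ownership and mode of the mount point and the data directory
  ls -ld /mnt /mnt/data
  # Show how /mnt is mounted; some Azure mounts (e.g. CIFS/blobfuse) fix the owner
  # at mount time, so chown/chmod from inside the OS cannot change it
  mount | grep /mnt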

1 ACCEPTED SOLUTION


@pan bocun Check whether you have the correct permissions and ownership on this directory: /mnt

It should have hdfs:hadoop ownership and 755 permissions.
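For reference, a minimal shell sketch of that fix, assuming /mnt/data is the configured dfs.datanode.data.dir, the mount supports POSIX ownership changes, and hdfs:hadoop are the HDP default service user and group (adjust to your environment):

  # Hand the data directory to the HDFS service user and give it 755 (rwxr-xr-x)
  chown -R hdfs:hadoop /mnt/data
  chmod 755 /mnt/data
  # Verify the result before restarting the DataNode
  ls -ld /mnt /mnt/data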


4 REPLIES

Super Collaborator

Hi @pan bocun

The error says all directories in dfs.datanode.data.dir are invalid: "/mnt/data/" (at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2445)).

Can you please share the current value of the dfs.datanode.data.dir property in your HDFS config (hdfs-site.xml)?
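For example, one way to print the value the DataNode is actually using (assuming a standard HDP client configuration under /etc/hadoop/conf):

  # Print the effective value of the property from the client configuration
  hdfs getconf -confKey dfs.datanode.data.dir
  # Or inspect the raw config file directly
  grep -A1 'dfs.datanode.data.dir' /etc/hadoop/conf/hdfs-site.xml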


@pan bocun Check whether you have the correct permissions and ownership on this directory: /mnt

It should have hdfs:hadoop ownership and 755 permissions.

Rising Star

Thanks. In Ambari, the DataNode directory permission defaults to 775, but my folder is 777,

so I changed the setting from 775 to 777 in Ambari.

My problem is solved.
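For anyone hitting the same thing: the Ambari setting in question maps to the dfs.datanode.data.dir.perm HDFS property (an assumption about this HDP version). A quick way to confirm that the configured permission matches the directory on disk, assuming /mnt/data is the data directory:

  # Permission the DataNode will enforce on its data dirs
  hdfs getconf -confKey dfs.datanode.data.dir.perm
  # Actual mode and ownership of the directory on disk
  stat -c '%a %U:%G %n' /mnt/data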

Contributor

This link might be helpful: https://community.hortonworks.com/questions/1635/instructions-to-setup-wasb-as-storage-for-hdp-on-a....

It seems the properties below need to be verified on the DataNode that is failing.

The following configurations should be modified to configure WASB (a quick connectivity check follows the list):

  • fs.defaultFS
    wasb://<containername>@<accountname>.blob.core.windows.net
  • fs.AbstractFileSystem.wasb.impl
    org.apache.hadoop.fs.azure.Wasb
  • fs.azure.account.key.<accountname>.blob.core.windows.net
    <storage_access_key>
  • Even though WASB will be set as the fs.defaultFS, you still need to define DataNode directories for HDFS. Since the intent here is to use WASB as the primary file system, you can point the HDFS DataNode directories at the temporary /mnt/resource mount point provided with Azure compute servers, if you only plan to use HDFS for temporary job files. DataNode Directories:
    /mnt/resource/Hadoop/hdfs/data
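Once those properties are set in core-site.xml, a quick sanity check from the failing DataNode host (a sketch; the container and account names are the same placeholders as above):

  # Confirm the cluster can reach the WASB account
  hdfs dfs -ls wasb://<containername>@<accountname>.blob.core.windows.net/
  # Confirm the local DataNode directory exists with HDFS ownership
  ls -ld /mnt/resource/Hadoop/hdfs/data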