
PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:java.io.IOException: File /user/ubuntu/feat


New Contributor

Hi all,

I have been facing the issue below for a few days and have not been able to resolve it.

Background :

1. I have installed a 3-node Cloudera Hadoop cluster on EC2 instances, and it is working as expected.

2. I have a client program on my Windows machine that loads data from my machine into HDFS.

Details :

My client program is developed in Java; it reads data from the Windows local disk and writes it to HDFS. When I try to run my program, it gives me the error below.

PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:java.io.IOException: File /user/ubuntu/features.json could only be replicated to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are excluded in this operation.

6:32:45.711 PM     INFO     org.apache.hadoop.ipc.Server     

IPC Server handler 13 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 108.161.91.186:54097: error: java.io.IOException: File /user/ubuntu/features.json could only be replicated to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
java.io.IOException: File /user/ubuntu/features.json could only be replicated to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1331)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2198)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:480)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)
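
For context, the write path in my client follows the standard Hadoop FileSystem copy pattern. Below is a minimal sketch of that kind of client; the namenode URI, the user, and both file paths are placeholders rather than my actual values.

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// Minimal sketch: copy a local Windows file into HDFS (placeholder URI and paths).
public class HdfsUpload {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf, "ubuntu");
        InputStream in = new BufferedInputStream(new FileInputStream("C:\\data\\features.json"));
        FSDataOutputStream out = fs.create(new Path("/user/ubuntu/features.json"));
        try {
            // Stream the local bytes into the HDFS output stream.
            IOUtils.copyBytes(in, out, 4096, false);
        } finally {
            in.close();
            out.close();
            fs.close();
        }
    }
}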

Solutions I have tried:

1. All the ports are open in my network as well as on the EC2 instances.

2. I tried to set up a SOCKS proxy, but it did not work. I am also not sure whether I should be doing this at all.

Please provide a suggestion or solution to resolve this issue.

Thanks in advance.

Dharmesh

12 Replies

Re: PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:java.io.IOException: File /user/ubuntu/

Rising Star

The exception can occur if the ports between the client host and the datanodes are not open; they are not open by default on Amazon EC2.

Were the ports opened? Please refer to:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html
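
If it helps, reachability from the client machine can be tested with a plain socket connect against the relevant ports. This is only a sketch: the hostnames are placeholders, 8020 is the default NameNode RPC port, and 50010 is the default DataNode data-transfer port, so adjust them to match your cluster.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Quick connectivity check from the client machine to the cluster hosts.
public class PortCheck {
    public static void main(String[] args) {
        String[] hosts = {"namenode", "datanode1", "datanode2", "datanode3"}; // placeholders
        int[] ports = {8020, 50010}; // NameNode RPC and DataNode data-transfer ports
        for (String host : hosts) {
            for (int port : ports) {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress(host, port), 5000);
                    System.out.println(host + ":" + port + " reachable");
                } catch (IOException e) {
                    System.out.println(host + ":" + port + " NOT reachable: " + e.getMessage());
                }
            }
        }
    }
}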

  

 

Re: PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:java.io.IOException: File /user/ubuntu/

New Contributor

I talked with my system admin and confirmed that all the ports are open. I am also able to see the namenode page in my web browser at http://namenode:50070. I think that confirms the ports are open, right?

Re: PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:java.io.IOException: File /user/ubuntu/

Rising Star

1. Are any hosts excluded with the dfs.hosts.exclude setting in hdfs-site.xml or hdfs-default.xml?

If so, don't exclude any hosts.

2. Are the datanodes listed in conf/slaves?

If not, list all datanode hostnames or IP addresses in your conf/slaves file, one per line.

3. Does the core-site.xml on datanodes have the fs.defaultFS set to the namenode URI?

4. Is enough disk free space available on datanodes?

5. Is the dfs.blocksize set to a non-negative value?
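
As a quick client-side check of items 1, 3, and 5, you can print the values the client's Configuration actually resolves for these keys. This sketch assumes the cluster's core-site.xml and hdfs-site.xml are on the client's classpath.

import org.apache.hadoop.conf.Configuration;

// Print the configuration values the client resolves for the keys discussed above.
public class DumpHdfsConf {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        String[] keys = {
            "fs.defaultFS", "dfs.blocksize", "dfs.hosts.exclude",
            "hadoop.tmp.dir", "dfs.datanode.data.dir"
        };
        for (String key : keys) {
            System.out.println(key + " = " + conf.get(key));
        }
    }
}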

Re: PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:java.io.IOException: File /user/ubuntu/

New Contributor

I have checked all the parameters that you mentioned, and they look good.

 

Does it have anything to do with SSH tunnelling?

 

Dharmesh

Re: PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:java.io.IOException: File /user/ubuntu/

Rising Star

The default tmp directory is not recommended. Is hadoop.tmp.dir overridden in core-site.xml, or is it still at the core-default.xml default?

 

<property>
<name>hadoop.tmp.dir</name>
<value>/opt/hdfs/tmp</value>
</property>

 

Also, is dfs.datanode.data.dir set in hdfs-site.xml? Its default value is derived from hadoop.tmp.dir, which is not recommended.

 

<property>
<name>dfs.datanode.data.dir</name>
<value>/opt/hdfs/data</value>
</property>

 

 


Re: PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:java.io.IOException: File /user/ubuntu/

Rising Star

The default tmp directory includes the username, which is not recommended.

 

1. Shut down the Hadoop services.

 

2. Set the following in hdfs-site.xml on each datanode:
<property>
<name>dfs.datanode.data.dir</name>
<value>/data/1/dfs/dn,/data/2/dfs/dn,/data/3/dfs/dn</value>
</property>

 

3. Set the following in core-site.xml on each node:
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop</value>
</property>


4. Make the directories and set permissions.

sudo -u hdfs hadoop fs -mkdir -p /tmp/hadoop
sudo -u hdfs hadoop fs -chmod -R 1777 /tmp/hadoop


sudo mkdir -p /data/1/dfs/dn /data/2/dfs/dn /data/3/dfs/dn
sudo chown -R hdfs:hdfs /data/1/dfs/dn /data/2/dfs/dn /data/3/dfs/dn
sudo chmod 700 /data/1/dfs/dn /data/2/dfs/dn /data/3/dfs/dn

5. Format the namenode:
sudo -u hdfs hadoop namenode -format
Answer Y when prompted:
Re-format filesystem in /data/namedir ? (Y or N)

 

6. Start the Hadoop services.
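
Once the services are back up, one way to confirm from a Java client that all three datanodes have registered with the namenode is to list them through DistributedFileSystem, as in the sketch below (the namenode URI is a placeholder).

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

// List the datanodes the namenode currently knows about, with their free space.
public class ListDataNodes {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        DistributedFileSystem dfs = (DistributedFileSystem) fs;
        for (DatanodeInfo dn : dfs.getDataNodeStats()) {
            System.out.println(dn.getHostName() + "  remaining bytes = " + dn.getRemaining());
        }
        fs.close();
    }
}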

Re: PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:java.io.IOException: File /user/ubuntu/

Rising Star

Also refer to the namenode and datanode logs.

Re: PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:java.io.IOException: File /user/ubuntu/

New Contributor

Here are a few of the paths for the properties:

 

DataNode Data Directory:
dfs.data.dir = "/mnt/dfs/dn"

DataNode Log Directory:
hadoop.log.dir = "/var/log/hadoop-hdfs"

NameNode Data Directories:
dfs.name.dir, dfs.namenode.name.dir = "/mnt/dfs/nn"

NameNode Log Directory:
hadoop.log.dir = "/var/log/hadoop-hdfs"

 

Please review and let me know if you see anything wrong.

 

Dharmesh

Re: PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:java.io.IOException: File /user/ubuntu/

Rising Star

1. Both dfs.name.dir and dfs.namenode.name.dir are set to "/mnt/dfs/nn".
Set only one of these.

 

2. Also set the hadoop.tmp.dir.

 

3. Also, the namenode and jobtracker should not be collocated with a datanode. Dedicate one node to the jobtracker and namenode only, and use the other two nodes for datanodes and tasktrackers.

 

Stop the services, reformat the namenode, and start the Hadoop services as described in the earlier post.

 

