
Problem moving an input text file from the local file system to HDFS with the following command

Explorer

1) [cloudera@localhost ~]$ sudo -u hdfs hdfs dfs -put /home/cloudera/ipf.txt /inputnew/
put: `/home/cloudera/ipf.txt': No such file or directory
2) [cloudera@localhost ~]$ hdfs dfs -put /home/cloudera/ipf.txt /inputnew/
put: Permission denied: user=cloudera, access=WRITE, inode="/inputnew":hdfs:supergroup:drwxr-xr-x

3) [cloudera@localhost ~]$ sudo -u cloudera hdfs dfs -put /home/cloudera/ipf.txt /inputnew/
put: Permission denied: user=cloudera, access=WRITE, inode="/inputnew":hdfs:supergroup:drwxr-xr-x

6 ACCEPTED SOLUTIONS

Explorer
Thank you, it's working.

Master Collaborator

@subbu You are making the same error every time.

In order to read from or write to HDFS, the user running the command needs the appropriate permissions.

You can solve this by always running as the superuser "hdfs". To do that, prefix your command with sudo -u hdfs, so your command should be:

sudo -u hdfs hadoop jar /home/cloudera/WordCount.jar WordCount /inputnew/inputfile.txt /outputnew

Or, since as far as I can see you always run as the cloudera user, you can instead change the owner of / to cloudera, or change the permissions on the root folder.

 

so:

 

1- use the super user: 

 

sudo -u hdfs hadoop jar /home/cloudera/WordCount.jar WordCount /inputnew/inputfile.txt /outputnew

 

2- change the owner of the root dir to cloudera:

 

sudo -u hdfs  hdfs -chown -R cloudera /

then run:

hadoop jar /home/cloudera/WordCount.jar WordCount /inputnew/inputfile.txt /outputnew

3- change the permissions:

sudo -u hdfs hdfs -chmod -R 777 /

then run:

hadoop jar /home/cloudera/WordCount.jar WordCount /inputnew/inputfile.txt /outputnew
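Whichever of 2 or 3 you pick, a quick read-only check (just the standard ls subcommand, same paths as above) shows what /inputnew ended up with:

hdfs dfs -ls /

The /inputnew entry should then show cloudera as the owner (option 2) or rwxrwxrwx permissions (option 3) before you rerun the job.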

 

 

 

Explorer

Sir, while I am trying to execute this I am getting an error:

cloudera@localhost ~]$ sudo -u hdfs hdfs -chown -R cloudera /
Unrecognized option: -chown
Could not create the Java virtual machine.

Master Collaborator
sudo -u hdfs hdfs dfs -chown -R cloudera /

Explorer

Thank you very much sir, it's working and I was able to run it and got the output as well.

 

Explorer
While executing this command I am getting this error:
[cloudera@localhost ~]$ hadoop jar /home/cloudera/WordCount.jar WordCount /inputnew/inputfile.txt /op
18/04/22 07:40:40 INFO ipc.Client: Retrying connect to server: localhost.localdomain/127.0.0.1:8021. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)


16 REPLIES

Master Collaborator

1) [cloudera@localhost ~]$ sudo -u hdfs hdfs dfs -put /home/cloudera/ipf.txt /inputnew/
put: `/home/cloudera/ipf.txt': No such file or directory

 

The file /home/cloudera/ipf.txt doesn't exist on your local host; you can check with ll /home/cloudera/

 

In the commands below you are not using sudo -u hdfs as you did in the command above.

** You faced the same issue in another post.

Please use: sudo -u hdfs hdfs dfs -put /home/cloudera/ipf.txt /inputnew/

2) [cloudera@localhost ~]$ hdfs dfs -put /home/cloudera/ipf.txt /inputnew/
put: Permission denied: user=cloudera, access=WRITE, inode="/inputnew":hdfs:supergroup:drwxr-xr-x

3) [cloudera@localhost ~]$ sudo -u cloudera hdfs dfs -put /home/cloudera/ipf.txt /inputnew/
put: Permission denied: user=cloudera, access=WRITE, inode="/inputnew":hdfs:supergroup:drwxr-xr-x
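To double-check both ends before retrying, a minimal sketch (assuming the same paths as in the question):

ls -l /home/cloudera/ipf.txt   # does the file exist locally, and who can read it?
hdfs dfs -ls /                 # does /inputnew exist in HDFS, and what are its owner and permissions?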

Explorer

Actually the txt file is already there in the folder and I am issuing the correct command, but I still get the error. Can you suggest a solution?

 

Master Collaborator
Please send me the output of the ll command.

Explorer

put: Permission denied: user=cloudera, access=WRITE, inode="/inputnew":hdfs:supergroup:drwxr-xr-x

Super Collaborator

1) User hdfs does not have access to the /home/cloudera directory

2) and 3) are actually the same because in both cases you try to upload the file as user cloudera.

 

You have two options:

1) grant read permissions to the hdfs user on /home/cloudera and all of its contents (directory access also requires the execute permission); see the sketch below

2) grant write permissions on the "/inputnew" directory in HDFS to the "cloudera" user.

example: sudo -u hdfs hdfs dfs -chown cloudera /inputnew

 

There are multiple ways to grant permissions (e.g. using ACLs), but keep it simple.
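For option 1, a minimal sketch (assuming it is /home/cloudera that is blocking the hdfs user; adjust the paths to your setup):

chmod o+x /home/cloudera          # directory traversal needs the execute bit
chmod o+r /home/cloudera/ipf.txt  # the hdfs user also needs read on the file itself

After that, sudo -u hdfs hdfs dfs -put /home/cloudera/ipf.txt /inputnew/ should be able to read the local file.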

Explorer
How do I give permission to cloudera?

Super Collaborator
Use the example I wrote above. It will change the owner of the /inputnew directory in HDFS to "cloudera":

sudo -u hdfs hdfs dfs -chown cloudera /inputnew
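After that, the put from the original question should go through as the cloudera user (same paths as in the question):

hdfs dfs -put /home/cloudera/ipf.txt /inputnew/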

Explorer
Thank you, it's working.

Explorer

hadoop jar /home/cloudera/WordCount.jar WordCount /inputnew/inputfile.txt /outputnew

 

The error message is:

[cloudera@localhost ~]$ hadoop jar /home/cloudera/WordCount.jar WordCount /inputnew/inputfile.txt /outputnew
18/04/18 07:57:24 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
18/04/18 07:57:25 INFO input.FileInputFormat: Total input paths to process : 1
18/04/18 07:57:25 INFO mapred.JobClient: Running job: job_201804180742_0002
18/04/18 07:57:26 INFO mapred.JobClient: map 0% reduce 0%
18/04/18 07:57:36 INFO mapred.JobClient: Task Id : attempt_201804180742_0002_m_000002_0, Status : FAILED
org.apache.hadoop.security.AccessControlException: Permission denied: user=cloudera, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4705)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4687)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4661)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3032)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2996)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2977)