Permission Error while running spark-shell

New Contributor

Hello All,

While running spark-shell, I am getting the permission error below. Can anybody help me out with this? I have just installed Cloudera Manager with core Hadoop + Spark on CentOS 6.2 with 20 GB of RAM, running CDH 5.8 and Hadoop 2.6.

To adjust logging level use sc.setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.0
      /_/

Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_67)
Type in expressions to have them evaluated.
Type :help for more information.
17/01/15 09:45:53 ERROR spark.SparkContext: Error initializing SparkContext.
org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:281)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:262)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:242)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:169)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6621)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6603)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6555)

1 ACCEPTED SOLUTION

Champion
@justin3113 To run jobs across all nodes, a user must exist on each node (justin3113, for example), and each user needs an HDFS home directory under /user that they have read and write access to. This is so the job can write temporary data to HDFS from whichever node it happens to run on. The error is stating that spark-shell is trying to create that user directory, but only the hdfs user has permission to write under /user. Opening up access gets around it, but that is not advisable. Instead, for each user, run as the hdfs user:

su - hdfs
hdfs dfs -mkdir /user/justin3113
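One detail worth adding: a directory created by the hdfs superuser is owned by hdfs, so the user still cannot write to it until ownership is handed over. A minimal sketch of the full sequence for one user (assuming the Linux account is justin3113; the group to assign may differ on your cluster):

su - hdfs
hdfs dfs -mkdir /user/justin3113
hdfs dfs -chown justin3113 /user/justin3113
exit

You can verify the result with hdfs dfs -ls /user before re-running spark-shell as that user.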


4 REPLIES

Master Collaborator

That's the general error you get when you run as user foo but haven't set up /user/foo in HDFS. The usual way that's done is through Hue, or by syncing with something like Active Directory.
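As a quick check, you can list the home directories that already exist; the drwxr-xr-x mode in the error means any user can read /user, so this works even from the failing session:

hdfs dfs -ls /user

If there is no entry for the user you run spark-shell as (root, in this case), that missing directory is the problem.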

New Contributor

Do you mean /usr/root? 

I was able to overcome the issue with the commands below.

su - hdfs
hdfs dfs -chown -R root:hdfs /user
exit

Master Collaborator

No, you definitely do not want to take this directory away from hdfs! In general, I'd never mess with the HDFS permissions on key directories like this. Instead, hdfs needs to make a directory for your user. This kind of thing happens automatically via Hue.
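If you already ran the chown -R workaround above, here is a sketch of how to restore the default ownership and create a proper home directory instead, assuming the hdfs:supergroup owner shown in the original error (note the -R also changed any existing subdirectories, whose original owners may need restoring as well):

su - hdfs
hdfs dfs -chown hdfs:supergroup /user
hdfs dfs -mkdir /user/root
hdfs dfs -chown root /user/root
exit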
