Support Questions

Find answers, ask questions, and share your expertise

How to properly execute spark-submit command with Yarn?

Contributor

I need to execute `spark-submit` on a Hadoop cluster created with Ambari. The cluster has 3 instances: 1 master node and 2 executor nodes.

So, I logged in to the master node as the `centos` user and ran this command:

sudo -u hdfs spark-submit --master yarn --deploy-mode cluster --driver-memory 6g  --executor-memory 4g --executor-cores 2 --class org.tests.GraphProcessor graph.jar

But I got an error message saying that the file graph.jar does not exist. Therefore I tried to copy the file to HDFS as follows:

hdfs dfs -put graph.jar /home/hdfs/tmp

However, the error is:

No such file or directory: `hdfs://eureambarimaster1.local.eurecat.org:8020/home/hdfs/tmp'
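The `put` fails because the target directory does not exist in HDFS yet; `-put` does not create it. If the jar really does need to be uploaded, a minimal sketch (the `/home/hdfs/tmp` path is taken from the command above):

```shell
# Create the target directory first ("-put" does not create it),
# then upload the jar and verify it arrived.
sudo -u hdfs hdfs dfs -mkdir -p /home/hdfs/tmp
sudo -u hdfs hdfs dfs -put graph.jar /home/hdfs/tmp
sudo -u hdfs hdfs dfs -ls /home/hdfs/tmp
```

Note that by convention HDFS home directories live under `/user/<name>` rather than `/home/<name>` — and, as the answers below point out, uploading the jar manually is not actually required for `spark-submit`.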
1 ACCEPTED SOLUTION

Contributor

I had to run this command to adjust permissions:

sudo -u hdfs hdfs dfs -chown centos:centos /user

After this I was able to run:

spark-submit --master yarn --deploy-mode cluster --driver-memory 6g --executor-memory 4g --executor-cores 2 --class org.tests.GraphProcessor /path/to/graph.jar


6 REPLIES


@Liana Napalkova The graph.jar will be automatically copied to HDFS and distributed by the Spark client. You only need to point to the location of graph.jar on the local file system. For example:

spark-submit --master yarn --deploy-mode cluster --driver-memory 6g --executor-memory 4g --executor-cores 2 --class org.tests.GraphProcessor /path/to/graph.jar

HTH
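Once the job is submitted in cluster mode, its progress can be checked from the command line with the YARN CLI; a quick sketch (the application ID shown is illustrative — use the one printed by `spark-submit`):

```shell
# List running YARN applications and note the application ID
yarn application -list

# Fetch the aggregated logs for a finished application
# (application_1523903913760_0007 is an example ID)
yarn logs -applicationId application_1523903913760_0007
```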

*** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.

Contributor
If I do it this way (with "sudo -u hdfs"), the jar file is invisible to the hdfs user (I get an error message). But if I run it without "sudo -u hdfs", then yarn mode cannot be entered. I think it's a matter of permissions, but it's not clear to me how to solve this issue in the most correct way. Thanks.

Contributor

In particular, if I do it this way, I get the following error:

Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=centos, access=WRITE, inode="/user/centos/.sparkStaging/application_1523903913760_0007":hdfs:hdfs:drwxr-xr-x
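The exception shows the staging directory under /user/centos is owned by hdfs:hdfs, so the centos user cannot write to it. The ownership can be confirmed before changing anything; a quick check:

```shell
# Show owner/group of the home directories in HDFS
hdfs dfs -ls /user

# Show the permissions on the directory itself (not its contents)
hdfs dfs -ls -d /user/centos
```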


@Liana Napalkova

You should set correct permissions for /user/centos directory.

hdfs dfs -chown centos:centos /user/centos
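If /user/centos does not exist yet, it has to be created first, and both steps must run as the HDFS superuser; a sketch of the usual sequence:

```shell
# Create the per-user home directory as the HDFS superuser,
# then hand ownership over to the submitting user
sudo -u hdfs hdfs dfs -mkdir -p /user/centos
sudo -u hdfs hdfs dfs -chown centos:centos /user/centos
```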

If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.

Contributor

I had to run this command to adjust permissions:

sudo -u hdfs hdfs dfs -chown centos:centos /user

After this I was able to run:

spark-submit --master yarn --deploy-mode cluster --driver-memory 6g --executor-memory 4g --executor-cores 2 --class org.tests.GraphProcessor /path/to/graph.jar


@Liana Napalkova I advise against changing the ownership of the HDFS /user directory itself.

You should set correct permissions for /user/centos directory.

hdfs dfs -chown centos:centos /user/centos

HTH