How to properly execute the spark-submit command with YARN?
Created 05-04-2018 11:36 AM
I need to execute `spark-submit` in the Hadoop cluster created with Ambari. There are 3 instances: 1 master node and 2 executor nodes.
So I logged in to the master node as the `centos` user and executed this command:
sudo -u hdfs spark-submit --master yarn --deploy-mode cluster --driver-memory 6g --executor-memory 4g --executor-cores 2 --class org.tests.GraphProcessor graph.jar
But I got an error message saying that the file graph.jar does not exist. Therefore I tried to copy this file to HDFS as follows:
hdfs dfs -put graph.jar /home/hdfs/tmp
However, the error is:
No such file or directory: `hdfs://eureambarimaster1.local.eurecat.org:8020/home/hdfs/tmp'
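A note on the error above: HDFS has no /home tree by default, so the target directory has to be created under /user before the `-put` can succeed. A rough sketch of what that could look like, with the directory path only an assumption:
sudo -u hdfs hdfs dfs -mkdir -p /user/hdfs/tmp         # create the target directory in HDFS (path assumed)
sudo -u hdfs hdfs dfs -put graph.jar /user/hdfs/tmp/   # upload the jar from the local file system
hdfs dfs -ls /user/hdfs/tmp                            # confirm the file landed in HDFS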
Created 05-04-2018 12:48 PM
@Liana Napalkova The graph.jar will be automatically copied to HDFS and distributed by the Spark client. You only need to point to the location of graph.jar in the local file system. For example:
spark-submit --master yarn --deploy-mode cluster --driver-memory 6g --executor-memory 4g --executor-cores 2 --class org.tests.GraphProcessor /path/to/graph.jar
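If you want to double-check that the jar really gets uploaded, you can list the staging directory under the submitting user's HDFS home (assumed here to be /user/centos) while the job runs; the application id is only a placeholder:
hdfs dfs -ls /user/centos/.sparkStaging/<applicationId>   # graph.jar should appear here during the run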
HTH
Created 05-04-2018 02:43 PM
If I do it this way (with "sudo -u hdfs"), the jar file is not visible to the hdfs user (I get an error message). But if I run it without "sudo -u hdfs", then I cannot submit in YARN mode. I think it is a matter of permissions, but it is not clear to me how to solve this issue in the most correct way. Thanks.
Created 05-04-2018 04:56 PM
In particular, if I do it this way, I get the following error:
Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=centos, access=WRITE, inode="/user/centos/.sparkStaging/application_1523903913760_0007":hdfs:hdfs:drwxr-xr-x
Created 05-04-2018 05:15 PM
You should set the correct ownership for the /user/centos directory:
hdfs dfs -chown centos:centos /user/centos
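If /user/centos does not exist yet, it has to be created first, and the chown itself has to run as the HDFS superuser; a minimal sketch of the full sequence:
sudo -u hdfs hdfs dfs -mkdir -p /user/centos              # create the user's home directory in HDFS
sudo -u hdfs hdfs dfs -chown centos:centos /user/centos   # hand the directory over to the centos user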
Created 05-04-2018 05:01 PM
I had to run this command to adjust the ownership:
sudo -u hdfs hdfs dfs -chown centos:centos /user
After this I was able to run:
spark-submit --master yarn --deploy-mode cluster --driver-memory 6g --executor-memory 4g --executor-cores 2 --class org.tests.GraphProcessor /path/to/graph.jar
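Once the submission goes through, the job can also be followed from the command line; the application id below is just a placeholder for whatever YARN prints for your run:
yarn application -list                      # list running applications and their ids
yarn logs -applicationId <applicationId>    # fetch the aggregated logs once the job finishes (assuming log aggregation is enabled)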
Created 05-04-2018 05:10 PM
@Liana Napalkova I advise against changing the ownership of the HDFS /user directory.
You should set the correct ownership for the /user/centos directory only:
hdfs dfs -chown centos:centos /user/centos
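To confirm the change took effect, the directory's owner and group can be checked directly:
hdfs dfs -ls -d /user/centos   # the listing should now show centos:centos as owner and group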
HTH
