Created on 10-14-2014 09:34 PM - edited 09-16-2022 02:09 AM
Hello,
I have a CDH 5.1.3 cluster running on 4 nodes.
I am executing the TallSkinnySVD program (modified a bit to run on larger data).
When I execute it on the cluster it always shows only one executor, even though I am specifying the number of executors on the command line.
One more strange behaviour: I can see the application on http://<spark-host>:4040, but the same application is not listed in the Spark UI linked from CM (port 18080).
The command I am using is as below:
$ sudo -u hdfs spark-submit --executor-memory 3g --driver-memory 6g --num-executors 10 --class org.xyz.spark.examples.mllib.TallSkinnySVD --master spark://myhost:7077 target/sparkExamples-0.0.1-SNAPSHOT.jar hdfs://<hdfshost>:8020/user/shailesh/RData/data7K.csv false
Please let me know if I am missing anything.
Thanks,
Shailesh
Created 10-14-2014 10:17 PM
Hello,
The earlier error submitting the job on the cluster is resolved (it was my fault in the code), but now it is giving a different error.
It is not able to submit the job to the workers. Please find the log below.
----------------------------------
14/10/15 18:09:08 INFO DAGScheduler: Parents of final stage: List()
14/10/15 18:09:08 INFO DAGScheduler: Missing parents: List()
14/10/15 18:09:08 INFO DAGScheduler: Submitting Stage 0 (MappedRDD[2] at map at TallSkinnySVD.scala:84), which has no missing parents
14/10/15 18:09:08 INFO DAGScheduler: Submitting 7 missing tasks from Stage 0 (MappedRDD[2] at map at TallSkinnySVD.scala:84)
14/10/15 18:09:08 INFO TaskSchedulerImpl: Adding task set 0.0 with 7 tasks
14/10/15 18:09:23 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/10/15 18:09:38 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/10/15 18:09:53 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/10/15 18:10:08 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/10/15 18:10:23 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/10/15 18:10:38 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/10/15 18:10:53 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
-------------------------------------
Please note I have set a 4 GB Java heap for each node from CM, and there are no other tasks running on the cluster.
I do not understand why it is giving this error.
Thanks,
Shailesh
Created 10-15-2014 12:12 AM
Are you using standalone mode, i.e. the "Spark" service and not YARN? Check that the workers are running and healthy, and whether executors registered at startup. Double-check that the workers actually have the memory you think they do; if not, they may not be accepting work because they cannot allocate the memory you are requesting.
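For what it's worth, here is a sketch of how I would make the resource request explicit in standalone mode (hosts, paths and values are illustrative, not a prescription for your cluster). As far as I know, --num-executors only applies to YARN in this Spark version; in standalone mode the executor count follows from the registered workers and from --total-executor-cores, and --executor-memory must fit within each worker's configured memory:
----------------------------------
$ sudo -u hdfs spark-submit \
    --master spark://myhost:7077 \
    --class org.xyz.spark.examples.mllib.TallSkinnySVD \
    --driver-memory 6g \
    --executor-memory 2g \
    --total-executor-cores 8 \
    target/sparkExamples-0.0.1-SNAPSHOT.jar \
    hdfs://<hdfshost>:8020/user/shailesh/RData/data7K.csv false
----------------------------------
The standalone Master web UI (the page CM links to for the Spark service) lists how much memory each registered worker can offer, which needs to be at least the requested executor memory.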
Created 10-16-2014 02:45 PM
Thanks Srowen.
After tuning the Java heap memory for all nodes through CM and increasing the driver and worker memory to 6 GB and 3 GB respectively, the "TaskSchedulerImpl: Initial job has not accepted any resources" issue got resolved.
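In case it helps anyone else, these are roughly the knobs involved. The exact field names in CM differ, but they map to the usual spark-env.sh / spark-defaults.conf settings (values here just mirror what I described above):
----------------------------------
# spark-env.sh on each worker: memory the worker may hand out to executors
export SPARK_WORKER_MEMORY=3g

# spark-defaults.conf (or --driver-memory / --executor-memory on spark-submit)
spark.driver.memory    6g
spark.executor.memory  3g
----------------------------------
The executor memory has to fit inside the worker memory, otherwise the scheduler keeps printing the "Initial job has not accepted any resources" warning.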
Regards,
Shailesh
Created 10-15-2014 12:09 AM
TallSkinnySVD calls RowMatrix.computeSVD, which by default decides whether to run the computation locally on the driver or distributed across the executors. Depending on your data, the defaults may cause the driver to do all of the work.
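For reference, a minimal sketch of that call path, assuming a CSV of numeric rows (the object name, the value of k and the parsing are illustrative assumptions, not the actual example code):
----------------------------------
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix

object TallSkinnySVDSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("TallSkinnySVDSketch"))

    // Each CSV line becomes one dense row vector of the distributed matrix.
    val rows = sc.textFile(args(0))
      .map(line => Vectors.dense(line.split(',').map(_.toDouble)))

    val mat = new RowMatrix(rows)

    // k = number of leading singular values; computeU = false skips the left singular vectors.
    // computeSVD picks a local (driver-side) or distributed computation internally, based on
    // the number of columns and k, so a narrow matrix may be computed entirely on the driver.
    val svd = mat.computeSVD(k = 20, computeU = false)

    println("Singular values: " + svd.s)
    sc.stop()
  }
}
----------------------------------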