Member since: 06-01-2016
Posts: 15
Kudos Received: 1
Solutions: 0
06-02-2016
04:02 PM
3 Kudos
I have faced this issue numerous times as well:

"WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources"

The problem was Dynamic Resource Allocation over-allocating. After turning off Dynamic Resource Allocation and then specifying the number of executors, executor memory, and cores explicitly, my jobs were running.

Turn off Dynamic Resource Allocation:

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("my_job_name")
        .set("spark.shuffle.service.enabled", "false")
        .set("spark.dynamicAllocation.enabled", "false")
        .set("spark.io.compression.codec", "snappy")
        .set("spark.rdd.compress", "true"))
sc = SparkContext(conf=conf)

Give the values with spark-submit (you could also set these in SparkConf as well). Note that the resource options have to come before the application script, otherwise spark-submit passes them to the script as application arguments:

/usr/hdp/2.3.4.0-3485/spark/bin/spark-submit --master yarn --deploy-mode client --driver-memory 3g --executor-memory 3g --num-executors 4 --executor-cores 2 /home/ec2-user/scripts/validate_employees.py
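Since the same resource settings can also go into SparkConf, here is a minimal sketch of that variant; the executor values simply mirror the spark-submit flags above and are illustrative, not tuned recommendations:

from pyspark import SparkConf, SparkContext

# Illustrative values mirroring the spark-submit flags above; adjust for your cluster.
conf = (SparkConf()
        .setAppName("my_job_name")
        .set("spark.dynamicAllocation.enabled", "false")
        .set("spark.shuffle.service.enabled", "false")
        .set("spark.executor.instances", "4")   # equivalent of --num-executors
        .set("spark.executor.memory", "3g")     # equivalent of --executor-memory
        .set("spark.executor.cores", "2"))      # equivalent of --executor-cores
# Driver memory in client mode is best left to the --driver-memory flag,
# since the driver JVM is already running by the time SparkConf is read.
sc = SparkContext(conf=conf)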
06-02-2016
02:26 PM
@omar harb If the RM UI is showing enough available resources, then I would suggest stopping the Spark application and running it again with the options below. Note that the resource options should come before the jar, otherwise spark-submit treats them as application arguments:

spark-submit --class com.Spark.MainClass --master yarn-client --executor-cores 2 --executor-memory 2g --num-executors 2 /home/Test-0.0.1-SNAPSHOT.jar

If you still see the same issue, then kindly share the Spark ApplicationMaster (AM) logs from this screen for that job. clusterurl.png
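If the Resource Manager UI is hard to reach, here is a sketch of pulling the same AM logs with the YARN CLI; the application id is a placeholder you would replace with your job's id:

# Find the application id of the Spark job
yarn application -list
# Fetch the aggregated container logs, which include the Spark ApplicationMaster log
yarn logs -applicationId <application_id>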
06-01-2016
03:22 PM
@omar harb This post explains how to reset the admin password and also gives some background on the decision: https://community.hortonworks.com/questions/20960/no-admin-permission-for-the-latest-sandbox-of-24.html