Created on 02-08-2016 09:01 PM - edited 09-16-2022 03:03 AM
I am trying to run an application in yarn cluster mode.
Here are the settings in the shell script:
spark-submit --class "com.Myclass" \
--num-executors 2 \
--executor-cores 2 \
--master yarn \
--supervise \
--deploy-mode cluster \
../target/
My application is working fine in yarn-client and local mode.
EXCERPT FROM SPARK-SUBMIT UNDER YARN CLUSTER MODE
&&&&&&&&&&&&&&&&&&&&&& HADOOP HOME
/usr/lib/hadoop
&&&&&&&&&&&&&&&&&&&&&& HADOOP_CONF_DIR
/usr/lib/hadoop/etc/hadoop
...
Diagnostics: Exception from container-launch.
Container id: container_1454984479786_0006_02_000001
Exit code: 15
Stack trace: ExitCodeException exitCode=15:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:543)
at org.apache.hadoop.util.Shell.run(Shell.java:460)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:720)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:210)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
Further, I am getting the following error:
ERROR DETAILS FROM yarn logs -applicationId
INFO : org.apache.spark.deploy.yarn.ApplicationMaster - Registered signal handlers for [TERM, HUP, INT]
DEBUG: org.apache.hadoop.util.Shell - Failed to detect a valid hadoop home directory
java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:307)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:332)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79)
at org.apache.hadoop.yarn.conf.YarnConfiguration.<clinit>(YarnConfiguration.java:590)
at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.newConfiguration(YarnSparkHadoopUtil.scala:62)
at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:52)
at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.<init>(YarnSparkHadoopUtil.scala:47)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:374)
at org.apache.spark.deploy.SparkHadoopUtil$.liftedTree1$1(SparkHadoopUtil.scala:386)
at org.apache.spark.deploy.SparkHadoopUtil$.yarn$lzycompute(SparkHadoopUtil.scala:384)
at org.apache.spark.deploy.SparkHadoopUtil$.yarn(SparkHadoopUtil.scala:384)
at org.apache.spark.deploy.SparkHadoopUtil$.get(SparkHadoopUtil.scala:401)
at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:623)
at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
I tried modifying spark-env.sh as follows, and I see HADOOP_HOME logged, but I am still getting the error.
I added the following entries to spark-env.sh:
export HADOOP_HOME="/usr/lib/hadoop"
echo "&&&&&&&&&&&&&&&&&&&&&& HADOOP HOME "
echo "$HADOOP_HOME"
export HADOOP_CONF_DIR="$HADOOP_HOME/etc/hadoop"
echo "&&&&&&&&&&&&&&&&&&&&&& HADOOP_CONF_DIR "
echo "$HADOOP_CONF_DIR"
Created 02-15-2016 05:36 AM
Hi,
Could you please provide details such as the kind of installation it is (RPM-based, tarball, or automated) and which version of CDH you are using?
Created on 06-07-2020 12:37 AM - edited 06-07-2020 12:42 AM
I have the same issue. I am using Spark 2.4.4, Hive 3.1.2, and Hadoop 3.2.1 in a Scala sbt project.
The error message is given below:
13:03:38.626 [main] DEBUG org.apache.hadoop.util.Shell - Failed to detect a valid hadoop home directory
java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset.
at org.apache.hadoop.util.Shell.checkHadoopHomeInner(Shell.java:469) ~[hadoop-common-3.1.0.jar:na]
at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:440) ~[hadoop-common-3.1.0.jar:na]
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:517) ~[hadoop-common-3.1.0.jar:na]
at org.apache.hadoop.hive.conf.HiveConf$ConfVars.findHadoopBinary(HiveConf.java:2327) [hive-exec-1.2.1.spark2.jar:1.2.1.spark2]
at org.apache.hadoop.hive.conf.HiveConf$ConfVars.<clinit>(HiveConf.java:365) [hive-exec-1.2.1.spark2.jar:1.2.1.spark2]
at org.apache.hadoop.hive.conf.HiveConf.<clinit>(HiveConf.java:105) [hive-exec-1.2.1.spark2.jar:1.2.1.spark2]
at java.lang.Class.forName0(Native Method) [na:1.8.0_252]
at java.lang.Class.forName(Class.java:348) [na:1.8.0_252]
at org.apache.spark.util.Utils$.classForName(Utils.scala:238) [spark-core_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.SparkSession$.hiveClassesArePresent(SparkSession.scala:1117) [spark-sql_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.SparkSession$Builder.enableHiveSupport(SparkSession.scala:866) [spark-sql_2.11-2.4.4.jar:2.4.4]
at UpsertFeature$.<init>(UpsertFeature.scala:20) [classes/:na]
at UpsertFeature$.<clinit>(UpsertFeature.scala) [classes/:na]
at UpsertFeature.main(UpsertFeature.scala) [classes/:na]
13:03:38.788 [main] DEBUG org.apache.hadoop.util.Shell - setsid exited with exit code 0
Exception in thread "main" java.lang.ExceptionInInitializerError
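A plain sbt run never sources spark-env.sh, so HADOOP_HOME (or the hadoop.home.dir JVM property, the two places Shell checks according to the exception above) has to be set in the launching environment. A minimal sketch, assuming Hadoop is unpacked at /opt/hadoop-3.2.1 (a hypothetical path, adjust to your actual install):

# Point Hadoop's Shell utility at a local Hadoop install before launching sbt
export HADOOP_HOME=/opt/hadoop-3.2.1   # hypothetical location of the Hadoop 3.2.1 tarball
export PATH="$PATH:$HADOOP_HOME/bin"
sbt run

Equivalently, launching sbt with -Dhadoop.home.dir=/opt/hadoop-3.2.1 sets the system property on the sbt JVM, which covers a non-forked run. Separately, the trace shows hadoop-common-3.1.0.jar and hive-exec-1.2.1.spark2.jar on the classpath rather than the Hadoop 3.2.1 and Hive 3.1.2 versions mentioned, which may be worth reconciling in build.sbt.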