
Using Spark to run a query

New Contributor

I am trying to get our ETL software to connect to a Hadoop DB. We can connect to the server: we uploaded test data to it and downloaded it back successfully. The issue is that when we try to execute a SQL query, we get:

ERROR: Error occurred during execution of the Spark application: Application application_1504199746026_12204 failed 2 times due to AM Container for appattempt_1504199746026_12204_000002 exited with exitCode: 1

Diagnostics: Exception from container-launch.
Container id: container_1504199746026_12204_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:600)
    at org.apache.hadoop.util.Shell.run(Shell.java:511)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:783)
    at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.__launchContainer__(LinuxContainerExecutor.java:371)
    at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

2 Replies

Hi @John Malinowski,

Can you please give more information about this? How are you running the Spark application (pyspark, spark-submit, etc.)?

Please also check that the SQL query doesn't have any errors.
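If you have shell access to an edge node, a quick way to rule out the query itself is to run it directly through the spark-sql CLI, outside of your ETL tool. (The table name below is a placeholder; substitute whatever your job actually queries.)

```shell
# Run the same SQL outside the third-party tool to isolate the failure.
# 'my_test_table' is a placeholder table name.
spark-sql --master yarn -e "SELECT * FROM my_test_table LIMIT 10"
```

If that succeeds, the SQL is fine and the problem is more likely in how the tool submits the Spark application.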

Thanks,

Aditya

New Contributor

I am running it through a third-party app called Lavastorm.

Here are the results when running the spark-submit command:

[jmalin200@server spark]$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --num-executors 3 --driver-memory 4g --executor-memory 2g --executor-cores 1 lib/spark-examples*.jar 10
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.3.2-1/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.3.2-1/spark/lib/spark-assembly-1.6.2.2.4.3.2-1-hadoop2.7.1.2.4.3.2-1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
17/09/21 18:10:45 WARN spark.SparkConf: The configuration key 'spark.yarn.applicationMaster.waitTries' has been deprecated as of Spark 1.3 and may be removed in the future. Please use the new key 'spark.yarn.am.waitTime' instead.
17/09/21 18:10:45 WARN spark.SparkConf: The configuration key 'spark.yarn.applicationMaster.waitTries' has been deprecated as of Spark 1.3 and may be removed in the future. Please use the new key 'spark.yarn.am.waitTime' instead.
17/09/21 18:10:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/09/21 18:10:47 INFO impl.TimelineClientImpl: Timeline service address: http://server.domain.net:8188/ws/v1/timeline/
17/09/21 18:10:48 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
17/09/21 18:10:48 INFO resource.N: Set a new configuration for the first time.
17/09/21 18:10:48 INFO Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
17/09/21 18:10:48 INFO resource.u: Scheduling statistics report every 2000 millisecs
17/09/21 18:10:49 WARN ipc.Client: Failed to connect to server: server.domain.net/172.27.30.134:8032: retries get failed due to exceeded maximum allowed retries number: 0
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:649)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:744)
    at org.apache.hadoop.ipc.Client$Connection.access$3000(Client.java:397)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
    at org.apache.hadoop.ipc.Client.call(Client.java:1431)
    at org.apache.hadoop.ipc.Client.call(Client.java:1392)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy17.getClusterMetrics(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:206)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy18.getClusterMetrics(Unknown Source)
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:501)
    at org.apache.spark.deploy.yarn.Client$anonfun$submitApplication$1.apply(Client.scala:130)
    at org.apache.spark.deploy.yarn.Client$anonfun$submitApplication$1.apply(Client.scala:130)
    at org.apache.spark.Logging$class.logInfo(Logging.scala:58)
    at org.apache.spark.deploy.yarn.Client.logInfo(Client.scala:62)
    at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:129)
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1109)
    at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1169)
    at org.apache.spark.deploy.yarn.Client.main(Client.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/09/21 18:10:49 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
17/09/21 18:10:49 INFO yarn.Client: Requesting a new application from cluster with 14 NodeManagers
17/09/21 18:10:49 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (16640 MB per container)
17/09/21 18:10:49 INFO yarn.Client: Will allocate AM container, with 4480 MB memory including 384 MB overhead
17/09/21 18:10:49 INFO yarn.Client: Setting up container launch context for our AM
17/09/21 18:10:49 INFO yarn.Client: Setting up the launch environment for our AM container
17/09/21 18:10:49 INFO yarn.Client: Using the spark assembly jar on HDFS because you are using HDP, defaultSparkAssembly: hdfs://serverdevcluster/hdp/apps/2.4.3.2-1/spark/spark-hdp-assembly.jar
17/09/21 18:10:49 INFO yarn.Client: Preparing resources for our AM container
17/09/21 18:10:49 INFO yarn.YarnSparkHadoopUtil: getting token for namenode: hdfs://serverdevcluster/user/jmalin200/.sparkStaging/application_1504199746026_11982
17/09/21 18:10:49 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 201986 for jmalin200 on ha-hdfs:serverdevcluster
17/09/21 18:10:51 INFO hive.metastore: Trying to connect to metastore with URI thrift://server.domain.net:9083
17/09/21 18:10:51 INFO hive.metastore: Connected to metastore.
17/09/21 18:10:53 INFO yarn.YarnSparkHadoopUtil: HBase class not found java.lang.ClassNotFoundException: org.apache.hadoop.hbase.HBaseConfiguration
17/09/21 18:10:53 INFO yarn.Client: Using the spark assembly jar on HDFS because you are using HDP, defaultSparkAssembly: hdfs://serverdevcluster/hdp/apps/2.4.3.2-1/spark/spark-hdp-assembly.jar
17/09/21 18:10:53 INFO yarn.Client: Source and destination file systems are the same. Not copying hdfs://serverdevcluster/hdp/apps/2.4.3.2-1/spark/spark-hdp-assembly.jar
17/09/21 18:10:53 INFO yarn.Client: Uploading resource file:/usr/hdp/2.4.3.2-1/spark/lib/spark-examples-1.6.2.2.4.3.2-1-hadoop2.7.1.2.4.3.2-1.jar -> hdfs://serverdevcluster/user/jmalin200/.sparkStaging/application_1504199746026_11982/spark-examples-1.6.2.2.4.3.2-1-hadoop2.7.1.2.4.3.2-1.jar
17/09/21 18:10:56 INFO yarn.Client: Uploading resource file:/etc/spark/conf/metrics.properties -> hdfs://serverdevcluster/user/jmalin200/.sparkStaging/application_1504199746026_11982/metrics.properties
17/09/21 18:10:56 INFO yarn.Client: Uploading resource file:/tmp/spark-be01299e-c11f-4a41-8552-53fcf4bdd6a6/__spark_conf__4717001082872230868.zip -> hdfs://serverdevcluster/user/jmalin200/.sparkStaging/application_1504199746026_11982/__spark_conf__4717001082872230868.zip
17/09/21 18:10:56 WARN yarn.Client: spark.yarn.am.extraJavaOptions will not take effect in cluster mode
17/09/21 18:10:56 INFO spark.SecurityManager: Changing view acls to: jmalin200
17/09/21 18:10:56 INFO spark.SecurityManager: Changing modify acls to: jmalin200
17/09/21 18:10:56 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(jmalin200); users with modify permissions: Set(jmalin200)
17/09/21 18:10:56 INFO yarn.Client: Submitting application 11982 to ResourceManager
17/09/21 18:10:57 INFO impl.YarnClientImpl: Submitted application application_1504199746026_11982
17/09/21 18:10:58 INFO yarn.Client: Application report for application_1504199746026_11982 (state: ACCEPTED)
17/09/21 18:10:58 INFO yarn.Client:
     client token: Token { kind: YARN_CLIENT_TOKEN, service: }
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1506017456811
     final status: UNDEFINED
     tracking URL: http://server.domain.net:8088/proxy/application_1504199746026_11982/
     user: jmalin200
17/09/21 18:10:59 INFO yarn.Client: Application report for application_1504199746026_11982 (state: ACCEPTED)
17/09/21 18:11:00 INFO yarn.Client: Application report for application_1504199746026_11982 (state: ACCEPTED)
17/09/21 18:11:01 INFO yarn.Client: Application report for application_1504199746026_11982 (state: ACCEPTED)
17/09/21 18:11:02 INFO yarn.Client: Application report for application_1504199746026_11982 (state: ACCEPTED)
17/09/21 18:11:03 INFO yarn.Client: Application report for application_1504199746026_11982 (state: ACCEPTED)
17/09/21 18:11:04 INFO yarn.Client: Application report for application_1504199746026_11982 (state: ACCEPTED)
17/09/21 18:11:05 INFO yarn.Client: Application report for application_1504199746026_11982 (state: ACCEPTED)
17/09/21 18:11:06 INFO yarn.Client: Application report for application_1504199746026_11982 (state: ACCEPTED)
17/09/21 18:11:07 INFO yarn.Client: Application report for application_1504199746026_11982 (state: ACCEPTED)
17/09/21 18:11:08 INFO yarn.Client: Application report for application_1504199746026_11982 (state: ACCEPTED)
17/09/21 18:11:09 INFO yarn.Client: Application report for application_1504199746026_11982 (state: ACCEPTED)
17/09/21 18:11:10 INFO yarn.Client: Application report for application_1504199746026_11982 (state: ACCEPTED)
17/09/21 18:11:11 INFO yarn.Client: Application report for application_1504199746026_11982 (state: RUNNING)
17/09/21 18:11:11 INFO yarn.Client:
     client token: Token { kind: YARN_CLIENT_TOKEN, service: }
     diagnostics: N/A
     ApplicationMaster host: 172.27.30.141
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1506017456811
     final status: UNDEFINED
     tracking URL: http://server.domain:8088/proxy/application_1504199746026_11982/
     user: jmalin200
17/09/21 18:11:12 INFO yarn.Client: Application report for application_1504199746026_11982 (state: RUNNING)
17/09/21 18:11:13 INFO yarn.Client: Application report for application_1504199746026_11982 (state: RUNNING)
17/09/21 18:11:14 INFO yarn.Client: Application report for application_1504199746026_11982 (state: RUNNING)
17/09/21 18:11:15 INFO yarn.Client: Application report for application_1504199746026_11982 (state: RUNNING)
17/09/21 18:11:16 INFO yarn.Client: Application report for application_1504199746026_11982 (state: RUNNING)
17/09/21 18:11:17 INFO yarn.Client: Application report for application_1504199746026_11982 (state: RUNNING)
17/09/21 18:11:18 INFO yarn.Client: Application report for application_1504199746026_11982 (state: RUNNING)
17/09/21 18:11:19 INFO yarn.Client: Application report for application_1504199746026_11982 (state: RUNNING)
17/09/21 18:11:20 INFO yarn.Client: Application report for application_1504199746026_11982 (state: RUNNING)
17/09/21 18:11:21 INFO yarn.Client: Application report for application_1504199746026_11982 (state: RUNNING)
17/09/21 18:11:22 INFO yarn.Client: Application report for application_1504199746026_11982 (state: RUNNING)
17/09/21 18:11:23 INFO yarn.Client: Application report for application_1504199746026_11982 (state: FINISHED)
17/09/21 18:11:23 INFO yarn.Client:
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 172.27.30.141
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1506017456811
     final status: SUCCEEDED
     tracking URL: http://server.domain.net:8088/proxy/application_1504199746026_11982/
     user: jmalin200
17/09/21 18:11:23 INFO util.ShutdownHookManager: Shutdown hook called
17/09/21 18:11:23 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-be01299e-c11f-4a41-8552-53fcf4bdd6a6
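So the example job runs to SUCCEEDED from the command line, and the cluster itself can launch Spark applications. For the original failure, the ApplicationMaster container's own stderr usually shows the real cause behind exit code 1; assuming log aggregation is enabled on the cluster, it can be pulled with the YARN CLI (application id taken from the error message above):

```shell
# Fetch the aggregated container logs for the failed application;
# the AM container's stderr is where the launch error is reported.
yarn logs -applicationId application_1504199746026_12204
```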
