A Spark job fails with INTERNAL_FAILURE. On the Workload Analytics (WA) page for the failed job, the following message is reported:
org.apache.spark.SparkException: Application application_1503474791091_0002 finished with failed status
Because Telemetry Publisher did not retrieve the application log due to a known bug, we have to examine the logs for application_1503474791091_0002 directly; they are stored in the user's S3 bucket.
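As a hedged sketch of that step (the bucket name and log prefix below are placeholders, and it assumes the cluster has the S3A connector and S3 credentials configured), the aggregated logs can be read straight from spark-shell:

import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import scala.io.Source

// Hypothetical bucket and prefix; substitute the actual log location.
val logDir = new Path("s3a://my-log-bucket/application_1503474791091_0002/")

val fs = FileSystem.get(new URI("s3a://my-log-bucket"), new Configuration())

// Print every aggregated log file under the application's directory.
fs.listStatus(logDir).foreach { status =>
  val in = fs.open(status.getPath)
  try Source.fromInputStream(in).getLines().foreach(println)
  finally in.close()
}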
If the following exception is found, it indicates that the application failed to resolve a dependency on the Hadoop classpath:
17/08/24 13:13:33 INFO ApplicationMaster: Preparing Local resources
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.tracing.TraceUtils.wrapHadoopConf(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/htrace/core/HTraceConfiguration;
	at org.apache.hadoop.fs.FsTracer.get(FsTracer.java:42)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:687)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:671)
	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:155)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
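A NoSuchMethodError like this means the class was compiled against one version of a dependency but a different version was loaded at runtime. A minimal sketch for checking which jar actually supplied the suspect classes (run it in the same environment as the failing job, for example spark-shell on the cluster) is:

// Print the jar from which each suspect class was loaded.
Seq(
  "org.apache.hadoop.tracing.TraceUtils",
  "org.apache.htrace.core.HTraceConfiguration"
).foreach { name =>
  val location =
    try {
      // getCodeSource may be null for bootstrap-loaded classes.
      Option(Class.forName(name).getProtectionDomain.getCodeSource)
        .map(_.getLocation.toString)
        .getOrElse("bootstrap classpath")
    } catch {
      case e: ClassNotFoundException => s"not found: ${e.getMessage}"
    }
  println(s"$name -> $location")
}

If the printed location points at a jar bundled inside the application rather than the cluster's own Hadoop libraries, the dependency conflict described below is the likely cause.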
This most likely occurred because the jar was built against another Hadoop distribution's repository, for example Amazon EMR (Elastic MapReduce).
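A common remedy, sketched here as an illustrative sbt configuration (the project name and artifact versions are hypothetical; match them to the target cluster), is to mark the Hadoop and Spark artifacts as provided so the application jar does not bundle classes from another distribution:

// build.sbt -- illustrative versions; align them with the target cluster.
name := "my-spark-job"
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  // "provided" keeps these out of the assembly jar, so the cluster's
  // own Hadoop/Spark classes are used at runtime.
  "org.apache.spark"  %% "spark-core"    % "2.2.0" % "provided",
  "org.apache.hadoop" %  "hadoop-client" % "2.6.0" % "provided"
)

With provided scope, the assembled jar contains only the application's own classes, and the Hadoop classes already present on the cluster are resolved at runtime, avoiding the method-signature mismatch behind the NoSuchMethodError.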