
Phoenix driver not found in Spark job

Rising Star

I've created a Spark streaming application (and I swear it was working a month or two ago) and it runs fine in Eclipse. When I run the job with spark-submit and specify --jars including my application jars and /usr/hdp/current/phoenix-client/phoenix-client.jar (or skip the symlink and use /usr/hdp/current/phoenix-4.7.0.2.5.3.0-37-client.jar directly), I get an error indicating ClassNotFoundException: org.apache.phoenix.jdbc.PhoenixDriver.

In the YARN log output, I can see the following entries in directory.info:

lrwxrwxrwx 1 yarn hadoop 70 Mar 7 15:37 phoenix-client.jar -> /hadoop/yarn/local/usercache/jwatson/filecache/2288/phoenix-client.jar

3016594 100180 -r-x------ 1 yarn hadoop 102581542 Mar 7 15:37 ./phoenix-client.jar

In launch_container.sh I see the following:

ln -sf "/hadoop/yarn/local/usercache/jwatson/filecache/2288/phoenix-client.jar" "phoenix-client.jar"

So it seems the right things are happening. I finally broke down and added the following to the driver to see which jars were actually visible on the classpath:

ClassLoader cl = ClassLoader.getSystemClassLoader();
URL[] urls = ((URLClassLoader) cl).getURLs();
for (URL url : urls)
    System.out.println(url.getFile());

And it shows none of the jar files I added via the --jars option to spark-submit. What am I missing?
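A minimal sketch of a broader check (my assumption, not something from the run above): if the --jars entries end up on a child classloader rather than the system classloader, getSystemClassLoader() would not show them, so this version walks up from the thread's context classloader instead:

    // Walk the classloader chain from the thread's context classloader upward,
    // printing any URLs each URLClassLoader can see.
    ClassLoader cl = Thread.currentThread().getContextClassLoader();
    while (cl != null) {
        System.out.println("classloader: " + cl);
        if (cl instanceof URLClassLoader) {
            for (URL url : ((URLClassLoader) cl).getURLs()) {
                System.out.println("  " + url.getFile());
            }
        }
        cl = cl.getParent();
    }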

As a corollary, should we just build a fat jar and put everything in it? What's the most efficient way to avoid copying jar files that already exist on the cluster nodes (HDP 2.5.3)?

13 REPLIES

Rising Star

For kicks, I converted from Spark 2 to Spark 1; same error.

Super Collaborator

@Jeff Watson

Can you give us the spark-submit command and also attach the console output here for us to check?

Rising Star

Here is the (wordy) spark-submit command (line breaks added for clarity):

spark-submit --conf spark.driver.extraClassPath=__app__.jar:e2parser-1.0.jar:f18parser-1.0.jar:mdanparser-1.0.jar:regimerecog-1.0.jar:tsvparser-1.0.jar:xmlparser-1.0.jar:log4j.script.properties:common-1.0.jar:aws-java-sdk-1.11.40.jar:aws-java-sdk-s3-1.11.40.jar:jackson-annotations-2.6.5.jar:jackson-core-2.6.5.jar:jackson-databind-2.6.5.jar:jackson-module-paranamer-2.6.5.jar:jackson-module-scala_2.10-2.6.5.jar:miglayout-swing-4.2.jar:commons-configuration-1.6.jar:xml-security-impl-1.0.jar:metrics-core-2.2.0.jar:jcommon-1.0.0.jar:ojdbc6.jar:jopt-simple-4.5.jar:ucanaccess-3.0.1.jar:httpcore-nio-4.4.5.jar:nifi-site-to-site-client-1.0.0.jar:nifi-spark-receiver-1.0.0.jar:commons-compiler-2.7.8.jar:janino-2.7.8.jar:hsqldb-2.3.1.jar:pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar:slf4j-api-1.7.21.jar:slf4j-log4j12-1.7.21.jar:slf4j-simple-1.7.21.jar:snappy-java-1.1.1.7.jar:snakeyaml-1.7.jar:/usr/hdp/current/hadoop-client/client/hadoop-common.jar:/usr/hdp/current/hadoop-client/client/hadoop-mapreduce-client-core.jar:/usr/hdp/current/hadoop-client/client/jetty-util.jar:/usr/hdp/current/hadoop-client/client/netty-all-4.0.23.Final.jar:/usr/hdp/current/hadoop-client/client/paranamer-2.3.jar:/usr/hdp/current/hadoop-client/lib/commons-cli-1.2.jar:/usr/hdp/current/hadoop-client/lib/httpclient-4.5.2.jar:/usr/hdp/current/hadoop-client/lib/jetty-6.1.26.hwx.jar:/usr/hdp/current/hadoop-client/lib/joda-time-2.8.1.jar:/usr/hdp/current/hadoop-client/lib/log4j-1.2.17.jar:/usr/hdp/current/hbase-client/lib/hbase-client.jar:/usr/hdp/current/hbase-client/lib/hbase-common.jar:/usr/hdp/current/hbase-client/lib/hbase-hadoop-compat.jar:/usr/hdp/current/hbase-client/lib/hbase-protocol.jar:/usr/hdp/current/hbase-client/lib/hbase-server.jar:/usr/hdp/current/hbase-client/lib/protobuf-java-2.5.0.jar:/usr/hdp/current/hive-client/lib/antlr-runtime-3.4.jar:/usr/hdp/current/hive-client/lib/commons-collections-3.2.2.jar:/usr/hdp/current/hive-client/lib/commons-dbcp-1.4.jar:/usr/hdp/current/hive-client/lib/commons-pool-1.5.4.jar:/usr/hdp/current/hive-client/lib/datanucleus-api-jdo-4.2.1.jar:/usr/hdp/current/hive-client/lib/datanucleus-core-4.1.6.jar:/usr/hdp/current/hive-client/lib/datanucleus-rdbms-4.1.7.jar:/usr/hdp/current/hive-client/lib/geronimo-jta_1.1_spec-1.1.1.jar:/usr/hdp/current/hive-client/lib/hive-exec.jar:/usr/hdp/current/hive-client/lib/hive-jdbc.jar:/usr/hdp/current/hive-client/lib/hive-metastore.jar:/usr/hdp/current/hive-client/lib/jdo-api-3.0.1.jar:/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar:/usr/hdp/current/phoenix-client/phoenix-client.jar:/usr/hdp/current/spark-client/lib/spark-assembly-1.6.2.2.5.3.0-37-hadoop2.7.3.2.5.3.0-37.jar \

--conf spark.executor.extraClassPath=<same-as-above> --master yarn --deploy-mode cluster --class <app.class.name> <my-jar-file>
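For comparison, a stripped-down sketch of the shape the HDP Phoenix-on-Spark documentation suggests, where only the node-local phoenix-client.jar goes on the driver and executor extra classpaths (the <application-dependency-jars> placeholder is mine; the class and jar placeholders are the same as above):

    spark-submit \
      --master yarn --deploy-mode cluster \
      --conf spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client.jar \
      --conf spark.executor.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client.jar \
      --jars <application-dependency-jars> \
      --class <app.class.name> <my-jar-file>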

Rising Star

Here is the output:

2017-06-28 21:28:13 INFO  ReloadInputReader          Connection: jdbc:phoenix:master:2181:/hbase-unsecure;
2017-06-28 21:28:13 INFO  ReloadInputReader          Ingest DBC: jdbc:phoenix:master:2181:/hbase-unsecure
2017-06-28 21:28:13 INFO  ReloadInputReader          driver host name: master.vm.local
2017-06-28 21:28:13 INFO  ReloadInputReader          Zookeeper quorum: master
2017-06-28 21:28:13 INFO  ReloadInputReader          Reload query: SELECT FILE_NAME, TM, DATASET, WORKER_NAME, FILE_CONTENTS FROM JOBS.FILE_CONTENTS WHERE FILE_NAME in (SELECT FILE_NAME FROM JOBS.file_loaded WHERE file_name='B162836D20090316T0854.AAD')
2017-06-28 21:28:13 INFO  MemoryStore                Block broadcast_1 stored as values in memory (estimated size 428.6 KB, free 465.7 KB)
2017-06-28 21:28:13 INFO  MemoryStore                Block broadcast_1_piece0 stored as bytes in memory (estimated size 34.9 KB, free 500.7 KB)
2017-06-28 21:28:13 INFO  BlockManagerInfo           Added broadcast_1_piece0 in memory on 192.168.56.2:51844 (size: 34.9 KB, free: 457.8 MB)
2017-06-28 21:28:13 INFO  SparkContext               Created broadcast 1 from newAPIHadoopRDD at ReloadInputReader.java:135
2017-06-28 21:28:14 INFO  SparkContext               Starting job: foreach at ReloadInputReader.java:137
2017-06-28 21:28:14 INFO  DAGScheduler               Got job 0 (foreach at ReloadInputReader.java:137) with 1 output partitions
2017-06-28 21:28:14 INFO  DAGScheduler               Final stage: ResultStage 0 (foreach at ReloadInputReader.java:137)
2017-06-28 21:28:14 INFO  DAGScheduler               Parents of final stage: List()
2017-06-28 21:28:14 INFO  DAGScheduler               Missing parents: List()
2017-06-28 21:28:14 INFO  DAGScheduler               Submitting ResultStage 0 (NewHadoopRDD[0] at newAPIHadoopRDD at ReloadInputReader.java:135), which has no missing parents
2017-06-28 21:28:14 INFO  MemoryStore                Block broadcast_2 stored as values in memory (estimated size 2.9 KB, free 503.6 KB)
2017-06-28 21:28:14 INFO  MemoryStore                Block broadcast_2_piece0 stored as bytes in memory (estimated size 1845.0 B, free 505.4 KB)
2017-06-28 21:28:14 INFO  BlockManagerInfo           Added broadcast_2_piece0 in memory on 192.168.56.2:51844 (size: 1845.0 B, free: 457.8 MB)
2017-06-28 21:28:14 INFO  SparkContext               Created broadcast 2 from broadcast at DAGScheduler.scala:1008
2017-06-28 21:28:14 INFO  DAGScheduler               Submitting 1 missing tasks from ResultStage 0 (NewHadoopRDD[0] at newAPIHadoopRDD at ReloadInputReader.java:135)
2017-06-28 21:28:14 INFO  YarnClusterScheduler       Adding task set 0.0 with 1 tasks
2017-06-28 21:28:14 INFO  TaskSetManager             Starting task 0.0 in stage 0.0 (TID 0, master.vm.local, partition 0,PROCESS_LOCAL, 2494 bytes)
2017-06-28 21:28:18 INFO  BlockManagerInfo           Added broadcast_2_piece0 in memory on master.vm.local:40246 (size: 1845.0 B, free: 511.1 MB)
2017-06-28 21:28:18 INFO  BlockManagerInfo           Added broadcast_1_piece0 in memory on master.vm.local:40246 (size: 34.9 KB, free: 511.1 MB)
2017-06-28 21:28:20 WARN  TaskSetManager             Lost task 0.0 in stage 0.0 (TID 0, master.vm.local): java.lang.RuntimeException: java.sql.SQLException: No suitable driver found for jdbc:phoenix:master:2181:/hbase-unsecure;
    at org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:134)
    at org.apache.phoenix.mapreduce.PhoenixInputFormat.createRecordReader(PhoenixInputFormat.java:71)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:156)
    at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:129)
    at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:64)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: No suitable driver found for jdbc:phoenix:master:2181:/hbase-unsecure;
    at java.sql.DriverManager.getConnection(DriverManager.java:689)
    at java.sql.DriverManager.getConnection(DriverManager.java:208)
    at org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:98)
    at org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:57)
    at org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:116)
    ... 12 more

Rising Star

FYI, I added code that, in other parts of the app, forces the Phoenix JDBC driver to load, but it doesn't seem to be working in this context. The call to ConnectionUtil.getInputConnection(configuration, props) below is the same method I tracked down in the stack trace; I call it here to verify I'm getting the correct connection (it does return the valid JDBC URL).

        final Configuration configuration = HBaseConfiguration.create();
        configuration.set(HConstants.ZOOKEEPER_CLIENT_PORT, "2181");
        configuration.set(HConstants.ZOOKEEPER_ZNODE_PARENT, quorumParentNode);
        configuration.set(HConstants.ZOOKEEPER_QUORUM, quorum);
        Properties props = new Properties();
        Connection conn = ConnectionUtil.getInputConnection(configuration, props);
        log.info("Connection: " + conn.getMetaData().getURL());
        log.info("Ingest DBC: " + ingestDbConn);

        log.info("driver host name: " + driverHost);
        log.info("Zookeeper quorum: " + quorum);
        log.info("Reload query: " + sqlQuery);
        PhoenixConfigurationUtil.setPhysicalTableName(configuration, FileContentsWritable.TABLE_NAME);
        PhoenixConfigurationUtil.setInputTableName(configuration, FileContentsWritable.TABLE_NAME);
        PhoenixConfigurationUtil.setOutputTableName(configuration, FileContentsWritable.TABLE_NAME);
        PhoenixConfigurationUtil.setInputQuery(configuration, sqlQuery);
        PhoenixConfigurationUtil.setInputClass(configuration, FileContentsWritable.class);
        PhoenixConfigurationUtil.setUpsertColumnNames(configuration, FileContentsWritable.COLUMN_NAMES);
        // Force-load the Phoenix JDBC driver in the driver JVM before building the RDD.
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        @SuppressWarnings("unchecked")
        JavaPairRDD<NullWritable, FileContentsWritable> fileContentsRDD =
                sparkContext.newAPIHadoopRDD(configuration, PhoenixInputFormat.class,
                        NullWritable.class, FileContentsWritable.class);

        fileContentsRDD.foreach(rdd ->
        {
            Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
            FileContentsBean fileContentsBean = rdd._2.getFileContentsBean();
            :
            :
        });

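As an aside, a small sketch of an alternative I have not actually tried here: registering the driver with DriverManager explicitly instead of relying on Class.forName alone (this assumes PhoenixDriver.INSTANCE is exposed by this Phoenix version):

        // Explicit registration, in case DriverManager's classloader check is what fails.
        java.sql.DriverManager.registerDriver(org.apache.phoenix.jdbc.PhoenixDriver.INSTANCE);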
Rising Star

Here is the slightly longer exception that is logged by the outer part of my application:

2017-06-28 21:28:25 ERROR AppDriver                  Fatal exception encountered.  Job aborted.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, master.vm.local): java.lang.RuntimeException: java.sql.SQLException: No suitable driver found for jdbc:phoenix:master:2181:/hbase-unsecure;
    at org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:134)
    at org.apache.phoenix.mapreduce.PhoenixInputFormat.createRecordReader(PhoenixInputFormat.java:71)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:156)
    at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:129)
    at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:64)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: No suitable driver found for jdbc:phoenix:master:2181:/hbase-unsecure;
    at java.sql.DriverManager.getConnection(DriverManager.java:689)
    at java.sql.DriverManager.getConnection(DriverManager.java:208)
    at org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:98)
    at org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:57)
    at org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:116)
    ... 12 more

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1433)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1421)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1420)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1420)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:801)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:801)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:801)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1642)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1601)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1590)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:622)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1856)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1869)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1882)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1953)
    at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:919)
    at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:917)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:323)
    at org.apache.spark.rdd.RDD.foreach(RDD.scala:917)
    at org.apache.spark.api.java.JavaRDDLike$class.foreach(JavaRDDLike.scala:332)
    at org.apache.spark.api.java.AbstractJavaRDDLike.foreach(JavaRDDLike.scala:46)
    at mil.navy.navair.sdr.common.framework.ReloadInputReader.readInputInDriver(ReloadInputReader.java:137)
Caused by: java.lang.RuntimeException: java.sql.SQLException: No suitable driver found for jdbc:phoenix:master:2181:/hbase-unsecure;
    at org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:134)
    at org.apache.phoenix.mapreduce.PhoenixInputFormat.createRecordReader(PhoenixInputFormat.java:71)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:156)
    at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:129)
    at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:64)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Rising Star

One last point: the driver, upstream of the code above, connects to Phoenix successfully during app startup. The code above is where we query Phoenix with the SQL shown to pull rows and kick off an RDD per row returned. It seems we enter a different context in the call to sparkContext.newAPIHadoopRDD() and the foreach(rdd -> ...), and the stack trace gives me the impression we are (duh) somewhere between the driver and the executors that are trying to instantiate the Phoenix driver.

In other parts of the code I had to add a Class.forName("org.apache.phoenix.jdbc.PhoenixDriver") call to get rid of this exception, so I added that call before creating the Java Spark context, before the call to newAPIHadoopRDD(), and at the start of the foreach(rdd -> ...), but to no avail.
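A diagnostic sketch I've been considering (illustrative only, reusing fileContentsRDD from the snippet above): since the Class.forName calls in the driver don't affect the executor JVMs, run the check inside a task and look at the executor container's stdout to see which JDBC drivers DriverManager can actually see there:

        // Runs on an executor; lists the drivers DriverManager has registered in that JVM.
        fileContentsRDD.foreachPartition(iter ->
        {
            java.util.Enumeration<java.sql.Driver> drivers = java.sql.DriverManager.getDrivers();
            while (drivers.hasMoreElements())
                System.out.println("executor sees driver: " + drivers.nextElement().getClass().getName());
        });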

Rising Star

One final note: this code runs successfully in Eclipse using --master local[*]; it's only in YARN cluster mode that things break down. Go figure.

Rising Star

I forgot that I ripped the --jars part out of the spark-submit above because the text was too long; here it is. Some other complaints about missing database drivers that I'm seeing on Stack Overflow point to the driver not being included in the classpath. As you can see in --conf spark.driver.extraClassPath, --conf spark.executor.extraClassPath, and --jars, I tried to provide the /usr/hdp/current/phoenix-client/phoenix-client.jar driver in all contexts.

spark-submit --jars /home/jwatson/sdr/bin/e2parser-1.0.jar,/home/jwatson/sdr/bin/f18parser-1.0.jar,/home/jwatson/sdr/bin/mdanparser-1.0.jar,/home/jwatson/sdr/bin/regimerecog-1.0.jar,/home/jwatson/sdr/bin/tsvparser-1.0.jar,/home/jwatson/sdr/bin/xmlparser-1.0.jar,/home/jwatson/sdr/bin/aws-java-sdk-1.11.40.jar,/home/jwatson/sdr/bin/aws-java-sdk-s3-1.11.40.jar,/home/jwatson/sdr/bin/jackson-annotations-2.6.5.jar,/home/jwatson/sdr/bin/jackson-core-2.6.5.jar,/home/jwatson/sdr/bin/jackson-databind-2.6.5.jar,/home/jwatson/sdr/bin/jackson-module-paranamer-2.6.5.jar,/home/jwatson/sdr/bin/jackson-module-scala_2.10-2.6.5.jar,/home/jwatson/sdr/bin/miglayout-swing-4.2.jar,/home/jwatson/sdr/bin/commons-configuration-1.6.jar,/home/jwatson/sdr/bin/xml-security-impl-1.0.jar,/home/jwatson/sdr/bin/metrics-core-2.2.0.jar,/home/jwatson/sdr/bin/jcommon-1.0.0.jar,/home/jwatson/sdr/bin/ojdbc6.jar,/home/jwatson/sdr/bin/jopt-simple-4.5.jar,/home/jwatson/sdr/bin/ucanaccess-3.0.1.jar,/home/jwatson/sdr/bin/httpcore-nio-4.4.5.jar,/home/jwatson/sdr/bin/nifi-site-to-site-client-1.0.0.jar,/home/jwatson/sdr/bin/nifi-spark-receiver-1.0.0.jar,/home/jwatson/sdr/bin/commons-compiler-2.7.8.jar,/home/jwatson/sdr/bin/janino-2.7.8.jar,/home/jwatson/sdr/bin/hsqldb-2.3.1.jar,/home/jwatson/sdr/bin/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar,/home/jwatson/sdr/bin/slf4j-api-1.7.21.jar,/home/jwatson/sdr/bin/slf4j-log4j12-1.7.21.jar,/home/jwatson/sdr/bin/slf4j-simple-1.7.21.jar,/home/jwatson/sdr/bin/snappy-java-1.1.1.7.jar,/home/jwatson/sdr/bin/snakeyaml-1.7.jar,local://usr/hdp/current/hadoop-client/client/hadoop-common.jar,local://usr/hdp/current/hadoop-client/client/hadoop-mapreduce-client-core.jar,local://usr/hdp/current/hadoop-client/client/jetty-util.jar,local://usr/hdp/current/hadoop-client/client/netty-all-4.0.23.Final.jar,local://usr/hdp/current/hadoop-client/client/paranamer-2.3.jar,local://usr/hdp/current/hadoop-client/lib/commons-cli-1.2.jar,local://usr/hdp/current/hadoop-client/lib/httpclient-4.5.2.jar,local://usr/hdp/current/hadoop-client/lib/jetty-6.1.26.hwx.jar,local://usr/hdp/current/hadoop-client/lib/joda-time-2.8.1.jar,local://usr/hdp/current/hadoop-client/lib/log4j-1.2.17.jar,local://usr/hdp/current/hbase-client/lib/hbase-client.jar,local://usr/hdp/current/hbase-client/lib/hbase-common.jar,local://usr/hdp/current/hbase-client/lib/hbase-hadoop-compat.jar,local://usr/hdp/current/hbase-client/lib/hbase-protocol.jar,local://usr/hdp/current/hbase-client/lib/hbase-server.jar,local://usr/hdp/current/hbase-client/lib/protobuf-java-2.5.0.jar,local://usr/hdp/current/hive-client/lib/antlr-runtime-3.4.jar,local://usr/hdp/current/hive-client/lib/commons-collections-3.2.2.jar,local://usr/hdp/current/hive-client/lib/commons-dbcp-1.4.jar,local://usr/hdp/current/hive-client/lib/commons-pool-1.5.4.jar,local://usr/hdp/current/hive-client/lib/datanucleus-api-jdo-4.2.1.jar,local://usr/hdp/current/hive-client/lib/datanucleus-core-4.1.6.jar,local://usr/hdp/current/hive-client/lib/datanucleus-rdbms-4.1.7.jar,local://usr/hdp/current/hive-client/lib/geronimo-jta_1.1_spec-1.1.1.jar,local://usr/hdp/current/hive-client/lib/hive-exec.jar,local://usr/hdp/current/hive-client/lib/hive-jdbc.jar,local://usr/hdp/current/hive-client/lib/hive-metastore.jar,local://usr/hdp/current/hive-client/lib/jdo-api-3.0.1.jar,local://usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar,local://usr/hdp/current/phoenix-client/phoenix-client.jar,local://usr/hdp/current/spark-client/lib/spark-assembly-1.6.2.2.5.3.0-37-hadoop2.7.3.2.5.3.0-37.jar
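One thing I'm not sure about in the line above: the Spark docs show local: URIs with a single slash (local:/path), and I don't know whether the local://usr/... form resolves to the same path on the nodes. For comparison, the documented form would look like:

    --jars local:/usr/hdp/current/phoenix-client/phoenix-client.jar,local:/usr/hdp/current/hbase-client/lib/hbase-client.jar,...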