
Cannot run Druid quickstart job on HDP-2.6.0.3

Explorer

Hi all, I'm facing a problem submitting the wikiticker job with Druid 0.9.2 as bundled in HDP 2.6.0.3 (the standalone 0.10.0 on my local PC works fine). However, I must use the HDP version for compatibility with Apache Ambari and the rest of my existing cluster. This is how I submit the job:

[centos@dev-server1 druid]$ curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/wikiticker-index.json localhost:8090/druid/indexer/v1/task
{"task":"index_hadoop_wikiticker_2017-06-15T11:04:18.145Z"}

But the task appeared as FAILED in the Coordinator Console with the error below. I tried several settings that I thought might help (e.g. changing the UNIX timezone to my local time, or adding mapred.job.classloader to jobProperties as described in this link), but to no avail:
2017-06-15T11:04:31,361 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[HadoopIndexTask{id=index_hadoop_wikiticker_2017-06-15T11:04:18.145Z, type=index_hadoop, dataSource=wikiticker}]
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
	at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:204) ~[druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) ~[druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	... 7 more
Caused by: java.lang.RuntimeException: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://dev-server1.c.sertis-data-center.internal:8020/user/druid/quickstart/wikiticker-2015-09-12-sampled.json
	at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]
	at io.druid.indexer.DetermineHashedPartitionsJob.run(DetermineHashedPartitionsJob.java:208) ~[druid-indexing-hadoop-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) ~[druid-indexing-hadoop-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) ~[druid-indexing-hadoop-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) ~[druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	... 7 more
Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://dev-server1.c.sertis-data-center.internal:8020/user/druid/quickstart/wikiticker-2015-09-12-sampled.json
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:323) ~[?:?]
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:265) ~[?:?]
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:387) ~[?:?]
	at org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat.getSplits(DelegatingInputFormat.java:115) ~[?:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301) ~[?:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318) ~[?:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196) ~[?:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) ~[?:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) ~[?:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_131]
	at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_131]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866) ~[?:?]
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) ~[?:?]
	at io.druid.indexer.DetermineHashedPartitionsJob.run(DetermineHashedPartitionsJob.java:116) ~[druid-indexing-hadoop-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) ~[druid-indexing-hadoop-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) ~[druid-indexing-hadoop-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) ~[druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	... 7 more
2017-06-15T11:04:31,375 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_hadoop_wikiticker_2017-06-15T11:04:18.145Z] status changed to [FAILED].
2017-06-15T11:04:31,378 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_hadoop_wikiticker_2017-06-15T11:04:18.145Z",
  "status" : "FAILED",
  "duration" : 6650
}

Please find the attached log.txt for more information. Thank you very much.

1 ACCEPTED SOLUTION

Contributor

Hi @Kamolphan Liwprasert,

Your log file contains the following line:

Caused by: java.lang.RuntimeException: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://dev-server1.c.sertis-data-center.internal:8020/user/druid/quickstart/wikiticker-2015-09-12-sampled.json

Make sure you have your sample file there.
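For example, this is a sketch (host, user, and paths are taken from your log and may differ on your cluster) of reconstructing the URL the task expects; the commented hdfs commands show one way to put the file there:

```shell
# Assumption (based on the log, not verified): a relative "paths" entry in the
# ingestion spec is resolved against fs.defaultFS plus the HDFS home directory
# of the user running the task.
FS_DEFAULT="hdfs://dev-server1.c.sertis-data-center.internal:8020"   # fs.defaultFS
HDFS_HOME="/user/druid"                                              # HDFS home of the druid user
SPEC_PATH="quickstart/wikiticker-2015-09-12-sampled.json"            # "paths" in wikiticker-index.json
EXPECTED_URL="${FS_DEFAULT}${HDFS_HOME}/${SPEC_PATH}"
echo "$EXPECTED_URL"
# One way to satisfy it (run on a node with an HDFS client configured):
#   sudo -u druid hdfs dfs -mkdir -p /user/druid/quickstart
#   sudo -u druid hdfs dfs -put quickstart/wikiticker-2015-09-12-sampled.json /user/druid/quickstart/
```

The echoed URL should match the one in the "Input path does not exist" message exactly.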

Regards,

Andres


6 REPLIES


Explorer

Hi Andres, I just put the file into HDFS, but the same error still occurs. Maybe I did something wrong in one of the steps; I'm still working on it.

Thank you very much for your kind reply.

Expert Contributor

Make sure the file has the correct read permissions for the druid user.
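For instance, here is a sketch of reading the mode column that `hdfs dfs -ls` prints (the mode string below is an example, and the commented chmod hint assumes you can run as a user allowed to change HDFS permissions):

```shell
# Example mode column from `hdfs dfs -ls /user/druid/quickstart`; characters
# 8-10 are the "other" rwx bits, which matter when druid does not own the file.
MODE="-rw-r--r--"
OTHER_READ=${MODE:7:1}       # 8th character: the "other read" bit
if [ "$OTHER_READ" = "r" ]; then
  echo "file is world-readable; the druid user can read it"
else
  # Hypothetical fix, run as the hdfs superuser:
  #   hdfs dfs -chmod o+r /user/druid/quickstart/wikiticker-2015-09-12-sampled.json
  echo "not readable by others"
fi
```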

Explorer

Hi Slim, thank you for your reply. The file permissions were already correct, since I used the druid user to put the file into HDFS. It turned out there was another problem, apart from getting the file into HDFS: the YARN queue manager, where I ran into trouble with the HTTPS setup. Since this was a test environment, I rolled back to the snapshot taken before the HTTPS problem appeared. I will try Druid again someday. Thank you, Andres and Slim!

Explorer

Hi,

I'm also having issues running the wiki-demo on HDP 2.6:

2017-07-13T14:36:04,480 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Running job: job_1498736541391_0023
2017-07-13T14:36:37,574 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Job job_1498736541391_0023 running in uber mode : false
2017-07-13T14:36:37,576 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job -  map 0% reduce 0%
2017-07-13T14:36:37,590 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Job job_1498736541391_0023 failed with state FAILED due to: Application application_1498736541391_0023 failed 2 times due to AM Container for appattempt_1498736541391_0023_000002 exited with  exitCode: 1
For more detailed output, check the application tracking page: http://nn0.cluster.local:8088/cluster/app/application_1498736541391_0023 Then click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e104_1498736541391_0023_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:944)
	at org.apache.hadoop.util.Shell.run(Shell.java:848)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1142)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:237)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:317)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:83)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)




Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
2017-07-13T14:36:37,612 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Counters: 0
2017-07-13T14:36:37,614 ERROR [task-runner-0-priority-0] io.druid.indexer.DetermineHashedPartitionsJob - Job failed: job_1498736541391_0023
2017-07-13T14:36:37,614 INFO [task-runner-0-priority-0] io.druid.indexer.JobHelper - Deleting path[/tmp/druid-indexing/wikiticker/2017-07-13T143554.401Z_10bdd9105fc14992bd9462b9bc50f992]
2017-07-13T14:36:37,638 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[HadoopIndexTask{id=index_hadoop_wikiticker_2017-07-13T14:35:54.402Z, type=index_hadoop, dataSource=wikiticker}]
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
	at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:204) ~[druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) ~[druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	... 7 more
Caused by: com.metamx.common.ISE: Job[class io.druid.indexer.DetermineHashedPartitionsJob] failed!
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:369) ~[druid-indexing-hadoop-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) ~[druid-indexing-hadoop-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) ~[druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	... 7 more
2017-07-13T14:36:37,644 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_hadoop_wikiticker_2017-07-13T14:35:54.402Z] status changed to [FAILED].
2017-07-13T14:36:37,647 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_hadoop_wikiticker_2017-07-13T14:35:54.402Z",
  "status" : "FAILED",
  "duration" : 38859
}
2017-07-13T14:36:37,659 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.server.coordination.AbstractDataSegmentAnnouncer.stop()] on object[io.druid.server.coordination.BatchDataSegmentAnnouncer@70c491b8].
2017-07-13T14:36:37,659 INFO [main] io.druid.server.coordination.AbstractDataSegmentAnnouncer - Stopping class io.druid.server.coordination.BatchDataSegmentAnnouncer with config[io.druid.server.initialization.ZkPathsConfig@22e2266d]
2017-07-13T14:36:37,660 INFO [main] io.druid.curator.announcement.Announcer - unannouncing [/druid/announcements/nn0.cluster.local:8100]
2017-07-13T14:36:37,673 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.server.listener.announcer.ListenerResourceAnnouncer.stop()] on object[io.druid.query.lookup.LookupResourceListenerAnnouncer@ad0bb4e].
2017-07-13T14:36:37,673 INFO [main] io.druid.curator.announcement.Announcer - unannouncing [/druid/listeners/lookups/__default/nn0.cluster.local:8100]
2017-07-13T14:36:37,675 INFO [main] io.druid.server.listener.announcer.ListenerResourceAnnouncer - Unannouncing start time on [/druid/listeners/lookups/__default/nn0.cluster.local:8100]
2017-07-13T14:36:37,675 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.query.lookup.LookupReferencesManager.stop()] on object[io.druid.query.lookup.LookupReferencesManager@5546e754].
2017-07-13T14:36:37,675 INFO [main] io.druid.query.lookup.LookupReferencesManager - Stopping lookup factory references manager
2017-07-13T14:36:37,679 INFO [main] org.eclipse.jetty.server.ServerConnector - Stopped ServerConnector@a323a5b{HTTP/1.1}{0.0.0.0:8100}
2017-07-13T14:36:37,681 INFO [main] org.eclipse.jetty.server.handler.ContextHandler - Stopped o.e.j.s.ServletContextHandler@1182413a{/,null,UNAVAILABLE}
2017-07-13T14:36:37,684 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.indexing.worker.executor.ExecutorLifecycle.stop() throws java.lang.Exception] on object[io.druid.indexing.worker.executor.ExecutorLifecycle@5d71b500].
2017-07-13T14:36:37,684 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.indexing.overlord.ThreadPoolTaskRunner.stop()] on object[io.druid.indexing.overlord.ThreadPoolTaskRunner@b428830].
2017-07-13T14:36:37,685 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.discovery.ServerDiscoverySelector.stop() throws java.io.IOException] on object[io.druid.curator.discovery.ServerDiscoverySelector@43a4a9e5].
2017-07-13T14:36:37,689 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.announcement.Announcer.stop()] on object[io.druid.curator.announcement.Announcer@5b733ef7].
2017-07-13T14:36:37,690 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.discovery.ServerDiscoverySelector.stop() throws java.io.IOException] on object[io.druid.curator.discovery.ServerDiscoverySelector@c4d2c44].
2017-07-13T14:36:37,690 INFO [main] io.druid.curator.CuratorModule - Stopping Curator
2017-07-13T14:36:37,690 INFO [Curator-Framework-0] org.apache.curator.framework.imps.CuratorFrameworkImpl - backgroundOperationsLoop exiting
2017-07-13T14:36:37,693 INFO [main] org.apache.zookeeper.ZooKeeper - Session: 0x15cf3a3038c008e closed
2017-07-13T14:36:37,693 INFO [main-EventThread] org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x15cf3a3038c008e
2017-07-13T14:36:37,693 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void com.metamx.http.client.NettyHttpClient.stop()] on object[com.metamx.http.client.NettyHttpClient@7a83ccd2].
2017-07-13T14:36:37,706 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.storage.hdfs.HdfsStorageAuthentication.stop()] on object[io.druid.storage.hdfs.HdfsStorageAuthentication@412ebe64].
2017-07-13T14:36:37,706 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void com.metamx.metrics.MonitorScheduler.stop()] on object[com.metamx.metrics.MonitorScheduler@1f78d415].
2017-07-13T14:36:37,707 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void com.metamx.emitter.service.ServiceEmitter.close() throws java.io.IOException] on object[com.metamx.emitter.service.ServiceEmitter@5dbbb292].
2017-07-13T14:36:37,710 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.initialization.Log4jShutterDownerModule$Log4jShutterDowner.stop()] on object[io.druid.initialization.Log4jShutterDownerModule$Log4jShutterDowner@5460edd3].
2017-07-13 14:36:37,743 pool-1-thread-1 ERROR Unable to register shutdown hook because JVM is shutting down. java.lang.IllegalStateException: Not started
	at io.druid.common.config.Log4jShutdown.addShutdownCallback(Log4jShutdown.java:45)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.addShutdownCallback(Log4jContextFactory.java:273)
	at org.apache.logging.log4j.core.LoggerContext.setUpShutdownHook(LoggerContext.java:256)
	at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:216)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:145)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:41)
	at org.apache.logging.log4j.LogManager.getContext(LogManager.java:182)
	at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:103)
	at org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:43)
	at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:42)
	at org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:29)
	at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:284)
	at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:155)
	at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132)
	at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685)
	at org.apache.hadoop.hdfs.LeaseRenewer.<clinit>(LeaseRenewer.java:72)
	at org.apache.hadoop.hdfs.DFSClient.getLeaseRenewer(DFSClient.java:830)
	at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:968)
	at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1214)
	at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2886)
	at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2903)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)


I already tried these settings, which I found in a gist, without any luck:

        "jobProperties": {
            "mapreduce.job.classloader": true,
            "mapreduce.job.classloader.system.classes": "-javax.validation.,java.,javax.,org.apache.commons.logging.,org.apache.log4j.,org.apache.hadoop."
        }
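For reference, if I read the gist correctly, these properties go under tuningConfig in the index_hadoop task spec; this is a sketch of how I placed them (dataSchema and ioConfig elided):

```json
{
  "type": "index_hadoop",
  "spec": {
    "dataSchema": { "...": "elided" },
    "ioConfig": { "...": "elided" },
    "tuningConfig": {
      "type": "hadoop",
      "jobProperties": {
        "mapreduce.job.classloader": true,
        "mapreduce.job.classloader.system.classes": "-javax.validation.,java.,javax.,org.apache.commons.logging.,org.apache.log4j.,org.apache.hadoop."
      }
    }
  }
}
```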


Any recommendations?

Expert Contributor

This seems to be unrelated. Could you please start a new thread, and make sure to attach the actual task logs and the task spec? Thanks.