<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Cannot run Druid quickstart job on HDP-2.6.0.3 in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Cannot-run-Druid-quickstart-job-on-HDP-2-6-0-3/m-p/220906#M182780</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I'm also having issues running the wiki-demo on HDP 2.6:&lt;/P&gt;&lt;PRE&gt;2017-07-13T14:36:04,480 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Running job: job_1498736541391_0023
2017-07-13T14:36:37,574 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Job job_1498736541391_0023 running in uber mode : false
2017-07-13T14:36:37,576 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job -  map 0% reduce 0%
2017-07-13T14:36:37,590 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Job job_1498736541391_0023 failed with state FAILED due to: Application application_1498736541391_0023 failed 2 times due to AM Container for appattempt_1498736541391_0023_000002 exited with  exitCode: 1
For more detailed output, check the application tracking page: &lt;A href="http://nn0.cluster.local:8088/cluster/app/application_1498736541391_0023" target="_blank"&gt;http://nn0.cluster.local:8088/cluster/app/application_1498736541391_0023&lt;/A&gt; Then click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e104_1498736541391_0023_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:944)
	at org.apache.hadoop.util.Shell.run(Shell.java:848)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1142)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:237)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:317)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:83)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)




Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
2017-07-13T14:36:37,612 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Counters: 0
2017-07-13T14:36:37,614 ERROR [task-runner-0-priority-0] io.druid.indexer.DetermineHashedPartitionsJob - Job failed: job_1498736541391_0023
2017-07-13T14:36:37,614 INFO [task-runner-0-priority-0] io.druid.indexer.JobHelper - Deleting path[/tmp/druid-indexing/wikiticker/2017-07-13T143554.401Z_10bdd9105fc14992bd9462b9bc50f992]
2017-07-13T14:36:37,638 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[HadoopIndexTask{id=index_hadoop_wikiticker_2017-07-13T14:35:54.402Z, type=index_hadoop, dataSource=wikiticker}]
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
	at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:204) ~[druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) ~[druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	... 7 more
Caused by: com.metamx.common.ISE: Job[class io.druid.indexer.DetermineHashedPartitionsJob] failed!
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:369) ~[druid-indexing-hadoop-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) ~[druid-indexing-hadoop-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) ~[druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	... 7 more
2017-07-13T14:36:37,644 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_hadoop_wikiticker_2017-07-13T14:35:54.402Z] status changed to [FAILED].
2017-07-13T14:36:37,647 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_hadoop_wikiticker_2017-07-13T14:35:54.402Z",
  "status" : "FAILED",
  "duration" : 38859
}
2017-07-13T14:36:37,659 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.server.coordination.AbstractDataSegmentAnnouncer.stop()] on object[io.druid.server.coordination.BatchDataSegmentAnnouncer@70c491b8].
2017-07-13T14:36:37,659 INFO [main] io.druid.server.coordination.AbstractDataSegmentAnnouncer - Stopping class io.druid.server.coordination.BatchDataSegmentAnnouncer with config[io.druid.server.initialization.ZkPathsConfig@22e2266d]
2017-07-13T14:36:37,660 INFO [main] io.druid.curator.announcement.Announcer - unannouncing [/druid/announcements/nn0.cluster.local:8100]
2017-07-13T14:36:37,673 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.server.listener.announcer.ListenerResourceAnnouncer.stop()] on object[io.druid.query.lookup.LookupResourceListenerAnnouncer@ad0bb4e].
2017-07-13T14:36:37,673 INFO [main] io.druid.curator.announcement.Announcer - unannouncing [/druid/listeners/lookups/__default/nn0.cluster.local:8100]
2017-07-13T14:36:37,675 INFO [main] io.druid.server.listener.announcer.ListenerResourceAnnouncer - Unannouncing start time on [/druid/listeners/lookups/__default/nn0.cluster.local:8100]
2017-07-13T14:36:37,675 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.query.lookup.LookupReferencesManager.stop()] on object[io.druid.query.lookup.LookupReferencesManager@5546e754].
2017-07-13T14:36:37,675 INFO [main] io.druid.query.lookup.LookupReferencesManager - Stopping lookup factory references manager
2017-07-13T14:36:37,679 INFO [main] org.eclipse.jetty.server.ServerConnector - Stopped ServerConnector@a323a5b{HTTP/1.1}{0.0.0.0:8100}
2017-07-13T14:36:37,681 INFO [main] org.eclipse.jetty.server.handler.ContextHandler - Stopped o.e.j.s.ServletContextHandler@1182413a{/,null,UNAVAILABLE}
2017-07-13T14:36:37,684 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.indexing.worker.executor.ExecutorLifecycle.stop() throws java.lang.Exception] on object[io.druid.indexing.worker.executor.ExecutorLifecycle@5d71b500].
2017-07-13T14:36:37,684 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.indexing.overlord.ThreadPoolTaskRunner.stop()] on object[io.druid.indexing.overlord.ThreadPoolTaskRunner@b428830].
2017-07-13T14:36:37,685 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.discovery.ServerDiscoverySelector.stop() throws java.io.IOException] on object[io.druid.curator.discovery.ServerDiscoverySelector@43a4a9e5].
2017-07-13T14:36:37,689 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.announcement.Announcer.stop()] on object[io.druid.curator.announcement.Announcer@5b733ef7].
2017-07-13T14:36:37,690 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.discovery.ServerDiscoverySelector.stop() throws java.io.IOException] on object[io.druid.curator.discovery.ServerDiscoverySelector@c4d2c44].
2017-07-13T14:36:37,690 INFO [main] io.druid.curator.CuratorModule - Stopping Curator
2017-07-13T14:36:37,690 INFO [Curator-Framework-0] org.apache.curator.framework.imps.CuratorFrameworkImpl - backgroundOperationsLoop exiting
2017-07-13T14:36:37,693 INFO [main] org.apache.zookeeper.ZooKeeper - Session: 0x15cf3a3038c008e closed
2017-07-13T14:36:37,693 INFO [main-EventThread] org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x15cf3a3038c008e
2017-07-13T14:36:37,693 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void com.metamx.http.client.NettyHttpClient.stop()] on object[com.metamx.http.client.NettyHttpClient@7a83ccd2].
2017-07-13T14:36:37,706 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.storage.hdfs.HdfsStorageAuthentication.stop()] on object[io.druid.storage.hdfs.HdfsStorageAuthentication@412ebe64].
2017-07-13T14:36:37,706 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void com.metamx.metrics.MonitorScheduler.stop()] on object[com.metamx.metrics.MonitorScheduler@1f78d415].
2017-07-13T14:36:37,707 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void com.metamx.emitter.service.ServiceEmitter.close() throws java.io.IOException] on object[com.metamx.emitter.service.ServiceEmitter@5dbbb292].
2017-07-13T14:36:37,710 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.initialization.Log4jShutterDownerModule$Log4jShutterDowner.stop()] on object[io.druid.initialization.Log4jShutterDownerModule$Log4jShutterDowner@5460edd3].
2017-07-13 14:36:37,743 pool-1-thread-1 ERROR Unable to register shutdown hook because JVM is shutting down. java.lang.IllegalStateException: Not started
	at io.druid.common.config.Log4jShutdown.addShutdownCallback(Log4jShutdown.java:45)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.addShutdownCallback(Log4jContextFactory.java:273)
	at org.apache.logging.log4j.core.LoggerContext.setUpShutdownHook(LoggerContext.java:256)
	at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:216)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:145)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:41)
	at org.apache.logging.log4j.LogManager.getContext(LogManager.java:182)
	at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:103)
	at org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:43)
	at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:42)
	at org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:29)
	at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:284)
	at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:155)
	at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132)
	at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685)
	at org.apache.hadoop.hdfs.LeaseRenewer.&amp;lt;clinit&amp;gt;(LeaseRenewer.java:72)
	at org.apache.hadoop.hdfs.DFSClient.getLeaseRenewer(DFSClient.java:830)
	at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:968)
	at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1214)
	at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2886)
	at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2903)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)


&lt;/PRE&gt;&lt;P&gt;I have already tried these settings, which I found in a gist, without any luck:&lt;/P&gt;&lt;PRE&gt;        "jobProperties": {
            "mapreduce.job.classloader": true,
            "mapreduce.job.classloader.system.classes": "-javax.validation.,java.,javax.,org.apache.commons.logging.,org.apache.log4j.,org.apache.hadoop."
        }


&lt;/PRE&gt;&lt;P&gt;Any recommendations?&lt;/P&gt;</description>
    <pubDate>Fri, 28 Jul 2017 15:08:45 GMT</pubDate>
    <dc:creator>sz1</dc:creator>
    <dc:date>2017-07-28T15:08:45Z</dc:date>
    <item>
      <title>Cannot run Druid quickstart job on HDP-2.6.0.3</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Cannot-run-Druid-quickstart-job-on-HDP-2-6-0-3/m-p/220901#M182775</link>
      <description>&lt;P&gt;Hi all,

I'm facing a problem submitting the &lt;A href="http://druid.io/docs/0.10.0/tutorials/quickstart.html" target="_blank"&gt;wikiticker&lt;/A&gt; job with the Druid 0.9.2 bundled in HDP 2.6.0.3. (The standalone 0.10.0 release works fine on my local PC.)
However, I must use the HDP version for compatibility with Apache Ambari and the rest of my existing cluster.
This is how I submit the job:&lt;/P&gt;
&lt;DIV&gt;
	[centos@dev-server1 druid]$ curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/wikiticker-index.json localhost:8090/druid/indexer/v1/task
	{"task":"index_hadoop_wikiticker_2017-06-15T11:04:18.145Z"}

	But it appears as FAILED in the Coordinator Console with the error below. I tried many times with a few different settings that I thought might solve it, e.g. changing the UNIX timezone to my local time or adding mapreduce.job.classloader to jobProperties as described in this &lt;A href="http://druid.io/docs/latest/operations/other-hadoop.html" target="_blank"&gt;link&lt;/A&gt;, but to no avail:
	&lt;PRE&gt;2017-06-15T11:04:31,361 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[HadoopIndexTask{id=index_hadoop_wikiticker_2017-06-15T11:04:18.145Z, type=index_hadoop, dataSource=wikiticker}]
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
	at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:204) ~[druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) ~[druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	... 7 more
Caused by: java.lang.RuntimeException: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://dev-server1.c.sertis-data-center.internal:8020/user/druid/quickstart/wikiticker-2015-09-12-sampled.json
	at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]
	at io.druid.indexer.DetermineHashedPartitionsJob.run(DetermineHashedPartitionsJob.java:208) ~[druid-indexing-hadoop-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) ~[druid-indexing-hadoop-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) ~[druid-indexing-hadoop-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) ~[druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	... 7 more
Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://dev-server1.c.sertis-data-center.internal:8020/user/druid/quickstart/wikiticker-2015-09-12-sampled.json
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:323) ~[?:?]
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:265) ~[?:?]
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:387) ~[?:?]
	at org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat.getSplits(DelegatingInputFormat.java:115) ~[?:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301) ~[?:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318) ~[?:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196) ~[?:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) ~[?:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) ~[?:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_131]
	at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_131]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866) ~[?:?]
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) ~[?:?]
	at io.druid.indexer.DetermineHashedPartitionsJob.run(DetermineHashedPartitionsJob.java:116) ~[druid-indexing-hadoop-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) ~[druid-indexing-hadoop-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) ~[druid-indexing-hadoop-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) ~[druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	... 7 more
2017-06-15T11:04:31,375 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_hadoop_wikiticker_2017-06-15T11:04:18.145Z] status changed to [FAILED].
2017-06-15T11:04:31,378 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_hadoop_wikiticker_2017-06-15T11:04:18.145Z",
  "status" : "FAILED",
  "duration" : 6650
}
	&lt;/PRE&gt;&lt;/DIV&gt;&lt;P&gt;
	Please find the attached &lt;A href="https://community.cloudera.com/legacyfs/online/attachments/16403-log.txt" target="_blank"&gt;log.txt&lt;/A&gt; for more information.

	Thank you very much.&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 15:45:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Cannot-run-Druid-quickstart-job-on-HDP-2-6-0-3/m-p/220901#M182775</guid>
      <dc:creator>kliwp</dc:creator>
      <dc:date>2022-09-16T15:45:04Z</dc:date>
    </item>
    <item>
      <title>Re: Cannot run Druid quickstart job on HDP-2.6.0.3</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Cannot-run-Druid-quickstart-job-on-HDP-2-6-0-3/m-p/220902#M182776</link>
      <description>&lt;P&gt;Hi &lt;A rel="user" href="https://community.cloudera.com/users/18870/kliwp.html" nodeid="18870"&gt;@Kamolphan Liwprasert&lt;/A&gt;&lt;/P&gt;&lt;P&gt;There is the following line in your logfile:&lt;/P&gt;&lt;PRE&gt;Caused by: java.lang.RuntimeException: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://dev-server1.c.sertis-data-center.internal:8020/user/druid/quickstart/wikiticker-2015-09-12-sampled.json&lt;/PRE&gt;&lt;P&gt;Make sure you have your sample file there.&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Andres&lt;/P&gt;</description>
      <pubDate>Fri, 16 Jun 2017 14:21:10 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Cannot-run-Druid-quickstart-job-on-HDP-2-6-0-3/m-p/220902#M182776</guid>
      <dc:creator>andres_koitmae</dc:creator>
      <dc:date>2017-06-16T14:21:10Z</dc:date>
    </item>
    <item>
      <title>Re: Cannot run Druid quickstart job on HDP-2.6.0.3</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Cannot-run-Druid-quickstart-job-on-HDP-2-6-0-3/m-p/220903#M182777</link>
      <description>&lt;P&gt;Hi Andres,

I just put the file in HDFS, but the same error still occurs. Maybe I did something wrong along the way; I'm still working on it.&lt;/P&gt;&lt;P&gt;Thank you very much for your kind reply.&lt;/P&gt;</description>
      <pubDate>Fri, 16 Jun 2017 16:26:42 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Cannot-run-Druid-quickstart-job-on-HDP-2-6-0-3/m-p/220903#M182777</guid>
      <dc:creator>kliwp</dc:creator>
      <dc:date>2017-06-16T16:26:42Z</dc:date>
    </item>
    <item>
      <title>Re: Cannot run Druid quickstart job on HDP-2.6.0.3</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Cannot-run-Druid-quickstart-job-on-HDP-2-6-0-3/m-p/220904#M182778</link>
      <description>&lt;P&gt;Make sure the file has the correct read permissions for the druid user.&lt;/P&gt;</description>
      <pubDate>Wed, 21 Jun 2017 22:40:31 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Cannot-run-Druid-quickstart-job-on-HDP-2-6-0-3/m-p/220904#M182778</guid>
      <dc:creator>sbouguerra</dc:creator>
      <dc:date>2017-06-21T22:40:31Z</dc:date>
    </item>
    <item>
      <title>Re: Cannot run Druid quickstart job on HDP-2.6.0.3</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Cannot-run-Druid-quickstart-job-on-HDP-2-6-0-3/m-p/220905#M182779</link>
      <description>&lt;P&gt;Hi, Slim. Thank you for your reply. The file permissions are correct, as I used the druid user to put the file into HDFS. It turned out to be another problem, apart from putting the file into HDFS: the YARN queue manager, where I have an issue with the HTTPS setup.

Since this was a test environment, I rolled back to the snapshot taken before the HTTPS problem appeared. I will try Druid again someday.

Thank you, Andres and Slim!&lt;/P&gt;</description>
      <pubDate>Thu, 22 Jun 2017 14:38:54 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Cannot-run-Druid-quickstart-job-on-HDP-2-6-0-3/m-p/220905#M182779</guid>
      <dc:creator>kliwp</dc:creator>
      <dc:date>2017-06-22T14:38:54Z</dc:date>
    </item>
    <item>
      <title>Re: Cannot run Druid quickstart job on HDP-2.6.0.3</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Cannot-run-Druid-quickstart-job-on-HDP-2-6-0-3/m-p/220906#M182780</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I'm also having issues running the wiki-demo on HDP 2.6:&lt;/P&gt;&lt;PRE&gt;2017-07-13T14:36:04,480 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Running job: job_1498736541391_0023
2017-07-13T14:36:37,574 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Job job_1498736541391_0023 running in uber mode : false
2017-07-13T14:36:37,576 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job -  map 0% reduce 0%
2017-07-13T14:36:37,590 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Job job_1498736541391_0023 failed with state FAILED due to: Application application_1498736541391_0023 failed 2 times due to AM Container for appattempt_1498736541391_0023_000002 exited with  exitCode: 1
For more detailed output, check the application tracking page: &lt;A href="http://nn0.cluster.local:8088/cluster/app/application_1498736541391_0023" target="_blank"&gt;http://nn0.cluster.local:8088/cluster/app/application_1498736541391_0023&lt;/A&gt; Then click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e104_1498736541391_0023_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:944)
	at org.apache.hadoop.util.Shell.run(Shell.java:848)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1142)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:237)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:317)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:83)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)




Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
2017-07-13T14:36:37,612 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Counters: 0
2017-07-13T14:36:37,614 ERROR [task-runner-0-priority-0] io.druid.indexer.DetermineHashedPartitionsJob - Job failed: job_1498736541391_0023
2017-07-13T14:36:37,614 INFO [task-runner-0-priority-0] io.druid.indexer.JobHelper - Deleting path[/tmp/druid-indexing/wikiticker/2017-07-13T143554.401Z_10bdd9105fc14992bd9462b9bc50f992]
2017-07-13T14:36:37,638 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[HadoopIndexTask{id=index_hadoop_wikiticker_2017-07-13T14:35:54.402Z, type=index_hadoop, dataSource=wikiticker}]
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
	at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:204) ~[druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) ~[druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	... 7 more
Caused by: com.metamx.common.ISE: Job[class io.druid.indexer.DetermineHashedPartitionsJob] failed!
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:369) ~[druid-indexing-hadoop-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) ~[druid-indexing-hadoop-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) ~[druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
	... 7 more
2017-07-13T14:36:37,644 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_hadoop_wikiticker_2017-07-13T14:35:54.402Z] status changed to [FAILED].
2017-07-13T14:36:37,647 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_hadoop_wikiticker_2017-07-13T14:35:54.402Z",
  "status" : "FAILED",
  "duration" : 38859
}
2017-07-13T14:36:37,659 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.server.coordination.AbstractDataSegmentAnnouncer.stop()] on object[io.druid.server.coordination.BatchDataSegmentAnnouncer@70c491b8].
2017-07-13T14:36:37,659 INFO [main] io.druid.server.coordination.AbstractDataSegmentAnnouncer - Stopping class io.druid.server.coordination.BatchDataSegmentAnnouncer with config[io.druid.server.initialization.ZkPathsConfig@22e2266d]
2017-07-13T14:36:37,660 INFO [main] io.druid.curator.announcement.Announcer - unannouncing [/druid/announcements/nn0.cluster.local:8100]
2017-07-13T14:36:37,673 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.server.listener.announcer.ListenerResourceAnnouncer.stop()] on object[io.druid.query.lookup.LookupResourceListenerAnnouncer@ad0bb4e].
2017-07-13T14:36:37,673 INFO [main] io.druid.curator.announcement.Announcer - unannouncing [/druid/listeners/lookups/__default/nn0.cluster.local:8100]
2017-07-13T14:36:37,675 INFO [main] io.druid.server.listener.announcer.ListenerResourceAnnouncer - Unannouncing start time on [/druid/listeners/lookups/__default/nn0.cluster.local:8100]
2017-07-13T14:36:37,675 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.query.lookup.LookupReferencesManager.stop()] on object[io.druid.query.lookup.LookupReferencesManager@5546e754].
2017-07-13T14:36:37,675 INFO [main] io.druid.query.lookup.LookupReferencesManager - Stopping lookup factory references manager
2017-07-13T14:36:37,679 INFO [main] org.eclipse.jetty.server.ServerConnector - Stopped ServerConnector@a323a5b{HTTP/1.1}{0.0.0.0:8100}
2017-07-13T14:36:37,681 INFO [main] org.eclipse.jetty.server.handler.ContextHandler - Stopped o.e.j.s.ServletContextHandler@1182413a{/,null,UNAVAILABLE}
2017-07-13T14:36:37,684 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.indexing.worker.executor.ExecutorLifecycle.stop() throws java.lang.Exception] on object[io.druid.indexing.worker.executor.ExecutorLifecycle@5d71b500].
2017-07-13T14:36:37,684 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.indexing.overlord.ThreadPoolTaskRunner.stop()] on object[io.druid.indexing.overlord.ThreadPoolTaskRunner@b428830].
2017-07-13T14:36:37,685 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.discovery.ServerDiscoverySelector.stop() throws java.io.IOException] on object[io.druid.curator.discovery.ServerDiscoverySelector@43a4a9e5].
2017-07-13T14:36:37,689 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.announcement.Announcer.stop()] on object[io.druid.curator.announcement.Announcer@5b733ef7].
2017-07-13T14:36:37,690 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.discovery.ServerDiscoverySelector.stop() throws java.io.IOException] on object[io.druid.curator.discovery.ServerDiscoverySelector@c4d2c44].
2017-07-13T14:36:37,690 INFO [main] io.druid.curator.CuratorModule - Stopping Curator
2017-07-13T14:36:37,690 INFO [Curator-Framework-0] org.apache.curator.framework.imps.CuratorFrameworkImpl - backgroundOperationsLoop exiting
2017-07-13T14:36:37,693 INFO [main] org.apache.zookeeper.ZooKeeper - Session: 0x15cf3a3038c008e closed
2017-07-13T14:36:37,693 INFO [main-EventThread] org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x15cf3a3038c008e
2017-07-13T14:36:37,693 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void com.metamx.http.client.NettyHttpClient.stop()] on object[com.metamx.http.client.NettyHttpClient@7a83ccd2].
2017-07-13T14:36:37,706 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.storage.hdfs.HdfsStorageAuthentication.stop()] on object[io.druid.storage.hdfs.HdfsStorageAuthentication@412ebe64].
2017-07-13T14:36:37,706 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void com.metamx.metrics.MonitorScheduler.stop()] on object[com.metamx.metrics.MonitorScheduler@1f78d415].
2017-07-13T14:36:37,707 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void com.metamx.emitter.service.ServiceEmitter.close() throws java.io.IOException] on object[com.metamx.emitter.service.ServiceEmitter@5dbbb292].
2017-07-13T14:36:37,710 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.initialization.Log4jShutterDownerModule$Log4jShutterDowner.stop()] on object[io.druid.initialization.Log4jShutterDownerModule$Log4jShutterDowner@5460edd3].
2017-07-13 14:36:37,743 pool-1-thread-1 ERROR Unable to register shutdown hook because JVM is shutting down. java.lang.IllegalStateException: Not started
	at io.druid.common.config.Log4jShutdown.addShutdownCallback(Log4jShutdown.java:45)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.addShutdownCallback(Log4jContextFactory.java:273)
	at org.apache.logging.log4j.core.LoggerContext.setUpShutdownHook(LoggerContext.java:256)
	at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:216)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:145)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:41)
	at org.apache.logging.log4j.LogManager.getContext(LogManager.java:182)
	at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:103)
	at org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:43)
	at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:42)
	at org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:29)
	at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:284)
	at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:155)
	at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132)
	at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685)
	at org.apache.hadoop.hdfs.LeaseRenewer.&amp;lt;clinit&amp;gt;(LeaseRenewer.java:72)
	at org.apache.hadoop.hdfs.DFSClient.getLeaseRenewer(DFSClient.java:830)
	at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:968)
	at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1214)
	at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2886)
	at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2903)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)


&lt;/PRE&gt;&lt;P&gt;I already tried these settings, which I found in a gist, without any luck:&lt;/P&gt;&lt;PRE&gt;        "jobProperties": {
            "mapreduce.job.classloader": true,
            "mapreduce.job.classloader.system.classes": "-javax.validation.,java.,javax.,org.apache.commons.logging.,org.apache.log4j.,org.apache.hadoop."
        }


&lt;/PRE&gt;&lt;P&gt;Any recommendations?&lt;/P&gt;</description>
      <pubDate>Fri, 28 Jul 2017 15:08:45 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Cannot-run-Druid-quickstart-job-on-HDP-2-6-0-3/m-p/220906#M182780</guid>
      <dc:creator>sz1</dc:creator>
      <dc:date>2017-07-28T15:08:45Z</dc:date>
    </item>
    <item>
      <title>Re: Cannot run Druid quickstart job on HDP-2.6.0.3</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Cannot-run-Druid-quickstart-job-on-HDP-2-6-0-3/m-p/220907#M182781</link>
      <description>&lt;P&gt;This seems to be unrelated. Could you please start a new thread and make sure to attach the actual logs of the job and the task spec? Thanks.&lt;/P&gt;</description>
      <pubDate>Fri, 28 Jul 2017 22:08:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Cannot-run-Druid-quickstart-job-on-HDP-2-6-0-3/m-p/220907#M182781</guid>
      <dc:creator>sbouguerra</dc:creator>
      <dc:date>2017-07-28T22:08:04Z</dc:date>
    </item>
  </channel>
</rss>