Druid jobs stuck

I am trying to ingest data into Druid and am following the quickstart tutorial. However, my jobs get stuck at the following point when viewed in the Druid console:

2018-12-18T08:35:16,017 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Running job: job_1545031238968_0020
2018-12-18T08:35:29,379 INFO [task-runner-0-priority-0] org.apache.hadoop.ipc.Client - Retrying connect to server: data1.hdplab.com/172.31.7.167:41326. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
2018-12-18T08:35:30,380 INFO [task-runner-0-priority-0] org.apache.hadoop.ipc.Client - Retrying connect to server: data1.hdplab.com/172.31.7.167:41326. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
2018-12-18T08:35:31,381 INFO [task-runner-0-priority-0] org.apache.hadoop.ipc.Client - Retrying connect to server: data1.hdplab.com/172.31.7.167:41326. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
2018-12-18T08:35:36,580 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Job job_1545031238968_0020 running in uber mode : false
2018-12-18T08:35:36,581 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job -  map 0% reduce 0%
2018-12-18T08:35:47,317 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job -  map 100% reduce 0%

The job does not proceed past this point at all, and it looks like a memory issue because my YARN memory usage has hit 100%.
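
For reference, this is a minimal sketch of how I am checking the cluster memory figures, using the standard YARN ResourceManager metrics REST API. The hostname is a placeholder for my lab cluster; 8088 is the default ResourceManager port:

# Minimal sketch: query the YARN ResourceManager metrics REST API
# to confirm that cluster memory really is saturated.
# "master1.hdplab.com" is a placeholder hostname for my lab cluster.
import json
from urllib.request import urlopen

RM_URL = "http://master1.hdplab.com:8088/ws/v1/cluster/metrics"

with urlopen(RM_URL) as resp:
    metrics = json.load(resp)["clusterMetrics"]

# totalMB / allocatedMB / reservedMB are standard fields in this API.
print("total MB:    ", metrics["totalMB"])
print("allocated MB:", metrics["allocatedMB"])
print("reserved MB: ", metrics["reservedMB"])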

My YARN cluster has 15 GB of memory, but 7.5 GB of it is reserved by the Hive daemon, leaving about 7 GB for other jobs. In any case, the data being ingested is less than 10 MB, so is there some configuration issue that makes Druid demand extra resources?
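
In case it helps, I understand the Hadoop indexing spec lets you pass MapReduce settings through tuningConfig.jobProperties. Below is a rough sketch of how I could cap the per-task containers so the indexing job fits inside the ~7 GB of free YARN memory; the specific values here are guesses on my part, not tested settings:

{
  "tuningConfig": {
    "type": "hadoop",
    "jobProperties": {
      "mapreduce.map.memory.mb": "1024",
      "mapreduce.map.java.opts": "-Xmx768m",
      "mapreduce.reduce.memory.mb": "1536",
      "mapreduce.reduce.java.opts": "-Xmx1152m"
    }
  }
}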

I was initially struggling with an out-of-memory error (https://community.hortonworks.com/questions/230775/druid-doesnt-work.html); to resolve it, I increased -XX:MaxDirectMemorySize in the Druid MiddleManager settings to about 1 GB, which made that error go away.
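
For completeness, this is roughly the relevant line in my MiddleManager runtime.properties after that change (the other JVM flags shown are just what I believe were the stock values in my install, so treat them as approximate):

druid.indexer.runner.javaOpts=-server -Xmx2g -XX:MaxDirectMemorySize=1g -Duser.timezone=UTC -Dfile.encoding=UTF-8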
