The application might not be running yet or there is no Node Manager or Container available.

Explorer

I have installed Hadoop through Cloudera Manager on a single node.

I designed my workflow in Oozie to run Hive queries through it. But when I submit it, it gets stuck at the first action of the workflow, which is a Hive query to add a partition to a table. The progress shown is 50% and the state is RUNNING. When I go to the logs, the following is the message:

The application might not be running yet or there is no Node Manager or Container available.

 

I am able to add the partition through the Hive editor. Please guide.

8 REPLIES

Re: The application might not be running yet or there is no Node Manager or Container available.

Cloudera Employee

Sounds like there is a chance there are not enough resources available in YARN to run the job. If you run the following command on the CDH host, what is the output:

 

yarn node -list

 

If this returns a host, e.g.:

 

[vagrant@standalone ~]$ yarn node -list
16/11/29 12:50:32 INFO client.RMProxy: Connecting to ResourceManager at standalone/192.168.33.6:8032
Total Nodes:1
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
standalone:42578 RUNNING standalone:8042

Can you take the node-id and check what resources it has:

 

$ yarn node -status standalone:42578

Re: The application might not be running yet or there is no Node Manager or Container available.

Explorer

yarn node -list

Output:

 

 

16/11/30 00:15:46 INFO client.RMProxy: Connecting to ResourceManager at taha99/172.*.*.*:8032
Total Nodes:1
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
taha99:8041 RUNNING taha99:8042

=================================================================

 

yarn node -status standalone:42578
Output:

 

16/11/30 00:17:43 INFO client.RMProxy: Connecting to ResourceManager at taha99/172.*.*.*:8032
Could not find the node report for node id : standalone:42578

 

Re: The application might not be running yet or there is no Node Manager or Container available.

Cloudera Employee
Sorry, I should have said: can you substitute your Node-Id for the one in my example, i.e.:

yarn node -status taha99:8041

Re: The application might not be running yet or there is no Node Manager or Container available.

Explorer

Sorry, that was a silly mistake. I should have understood that.

 

yarn node -status taha99:8041
 
Output:
16/11/30 17:26:33 INFO client.RMProxy: Connecting to ResourceManager at taha99/172.*.*.*:8032
Node Report :
Node-Id : taha99:8041
Rack : /default
Node-State : RUNNING
Node-Http-Address : taha99:8042
Last-Health-Update : Wed 30/Nov/16 05:26:01:543IST
Health-Report :
Containers : 0
Memory-Used : 0MB
Memory-Capacity : 1024MB
CPU-Used : 0 vcores
CPU-Capacity : 2 vcores
Node-Labels :

 

Re: The application might not be running yet or there is no Node Manager or Container available.

Cloudera Employee

Hi,

 

So the memory capacity of your single node is set to 1GB. When you run an Oozie job, it always needs two containers: one for the Oozie launcher and another to run the job, so it is likely you don't have enough memory allocated for the job to run. We will need to check some of your config settings.

 

In /etc/hadoop/conf/mapred-site.xml, what values are set for:

 

mapreduce.map.memory.mb

mapreduce.reduce.memory.mb

yarn.app.mapreduce.am.resource.mb

mapreduce.map.java.opts

mapreduce.reduce.java.opts

yarn.app.mapreduce.am.command-opts
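
If it is quicker, something along these lines should pull those values straight out of the file (assuming the path above and that each <name> is immediately followed by its <value> on the next line, as in a normal mapred-site.xml):

# print each matching <name> line plus the <value> line that follows it
grep -A 1 -E "mapreduce\.(map|reduce)\.(memory\.mb|java\.opts)|yarn\.app\.mapreduce\.am" /etc/hadoop/conf/mapred-site.xml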

 

What memory setting do you have for the VM you are running? I am wondering if we can push up the YARN limits a little to let more containers run.
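
For the machine memory, either of these standard Linux commands will show the total:

# free/used memory in MB
free -m
# or just the total from /proc
grep MemTotal /proc/meminfo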

Re: The application might not be running yet or there is no Node Manager or Container available.

Explorer

Following is my /etc/hadoop/conf/mapred-site.xml:

 

<property>
<name>mapreduce.map.memory.mb</name>
<value>0</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>0</value>
</property>
<property>
<name>yarn.app.mapreduce.am.resource.mb</name>
<value>1024</value>
</property>
<property>
<name>mapreduce.map.java.opts</name>
<value>-Djava.net.preferIPv4Stack=true</value>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>-Djava.net.preferIPv4Stack=true</value>
</property>
<property>
<name>yarn.app.mapreduce.am.command-opts</name>
<value>-Djava.net.preferIPv4Stack=true -Xmx825955249</value>
</property>

 


Re: The application might not be running yet or there is no Node Manager or Container available.

Explorer

I am running it on CentOS.

Following is the memory information:

cat /proc/meminfo


MemTotal: 7994408 kB
MemFree: 143104 kB
Buffers: 145996 kB
Cached: 1352220 kB
SwapCached: 36188 kB
Active: 5636696 kB
Inactive: 1819780 kB
Active(anon): 4885676 kB
Inactive(anon): 1100964 kB
Active(file): 751020 kB
Inactive(file): 718816 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 16383996 kB
SwapFree: 15438660 kB
Dirty: 1664 kB
Writeback: 0 kB
AnonPages: 5931276 kB
Mapped: 129448 kB
Shmem: 28380 kB
Slab: 238896 kB
SReclaimable: 190864 kB
SUnreclaim: 48032 kB
KernelStack: 26464 kB
PageTables: 39800 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 20381200 kB
Committed_AS: 12252892 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 345336 kB
VmallocChunk: 34359328604 kB
HardwareCorrupted: 0 kB
AnonHugePages: 5263360 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 9856 kB
DirectMap2M: 2021376 kB
DirectMap1G: 6291456 kB

Re: The application might not be running yet or there is no Node Manager or Container available.

Cloudera Employee

You could try adding the following to the bottom of the yarn-site.xml:

 

  <property>
    <description>Minimum allocation unit.</description>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>256</value>
  </property>

Also, set the NodeManager memory a bit higher and the vcores a bit higher:

 

yarn.nodemanager.resource.memory-mb to 1536

yarn.nodemanager.resource.cpu-vcores to 4
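
In yarn-site.xml form, those two settings would look something like this:

  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>1536</value>
  </property>

  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>4</value>
  </property>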

 

And then add the following to the end of the mapred-site.xml and see if it gives better results:

 

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>256</value>
  </property>

  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>256</value>
  </property>

  <property>
    <description>Application master allocation</description>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>256</value>
  </property>

  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx204m</value>
  </property>

  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx204m</value>
  </property>

  <property>
    <description>Application Master JVM opts</description>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Xmx204m</value>
  </property>

  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>50</value>
  </property>

Note that it would be a good idea to make a copy of the original mapred-site.xml and yarn-site.xml before making these changes. After changing the settings, reboot the VM so the settings take effect.
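
For example, assuming the same /etc/hadoop/conf path as above and that you have sudo access:

sudo cp /etc/hadoop/conf/mapred-site.xml /etc/hadoop/conf/mapred-site.xml.orig
sudo cp /etc/hadoop/conf/yarn-site.xml /etc/hadoop/conf/yarn-site.xml.orig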