Member since: 02-12-2016
Posts: 22
Kudos Received: 17
Solutions: 0
02-24-2016
09:26 AM
It seems the replication factor is 1 in my case. How can I recover the data from the DR cluster?
02-23-2016
09:12 AM
3 Kudos
In my HDFS status summary, I see the following messages about missing and under-replicated blocks:

2,114 missing blocks in the cluster. 5,114,551 total blocks in the cluster. Percentage missing blocks: 0.04%. Critical threshold: any.

On executing the command:

hdfs fsck -list-corruptfileblocks

I got the following output:

The filesystem under path '/' has 2114 CORRUPT files

What is the best way to fix these corrupt files, and also to fix the under-replicated block problem?
Labels: Apache Hadoop
02-16-2016
10:38 PM
Yes, it is a prod environment.
02-16-2016
09:53 PM
@Neeraj Sabharwal Yes, the system is running out of space. Can you please suggest a better approach than creating a soft link?
02-16-2016
01:13 AM
@Neeraj Sabharwal Can you please help us out here by providing an example, if this lies within your scope?
02-16-2016
01:09 AM
@Neeraj Sabharwal Yes, I was thinking of moving the data; deletion is outside the boundaries of my role.
02-15-2016
11:55 PM
1 Kudo
@Neeraj Sabharwal Got the logs. It seems to be related to a memory issue. Unfortunately, I don't have permission to delete the data.
Can I create a soft link to move the data around as a workaround?
Also, if I create a soft link for any lib, will it move the present data, the upcoming data, or both?
02-15-2016
11:26 PM
1 Kudo
@Neeraj Sabharwal Thanks for your assistance. I found the above details in the log at the instant the warning was generated; after that, the service went down. Apart from this, nothing more was in the log. Also, memory usage is just below the critical threshold: we set the critical limit to 90%, and right now it is at 89.4%. That is why I asked whether it is related to a memory issue.
02-15-2016
11:02 PM
3 Kudos
There are certain times when we need to change the priority of Hadoop jobs. Due to business criticality, we want some jobs to have high priority and some jobs to have low priority, so that the important jobs are completed early. If the Hadoop cluster is using the Capacity Scheduler with priorities enabled for queues, then we can set the priority of our Hadoop jobs. This article explains how to set the priority of Hadoop jobs and how to change the priority of jobs that are already running.

1) Set the priority in a MapReduce program:
In a MapReduce program we can set the job priority in the following way:

Configuration conf = new Configuration();
// set the priority to VERY_HIGH
conf.set("mapred.job.priority", JobPriority.VERY_HIGH.toString());

Allowed priority values are: VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW.
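For context, here is a minimal sketch of a complete driver using the newer org.apache.hadoop.mapreduce API, which exposes the same setting through Job.setPriority. This assumes Hadoop 2.x; the class name and job name below are illustrative, not from the original snippet:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobPriority;

public class PriorityDemoDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "priority-demo"); // hypothetical job name
        // Same effect as conf.set("mapred.job.priority", "VERY_HIGH"),
        // but via the non-deprecated mapreduce API.
        job.setPriority(JobPriority.VERY_HIGH);
        // ... configure mapper, reducer, and input/output paths as usual ...
    }
}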
2) Set the priority in a Pig program:
We can set the priority of a Pig job using the job.priority property. For example, in the Grunt shell:

grunt> SET job.priority 'high'

If you are setting the priority in a Pig script, write this property before the LOAD statement. For example:

SET job.priority 'high';
A = LOAD '/user/hdfs/myfile.txt' USING PigStorage() AS (ID, Name);

Acceptable values for the priority are: very_low, low, normal, high, very_high. Please note these values are case-insensitive.

3) Set the priority for a Hive query:
In Hive we can set the job priority using the property below. You need to set this value before your query:

SET mapred.job.priority=VERY_HIGH;

Allowed priority values are: VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW. Note that mapred.job.priority is deprecated; the new property is mapreduce.job.priority.

4) Change the priority of a running job:

We can also change the priority of running Hadoop jobs. Usage:

hadoop job -set-priority <job-id> <priority>

For example:

hadoop job -set-priority job_20120111540_54485 VERY_HIGH

Allowed priority values are: VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW.
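The same change can also be made programmatically. Below is a minimal sketch, assuming the Hadoop 2.x mapreduce client API; the class name is hypothetical, and the job ID is the one from the example above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Cluster;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobID;
import org.apache.hadoop.mapreduce.JobPriority;

public class SetRunningJobPriority {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Cluster cluster = new Cluster(conf);
        // Look up the running job by its ID
        Job job = cluster.getJob(JobID.forName("job_20120111540_54485"));
        if (job != null) {
            // Equivalent to: hadoop job -set-priority <job-id> VERY_HIGH
            job.setPriority(JobPriority.VERY_HIGH);
        }
        cluster.close();
    }
}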
02-15-2016
10:50 PM
1 Kudo
Error starting NodeManager
java.lang.NullPointerException
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recoverContainer(ContainerManagerImpl.java:289)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recover(ContainerManagerImpl.java:252)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:235)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:250)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:445)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:492)
Labels: Apache YARN