Member since: 02-12-2016
Posts: 22
Kudos Received: 17
Solutions: 0
06-07-2017
07:20 AM
While running the below query in Hue, I am getting an error. Any suggestions on this?
Command: msck repair table import_********;
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
... View more
Labels:
- Apache Hive
- Cloudera Hue
02-24-2016
09:26 AM
It seems the replication factor is 1 in my case. How can I recover it from the DR cluster?
... View more
02-23-2016
09:12 AM
3 Kudos
In my HDFS status summary, I see the following messages about missing and under-replicated blocks:
2,114 missing blocks in the cluster. 5,114,551 total blocks in the cluster. Percentage missing blocks: 0.04%. Critical threshold: any.
On executing the command hdfs fsck -list-corruptfileblocks, I got the following output:
The filesystem under path '/' has 2114 CORRUPT files
What is the best way to fix these corrupt files and also fix the under-replicated block problem?
... View more
Tags:
- Hadoop Core
- HDFS
Labels:
- Apache Hadoop
02-16-2016
10:38 PM
Yes, it is a prod environment.
... View more
02-16-2016
09:53 PM
@Neeraj Sabharwal Yes, the system is running out of space. Can you please suggest a better way than creating a soft link?
... View more
02-16-2016
01:13 AM
@Neeraj Sabharwal Can you please help us out here by providing an example, if this lies within your scope?
... View more
02-16-2016
01:09 AM
@Neeraj Sabharwal Yes, I was thinking of moving the data; deletion is outside my boundaries as per my role.
... View more
02-15-2016
11:55 PM
1 Kudo
@Neeraj Sabharwal Got the logs. It seems like it is related to a memory issue. Unfortunately, I don't have permission to delete it.
Can I create a soft link to move the data around as a workaround?
Can you please advise: if I create a soft link for any lib, will it move the present data, the upcoming data, or both?
... View more
02-15-2016
11:26 PM
1 Kudo
@Neeraj Sabharwal Thanks for your assistance. I found the above details in the log at the instant the warning was generated; after that the service went down. Apart from this, nothing more was in the log. Also, the memory usage is just below the critical threshold capacity. We set 90% as the critical limit and right now it is at 89.4%. That is why I asked whether it is related to a memory issue.
... View more
02-15-2016
11:02 PM
3 Kudos
There are certain times when we need to change the priority of Hadoop jobs. Due to business criticality, we want some jobs to have high priority and some jobs to have low priority, so that the important jobs are completed early. If the Hadoop cluster is using the Capacity Scheduler with priorities enabled for queues, then we can set the priority of our Hadoop jobs. This article explains how to set and change the priority of Hadoop jobs.

1) Set the priority in a MapReduce program:
In a MapReduce program we can set the job priority in the following way:
Configuration conf = new Configuration();
// set the priority to VERY_HIGH
conf.set("mapred.job.priority", JobPriority.VERY_HIGH.toString());
Allowed priority values are: VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW.

2) Set the priority in a Pig program:
We can set the priority of a Pig job using the property job.priority.
For example:
grunt> SET job.priority 'high'
If you are setting the priority in a Pig script, write this property before the LOAD statement.
For example:
SET job.priority 'high';
A = LOAD '/user/hdfs/myfile.txt' USING PigStorage() AS (ID, Name);
Acceptable values for the priority are: very_low, low, normal, high, very_high. Please note these values are case insensitive.

3) Set the priority for a Hive query:
In Hive we can set the job priority using the property below. You need to set this value before your query.
SET mapred.job.priority=VERY_HIGH;
Allowed priority values are: VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW.
Note that mapred.job.priority is deprecated; the new property is mapreduce.job.priority.

We can also change the priority of already running Hadoop jobs.
Usage: hadoop job -set-priority <job-id> <priority>
For example: hadoop job -set-priority job_20120111540_54485 VERY_HIGH
Allowed priority values are: VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW.
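To tie the MapReduce part together, here is a minimal, self-contained driver sketch that submits a map-only job with VERY_HIGH priority using the newer mapreduce.job.priority property. The class name, identity mapper, and input/output arguments are illustrative assumptions, not part of the original snippet; on Hadoop 2.x, job.setPriority(JobPriority.VERY_HIGH) should be an equivalent API call.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobPriority;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical driver (not from the article): a pass-through, map-only job
// submitted with VERY_HIGH priority.
public class PriorityJobDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Non-deprecated property name; the old mapred.job.priority key is deprecated.
    conf.set("mapreduce.job.priority", JobPriority.VERY_HIGH.toString());

    Job job = Job.getInstance(conf, "priority-demo");
    job.setJarByClass(PriorityJobDriver.class);
    job.setMapperClass(Mapper.class);   // identity mapper, just so the job has work to run
    job.setNumReduceTasks(0);           // map-only job
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}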
... View more
Tags:
- hadoop
- How-To/Tutorial
- jobs
- Security
02-15-2016
10:50 PM
1 Kudo
Error starting NodeManager
java.lang.NullPointerException
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recoverContainer(ContainerManagerImpl.java:289)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recover(ContainerManagerImpl.java:252)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:235)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:250)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:445)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:492)
... View more
Labels:
- Apache YARN
02-14-2016
09:32 AM
1 Kudo
Apache Kafka is a high-throughput distributed messaging system developed by LinkedIn. Kafka is a distributed, partitioned commit log service that provides the functionality of a messaging system with a unique design. It is written in Scala and does not follow the JMS (Java Message Service) standard.
The best way to learn about Kafka is to read the original design page at http://kafka.apache.org/. That will give you an overview of the motivation behind the design choices and what makes Kafka efficient. It is also a very engaging read if you are interested in systems.
In terms of adoption, Kafka is currently used in production at LinkedIn, Twitter, Tumblr, Square and a number of other companies. You can read about the use cases those companies found for Kafka here: https://cwiki.apache.org/confluence/display/KAFKA/Powered+By
... View more
02-13-2016
12:02 PM
2 Kudos
Since Hadoop gives precedence to delegation tokens, we must make sure we log in as a different user, get new tokens, and replace the old ones in the current user's credentials cache, so that we do not end up unable to obtain new ones. This may help.
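As a rough, hedged illustration of that idea, here is a Java sketch using the standard UserGroupInformation API to log in as a different user from a keytab and run work with the freshly obtained credentials. The principal, keytab path, and the HDFS operation are hypothetical placeholders; the exact pattern depends on your security setup.

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

// Hypothetical example: log in as a different user from a keytab and perform
// an HDFS operation with that user's fresh credentials instead of relying on
// stale delegation tokens in the current user's cache.
public class FreshLoginExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    UserGroupInformation.setConfiguration(conf);

    // Placeholder principal and keytab path.
    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
        "etluser@EXAMPLE.COM", "/etc/security/keytabs/etluser.keytab");

    // Everything inside doAs() runs with the newly logged-in user's credentials.
    ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
      FileSystem fs = FileSystem.get(conf);
      System.out.println("Exists: " + fs.exists(new Path("/tmp")));
      return null;
    });
  }
}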
... View more
02-13-2016
11:53 AM
@Roberto Sancho NIS seems to be a workaround, but I don't find it secure. You can read about it here: "http://aput.net/~jheiss/krbldap/howto.html".
But I would suggest going with Kerberos.
... View more
02-13-2016
10:16 AM
1 Kudo
We know Hadoop is used in a clustered environment: each cluster has multiple racks, and each rack has multiple DataNodes. So, to make HDFS fault tolerant in your cluster, you need to consider the following failures: DataNode failure and rack failure. The chance of a whole-cluster failure is fairly low, so let's not think about it. In the above cases you need to make sure that: if one DataNode fails, you can get the same data from another DataNode; if the entire rack fails, you can get the same data from another rack. That is why I think the default replication factor is set to 3: no two replicas go to the same DataNode, and at least one replica goes to a different rack, fulfilling the above-mentioned fault-tolerance criteria. Hope this will help.
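As a rough illustration of replica placement, the following Java sketch prints a file's replication factor and the DataNodes holding each of its blocks via the standard FileSystem API; the path used is a hypothetical placeholder, not taken from the discussion above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical example: show how the replicas of a file are spread across
// DataNodes, which is what the fault-tolerance reasoning above relies on.
public class ReplicaLocations {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path file = new Path("/user/hdfs/myfile.txt"); // placeholder path
    FileStatus status = fs.getFileStatus(file);
    System.out.println("Replication factor: " + status.getReplication());

    BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation block : blocks) {
      System.out.println("Block at offset " + block.getOffset()
          + " is stored on: " + String.join(", ", block.getHosts()));
    }
  }
}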
... View more