Member since: 05-19-2016
Posts: 216
Kudos Received: 20
Solutions: 4

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4188 | 05-29-2018 11:56 PM
 | 7020 | 07-06-2017 02:50 AM
 | 3765 | 10-09-2016 12:51 AM
 | 3530 | 05-13-2016 04:17 AM
07-05-2017
11:35 PM
Pause Duration Suppress... Average time spent paused was 46.4 second(s) (77.32%) per minute over the previous 5 minute(s). Critical threshold: 60.00%

I have suddenly started getting this issue; it never happened before, and I have not changed any configurations. What could the possible reasons be, and how can I inspect and resolve this?
Labels: Apache Hive, Cloudera Manager
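This health test usually reflects stop-the-world JVM garbage-collection (or host-level) pauses in the role's process, so the GC log is a good first place to look. A minimal inspection sketch, assuming the role writes a standard `-verbose:gc` style log (the path in the usage comment is a placeholder; the actual location varies by role and configuration):

```shell
# Sketch: surface the longest stop-the-world pauses from a JVM GC log.
# "real=" in verbose GC output approximates wall-clock pause seconds.
show_top_pauses() {
    # $1: path to the role's GC log file
    grep -oE 'real=[0-9]+\.[0-9]+' "$1" | sort -t= -k2 -rn | head -n 5
}

# Usage (placeholder path; check the role's process directory):
# show_top_pauses /var/log/hive/gc.log
```

Long pauses commonly trace back to heap pressure (consider raising the role's heap) or to host swapping, which is worth ruling out with `vmstat` while the alert is firing.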
03-17-2017
03:06 AM
I have INSERT OVERWRITE queries in an HQL file that sometimes fail to acquire the required locks because an end user may be querying the same table. The scheduled query simply fails in such cases, breaking the workflow. Is there a way to fix this? If I just run UNLOCK TABLE every time, it fails with an error that the lock does not exist. Can I unlock only when a lock exists, so that my scheduled query always succeeds? The job log shows:

Heart beat
Heart beat
Heart beat
Heart beat
6712698 [main] ERROR org.apache.hadoop.hive.ql.Driver - FAILED: Error in acquiring locks: Locks on the underlying objects cannot be acquired. retry after some time
org.apache.hadoop.hive.ql.lockmgr.LockException: Locks on the underlying objects cannot be acquired. retry after some time
at org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager.acquireLocks(DummyTxnManager.java:164)
at org.apache.hadoop.hive.ql.Driver.acquireLocksAndOpenTxn(Driver.java:1025)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1301)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1120)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1108)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:218)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:170)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:381)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:316)
at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:414)
at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:430)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:724)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:691)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:626)
at org.apache.oozie.action.hadoop.HiveMain.runHive(HiveMain.java:325)
at org.apache.oozie.action.hadoop.HiveMain.run(HiveMain.java:302)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:49)
at org.apache.oozie.action.hadoop.HiveMain.main(HiveMain.java:69)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:236)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
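Rather than unlocking manually, one option is to make Hive itself wait longer for the lock (the `hive.lock.numretries` and `hive.lock.sleep.between.retries` properties control this when concurrency locking is enabled), and another is to retry the whole job from a shell wrapper. The latter is sketched below; it assumes the job can be launched via `hive -f` from a wrapper script (e.g. an Oozie shell action) rather than directly by the Hive action, and the retry count and delay are illustrative:

```shell
# Sketch: retry a command that can fail transiently (e.g. `hive -f job.hql`
# losing the race for table locks), instead of issuing UNLOCK TABLE.
run_with_retry() {
    cmd=$1; max=${2:-5}; delay=${3:-60}
    attempt=1
    until $cmd; do
        [ "$attempt" -ge "$max" ] && return 1
        attempt=$((attempt + 1))
        sleep "$delay"    # give readers time to release their SHARED locks
    done
    return 0
}

# Usage in the scheduled wrapper (job path is a placeholder):
# run_with_retry "hive -f /path/to/job.hql" 5 60
```

The retry approach keeps reader queries untouched, which manual UNLOCK TABLE does not guarantee.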
Labels: Apache Hive
11-05-2016
11:05 AM
Here's a step-by-step guide to troubleshooting this error: http://www.yourtechchick.com/hadoop/failed-receive-heartbeat-agent-cloudera-hadoop/
11-03-2016
06:02 PM
@jss: Yes, hostname -f returns the FQDN as expected, but in AWS that is the private DNS name, which is not pingable from outside the network. Only the public DNS name is pingable from outside, and the public DNS name is not the FQDN. What do you suggest in this case?
11-03-2016
04:02 PM
Also, to add a new host, should I use the public DNS name in that case (since my existing host is not hosted with AWS and so, of course, is not inside the Amazon VPC that my new host is in)? But from what I see, it is the FQDN that goes in the hosts file everywhere. Or should I add the FQDN (private DNS name) together with the public IP in the hosts file? Also, when searching for the host to be added, it has to be the public DNS name, right? It was not able to find the host when I used the FQDN, which is the private DNS name in AWS.
11-03-2016
03:59 PM
@jss: Thank you for your response. I get your point, but how does it matter whether the instance is inside a VPC or not? If my existing node is hosted remotely, how would it interact with the new node using the private IP? The instance has a VPC ID, so I suppose it is inside a VPC, and as long as I am not restarting the instance it should be good to go, no? But my primary concern is how adding the private IP address helps at all when the server (the existing node) is not in the same network as the other node.
11-03-2016
03:18 PM
I have a single-node cluster so far. The new host I am going to add is located in a different data center than the existing host. The documentation says:

"Using a text editor, open the hosts file on every host in your cluster. For example: vi /etc/hosts. Add a line for each host in your cluster. The line should consist of the IP address and the FQDN. For example: 1.2.3.4 <fully.qualified.domain.name>"

Is 1.2.3.4 here a public IP or a private IP? The new host I am trying to add is hosted with AWS; the existing node is on a dedicated server hosted with another company. What should I add in /etc/hosts of 1. the existing node and 2. the new node? The AWS node (the new one) has a public DNS name, a private DNS name, a public IP, and a private IP (the private DNS name is the hostname). In all your examples, you have added the private IP and the hostname (private DNS name). But how does adding the private IP help if the node is not part of the same network? Please help!
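Roughly, when the two nodes are in different networks, the AWS private IP is unreachable from the dedicated server, so the external node generally has to reach the AWS host via its public (or Elastic) IP while still using the one FQDN that `hostname -f` reports on the AWS side. A sketch of that idea with placeholder values (all addresses and names below are illustrative, not real):

```
# /etc/hosts on the existing (non-AWS) node -- map the AWS host's FQDN
# to an address actually reachable from here (its public/Elastic IP):
1.2.3.4      ip-10-0-0-5.ec2.internal   ip-10-0-0-5

# /etc/hosts on the AWS node -- map the dedicated server's FQDN to an
# address reachable from AWS (its public IP):
5.6.7.8      host-a.example.com         host-a
```

Two caveats worth checking: the AWS security group must allow the required ports from the dedicated server's IP, and an Elastic IP helps because a plain public IP can change when the instance is stopped and started.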
11-03-2016
08:43 AM
So far I have a single-node cluster (cluster-1 with host-A). Now I am trying to add a new host, which is an AWS EC2 instance. I came across this article: http://hortonworks.com/blog/deploying-hadoop-cluster-amazon-ec2-hortonworks/ It says:

"Remember the list of private DNS names that you had copied down to a text file. We will pull out the list and paste it in the Target host input box. We will also upload the private key that we have been using on this page. Then we also need to open up the ports for IPs internal to the datacenter."

The IP of the server I am trying to add as a new host (let's call it host-B) is not internal to the data center where my existing single-node cluster is hosted. What all do I need to do to add this new host that is not internal to that data center? The Amazon server gives: a hostname, a public DNS name, a private DNS name, a public IP, and a private IP. What exactly do I need to add in my /etc/hosts on the existing host (host-A) in the cluster for it to be able to access the new host (host-B) that is hosted with AWS? Also, what needs to go in /etc/hosts of the new host (host-B)? Please suggest! My problem is that even though the host was added, it is not in the list of live hosts. Also, heartbeat is available.
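Before (re)adding the host, it may help to verify that each node can resolve and reach the other under the exact FQDN placed in /etc/hosts; a missing live-host entry is often just a name-resolution or firewall problem. A minimal sketch (the FQDN passed in is whatever you put in the hosts file; the example name is a placeholder):

```shell
# Sketch: verify name resolution and basic reachability of a peer node
# before adding it through the wizard.
check_host() {
    fqdn=$1
    getent hosts "$fqdn" || { echo "no resolution for $fqdn" >&2; return 1; }
    ping -c 1 -W 2 "$fqdn" >/dev/null 2>&1 || { echo "$fqdn unreachable" >&2; return 1; }
    echo "$fqdn resolves and responds"
}

# Example with a placeholder name:
# check_host ip-10-0-0-5.ec2.internal
```

Running `hostname -f` on each node and confirming it matches the name the other node uses is worth doing at the same time, since the agent reports that name in its heartbeat.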
Labels: Apache Hadoop