
org.apache.hadoop.hive.ql.metadata.HiveException disappears when I add LIMIT clause to query

Solved


New Contributor

I have a query that inserts records from one table into another, approximately 1.5 million records in total.

[screenshot: query1.jpg — the INSERT query]
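The screenshot itself is not recoverable here, but based on the description (and the dynamic-partition setting suggested in the accepted solution below), the query was presumably something along these lines; the table, column, and partition names are hypothetical:

SET hive.exec.dynamic.partition.mode=nonstrict;

INSERT INTO TABLE target_table PARTITION (dt)
SELECT col_a, col_b, dt
FROM source_table;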

When I run the query, most of the map tasks fail and the job aborts, displaying several exceptions.

[screenshot: hive-ex.jpg — the HiveException stack trace]

There is no problem at all with the data being inserted or with the table schema, as the failed rows can be inserted successfully in an isolated query. The DataNodes and YARN have plenty of space left, and connectivity is not an issue, as the cluster is hosted on AWS EC2 and all the machines are in the same Virtual Private Cloud.

But here is the strange thing… If I add a LIMIT clause to the original query, the job executes properly! The limit value is large enough to include all the records, so in effect there is no actual limit.

[screenshot: hive-succ.jpg — the successful run with LIMIT]
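Using the same hypothetical names as above, the workaround amounts to appending a LIMIT that exceeds the roughly 1.5 million rows being copied:

INSERT INTO TABLE target_table PARTITION (dt)
SELECT col_a, col_b, dt
FROM source_table
LIMIT 2000000;

A LIMIT forces an extra reduce stage into the execution plan, which is one plausible reason the job behaves differently; it is consistent with the accepted solution's point that changing the query plan sidesteps the failure.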

Cluster Specs: The cluster is small, for testing purposes, composed of 3 machines: the Ambari server and 2 DataNodes/NodeManagers with 8 GB RAM each. YARN memory is 12.5 GB, and remaining HDFS disk space is 40 GB.

Thanks in advance for your time,

1 ACCEPTED SOLUTION


Re: org.apache.hadoop.hive.ql.metadata.HiveException disappears when I add LIMIT clause to query

Rising Star

Looks like your DataNodes are dying from too many open files. Check the nofile setting for the "hdfs" user in /etc/security/limits.d/. If you want to bypass that particular problem by changing the query plan, try with set hive.optimize.sort.dynamic.partition=true;
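To make the two suggestions concrete (the file name and limit value below are illustrative assumptions, not from the post): the open-file ceiling is raised with an entry such as hdfs - nofile 128000 in a file under /etc/security/limits.d/, followed by a DataNode restart so the new limit takes effect. The Hive-side workaround is a one-line session setting:

-- Run in the Hive session before the INSERT. For dynamic-partition
-- inserts this sorts rows by partition key, so each writer keeps only
-- one partition file open at a time instead of one per partition.
SET hive.optimize.sort.dynamic.partition=true;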
