
Hive Query does not run

Rising Star

When I run queries such as COUNT or INSERT, I always get:

INFO : number of splits:1
INFO : Submitting tokens for job: job_1481788223653_0004
INFO : The url to track the job: http://ibm-biginsight.com:8088/proxy/application_1481788223653_0004/
INFO : Starting Job = job_1481788223653_0004, Tracking URL = http://ibm-biginsight.com:8088/proxy/application_1481788223653_0004/
INFO : Kill Command = /usr/iop/4.1.0.0/hadoop/bin/hadoop job -kill job.

When I check the Hive log, it shows:

ERROR mr.ExecDriver (ExecDriver.java:execute(400)) - yarn

I do not know what to do. I have been stuck for about a month and cannot get anything to run, although queries like SHOW DATABASES or SHOW TABLES work fine. I am using Ambari, and my Hadoop version is 2.1. I have attached the logs; please help me resolve this.

hiveserver2log-for-count-query.txt
hiveserver2-insert-query.txt
yarn-log-for-count-query.txt
yarn-log-for-insert.txt

1 ACCEPTED SOLUTION

Expert Contributor

The issue seems to be that the node got blacklisted, and it is a single-node cluster.

I would recommend cleaning up files:

Identify any large files in HDFS using an hdfs dfs -ls -R / report and delete them; delete them from the /user/<username>/.Trash folder too (e.g. for the hdfs or hive user).

Clean up the /tmp space at the HDFS layer.

Make sure all required services are running; restart them if needed.

Then monitor the storage space on that node using the command "df -h".

Run a smoke test, then execute your application job.
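The "monitor the storage space with df -h" step above can be sketched as a small script. This is a minimal illustration, not a definitive tool: the `df_sample` output below is made up, and on the real node you would pipe live `df -P` output for the YARN directories into the same filter.

```shell
#!/bin/sh
# check_usage reads df -P style output and prints any mount whose
# Capacity column exceeds YARN's default 90% disk-utilization threshold
# (the threshold that got /hadoop/yarn/local blacklisted in this thread).
check_usage() {
  awk -v limit=90 'NR > 1 { if ($5 + 0 > limit) print $6, $5 }'
}

# Sample df -P output (assumed values, for illustration only).
# On a real node you would instead run:
#   df -P /hadoop/yarn/local /hadoop/yarn/log | check_usage
df_sample='Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/sda1 51475068 50421000 1054068 98% /hadoop
/dev/sdb1 51475068 20000000 31475068 39% /data'

echo "$df_sample" | check_usage
```

Any mount the filter prints is a candidate for cleanup before YARN will mark the node healthy again.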


6 REPLIES

Guru

From looking at your YARN log, it looks like the local directories for the YARN data are full. See this post for cleaning them up:

https://community.hortonworks.com/questions/35751/manage-yarn-local-log-dirs-space.html

2016-12-15 15:50:10,986 WARN  nodemanager.DirectoryCollection (DirectoryCollection.java:checkDirs(248)) - Directory /hadoop/yarn/local error, used space above threshold of 90.0%, removing from list of valid directories
2016-12-15 15:50:10,987 WARN  nodemanager.DirectoryCollection (DirectoryCollection.java:checkDirs(248)) - Directory /hadoop/yarn/log error, used space above threshold of 90.0%, removing from list of valid directories
2016-12-15 15:50:10,989 ERROR nodemanager.LocalDirsHandlerService (LocalDirsHandlerService.java:updateDirsAfterTest(356)) - Most of the disks failed. 1/1 local-dirs are bad: /hadoop/yarn/local; 1/1 log-dirs are bad: /hadoop/yarn/log
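To know exactly which paths to clean, you can pull the blacklisted directories straight out of the NodeManager log. This is a small sketch over the WARN lines quoted above; `log_lines` here just embeds those two lines so the snippet is self-contained, whereas on the node you would pipe the actual log file into the same sed command.

```shell
#!/bin/sh
# Extract the "Directory <path> error" paths that the NodeManager
# blacklisted, from the log lines quoted in this thread.
log_lines='2016-12-15 15:50:10,986 WARN  nodemanager.DirectoryCollection (DirectoryCollection.java:checkDirs(248)) - Directory /hadoop/yarn/local error, used space above threshold of 90.0%, removing from list of valid directories
2016-12-15 15:50:10,987 WARN  nodemanager.DirectoryCollection (DirectoryCollection.java:checkDirs(248)) - Directory /hadoop/yarn/log error, used space above threshold of 90.0%, removing from list of valid directories'

# Print only the directory path from each matching WARN line.
echo "$log_lines" | sed -n 's/.*Directory \([^ ]*\) error.*/\1/p'
```

Note that both the local-dirs and the log-dirs are flagged here, so cleaning only /hadoop/yarn/log (as tried below) would still leave /hadoop/yarn/local over the threshold.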

Rising Star

I went to /hadoop/yarn/log, deleted everything, and restarted YARN, but I got the same error. Do you have another solution?

Rising Star

I went to /hadoop/yarn/log, deleted everything, and restarted YARN, but I got the same error. Do you have another solution?

hiveserver2.txt
yarn-yarn-nodemanager-ibm-biginsight.txt
yarn-yarn-timelineserver-ibm-biginsight-2.txt

Expert Contributor

The issue seems to be that the node got blacklisted, and it is a single-node cluster.

I would recommend cleaning up files:

Identify any large files in HDFS using an hdfs dfs -ls -R / report and delete them; delete them from the /user/<username>/.Trash folder too (e.g. for the hdfs or hive user).

Clean up the /tmp space at the HDFS layer.

Make sure all required services are running; restart them if needed.

Then monitor the storage space on that node using the command "df -h".

Run a smoke test, then execute your application job.

Rising Star

OK, let me try it. I will report back with the result. Thanks!

Rising Star

Thank you. After I deleted the log files, my usage dropped from 98% to 74%, and then I could run the insert. But could you tell me how I can delete other unused files in Ambari, or how I can find their paths so that I can delete them manually?
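One common way to find further deletion candidates is to rank HDFS paths by size and work down the list. A minimal sketch, with made-up sizes and paths for illustration; on the cluster you would feed real `hdfs dfs -du` output through the same pipeline:

```shell
#!/bin/sh
# Sketch: rank paths by the size column (bytes first, path second),
# largest first, and show the top candidate for deletion.
# On a real cluster you would run something like:
#   hdfs dfs -du /user/hive | sort -rn | head
du_sample='1048576 /user/hive/warehouse/big_table
524288 /user/hive/.Trash
2048 /user/hdfs/.staging'

echo "$du_sample" | sort -rn | head -n 1
```

Remember that deleting through Hive or `hdfs dfs -rm` may only move files into the user's .Trash folder, so the space is not reclaimed until the trash is emptied as well.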