Support Questions

Find answers, ask questions, and share your expertise

Hive select query error in Beeline

Explorer

Hi,

I'm trying to execute a Hive SELECT query on an external table through Beeline, and I'm getting the error below:

 

Error: Error while compiling statement: FAILED: NullPointerException null (state=42000,code=40000)

 

Any clue on this would be appreciated.

 

Thanks !

 

5 REPLIES


Please share the full exception from Beeline. If the full exception is not available, try adding --verbose to the beeline command.
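For reference, a verbose invocation might look like the sketch below. The host, port, and table name are placeholders, not values from this thread; substitute your own HiveServer2 JDBC URL.

```shell
# Placeholder host/port/table -- substitute your own HiveServer2 JDBC URL.
# --verbose makes Beeline print the full stack trace behind the terse
# "Error while compiling statement" message.
beeline --verbose \
  -u "jdbc:hive2://hs2-host.example.com:10000/default" \
  -e "SELECT * FROM my_external_table LIMIT 5"
```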

Explorer

Hi @venkatsambath,

Thanks for the response.

 

I will check the permissions for the Hive metastore. In the meantime, here is what the HiveServer2 log shows:

 

2020-03-06T09:14:36,393 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_des HDFS audit. Event Size:1
2020-03-06T09:14:36,393 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_deso consumer. provider=hiveServer2.async.multi_dest.batch, consumer=hiveServer2.async.multi_des
2020-03-06T09:14:36,393 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_des sleeping for 30000 milli seconds. indexQueue=0, queueName=hiveServer2.async.multi_dest.batch
2020-03-06T09:15:36,394 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_desg: name=hiveServer2.async.multi_dest.batch.hdfs, interval=01:00.015 minutes, events=1, deferr
2020-03-06T09:15:36,394 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_desg HDFS Filesystem Config: Configuration: core-default.xml, core-site.xml, mapred-default.xml,ite.xml
2020-03-06T09:15:36,412 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_des whether log file exists. hdfPath=hdfs://localhost:xxxx/hiveServer2/xxxxx/hiveServer2_rang
2020-03-06T09:15:36,413 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_deso log file.
java.net.ConnectException: Call From XXXXXX.XXXXX.XXX/10.21.16.60 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.GeneratedConstructorAccessor61.newInstance(Unknown Source) ~[?:?]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAcc
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_181]
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831) ~[hadoop-common-
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755) ~[hadoop-common-3.
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1501) ~[hadoop-common-3.1.
at org.apache.hadoop.ipc.Client.call(Client.java:1443) ~[hadoop-common-3.1.1.3.0.1.0-
at org.apache.hadoop.ipc.Client.call(Client.java:1353) ~[hadoop-common-3.1.1.3.0.1.0-
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy32.getFileInfo(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(C1.0-187.jar:?]
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHand
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocatio
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandl
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationH
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.ja
at com.sun.proxy.$Proxy33.getFileInfo(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1654) ~[hadoop-hdfs-cl
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.j
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1734) ~[hadoop-common-3.1.1
at org.apache.ranger.audit.destination.HDFSAuditDestination.getLogFileStream(HDFSAudi
at org.apache.ranger.audit.destination.HDFSAuditDestination.access$000(HDFSAuditDesti
at org.apache.ranger.audit.destination.HDFSAuditDestination$1.run(HDFSAuditDestinatio
at org.apache.ranger.audit.destination.HDFSAuditDestination$1.run(HDFSAuditDestinatio
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_181]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:173
at org.apache.ranger.audit.provider.MiscUtil.executePrivilegedAction(MiscUtil.java:52
at org.apache.ranger.audit.destination.HDFSAuditDestination.logJSON(HDFSAuditDestinat
at org.apache.ranger.audit.queue.AuditFileSpool.sendEvent(AuditFileSpool.java:879) ~[
at org.apache.ranger.audit.queue.AuditFileSpool.runLogAudit(AuditFileSpool.java:827)
at org.apache.ranger.audit.queue.AuditFileSpool.run(AuditFileSpool.java:757) ~[?:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:1.8.0_181]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[?:1.8.0_1
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) ~[
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) ~[hadoop-common-3.1.1.3.
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:687) ~[hadoop-
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:790) ~[hadoop-c
at org.apache.hadoop.ipc.Client$Connection.access$3600(Client.java:410) ~[hadoop-comm
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1558) ~[hadoop-common-3.1.1
at org.apache.hadoop.ipc.Client.call(Client.java:1389) ~[hadoop-common-3.1.1.3.0.1.0-

Super Guru
[hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_deso log file.
java.net.ConnectException: Call From xxx.xxx.xxx/10.21.16.60 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

 

Check your permissions for the Hive user, both in Ranger and in the metastore. Also, something I try to avoid is letting components run as "localhost". If you can, always use the FQDN for all services; in a clustered environment the FQDN is required. "localhost" can resolve to the wrong host depending on which node you are working from.
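The "localhost" problem shows up in core-site.xml. A minimal sketch of checking it (the sample file and path below are made up for illustration; on a real node you would inspect /etc/hadoop/conf/core-site.xml):

```shell
# Create a sample core-site.xml for illustration only; on a real node
# inspect /etc/hadoop/conf/core-site.xml instead.
cat > /tmp/core-site-sample.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>
EOF

# Extract the fs.defaultFS value. A "localhost" here means every HDFS
# client on this node (including Ranger's HDFS audit writer) dials
# localhost:8020 instead of the real NameNode.
fs_default=$(grep -A1 'fs.defaultFS' /tmp/core-site-sample.xml | grep -o 'hdfs://[^<]*')
echo "$fs_default"
case "$fs_default" in
  *localhost*) echo "WARNING: fs.defaultFS points at localhost; use the NameNode FQDN" ;;
esac
```

The same check applies to any service URL in the client configs: anything resolving to localhost will only work on the node that actually hosts that service.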

 

 

Explorer

Hi @stevenmatison,

 

We have another cluster with a similar user where we are not facing this issue.

Also, we have not set up any Ranger policies yet.

 

Could you please advise?

 

Contributor

https://issues.apache.org/jira/browse/HIVE-13037

 

 

Perhaps this will give you some insight.