Member since
02-12-2020
40
Posts
0
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1047 | 03-09-2020 05:24 AM |
07-16-2020
07:17 AM
@Govins28 It's a test cluster
07-16-2020
04:42 AM
Hi Team,
Getting the following error while trying to do a cleanup on the master & worker nodes:

yum -y remove ruby-devel libganglia libconfuse hdp_mon_ganglia_addons postgresql-server
yum -y remove postgresql postgresql-libs ganglia-gmond-python ganglia ganglia-gmetad ganglia-web
yum -y remove ganglia-devel httpd mysql mysql-server mysqld puppet

Loaded plugins: fastestmirror, langpacks, priorities
Resolving Dependencies
--> Running transaction check
---> Package ambari-agent.x86_64 0:2.7.1.0-169 will be erased
---> Package ambari-metrics-hadoop-sink.x86_64 0:2.7.1.0-169 will be erased
---> Package ambari-metrics-monitor.x86_64 0:2.7.1.0-169 will be erased
---> Package ambari-server.x86_64 0:2.7.1.0-169 will be erased
--> Finished Dependency Resolution
http://public-repo-1.hortonworks.com/HDP-GPL/centos7/3.x/updates/3.0.1.0/repodata/repomd.xml: [Errno 12] Timeout on http://public-repo-1.hortonworks.com/HDP-GPL/centos7/3.x/updates/3.0.1.0/repodata/repomd.xml: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
(the same timeout and "Trying other mirror." messages repeat twice more)
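Note that dependency resolution itself finishes fine; the timeouts come from yum refreshing metadata for the unreachable public HDP-GPL repo. A minimal sketch, assuming the repo id matches the pattern 'HDP-GPL*' and GNU sed (check your actual repo ids with `yum repolist all`); the ON_CLUSTER guard keeps the destructive removal from running outside the cluster nodes:

```shell
# Assumption: repo id matches 'HDP-GPL*'; set ON_CLUSTER=true on the HDP nodes.
ON_CLUSTER=${ON_CLUSTER:-false}
if "$ON_CLUSTER"; then
  # Skip the dead repo for this transaction only:
  yum -y --disablerepo='HDP-GPL*' remove ambari-agent ambari-server
fi

# To disable the repo permanently, flip enabled=1 to enabled=0 in its .repo
# file. Shown here on a throwaway copy so the sketch is safe to dry-run
# (the real file would be something like /etc/yum.repos.d/hdp-gpl.repo):
repo_copy=$(mktemp)
printf '[HDP-GPL-3.0.1.0]\nenabled=1\n' > "$repo_copy"
sed -i 's/^enabled=1/enabled=0/' "$repo_copy"
grep '^enabled' "$repo_copy"   # prints: enabled=0
rm -f "$repo_copy"
```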
Labels:
07-16-2020
04:25 AM
@Govins28 PFA the NameNode logs. The Ambari error is as under:
Connection failed to http://xxxxx.com:50070 (<urlopen error [Errno 111] Connection refused>)
07-15-2020
03:39 AM
@Shelton While starting the NameNode manually, we are facing an issue; PFA enclosed.
07-10-2020
04:20 AM
@Shelton Sure, will follow the steps as advised & update you. Thanks!
07-09-2020
11:07 PM
Hi Team, We have a 2-node cluster setup with a NameNode and a DataNode. The NameNode is refusing to start and is failing with the following error:

Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://inqchdpmn1.XXX.com:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
safemode: Call From inqchdpmn1.XXX.com/10.10.31.71 to inqchdpmn1.XXX.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
(the same "safemode: Call From ..." line is repeated once more)

Also, as of 2020-07-10 11:08:55, 'sudo netstat -anp | grep 8020' doesn't give any output. We have been stuck for a while now; please help. TIA.
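Since nothing answers on port 8020 and netstat shows no listener, the NameNode process itself has died; the safemode probe is only a symptom. A diagnostic sketch, assuming the usual HDP log location (the hostname is the one from the post) and guarded so the cluster-only commands are skipped where the HDP client tools are absent:

```shell
# No listener on 8020 means the NameNode process is down, not a network issue:
ss -ltn 2>/dev/null | grep ':8020' || echo 'nothing listening on 8020 -> NameNode is down'

# On the NameNode host, the reason it died is in its own log (usual HDP path;
# adjust if your log dir differs):
if command -v hdfs >/dev/null; then
  tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log
  # Once the NameNode is back up, Ambari's probe should succeed:
  hdfs dfsadmin -fs hdfs://inqchdpmn1.XXX.com:8020 -safemode get
fi
```

Common culprits found this way are an unformatted or corrupted name directory, a hostname/bind-address mismatch, or a full disk; the log's last exception usually names it.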
Labels:
- Apache Ambari
03-26-2020
07:18 AM
Hi @Shelton, We have disabled Ranger authorization in Ambari & allowed Hive to run as the end user instead of the hive user. Still, HiveServer2 is not coming up:

2020-03-26 05:36:05,838 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server <master node>:2181,<datanode1>:2181,<data node3>:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2020-03-26 05:36:06,497 - call returned (1, 'Node does not exist: /hiveserver2')
2020-03-26 05:36:06,498 - Will retry 1 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)

Any clue on this?
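"Node does not exist: /hiveserver2" means HiveServer2 never registered its znode with ZooKeeper, so the root cause is almost always in the HiveServer2 startup log rather than in ZooKeeper itself. A sketch of where to look, assuming the usual HDP log path and a placeholder quorum host; the checks are guarded so they no-op off-cluster:

```shell
# Look for the registration line (serverUri=...) or the failure before it:
if [ -r /var/log/hive/hiveserver2.log ]; then
  grep -iE 'serverUri=|Created a znode|ERROR' /var/log/hive/hiveserver2.log | tail -n 20
fi

# The znode check Ambari runs, for reference (quorum host is a placeholder):
if [ -x /usr/hdp/current/zookeeper-client/bin/zkCli.sh ]; then
  /usr/hdp/current/zookeeper-client/bin/zkCli.sh -server master:2181 ls /hiveserver2
fi
```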
03-26-2020
04:10 AM
Hi @Shelton Getting the same message:
[zk: localhost:2181(CONNECTED) 8] getAcl /hiveserver2
Node does not exist: /hiveserver2
[zk: localhost:2181(CONNECTED) 9]
03-26-2020
03:40 AM
Getting the below error for HiveServer2:

2020-03-26 05:36:05,838 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server <master node server>:2181,us01qc1hdpdn01.dev.cds:2181,<date node server>:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2020-03-26 05:36:06,497 - call returned (1, 'Node does not exist: /hiveserver2')
2020-03-26 05:36:06,498 - Will retry 1 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
03-26-2020
12:38 AM
Hi @dayphache Could you pls elaborate on your response? I can see only "33" in the reply; didn't get what it means.
03-26-2020
12:10 AM
Hi @Shelton, The HDP version is 3.0.1. We managed to change the Hive user name in the service accounts. However, Hive is unable to come up, with the below error:

Error: org.apache.hive.jdbc.ZooKeeperHiveClientException: Unable to read HiveServer2 configs from ZooKeeper (state=,code=0)

Pls advise. Thanks!
03-24-2020
10:35 AM
Hi @Shelton Is it possible to change the service user name through Linux? If so, pls share the details. Thanks!
03-24-2020
09:28 AM
Right @Shelton. Is there any way to modify a service user account in Ambari? We need to change the value for hive to hive_xxxx, which has been configured on the Isilon side. Due to this we are getting the Hive error, & as per the Linux admin team it needs to be fixed as above. E.g., within Ambari we need to change the following hive user to hive_xxx:

Service Users and Groups
Hive User: hive
03-24-2020
08:36 AM
@Shelton We thought of re-installing the Hive service: earlier it was running on the data node, and we thought of re-enabling it on the master node. The idea was to check whether there are still permission issues.
03-24-2020
08:01 AM
Thanks @Shelton. However, we observed the below error for Hive within Ambari when we tried to re-install the Hive service:

File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'hadoop --config /usr/hdp/3.0.1.0-187/hadoop/conf jar /var/lib/ambari-agent/lib/fast-hdfs-resource.jar /var/lib/ambari-agent/tmp/hdfs_resources_1585061727.13.json' returned 1.
Initializing filesystem uri: hdfs://<masternode>:8020
Creating: Resource [source=null, target=/hdp/apps/3.0.1.0-187/mapreduce, type=directory, action=create, owner=hdfs-<user>, group=null, mode=555, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Creating: Resource [source=/usr/hdp/3.0.1.0-187/hadoop/mapreduce.tar.gz, target=/hdp/apps/3.0.1.0-187/mapreduce/mapreduce.tar.gz, type=file, action=create, owner=hdfs-US01QC1, group=hadoop-US01QC1, mode=444, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Creating: Resource [source=null, target=/hdp/apps/3.0.1.0-187/tez, type=directory, action=create, owner=hdfs-US01QC1, group=null, mode=555, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Creating: Resource [source=/var/lib/ambari-agent/tmp/tez-native-tarball-staging/tez-native.tar.gz, target=/hdp/apps/3.0.1.0-187/tez/tez.tar.gz, type=file, action=create, owner=hdfs-<user>, group=hadoop-US01QC1, mode=444, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Creating: Resource [source=null, target=/hdp/apps/3.0.1.0-187/hive, type=directory, action=create, owner=hdfs-<user>, group=null, mode=555, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Creating: Resource [source=/usr/hdp/3.0.1.0-187/hive/hive.tar.gz, target=/hdp/apps/3.0.1.0-187/hive/hive.tar.gz, type=file, action=create, owner=hdfs-US01QC1, group=hadoop-<user>, mode=444, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Creating: Resource [source=null, target=/hdp/apps/3.0.1.0-187/mapreduce, type=directory, action=create, owner=hdfs-<user>, group=null, mode=555, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Creating: Resource [source=/usr/hdp/3.0.1.0-187/hadoop-mapreduce/hadoop-streaming.jar, target=/hdp/apps/3.0.1.0-187/mapreduce/hadoop-streaming.jar, type=file, action=create, owner=hdfs-<user>, group=hadoop-<user>, mode=444, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Creating: Resource [source=null, target=/warehouse/tablespace/external/hive/sys.db/, type=directory, action=create, owner=hive, group=null, mode=1755, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Exception occurred, Reason: Error mapping uname 'hive' to uid
org.apache.hadoop.ipc.RemoteException(java.lang.SecurityException): Error mapping uname 'hive' to uid
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
    at org.apache.hadoop.ipc.Client.call(Client.java:1443)
    at org.apache.hadoop.ipc.Client.call(Client.java:1353)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy9.setOwner(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setOwner(ClientNamenodeProtocolTranslatorPB.java:470)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy10.setOwner(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.setOwner(DFSClient.java:1908)
    at org.apache.hadoop.hdfs.DistributedFileSystem$36.doCall(DistributedFileSystem.java:1770)
    at org.apache.hadoop.hdfs.DistributedFileSystem$36.doCall(DistributedFileSystem.java:1767)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setOwner(DistributedFileSystem.java:1780)
    at org.apache.ambari.fast_hdfs_resource.Resource.setOwner(Resource.java:276)
    at org.apache.ambari.fast_hdfs_resource.Runner.main(Runner.java:133)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
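"Error mapping uname 'hive' to uid" is a RemoteException, i.e. it is raised by the HDFS endpoint (here Isilon OneFS, which serves the NameNode RPC), not by the Linux hosts. OneFS must be able to resolve the user 'hive' to a uid on its side; the cluster nodes only control the local half of the mapping. A sketch of the local check (the OneFS-side hive vs hive_xxxx mapping itself has to be fixed by the Isilon admin):

```shell
# Confirm the local half of the mapping; exits cleanly either way:
id hive >/dev/null 2>&1 && echo "local 'hive' user exists" || echo "no local 'hive' user"
# Even if it exists locally, the RemoteException means OneFS cannot map it;
# nothing in Ambari can create that uid mapping on the Isilon side.
```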
03-24-2020
06:21 AM
Hi @Shelton, Yes, I did that & I can see the user in the /etc/passwd file. I do not have any documentation on the user mapping, as it is currently controlled by the EMC team (Dell Support).
03-24-2020
12:28 AM
Hi @Shelton I changed the permissions as advised. However, I am getting the same error: the new folder inside _tez_session_dir gets created with root ownership instead of <hive_user>. Pls note that <hive_user> has been mapped to root on the Isilon side.

I tried creating another user, hdpuser1, using the below commands in MySQL after logging in as root:

create user 'hdpuser1'@'%' identified by 'xxxxx';
grant all on hive.* TO 'hdpuser1'@'%' IDENTIFIED BY 'xxxxx';
FLUSH PRIVILEGES;

However, this doesn't work:

20/03/24 02:24:55 [main]: WARN jdbc.HiveConnection: Failed to connect to <FQDN of Data Node>:10000
20/03/24 02:24:55 [main]: ERROR jdbc.Utils: Unable to read HiveServer2 configs from ZooKeeper
Error: Could not open client transport for any of the Server URI's in ZooKeeper: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): Username: 'hdpuser1' not found. Make sure your client's username exists on the cluster (state=08S01,code=0)
Beeline version 3.1.0.3.0.1.0-187 by Apache Hive
beeline>
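The AuthorizationException here ("Username: 'hdpuser1' not found. Make sure your client's username exists on the cluster") says the user must exist at the OS level on the cluster nodes; creating it only inside MySQL grants database access but does nothing for Hadoop's user lookup. A sketch of the check to run on each node (the useradd itself needs root):

```shell
# Does the OS-level user exist on this node? (MySQL grants are not enough.)
if id hdpuser1 >/dev/null 2>&1; then
  echo "hdpuser1 exists at the OS level"
else
  echo "hdpuser1 missing: create it on every node, e.g. 'sudo useradd hdpuser1'"
fi
```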
03-23-2020
11:57 AM
Hi Team,
We are trying to insert data into Hive (HiveQL); however, it's failing. The error message is as under:

ERROR : Failed to execute tez graph. java.io.IOException: The ownership on the staging directory hdfs://<OneFSShare>.dev.cds:8020/tmp/hive/<hive_user>/_tez_session_dir/f8ad4086-ef7b-4194-a050-d15dba6913ca is not as expected. It is owned by root. The directory must be owned by the submitter <hive_user> or by <hive_user>

The following folder is owned by root instead of <hive_user>:
hdfs://<OneFSShare>.dev.cds:8020/tmp/hive/<hive_user>/_tez_session_dir/f8ad4086-ef7b-4194-a050-d15dba6913ca

Pls suggest.
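One way to unblock the submitter is to re-own (or remove) the root-owned session directory as an HDFS superuser; the user name and group below are placeholders standing in for the redacted values in the post. Note this only clears the symptom: if the Isilon-side mapping keeps resolving the submitter to root, new session dirs will again be created as root.

```shell
# Placeholder for the redacted <hive_user>; substitute the real name.
HIVE_USER='hive_user_placeholder'

# Guarded so the sketch is a no-op where the HDFS client is absent:
if command -v hdfs >/dev/null; then
  # Re-own the tez session dir for the hive service user (group 'hadoop' is
  # an assumption; check the parent dir's group first):
  hdfs dfs -chown -R "${HIVE_USER}:hadoop" "/tmp/hive/${HIVE_USER}/_tez_session_dir"
  hdfs dfs -ls "/tmp/hive/${HIVE_USER}/_tez_session_dir"
fi
```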
Labels:
- Apache Hive
03-16-2020
03:24 AM
Hi Team,
We are unable to start HBase.
We are using Hadoop 3.0.1 on Isilon OneFS.
Error message:
Creating: Resource [source=null, target=/apps/hbase/data, type=directory, action=create, owner=hbase-xxxxx, group=null, mode=null, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Exception occurred, Reason: Unexpected error: status: STATUS_MEDIA_WRITE_PROTECTED = 0xC00000A2 with path="/apps/hbase/data", username=hbase-xxxxx, groupname=
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected error: status: STATUS_MEDIA_WRITE_PROTECTED = 0xC00000A2 with path="/apps/hbase/data", username=hbase-xxxxx, groupname=
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
Labels:
- Apache Hadoop
- Apache HBase
03-16-2020
03:18 AM
Hi @stevenmatison, We have another cluster with a similar user where we are not facing any issue. Also, we have not set up any Ranger policies yet. Could you pls advise?
03-12-2020
03:47 AM
Thanks for sharing documents @StevenOD
03-11-2020
07:07 PM
Thanks @steve Could you pls suggest native tools in this scenario? Thanks & Regards
03-09-2020
08:15 AM
Hi @venkatsambath, Thanks for the response. Will check the permissions for the Hive metastore.

2020-03-06T09:14:36,393 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_des HDFS audit. Event Size:1
2020-03-06T09:14:36,393 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_deso consumer. provider=hiveServer2.async.multi_dest.batch, consumer=hiveServer2.async.multi_des
2020-03-06T09:14:36,393 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_des sleeping for 30000 milli seconds. indexQueue=0, queueName=hiveServer2.async.multi_dest.batch
2020-03-06T09:15:36,394 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_desg: name=hiveServer2.async.multi_dest.batch.hdfs, interval=01:00.015 minutes, events=1, deferr
2020-03-06T09:15:36,394 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_desg HDFS Filesystem Config: Configuration: core-default.xml, core-site.xml, mapred-default.xml,ite.xml
2020-03-06T09:15:36,412 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_des whether log file exists. hdfPath=hdfs://localhost:xxxx/hiveServer2/xxxxx/hiveServer2_rang
2020-03-06T09:15:36,413 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_deso log file. java.net.ConnectException: Call From XXXXXX.XXXXX.XXX/10.21.16.60 to localhost:8020 ed; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.GeneratedConstructorAccessor61.newInstance(Unknown Source) ~[?:?]
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAcc
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_181]
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831) ~[hadoop-common-
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755) ~[hadoop-common-3.
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1501) ~[hadoop-common-3.1.
    at org.apache.hadoop.ipc.Client.call(Client.java:1443) ~[hadoop-common-3.1.1.3.0.1.0-
    at org.apache.hadoop.ipc.Client.call(Client.java:1353) ~[hadoop-common-3.1.1.3.0.1.0-
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy32.getFileInfo(Unknown Source) ~[?:?]
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(C1.0-187.jar:?]
    at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) ~[?:?]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHand
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocatio
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandl
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationH
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.ja
    at com.sun.proxy.$Proxy33.getFileInfo(Unknown Source) ~[?:?]
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1654) ~[hadoop-hdfs-cl
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.j
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1734) ~[hadoop-common-3.1.1
    at org.apache.ranger.audit.destination.HDFSAuditDestination.getLogFileStream(HDFSAudi
    at org.apache.ranger.audit.destination.HDFSAuditDestination.access$000(HDFSAuditDesti
    at org.apache.ranger.audit.destination.HDFSAuditDestination$1.run(HDFSAuditDestinatio
    at org.apache.ranger.audit.destination.HDFSAuditDestination$1.run(HDFSAuditDestinatio
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]
    at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_181]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:173
    at org.apache.ranger.audit.provider.MiscUtil.executePrivilegedAction(MiscUtil.java:52
    at org.apache.ranger.audit.destination.HDFSAuditDestination.logJSON(HDFSAuditDestinat
    at org.apache.ranger.audit.queue.AuditFileSpool.sendEvent(AuditFileSpool.java:879) ~[
    at org.apache.ranger.audit.queue.AuditFileSpool.runLogAudit(AuditFileSpool.java:827)
    at org.apache.ranger.audit.queue.AuditFileSpool.run(AuditFileSpool.java:757) ~[?:?]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:1.8.0_181]
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[?:1.8.0_1
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) ~[
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) ~[hadoop-common-3.1.1.3.
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:687) ~[hadoop-
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:790) ~[hadoop-c
    at org.apache.hadoop.ipc.Client$Connection.access$3600(Client.java:410) ~[hadoop-comm
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1558) ~[hadoop-common-3.1.1
    at org.apache.hadoop.ipc.Client.call(Client.java:1389) ~[hadoop-common-3.1.1.3.0.1.0-
03-09-2020
05:28 AM
Hi @StevenOD, Thanks for the details. Just one query: we are building Hadoop on top of Isilon; will the following still hold true in that case? Thanks & Regards, Arvind.
03-09-2020
05:24 AM
@Gomathinayagam Thanks for your prompt response & clarification!
03-09-2020
01:37 AM
Hi,
Need help on below query.
We have a scenario where, for any column defined with datatype string in Hadoop, a NULL value is loaded as blank. However, for columns of any datatype other than string (INT, DOUBLE, etc.), NULL values are loaded as NULL in Hadoop.
Column name | Data Type |
---|---|
service1end | string |
service1start | string |
service2end | string |
service2start | string |
firstlinemaintcost | double |
Is this the default behavior of Hadoop/Hive?
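A plausible explanation (not confirmed in the thread): with Hive's default text serde, an empty field is a perfectly valid value for a string column (the empty string), while the same empty field cannot be parsed as INT or DOUBLE, so only the non-string columns fall back to NULL. The sketch below shows the raw empty field the serde sees; the table name in the ALTER is hypothetical, the property name is Hive's standard null-format key:

```shell
# What the text serde sees for an empty second field: a valid (empty) string.
printf 'a,,42.5\n' | awk -F, '{ print ($2 == "" ? "<empty string>" : $2) }'
# prints: <empty string>

# To read empty strings back as NULL instead, set the table's null format
# (run via beeline; table name is hypothetical):
#   ALTER TABLE my_table SET SERDEPROPERTIES ('serialization.null.format' = '');
```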
Tags:
- Hive
Labels:
- Apache Hive
03-05-2020
12:53 PM
Hi,
I'm trying to execute a Hive (SELECT) query on an external table through beeline & getting the below error:
Error while compiling statement: FAILED: NullPointerException null (state=42000, code=40000).
Appreciate if any clue on this.
Thanks !
Labels:
- Apache Hive