Member since: 02-12-2020
Posts: 40
Kudos Received: 0
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4214 | 03-09-2020 05:24 AM
03-26-2020
03:40 AM
Getting the below error for HiveServer2:

2020-03-26 05:36:05,838 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server <master node server>:2181,us01qc1hdpdn01.dev.cds:2181,<data node server>:2181 ls /hiveserver2 | grep 'serverUri='] {}
2020-03-26 05:36:06,497 - call returned (1, 'Node does not exist: /hiveserver2')
2020-03-26 05:36:06,498 - Will retry 1 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
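As a quick manual check (a minimal sketch; the ZooKeeper quorum host names below are placeholders), you can confirm whether any HiveServer2 instance has registered under /hiveserver2:

```bash
# List HiveServer2 registrations in ZooKeeper; a missing or empty
# /hiveserver2 znode means no HiveServer2 instance has started and
# registered yet, so the fix is usually to get HiveServer2 healthy
# rather than to touch ZooKeeper itself.
/usr/hdp/current/zookeeper-client/bin/zkCli.sh \
  -server <zk-host1>:2181,<zk-host2>:2181,<zk-host3>:2181 \
  ls /hiveserver2
```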
03-26-2020
12:38 AM
Hi @dayphache, could you please elaborate on your response? I can only see "33" in the reply and didn't understand what it means.
03-26-2020
12:10 AM
Hi @Shelton, the HDP version is 3.0.1. We managed to change the hive user name under Service Accounts. However, Hive is unable to come up and fails with the error below:

Error: org.apache.hive.jdbc.ZooKeeperHiveClientException: Unable to read HiveServer2 configs from ZooKeeper (state=,code=0)

Please advise. Thanks!
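For reference, a client that uses ZooKeeper service discovery only succeeds once HiveServer2 has registered under the configured namespace. A minimal connection sketch (host names are placeholders; the namespace shown is the default):

```bash
# Connect via ZooKeeper service discovery; this fails with
# "Unable to read HiveServer2 configs from ZooKeeper" while the
# /hiveserver2 znode is empty or absent.
beeline -u "jdbc:hive2://<zk-host1>:2181,<zk-host2>:2181,<zk-host3>:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
```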
03-24-2020
10:35 AM
Hi @Shelton, is it possible to change the service user name through Linux? If so, please share the details. Thanks!
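At the OS level, renaming an existing account is typically done with usermod/groupmod. This is a sketch only: `hive_xxxx` is the illustrative target name, and an OS-level rename does not change what Ambari has configured for the service.

```bash
# Rename the local account and its primary group on a node (run as root).
# The UID/GID stay the same; Ambari's Service Accounts setting must still
# be updated separately, or services will keep using the old name.
usermod  -l hive_xxxx hive
groupmod -n hive_xxxx hive
usermod  -d /home/hive_xxxx -m hive_xxxx
```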
03-24-2020
09:28 AM
Right, @Shelton. Is there any way to modify a service user account in Ambari? We need to change the value for hive to hive_xxxx, which is what has been configured on the Isilon side. Because of this mismatch we are getting the Hive error, and per the Linux admin team it needs to be fixed as above. For example, within Ambari we need to change the following Hive user to hive_xxxx:

Service Users and Groups
Hive User: hive
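The Hive service user lives in the hive-env configuration, so one way to change it is Ambari's bundled config helper script. This is a sketch: the script path and flags are from Ambari 2.7-era installs and may differ by version, and the host, cluster name, and credentials are placeholders.

```bash
# Set hive_user in hive-env via the Ambari config helper, then restart
# the affected services from Ambari. Exact script location and options
# can vary between Ambari versions.
/var/lib/ambari-server/resources/scripts/configs.py \
  -u admin -p <password> -l <ambari-host> -t 8080 -s http \
  -n <cluster-name> -a set -c hive-env -k hive_user -v hive_xxxx
```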
03-24-2020
08:36 AM
@Shelton, we thought of re-installing the Hive service: earlier it was running on a data node, and we wanted to re-enable it on the master node instead. The idea was to check whether the permission issues still occur.
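Moving a component between hosts can also be scripted against the Ambari REST API. This is only an illustrative sketch, not a recommendation over the UI: host names, cluster name, and credentials are placeholders, and the same steps are available under Hosts in the Ambari UI.

```bash
# Stop and remove the HIVE_SERVER component from the old host
# (the component must be stopped before it can be deleted).
curl -u admin:<password> -H 'X-Requested-By: ambari' -X DELETE \
  "http://<ambari-host>:8080/api/v1/clusters/<cluster>/hosts/<old-host>/host_components/HIVE_SERVER"

# Add it to the new host, then install and start it.
curl -u admin:<password> -H 'X-Requested-By: ambari' -X POST \
  "http://<ambari-host>:8080/api/v1/clusters/<cluster>/hosts/<new-host>/host_components/HIVE_SERVER"
curl -u admin:<password> -H 'X-Requested-By: ambari' -X PUT \
  -d '{"HostRoles": {"state": "INSTALLED"}}' \
  "http://<ambari-host>:8080/api/v1/clusters/<cluster>/hosts/<new-host>/host_components/HIVE_SERVER"
curl -u admin:<password> -H 'X-Requested-By: ambari' -X PUT \
  -d '{"HostRoles": {"state": "STARTED"}}' \
  "http://<ambari-host>:8080/api/v1/clusters/<cluster>/hosts/<new-host>/host_components/HIVE_SERVER"
```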
03-24-2020
08:01 AM
Thanks @Shelton. However, we observed the error below in Ambari when trying to re-install the Hive service:

File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'hadoop --config /usr/hdp/3.0.1.0-187/hadoop/conf jar /var/lib/ambari-agent/lib/fast-hdfs-resource.jar /var/lib/ambari-agent/tmp/hdfs_resources_1585061727.13.json' returned 1.
Initializing filesystem uri: hdfs://<masternode>:8020
Creating: Resource [source=null, target=/hdp/apps/3.0.1.0-187/mapreduce, type=directory, action=create, owner=hdfs-<user>, group=null, mode=555, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Creating: Resource [source=/usr/hdp/3.0.1.0-187/hadoop/mapreduce.tar.gz, target=/hdp/apps/3.0.1.0-187/mapreduce/mapreduce.tar.gz, type=file, action=create, owner=hdfs-US01QC1, group=hadoop-US01QC1, mode=444, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Creating: Resource [source=null, target=/hdp/apps/3.0.1.0-187/tez, type=directory, action=create, owner=hdfs-US01QC1, group=null, mode=555, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Creating: Resource [source=/var/lib/ambari-agent/tmp/tez-native-tarball-staging/tez-native.tar.gz, target=/hdp/apps/3.0.1.0-187/tez/tez.tar.gz, type=file, action=create, owner=hdfs-<user>, group=hadoop-US01QC1, mode=444, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Creating: Resource [source=null, target=/hdp/apps/3.0.1.0-187/hive, type=directory, action=create, owner=hdfs-<user>, group=null, mode=555, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Creating: Resource [source=/usr/hdp/3.0.1.0-187/hive/hive.tar.gz, target=/hdp/apps/3.0.1.0-187/hive/hive.tar.gz, type=file, action=create, owner=hdfs-US01QC1, group=hadoop-<user>, mode=444, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Creating: Resource [source=null, target=/hdp/apps/3.0.1.0-187/mapreduce, type=directory, action=create, owner=hdfs-<user>, group=null, mode=555, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Creating: Resource [source=/usr/hdp/3.0.1.0-187/hadoop-mapreduce/hadoop-streaming.jar, target=/hdp/apps/3.0.1.0-187/mapreduce/hadoop-streaming.jar, type=file, action=create, owner=hdfs-<user>, group=hadoop-<user>, mode=444, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Creating: Resource [source=null, target=/warehouse/tablespace/external/hive/sys.db/, type=directory, action=create, owner=hive, group=null, mode=1755, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem
Exception occurred, Reason: Error mapping uname 'hive' to uid
org.apache.hadoop.ipc.RemoteException(java.lang.SecurityException): Error mapping uname 'hive' to uid
  at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
  at org.apache.hadoop.ipc.Client.call(Client.java:1443)
  at org.apache.hadoop.ipc.Client.call(Client.java:1353)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
  at com.sun.proxy.$Proxy9.setOwner(Unknown Source)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setOwner(ClientNamenodeProtocolTranslatorPB.java:470)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
  at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
  at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
  at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
  at com.sun.proxy.$Proxy10.setOwner(Unknown Source)
  at org.apache.hadoop.hdfs.DFSClient.setOwner(DFSClient.java:1908)
  at org.apache.hadoop.hdfs.DistributedFileSystem$36.doCall(DistributedFileSystem.java:1770)
  at org.apache.hadoop.hdfs.DistributedFileSystem$36.doCall(DistributedFileSystem.java:1767)
  at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
  at org.apache.hadoop.hdfs.DistributedFileSystem.setOwner(DistributedFileSystem.java:1780)
  at org.apache.ambari.fast_hdfs_resource.Resource.setOwner(Resource.java:276)
  at org.apache.ambari.fast_hdfs_resource.Runner.main(Runner.java:133)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
  at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
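The failure happens when the job tries to set the owner of /warehouse/tablespace/external/hive/sys.db/ to 'hive' and the storage side cannot map that name to a UID. A quick way to separate a local account problem from a OneFS mapping problem, outside of Ambari, is sketched below; the test path is arbitrary and hypothetical, and the chown should be run as the HDFS superuser.

```bash
# On the cluster nodes: check how the 'hive' account resolves locally.
id hive
getent passwd hive

# Against the OneFS-backed HDFS endpoint: attempt the same ownership change
# Ambari performs; if the name cannot be mapped on the storage side, this
# is expected to fail with a similar SecurityException.
hdfs dfs -mkdir -p /tmp/owner_mapping_test
hdfs dfs -chown hive /tmp/owner_mapping_test
hdfs dfs -rm -r -skipTrash /tmp/owner_mapping_test
```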
03-24-2020
06:21 AM
Hi @Shelton, yes, I did that, and I can see the user in the /etc/passwd file. I do not have any documentation on the user mapping, as it is currently controlled by the EMC team (Dell Support).
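Since Isilon maps Hadoop identities by name/UID, it can also help to confirm the account resolves to the same UID on every node. A small sketch, where cluster_hosts.txt is a placeholder list of node hostnames and <hive_user> is the configured service user:

```bash
# Compare the UID/GID of the Hive service user across all cluster nodes;
# a missing entry or a UID mismatch is a common cause of ownership errors
# when an Isilon/OneFS share provides the HDFS layer.
while read -r host; do
  echo "== $host =="
  ssh "$host" 'getent passwd <hive_user>; id <hive_user>'
done < cluster_hosts.txt
```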
03-24-2020
12:28 AM
Hi @Shelton, I changed the permissions as advised. However, I am getting the same error: the new folder inside _tez_session_dir gets created with root ownership instead of <hive_user>. Please note that <hive_user> has been mapped to root on the Isilon side.

I also tried creating another user, hdpuser1, with the below commands in MySQL after logging in as root:

create user 'hdpuser1'@'%' identified by 'xxxxx';
grant all on hive.* to 'hdpuser1'@'%' identified by 'xxxxx';
flush privileges;

However, this doesn't work:

20/03/24 02:24:55 [main]: WARN jdbc.HiveConnection: Failed to connect to <FQDN of Data Node>:10000
20/03/24 02:24:55 [main]: ERROR jdbc.Utils: Unable to read HiveServer2 configs from ZooKeeper
Error: Could not open client transport for any of the Server URI's in ZooKeeper: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): Username: 'hdpuser1' not found. Make sure your client's username exists on the cluster (state=08S01,code=0)
Beeline version 3.1.0.3.0.1.0-187 by Apache Hive
beeline>
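For what it's worth, the "Username: 'hdpuser1' not found" message refers to an operating-system account on the cluster, not a MySQL account, so a user created only in the metastore database will not satisfy it. A minimal sketch of creating the account on the nodes (the UID, group, and home path are illustrative):

```bash
# Create the OS account on every cluster node (and, in an Isilon setup,
# make sure the same name/UID is known to the OneFS identity mapping).
groupadd -g 2001 hdpuser1
useradd  -u 2001 -g hdpuser1 hdpuser1

# Give the user an HDFS home directory owned by the new account
# (run as the HDFS superuser).
hdfs dfs -mkdir -p /user/hdpuser1
hdfs dfs -chown hdpuser1:hdpuser1 /user/hdpuser1
```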
03-23-2020
11:57 AM
Hi Team,
We are trying to insert data into Hive using HiveQL, however it is failing.
The error message is as under:
ERROR : Failed to execute tez graph. java.io.IOException: The ownership on the staging directory hdfs://<OneFSShare>.dev.cds:8020/tmp/hive/<hive_user>/_tez_session_dir/f8ad4086-ef7b-4194-a050-d15dba6913ca is not as expected. It is owned by root. The directory must be owned by the submitter <hive_user> or by <hive_user>
Please suggest.
The following folder is owned by root instead of <hive_user>:
hdfs://<OneFSShare>.dev.cds:8020/tmp/hive/<hive_user>/_tez_session_dir/f8ad4086-ef7b-4194-a050-d15dba6913ca
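One way to confirm and temporarily work around the symptom is sketched below; it does not address the root cause of why the directory is created as root, and <hive_user> and the group name are placeholders.

```bash
# Inspect who owns the Tez session staging directories under /tmp/hive.
hdfs dfs -ls /tmp/hive/<hive_user>/_tez_session_dir

# Re-own them to the submitting user (run as the HDFS superuser). If the
# Isilon-side identity mapping still maps <hive_user> to root, newly created
# session directories will keep coming back owned by root.
hdfs dfs -chown -R <hive_user>:hadoop /tmp/hive/<hive_user>/_tez_session_dir
```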
Labels:
- Apache Hive