
Unable to run Hive query through putty

Explorer

Hi Team,

We are trying to insert data into Hive using HiveQL; however, it is failing.

 

The error message is as under:

 

ERROR : Failed to execute tez graph.
java.io.IOException: The ownership on the staging directory hdfs://<OneFSShare>.dev.cds:8020/tmp/hive/<hive_user>/_tez_session_dir/f8ad4086-ef7b-4194-a050-d15dba6913ca is not as expected. It is owned by root. The directory must be owned by the submitter <hive_user> or by <hive_user>

 

Please suggest.

 

The following folder is owned by root instead of <hive_user>:

hdfs://<OneFSShare>.dev.cds:8020/tmp/hive/<hive_user>/_tez_session_dir/f8ad4086-ef7b-4194-a050-d15dba6913ca 
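For reference, a quick listing like the below (using the same path as in the error) confirms the ownership:

$ hdfs dfs -ls /tmp/hive/<hive_user>/_tez_session_dir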

 

 

20 Replies

Master Mentor

@ARVINDR 

This is a permission issue in HDFS, so do the following to resolve it.

Assuming you are logged in as root, switch to the hdfs user and recursively change the directory owner to [user:group] hive:hdfs:

# su - hdfs

$ hdfs dfs -chown -R hive:hdfs /tmp/hive

Now check the ownership:

$ hdfs dfs -ls /tmp

The output should match the snippet below:

drwxrwxrwx - hive hdfs 0 2018-12-12 23:43 /tmp/hive


Now, if you re-run the HQL, it should succeed.


Please let me know

Explorer

Hi @Shelton 

I changed the ownership as advised.

However, I am getting the same error.

The new folder inside _tez_session_dir gets created with root ownership instead of <hive_user>.

 

Please note that <hive_user> has been mapped to root on the Isilon side.
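As a rough illustration of that mapping (the test directory name below is made up), anything created in HDFS by <hive_user> is listed as owned by root:

$ sudo su - <hive_user>
$ hdfs dfs -mkdir /tmp/hive/owner_test        # owner_test is a hypothetical name
$ hdfs dfs -ls /tmp/hive                      # the new directory shows up as owned by root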

 

I tried creating another user, hdpuser1, using the below commands in MySQL after logging in as root:

 

create user 'hdpuser1'@'%'  identified by 'xxxxx';
grant all on hive.* TO 'hdpuser1'@'%' IDENTIFIED BY 'xxxxx';

FLUSH PRIVILEGES;

 

However, this doesn't work:

20/03/24 02:24:55 [main]: WARN jdbc.HiveConnection: Failed to connect to <FQDN of Data Node>:10000

20/03/24 02:24:55 [main]: ERROR jdbc.Utils: Unable to read HiveServer2 configs from ZooKeeper

Error: Could not open client transport for any of the Server URI's in ZooKeeper: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): Username: 'hdpuser1' not found. Make sure your client's username exists on the cluster (state=08S01,code=0)

Beeline version 3.1.0.3.0.1.0-187 by Apache Hive

beeline>
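For reference, the connection is being attempted roughly like this (ZooKeeper hosts are placeholders):

$ beeline -u "jdbc:hive2://<zk1>:2181,<zk2>:2181,<zk3>:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2" -n hdpuser1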

Master Mentor

@ARVINDR 

The user hdpuser1 should exist locally as an OS user. Did you run the below?

# useradd hdpuser1

To check, run the below and see whether you get a similar output:

# cat /etc/passwd | grep hdpuser1
hdpuser1:x:1000:1000:hdpuser1:/home/hdpuser1:/bin/bash
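
You can also verify that the cluster side resolves the user (a quick sanity check; the expected group output depends on your setup, and behaviour may differ with OneFS providing HDFS):

# id hdpuser1
# hdfs groups hdpuser1       # groups as resolved by the HDFS service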

 

I am not an expert on Isilon, so user mapping is something I need to educate myself on. Do you have some documentation that I could read?

 

 

 

 

 

Explorer

Hi @Shelton,

 

Yes, I did that, and I can see the user in the /etc/passwd file.

 

I do not have any documentation on user mapping, as it is currently controlled by the EMC team (Dell Support).

Super Guru

Are you running hive/beeline as the hive user or root?

 

To get to Hive as the hive user, I do:

 

sudo su - hive
hive 

 

Then execute the queries.
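
Or, via beeline as the hive user (the HiveServer2 host below is a placeholder):

sudo su - hive
beeline -u "jdbc:hive2://<hiveserver2-host>:10000" -n hive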

 

Master Mentor

@ARVINDR 

In addition to @stevenmatison's switch command, here is a step people usually ignore.

Add a Hive proxy
To prevent network connection or notification problems, you must add a hive user proxy for the HiveServer Interactive service to access the Hive Metastore.

 

Steps:

1. In Ambari, select Services > HDFS > Configs > Advanced.
2. In Custom core-site, add the FQDNs of the HiveServer Interactive host or hosts to the value of hadoop.proxyuser.hive.hosts (see the example below).
3. Save the changes.
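
The resulting entries in Custom core-site would look something like the below (host names are placeholders; hadoop.proxyuser.hive.groups is commonly set alongside it):

hadoop.proxyuser.hive.hosts=<hiveserver-interactive-fqdn>
hadoop.proxyuser.hive.groups=*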

Can you also set the hive.server2.enable.doAs property?

hive.server2.enable.doAs=true --> Hive scripts run as the end user instead of the hive user.

hive.server2.enable.doAs=false --> All jobs run as the hive user.
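
You can check the effective value from a beeline session, for example (connection details are placeholders):

$ beeline -u "jdbc:hive2://<hiveserver2-host>:10000" -e "set hive.server2.enable.doAs;"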

 

 

 

Explorer

Thanks @Shelton

 

However, I observed the below error in Hive within Ambari when I tried to re-install the Hive service:

 

File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call     raise ExecutionFailed(err_msg, code, out, err) resource_management.core.exceptions.ExecutionFailed: Execution of 'hadoop --config /usr/hdp/3.0.1.0-187/hadoop/conf jar /var/lib/ambari-agent/lib/fast-hdfs-resource.jar /var/lib/ambari-agent/tmp/hdfs_resources_1585061727.13.json' returned 1. Initializing filesystem uri: hdfs://<masternode>:8020 Creating: Resource [source=null, target=/hdp/apps/3.0.1.0-187/mapreduce, type=directory, action=create, owner=hdfs-<user>, group=null, mode=555, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem Creating: Resource [source=/usr/hdp/3.0.1.0-187/hadoop/mapreduce.tar.gz, target=/hdp/apps/3.0.1.0-187/mapreduce/mapreduce.tar.gz, type=file, action=create, owner=hdfs-US01QC1, group=hadoop-US01QC1, mode=444, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem Creating: Resource [source=null, target=/hdp/apps/3.0.1.0-187/tez, type=directory, action=create, owner=hdfs-US01QC1, group=null, mode=555, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem Creating: Resource [source=/var/lib/ambari-agent/tmp/tez-native-tarball-staging/tez-native.tar.gz, target=/hdp/apps/3.0.1.0-187/tez/tez.tar.gz, type=file, action=create, owner=hdfs-<user>, group=hadoop-US01QC1, mode=444, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem Creating: Resource [source=null, target=/hdp/apps/3.0.1.0-187/hive, type=directory, action=create, owner=hdfs-<user>, group=null, mode=555, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem Creating: Resource [source=/usr/hdp/3.0.1.0-187/hive/hive.tar.gz, target=/hdp/apps/3.0.1.0-187/hive/hive.tar.gz, type=file, action=create, owner=hdfs-US01QC1, group=hadoop-<user>, mode=444, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem Creating: Resource [source=null, target=/hdp/apps/3.0.1.0-187/mapreduce, type=directory, action=create, owner=hdfs-<user>, group=null, mode=555, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem Creating: Resource [source=/usr/hdp/3.0.1.0-187/hadoop-mapreduce/hadoop-streaming.jar, target=/hdp/apps/3.0.1.0-187/mapreduce/hadoop-streaming.jar, type=file, action=create, owner=hdfs-<user>, group=hadoop-<user>, mode=444, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem Creating: Resource [source=null, target=/warehouse/tablespace/external/hive/sys.db/, type=directory, action=create, owner=hive, group=null, mode=1755, recursiveChown=false, recursiveChmod=false, changePermissionforParents=false, manageIfExists=true] in default filesystem Exception occurred, Reason: Error mapping uname 'hive' to uid org.apache.hadoop.ipc.RemoteException(java.lang.SecurityException): Error mapping uname 'hive' to uid at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497) at org.apache.hadoop.ipc.Client.call(Client.java:1443) at org.apache.hadoop.ipc.Client.call(Client.java:1353) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) at com.sun.proxy.$Proxy9.setOwner(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setOwner(ClientNamenodeProtocolTranslatorPB.java:470) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy10.setOwner(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.setOwner(DFSClient.java:1908) at org.apache.hadoop.hdfs.DistributedFileSystem$36.doCall(DistributedFileSystem.java:1770) at org.apache.hadoop.hdfs.DistributedFileSystem$36.doCall(DistributedFileSystem.java:1767) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.setOwner(DistributedFileSystem.java:1780) at org.apache.ambari.fast_hdfs_resource.Resource.setOwner(Resource.java:276) at org.apache.ambari.fast_hdfs_resource.Runner.main(Runner.java:133) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:318) at org.apache.hadoop.util.RunJar.main(RunJar.java:232)

Master Mentor

@ARVINDR 

Why did you have to re-install the service? The issue is with

Reason: Error mapping uname 'hive' to uid. Remember, the hive user is mapped to root. I am not an Isilon expert, so I am really handicapped here.
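
One rough way to reproduce that mapping failure outside Ambari (using a throwaway test file) is to attempt a chown to 'hive' directly:

# su - hdfs
$ hdfs dfs -touchz /tmp/uid_mapping_test
$ hdfs dfs -chown hive /tmp/uid_mapping_test     # expected to fail with the same "Error mapping uname 'hive' to uid" if OneFS cannot resolve 'hive'
$ hdfs dfs -rm /tmp/uid_mapping_test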

Please let me know 

Explorer

@Shelton

We thought of re-installing the Hive service: earlier it was running on a data node, and we wanted to re-enable it on the master node.

The idea was to check whether there are still permission issues.