Member since: 08-05-2017
Posts: 30
Kudos Received: 2
Solutions: 1
My Accepted Solutions
Views: 4905 | Posted: 08-11-2017 11:29 AM
08-11-2017 11:29 AM
2 Kudos
I was able to find the solution. I was running Ranger Admin on host1 and HiveServer2 on host2, and I had created the admin OS account on host1 but not on host2 (where HiveServer2 is running). Creating the admin account and its group (hdpmasters) on host2 resolved this issue. I guess Ambari Views may need the OS account/group to be present on the server where the service (being accessed by the view) is installed.
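To double-check this kind of mismatch, a small sketch like the following can be run on every host involved (here that would be the Ranger Admin host and the HiveServer2 host). The `check_account` helper is hypothetical, not part of HDP:

```shell
# Sketch: verify that an OS account and a group exist on this host.
# check_account is a hypothetical helper, not an HDP tool.
check_account() {
  local user="$1" group="$2"
  if id "$user" >/dev/null 2>&1; then
    echo "user $user present"
  else
    echo "user $user MISSING"
  fi
  if getent group "$group" >/dev/null 2>&1; then
    echo "group $group present"
  else
    echo "group $group MISSING"
  fi
}

# For this thread the check would be: check_account admin hdpmasters
# Demonstrated here with an account that exists on any Linux host:
check_account root root
```

Running it on each host quickly shows which host is missing the account or the group.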
08-11-2017 11:11 AM
@Jay SenSharma Will the below findings help? If I add the admin user instead of its group hdpmasters in the Ranger Hive policy, neither of the errors is shown anymore in the Hive view. The problem occurs only if I use its group 'hdpmasters' in the policy. Please find the details below:

[ec2-user@XXXXXXX ~]$ hdfs dfs -ls /user
Found 9 items
drwxr-xr-x - admin hdpmasters 0 2017-08-11 07:02 /user/admin
[ec2-user@XXXXXX ~]$ id admin
uid=1012(admin) gid=1012(admin) groups=1012(admin),1001(hdpmasters)

Why does it only work if I give 'admin' directly but not the group?
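One possible explanation (a guess on my part, not confirmed in this thread): Hadoop resolves a user's groups on the server side, typically via shell-based group mapping, so the group has to resolve on the host where HiveServer2 and the NameNode run, not just on the machine where you ran `id`. A minimal sketch of what that mapping effectively does:

```shell
# Shell-based group mapping boils down to asking the OS for the user's
# groups with `id -Gn <user>` on the server host. If 'hdpmasters' does
# not appear in that output there, a Ranger policy granting the group
# cannot match. Demonstrated with a user present on any Linux host:
id -Gn root
```

As a cross-check, `hdfs groups admin` shows which groups the NameNode itself resolves for the user, which is the authoritative view for policy evaluation.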
08-11-2017 10:50 AM
@Jay SenSharma I have already tried setting it to '*', but it is still not working.
08-11-2017 10:46 AM
Adding to my question: I added the admin OS account to a group 'hdpmasters' and used this group while configuring the Hive Ranger policy. If I use 'admin' instead of the group in the policy configuration, strangely the error "FAILED: HiveAccessControlException Permission denied: user [admin] does not have [USE] privilege on [null]" no longer appears (I can view the default database in the Ambari view). Can you tell me why this is happening?
08-11-2017 10:09 AM
Dear experts, I am running HDP 2.4 on EC2. Recently I installed Ranger and integrated it with Hive. When I try to use the Ambari Hive view with the admin account, I receive the two errors below. Could you please help? I am attaching a few screenshots showing the relevant configurations: admin-hdfs-policy.png admin-hive-policy.png proxy.png

The admin OS account is working fine:
----------------------------------------------
[ec2-user@XXXXXXXXX ~]$ id admin
uid=1012(admin) gid=1012(admin) groups=1012(admin),1001(hdpmasters)
[ec2-user@XXXXXXXX ~]$ hdfs dfs -ls /user
Found 9 items
drwxr-xr-x - admin hdfs 0 2017-08-11 04:35 /user/admin
[admin@XXXXXXXX ~]$ hdfs dfs -ls /user/admin
Found 2 items
drwxr-xr-x - admin hdfs 0 2017-08-11 05:33 /user/admin/.hiveJars
drwxr-xr-x - admin hdfs 0 2017-08-11 04:35 /user/admin/testing
[admin@XXXXXX ~]$ hive
WARNING: Use "yarn jar" to launch YARN applications.
Logging initialized using configuration in file:/etc/hive/2.4.3.0-227/0/hive-log4j.properties
hive> show databases;
OK
default
Time taken: 1.044 seconds, Fetched: 1 row(s)

Errors:
-----------------------------
Failed to execute statement: show databases like '*'
org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [admin] does not have [USE] privilege on [null]

E090 HDFS020 Could not write file /user/admin/hive/jobs/hive-job-5-2017-08-11_05-43/query.hql [HdfsApiException]
org.apache.ambari.view.utils.hdfs.HdfsApiException: HDFS020 Could not write file /user/admin/hive/jobs/hive-job-5-2017-08-11_05-43/query.hql
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): Unauthorized connection for super-user: root from IP XXXXXXXXX
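For context on the second error: "Unauthorized connection for super-user: root" usually points at the Hadoop proxy-user settings, which must allow the process running the Ambari server (root here, per the message) to impersonate the logged-in view user. A sketch of the relevant core-site.xml properties, with illustrative wildcard values (your hosts/groups may need to be narrower, and HDFS must be restarted after the change):

```xml
<!-- core-site.xml (sketch): allow the Ambari server process (root) to
     impersonate view users. The '*' values are illustrative assumptions. -->
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
```

If impersonation still fails with wildcards in place, the change may not have been propagated to all hosts, or the services may not have been restarted.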
Labels:
- Apache Ambari
- Apache Hive
- Apache Ranger
08-06-2017 11:24 AM
Thanks a lot, increasing the stack size as suggested for the NFS gateway helped. Thanks again, you have resolved all my issues today 🙂
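For reference, and as an assumption about what was applied here: the SIGBUS in the crash report below, combined with the 8192 KB stack limit shown in the `ulimit -a` output, suggests raising the stack available to the privileged NFS3 gateway's JVM. One way to sketch this in hadoop-env.sh (the exact variable names can vary by HDP version; treat these as illustrative):

```shell
# hadoop-env.sh (sketch): raise limits before the NFS3 gateway JVM starts.
ulimit -s unlimited 2>/dev/null || true   # raise process stack limit (was 8192 KB)
# Give each Java thread a larger stack; -Xss2m is an illustrative value.
export HADOOP_NFS3_OPTS="${HADOOP_NFS3_OPTS} -Xss2m"
echo "$HADOOP_NFS3_OPTS"
```

After editing, restart the NFS gateway from Ambari so the new limits take effect.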
08-06-2017 06:30 AM
I have tried changing the ulimit as suggested and restarted the gateway, but still no luck. I don't see any .log file, but I was able to gather a few details, as below.

/var/log/hadoop/root/nfs3_jsvc.out
-------------------------
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGBUS (0x7) at pc=0x00007f7b0a23bb7c, pid=19469, tid=140166720608064
#
# JRE version: (8.0_77-b03) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.77-b03 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# j java.lang.Object.<clinit>()V+0
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /tmp/hs_err_pid19469.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
hadoop-hdfs-nfs3-XXXXXXX.out
-------------------------------------------------------
ulimit -a for privileged nfs user hdfs
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63392
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
08-06-2017 05:58 AM
Dear experts, I am running HDP 2.4.3 with Ambari 2.4 on AWS EC2 instances running Red Hat Enterprise Linux Server release 7.3 (Maipo). Whenever I start the NFSGATEWAY service on a host, it automatically stops after some time. Could you please assist me with this? Even if I kill the existing nfs3 process and restart the service, the issue persists. Please find a few details below.

ps -ef | grep nfs3
----------------------------------------------------------
root 9766 1 0 01:42 pts/0 00:00:00 jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /usr/hdp/current/hadoop-client/conf:/usr/hdp/2.4.3.0-227/hadoop/lib/*:/usr/hdp/2.4.3.0-227/hadoop/.//*:/usr/hdp/2.4.3.0-227/hadoop-hdfs/./:/usr/hdp/2.4.3.0-227/hadoop-hdfs/lib/*:/usr/hdp/2.4.3.0-227/hadoop-hdfs/.//*:/usr/hdp/2.4.3.0-227/hadoop-yarn/lib/*:/usr/hdp/2.4.3.0-227/hadoop-yarn/.//*:/usr/hdp/2.4.3.0-227/hadoop-mapreduce/lib/*:/usr/hdp/2.4.3.0-227/hadoop-mapreduce/.//*::/usr/hdp/2.4.3.0-227/tez/*:/usr/hdp/2.4.3.0-227/tez/lib/*:/usr/hdp/2.4.3.0-227/tez/conf:/usr/hdp/2.4.3.0-227/tez/*:/usr/hdp/2.4.3.0-227/tez/lib/*:/usr/hdp/2.4.3.0-227/tez/conf -Xmx1024m -Dhdp.version=2.4.3.0-227 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.4.3.0-227/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.4.3.0-227/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.3.0-227/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.4.3.0-227 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-ip-10-0-0-223.ap-south-1.compute.internal.log -Dhadoop.home.dir=/usr/hdp/2.4.3.0-227/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.4.3.0-227/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.3.0-227/hadoop/lib/native:/usr/hdp/2.4.3.0-227/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.3.0-227/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter

systemctl status rpcbind
--------------------------------------------------
● rpcbind.service - RPC bind service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; indirect; vendor preset: enabled)
Active: active (running) since Sun 2017-08-06 01:29:31 EDT; 18min ago
Main PID: 6164 (rpcbind)
CGroup: /system.slice/rpcbind.service
└─6164 /sbin/rpcbind -w
08-06-2017 05:21 AM
Thank you, disabling certificate verification as described in https://access.redhat.com/articles/2039753#controlling-certificate-verification-7 helped.