Member since: 03-21-2016
Posts: 233
Kudos Received: 62
Solutions: 33

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 923 | 12-04-2020 07:46 AM |
|  | 1198 | 11-01-2019 12:19 PM |
|  | 1632 | 11-01-2019 09:07 AM |
|  | 2552 | 10-30-2019 06:10 AM |
|  | 1281 | 10-28-2019 10:03 AM |
02-05-2017
05:32 AM
@Raja Sekhar Chintalapati Can you share the Hive Metastore log? The error "Kerberos principal should have 3 parts" means the principal provided for authentication is incomplete; this can happen if you configured a principal like hive/master2.chrsv.com (missing the REALM name). If you are starting the service from Ambari, check output.log and error.log; they show which principal is being used at startup, so you can correct the configuration according to that error.
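A three-part principal has the form service/host@REALM. As a quick sketch (the keytab path below is the usual HDP default and may differ in your install), you can list the keytab on the metastore host to see the full principal that should go into the config:

# List principals in the Hive service keytab; each entry should show all three parts,
# e.g. hive/master2.chrsv.com@EXAMPLE.COM (EXAMPLE.COM is a placeholder realm)
klist -kt /etc/security/keytabs/hive.service.keytab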
02-03-2017
08:26 PM
Can you please provide the shiro.ini content?
02-03-2017
05:47 PM
2 Kudos
@Colin Cunningham By default Zeppelin is configured with anonymous authentication. You must set the [urls] section in the shiro.ini of the Zeppelin service as shown below, and without AD/LDAP authentication you can define the usernames in [users]. After these changes, restart the Zeppelin service. With the anonymous /** rule commented out, you must log in with a user name defined in [users].

[users]
# List of users with their password allowed to access Zeppelin.
# To use a different strategy (LDAP / Database / ...) check the shiro doc at http://shiro.apache.org/configuration.html#Configuration-INISections
maria_dev = <password>, admin    #### Add this line
#admin = password1, admin
#user1 = password2, role1, role2
#user2 = password3, role3

[urls]
# This section is used for url-based security.
# You can secure interpreter, configuration and credential information by urls. Comment or uncomment the below urls that you want to hide.
# anon means the access is anonymous.
# authc means Form based Auth Security
# To enforce security, comment the line below and uncomment the next one
/api/version = anon
#/** = anon     #### Comment this line
/** = authc     #### Add this line
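After the restart, one way to confirm that form-based authentication is actually enforced is Zeppelin's login REST endpoint; the host, the port (9995 is the usual HDP port) and the password below are placeholders for your environment:

# Anonymous REST access should no longer return the notebook list:
curl -i http://<zeppelin-host>:9995/api/notebook
# Logging in with a user defined in [users] should return a JSON ticket:
curl -i -X POST -d 'userName=maria_dev&password=<password>' http://<zeppelin-host>:9995/api/login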
01-30-2017
01:08 PM
Please set it as below and try again:

[root@sandbox Lab7.2]# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 sandbox.hortonworks.com sandbox #####added sandbox
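Once the entry is in place, a quick sanity check (a sketch; output varies slightly by OS) is to confirm the name resolves to the address you added:

# Should print 172.17.0.2 followed by sandbox.hortonworks.com and the sandbox alias
getent hosts sandbox
# Should get replies from 172.17.0.2
ping -c 1 sandbox.hortonworks.com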
01-30-2017
12:35 PM
1 Kudo
@Rav Reddy The UnknownHostException is for the hostname "sandbox". Please verify that /etc/hosts or DNS has an entry for "sandbox" on the host where you are running the hive command, for example:

# vi /etc/hosts
<IPAddress> sandbox.<FQDN> sandbox
01-30-2017
06:25 AM
3 Kudos
@Arkaprova Saha Can you please make sure you have the below properties set in kms-site.xml. Refer to the doc below about setting these properties, and see the sample snippet after this list: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_Ranger_KMS_Admin_Guide/content/ch02s01.html

hadoop.kms.proxyuser.hive.users
hadoop.kms.proxyuser.oozie.users
hadoop.kms.proxyuser.HTTP.users
hadoop.kms.proxyuser.ambari.users
hadoop.kms.proxyuser.yarn.users
hadoop.kms.proxyuser.hive.hosts
hadoop.kms.proxyuser.oozie.hosts
hadoop.kms.proxyuser.HTTP.hosts
hadoop.kms.proxyuser.ambari.hosts
hadoop.kms.proxyuser.yarn.hosts
hadoop.kms.proxyuser.keyadmin.groups
hadoop.kms.proxyuser.keyadmin.hosts
hadoop.kms.proxyuser.keyadmin.users
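As a sketch, two of these entries would look roughly like the snippet below in kms-site.xml; the wildcard values are permissive placeholders, so restrict them to your actual users and hosts as the linked guide describes.

<property>
  <name>hadoop.kms.proxyuser.hive.users</name>
  <!-- Placeholder: limit to the users the hive service may impersonate -->
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.hive.hosts</name>
  <!-- Placeholder: limit to the hosts running HiveServer2/metastore -->
  <value>*</value>
</property>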
01-27-2017
08:10 PM
@amit Kumar "Connection refused" means the port is not listening. Please verify on the host whether port 16000 is in LISTEN status:

# netstat -an | grep 16000

Make sure the port is in LISTEN status on either 0.0.0.0:16000 or <IPofabovehost>:16000.
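For reference, a listening port shows up roughly like the commented line below (exact columns vary by platform); ss is an alternative where netstat is not installed.

# Healthy output looks something like:
#   tcp   0   0 0.0.0.0:16000   0.0.0.0:*   LISTEN
ss -ltn | grep 16000    # alternative check on hosts without netstat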
01-19-2017
01:38 PM
@Sankar T The hdfs-audit.log will have that information.
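HDFS audit records are written as key=value pairs, so they are easy to filter with grep; the path below is the usual HDP default location and the cmd filter is only an example:

# Each record looks like: allowed=true ugi=<user> ip=/<client-ip> cmd=<op> src=<path> dst=... perm=...
grep "cmd=delete" /var/log/hadoop/hdfs/hdfs-audit.log | tail -20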
01-19-2017
05:37 AM
@Jacqualin jasmin dfs.datanode.data.dir can be any directory available on the DataNode. It can also be a comma-separated list of directories where disk partitions are mounted, such as '/u01/hadoop/data,/u02/hadoop/data', if you have multiple disk partitions to use for HDFS storage.
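For illustration, the property would look roughly like this in hdfs-site.xml for a DataNode with two dedicated mounts (the paths are examples only):

<property>
  <name>dfs.datanode.data.dir</name>
  <!-- Example only: two disk partitions mounted for HDFS block storage -->
  <value>/u01/hadoop/data,/u02/hadoop/data</value>
</property>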
01-11-2017
05:57 PM
1 Kudo
@elkan li Are you using cgroups in YARN? If yes, by default the YARN local user (the user running the AM containers) is set to "nobody". (Check whether CPU Scheduling and CPU Isolation are enabled in the YARN configs from the Ambari UI; enabling them is what turns on cgroups.) In this case the AM failed because the "nobody" user ID is below 1000, which is the limit set by the property "Minimum user ID for submitting job". You can work around this by changing the "nobody" user ID on all NodeManagers to a value above 1000.
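A rough sketch of that workaround on each NodeManager host (1001 is just an example UID; pick one that is unused, and note that files already owned by nobody keep the old numeric owner until you chown them):

# Check the current UID of nobody (often 99 on RHEL/CentOS, below the default minimum of 1000)
id -u nobody
# Example only: raise it above the "Minimum user ID for submitting job" threshold
usermod -u 1001 nobody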