Member since: 05-16-2016
Posts: 7
Kudos Received: 13
Solutions: 0
04-21-2017 08:32 PM
3 Kudos
Problem Statement: A user cannot modify or start restricted components such as GetHDFS in NiFi 1.1 with Ranger enabled.

Solution: In Ranger, you must grant the user permissions on the /restricted-components resource in order for them to have access to the restricted components in NiFi.
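If you prefer to script the grant instead of using the Ranger UI, the same policy can be created through Ranger's public REST API. A minimal sketch, assuming Ranger runs at ranger-host:6080, the NiFi repository in Ranger is named nifi, and nifiuser is the user being granted access:

curl -u admin:admin -X POST -H "Content-Type: application/json" \
  http://ranger-host:6080/service/public/v2/api/policy \
  -d '{
        "service": "nifi",
        "name": "restricted-components",
        "resources": { "nifi-resource": { "values": ["/restricted-components"] } },
        "policyItems": [{
          "users": ["nifiuser"],
          "accesses": [ { "type": "READ", "isAllowed": true },
                        { "type": "WRITE", "isAllowed": true } ]
        }]
      }'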
12-30-2016 09:51 PM
3 Kudos
Error:

Error: Could not open client transport for any of the Server URI's in ZooKeeper: Unable to read HiveServer2 configs from ZooKeeper (state=08S01,code=0)

Issue: The beeline connection fails when a Kerberos principal is passed along with ServiceDiscoveryMode:

beeline -u "jdbc:hive2://zk1.com:2181,zk2.com:2181,zk3.com:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;auth=KERBEROS;principal=myuser/hiveserver2@EXAMPLE.COM;httpPath=cliservice;saslQop=auth-conf"

Resolution: We do not need to pass the Kerberos principal within the beeline command because we are using ZooKeeper service discovery, which supplies the HiveServer2 configuration. You must already have a valid Kerberos ticket. Example:

kinit myuser
beeline -u "jdbc:hive2://zk1.com:2181,zk2.com:2181,zk3.com:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;transportMode=binary;httpPath=cliservice;saslQop=auth-conf"

Note: You must have a valid ticket first. If no valid ticket is present, you will see an error like the following in the log: "Mechanism level: Failed to find any Kerberos tgt"
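To confirm a valid ticket is in the cache before connecting, klist can be used (a minimal check; myuser@EXAMPLE.COM is a placeholder principal):

kinit myuser@EXAMPLE.COM
klist
# klist should show an unexpired krbtgt/EXAMPLE.COM@EXAMPLE.COM entry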
12-30-2016 09:00 PM
3 Kudos
Issue: HBase oldWALs in /apps/hbase/data/oldWALs are not getting deleted.

Error:

DEBUG [master:hbasrmaster01:60000.oldLogCleaner] master.ReplicationLogCleaner: Found log in ZK, keeping: hbasregion.abcd.com%2C60020%2C1446216523936.1475050795365

Root Cause: HBase had a replication peer that prevented the deletion of the old files. The duration for which WAL files are kept is controlled by hbase.master.logcleaner.ttl (the default is 10 minutes). If the replication peer can be dropped for longer than that value, the WAL files will be cleaned, and the peer can be added back afterwards.

In hbase shell:

To list peers:
list_peers

To disable a peer:
disable_peer("1")

To remove a peer:
remove_peer("1")

List again to verify the removal:
list_peers

Next, tail the HBase Master log file to ensure the deletion is working, and also look at the HBase replication folder in HDFS. Lastly, we re-added the peer:

add_peer '<n>', "slave.zookeeper.quorum:zookeeper.clientport:zookeeper.znode.parent"
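For illustration, a re-added peer pointing at a slave cluster's ZooKeeper quorum might look like the following (the hostnames, port, and znode parent are placeholders, not values from this cluster):

add_peer '1', "zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase"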
12-30-2016 02:21 PM
3 Kudos
The following steps explain how to configure LDAP (Active Directory) authentication for Zeppelin.

1) Make sure you can do an ldapsearch with the system username that has AD permissions to query your OU. Example:

ldapsearch -h 10.1.1.10:389 -D adsystem@ABC.YOURCO.COM -w abc123 -b OU=users,DC=ABC,DC=YOURCO,DC=COM dn

2) Using Ambari, go into Zeppelin Configs and then Advanced zeppelin-env.

3) Edit the shiro_ini_content by adding the following parameters (remove the existing content first and replace it with the new):

[users]
admin = yourpassword,admin

[main]
adRealm = org.apache.shiro.realm.activedirectory.ActiveDirectoryRealm
adRealm.url = ldap://10.1.1.10
adRealm.searchBase = OU=users,DC=ABC,DC=YOURCO,DC=COM
adRealm.systemUsername = adsystem@ABC.YOURCO.COM
adRealm.systemPassword = abc123
adRealm.principalSuffix = @ABC.YOURCO.COM
adRealm.authorizationCachingEnabled = true
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
cacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager
securityManager.cacheManager = $cacheManager
securityManager.realms = $adRealm
shiro.loginUrl = /api/login

[roles]

[urls]
/api/version = anon
/api/interpreter/** = authc, roles[admin]
/api/credential/** = authc, roles[admin]
/api/configurations/** = authc, roles[admin]
/** = authcBasic

4) Save changes in Ambari.

5) Restart Zeppelin.
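After the restart, one way to sanity-check the new realm from the command line is Zeppelin's login REST endpoint. This is a rough check that assumes Zeppelin listens on port 9995; aduser and its password are placeholders:

curl -i -X POST http://zeppelin-host:9995/api/login -d "userName=aduser&password=adpassword"
# A successful AD login returns HTTP 200 with a JSON body; bad credentials return 403.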
12-29-2016 03:00 PM
I found that my change to HADOOP_NAMENODE_OPTS was not taking effect when editing with Ambari. I resolved this with the following. Using Ambari to edit the hadoop-env template, I appended -Dhadoop.root.logger=INFO,ZIPRFA to the NameNode options:

export HADOOP_NAMENODE_OPTS="${SHARED_HADOOP_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 ${HADOOP_NAMENODE_OPTS} -Dhadoop.root.logger=INFO,ZIPRFA"

Do not add it to the section of the template that says {% if java_version < 8 %} unless you are using Java 1.7 or below; add it to the {% else %} section instead. After adding this and restarting HDFS, my NameNode logs were rotating and zipping correctly.
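The INFO,ZIPRFA setting only takes effect if an appender named ZIPRFA exists in the HDFS log4j configuration. A sketch of what such a gzip-rolling appender can look like (this form relies on the TimeBasedRollingPolicy from the apache-log4j-extras companion jar being on the NameNode classpath, and the daily .gz pattern is an assumption, not taken from the post above):

log4j.appender.ZIPRFA=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.ZIPRFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.ZIPRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.ZIPRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.ZIPRFA.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
log4j.appender.ZIPRFA.rollingPolicy.ActiveFileName=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.ZIPRFA.rollingPolicy.FileNamePattern=${hadoop.log.dir}/${hadoop.log.file}-.%d{yyyy-MM-dd}.gz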