Member since: 05-16-2016
Posts: 7
Kudos Received: 13
Solutions: 0
05-08-2017
08:45 PM
1 Kudo
Question: I have the spark-to-tableau-0.1.0.tar.gz package. Where should I place the jar files contained in the tar file?
04-21-2017
08:32 PM
3 Kudos
Problem Statement: A user cannot modify or start restricted components such as GetHDFS in NiFi 1.1 when Ranger is enabled. Solution: In Ranger, you must grant the user permissions on the /restricted_components resource in order for them to have access to the restricted components in NiFi.
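For reference, the same grant can also be scripted against the Ranger Admin public REST API instead of the UI. The sketch below is illustrative only: the host ranger.example.com, the admin credentials, the NiFi service name nifi_service, the user jdoe, and the nifi-resource key are assumptions and should be checked against your own Ranger/NiFi service definition; the resource value should match whatever restricted-components path your NiFi Ranger plugin exposes.
# hypothetical example: create a Ranger policy granting a user access to NiFi restricted components
curl -u admin:admin -H "Content-Type: application/json" \
  -X POST "http://ranger.example.com:6080/service/public/v2/api/policy" \
  -d '{
        "service": "nifi_service",
        "name": "allow_restricted_components",
        "resources": { "nifi-resource": { "values": ["/restricted_components"], "isRecursive": false } },
        "policyItems": [ { "users": ["jdoe"],
                           "accesses": [ { "type": "READ", "isAllowed": true },
                                         { "type": "WRITE", "isAllowed": true } ] } ]
      }'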
Tags:
- Data Ingestion & Streaming
- Issue Resolution
- NiFi
- nifi-processor
- Ranger
- ranger-hdfs-plugin
12-30-2016
09:51 PM
3 Kudos
Error: Could not open client transport for any of the Server URI's in ZooKeeper: Unable to read HiveServer2 configs from ZooKeeper (state=08S01,code=0)
Issue: The beeline connection fails when a Kerberos principal is passed together with ServiceDiscoveryMode:
beeline -u "jdbc:hive2://zk1.com:2181,zk2.com:2181,zk3.com:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;auth=KERBEROS;principal=myuser/hiveserver2@EXAMPLE.COM;httpPath=cliservice;saslQop=auth-conf"
Resolution: The Kerberos principal does not need to be passed in the beeline command because ZooKeeper service discovery is being used. You must already have a valid Kerberos ticket.
Example:
kinit myuser
beeline -u "jdbc:hive2://zk1.com:2181,zk2.com:2181,zk3.com:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;transportMode=binary;httpPath=cliservice;saslQop=auth-conf"
Note: You must have a valid ticket first. If no valid ticket is present, the log will show an error like: "Mechanism level: Failed to find any Kerberos tgt"
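If the error persists even with a valid ticket, it can help to confirm that HiveServer2 has actually registered under the ZooKeeper namespace used in the URL. A quick check, assuming an HDP-style zookeeper-client install path (the path is an assumption; the znode name comes from the zooKeeperNamespace above):
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server zk1.com:2181
# inside the ZooKeeper shell: list the registered HiveServer2 instances
ls /hiveserver2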
Tags:
- beeline
- hiveserver2
- hiveserver2-ssl-kerberos
- Issue Resolution
- Kerberos
- Security
- Zookeeper
12-30-2016
09:00 PM
3 Kudos
Issue: HBase oldWALs in /apps/hbase/data/oldWALs are not getting deleted.
Error: DEBUG [master:hbasrmaster01:60000.oldLogCleaner] master.ReplicationLogCleaner: Found log in ZK, keeping: hbasregion.abcd.com%2C60020%2C1446216523936.1475050795365
Root Cause: HBase had a replication peer that prevented the deletion of the old files. The duration for which WAL files are kept is controlled by hbase.master.logcleaner.ttl (default is 10 minutes). If the replication peer can be dropped for longer than that value, the WAL files will be cleaned up, and the peer can then be added back.
Resolution (in the hbase shell):
To list peers:
list_peers
To disable a peer:
disable_peer("1")
To remove a peer:
remove_peer("1")
List again to verify the removal:
list_peers
Next, tail the HBase Master log file to confirm the deletion is working, and check the HBase replication folder in HDFS. Lastly, re-add the peer:
add_peer '<n>', "slave.zookeeper.quorum:zookeeper.clientport:zookeeper.znode.parent"
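For illustration, the space check and the re-add might look like the following with concrete values filled in; the hostnames, peer id, and the /hbase-unsecure parent znode are hypothetical placeholders, not taken from the original environment:
# check how much space the old WALs are currently holding
hdfs dfs -du -s -h /apps/hbase/data/oldWALs
# re-add the peer: slave ZooKeeper quorum, client port, parent znode (values are placeholders)
add_peer '1', "zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase-unsecure"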
Tags:
- Data Ingestion & Streaming
- FAQ
- HBase
- replication
- wal
- Zookeeper
12-30-2016
02:21 PM
3 Kudos
The following steps explain how to configure LDAP (Active Directory) authentication for Zeppelin.
1) Make sure you can do an ldapsearch with the system username that has AD permissions to query your OU. Example:
ldapsearch -h 10.1.1.10:389 -D adsystem@ABC.YOURCO.COM -w abc123 -b OU=users,DC=ABC,DC=YOURCO,DC=COM dn
2) In Ambari, go to Zeppelin Configs > Advanced zeppelin-env.
3) Edit the shiro_ini_content by adding the following parameters (remove the existing content first and replace it with the new):
[users]
admin = yourpassword,admin

[main]
adRealm = org.apache.shiro.realm.activedirectory.ActiveDirectoryRealm
adRealm.url = ldap://10.1.1.10
adRealm.searchBase = OU=users,DC=ABC,DC=YOURCO,DC=COM
adRealm.systemUsername = adsystem@ABC.YOURCO.COM
adRealm.systemPassword = abc123
adRealm.principalSuffix = @ABC.YOURCO.COM
adRealm.authorizationCachingEnabled = true
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
cacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager
securityManager.cacheManager = $cacheManager
securityManager.realms = $adRealm
shiro.loginUrl = /api/login

[roles]

[urls]
/api/version = anon
/api/interpreter/** = authc, roles[admin]
/api/credential/** = authc, roles[admin]
/api/configurations/** = authc, roles[admin]
/** = authcBasic
4) Save the changes in Ambari.
5) Restart Zeppelin.
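Once Zeppelin has restarted, you can sanity-check the AD login against the /api/login endpoint referenced in shiro.loginUrl above. A minimal sketch, assuming Zeppelin listens on port 9995 and that jdoe is a test account in the configured OU (host, port, and credentials are placeholders):
# hypothetical login test against Zeppelin's REST API; an HTTP 200 response indicates the AD realm authenticated the user
curl -i -X POST "http://zeppelin.example.com:9995/api/login" \
  --data-urlencode "userName=jdoe" \
  --data-urlencode "password=yourpassword"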
Tags:
- How-ToTutorial
- LDAP
- Security
- shiro
- zeppelin
- zeppelin-notebook
12-29-2016
05:16 PM
This is a known Ambari bug: https://issues.apache.org/jira/browse/AMBARI-17539. It is resolved in Ambari Agent 2.4.0. As a workaround you can modify main.py:
1. Stop the Ambari agent.
2. Back up the file:
cp /usr/lib/python2.6/site-packages/ambari_agent/main.py /tmp/main.py.backup
3. Edit main.py under /usr/lib/python2.6/site-packages/ambari_agent/ and add the following at the beginning of the file:
def fix_subprocess_racecondition():
  """
  Subprocess in Python has a race condition with enabling/disabling gc, which may lead to turning off the
  Python garbage collector. This leads to a memory leak. This function monkey patches subprocess to fix the issue.
  !!! PLEASE NOTE THIS SHOULD BE CALLED BEFORE ANY OTHER INITIALIZATION was done
  to avoid already created links to subprocess or subprocess.gc or gc
  """
  # monkey patching subprocess
  import subprocess
  subprocess.gc.isenabled = lambda: True

  # re-importing gc to have correct isenabled for non-subprocess contexts
  import sys
  del sys.modules['gc']
  import gc

fix_subprocess_racecondition()
4. Start the Ambari agent.
Please do this on one host first and monitor the memory usage. If memory usage looks OK, then replace main.py on all hosts.
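To watch the effect on the test host, a rough memory check like the one below is usually enough; the process-name match and the one-minute interval are arbitrary choices, not part of the official workaround:
# restart the agent after editing main.py, then watch its resident memory (RSS, in KB) over time
ambari-agent restart
watch -n 60 'ps -eo rss,cmd | grep "[a]mbari_agent"'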
12-29-2016
03:00 PM
I found that my change to HADOOP_NAMENODE_OPTS was not taking effect when editing it with Ambari. I resolved this as follows. When using Ambari to edit the hadoop-env template, I appended -Dhadoop.root.logger=INFO,ZIPRFA to the export line:
export HADOOP_NAMENODE_OPTS="${SHARED_HADOOP_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 ${HADOOP_NAMENODE_OPTS} -Dhadoop.root.logger=INFO,ZIPRFA"
Do not add it to the section of the template that says {% if java_version < 8 %} unless you are using Java version 1.7 or below; add it to the {% else %} section instead. After adding the setting and restarting HDFS, my NameNode logs were rotating and zipping correctly.
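A quick way to confirm the override actually reached the NameNode JVM after the restart (a simple check run on the NameNode host; the grep pattern is only illustrative, and if several -Dhadoop.root.logger entries appear, the last one printed is the one that takes effect):
# list the hadoop.root.logger settings on the running NameNode process
ps -ef | grep "[N]ameNode" | grep -o "hadoop.root.logger=[^ ]*"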