Member since: 10-01-2016
Posts: 156
Kudos Received: 8
Solutions: 6
My Accepted Solutions
Views | Posted
---|---
8021 | 04-04-2019 09:41 PM
3108 | 06-04-2018 08:34 AM
1438 | 05-23-2018 01:03 PM
2942 | 05-21-2018 07:12 AM
1802 | 05-08-2018 10:48 AM
01-22-2019 03:09 PM
@Ruslan Fialkovsky Can you open a new thread and share the problems you are encountering so you can get help? I am afraid this thread is no longer active. HTH
06-04-2018 09:18 AM
Great to know your LLAP started!
05-23-2018 01:03 PM
I have solved it. It was a connection problem, as the logs stated. I had to modify the /etc/hosts file: the servers have two network cards, and I switched the IP-to-hostname mappings of the two cards.
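For illustration only, a minimal /etc/hosts sketch of the idea; the hostnames and addresses below are placeholders, not the actual cluster values:
# each hostname must resolve to the interface the other cluster nodes actually reach
10.0.0.11       node1.example.com        node1         # cluster-facing NIC
192.168.56.11   node1-backup.example.com node1-backup  # second NIC, kept on a separate name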
05-28-2018 08:53 AM
@Geoffrey Shelton Okot Yes it did, but it stopped again on Saturday morning. Some of the logs:
- org.apache.hadoop.hbase.DoNotRetryIOException: hconnection-0x55af174a closed
- ERROR [phoenix-update-statistics-3] stats.StatisticsScanner: Failed to update statistics table!
org.apache.hadoop.hbase.DoNotRetryIOException: hconnection-0x10e9f278 closed
- ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
- java.io.IOException: Connection reset by peer
- org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
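The ConnectionLoss on /hbase/hbaseid suggests the RegionServer lost its ZooKeeper session; as a quick diagnostic (not from the original thread, and the quorum host is a placeholder), something like:
echo ruok | nc zk1.example.com 2181   # a healthy ZooKeeper node answers "imok"
hbase zkcli                           # then run: ls /hbase  (hbaseid should be among the znodes)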
05-21-2018 07:12 AM
I have solved it with the help of this article. Working shiro_ini_content:
# Sample LDAP configuration, for Active Directory user Authentication, currently tested for single Realm
[main]
ldapRealm=org.apache.zeppelin.realm.LdapRealm
ldapRealm.contextFactory.systemUsername=cn=hadoop_srv,ou=hadoop,dc=datalonga,dc=com
ldapRealm.contextFactory.systemPassword=hadoop_srv_password
ldapRealm.contextFactory.authenticationMechanism=simple
ldapRealm.contextFactory.url=ldap://datalonga.ldap:389
# Ability to set ldap paging Size if needed; default is 100
ldapRealm.pagingSize=200
ldapRealm.authorizationEnabled=true
ldapRealm.searchBase=OU=hadoop,dc=datalonga,dc=com
ldapRealm.userSearchBase=dc=datalonga,dc=com
ldapRealm.groupSearchBase=OU=hadoop,dc=datalonga,dc=com
ldapRealm.userObjectClass=person
ldapRealm.groupObjectClass=group
ldapRealm.userSearchAttributeName = sAMAccountName
# Set search scopes for user and group. Values: subtree (default), onelevel, object
ldapRealm.userSearchScope = subtree
ldapRealm.groupSearchScope = subtree
ldapRealm.userSearchFilter=(&(objectclass=person)(sAMAccountName={0}))
ldapRealm.memberAttribute=member
# Format to parse & search group member values in 'memberAttribute'
ldapRealm.memberAttributeValueTemplate=CN={0},OU=hadoop,dc=datalonga,dc=com
# No need to give userDnTemplate if memberAttributeValueTemplate is provided
#ldapRealm.userDnTemplate=
# Map from physical AD groups to logical application roles
#ldapRealm.rolesByGroup = "hadoop_grp":admin_role,"hadoop":hadoop_users_role
# Force usernames returned from ldap to lowercase, useful for AD
ldapRealm.userLowerCase = true
# Enable support for nested groups using the LDAP_MATCHING_RULE_IN_CHAIN operator
ldapRealm.groupSearchEnableMatchingRuleInChain = true
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
### If caching of user is required then uncomment below lines
cacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager
securityManager.cacheManager = $cacheManager
securityManager.sessionManager = $sessionManager
securityManager.realms = $ldapRealm
# 86,400,000 milliseconds = 24 hours
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
[urls]
# This section is used for url-based security.
# You can secure interpreter, configuration and credential information by urls. Comment or uncomment the below urls that you want to hide.
# anon means the access is anonymous.
# authc means Form based Auth Security
# To enforce security, comment the line below and uncomment the next one
#/api/version = anon
/api/interpreter/** = authc, roles[admin_role,hadoop_users_role]
/api/configurations/** = authc, roles[admin_role]
/api/credential/** = authc, roles[admin_role,hadoop_users_role]
#/** = anon
/** = authc
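As a side note, the bind account and user search filter from the config above can be sanity-checked outside Zeppelin with a plain ldapsearch (the sAMAccountName below is a placeholder):
ldapsearch -H ldap://datalonga.ldap:389 \
  -D "cn=hadoop_srv,ou=hadoop,dc=datalonga,dc=com" -W \
  -b "dc=datalonga,dc=com" \
  "(&(objectclass=person)(sAMAccountName=some_user))" dn sAMAccountName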
05-10-2018 11:56 AM
@Erkan ŞİRİN Seeing your error above, "kinit: Clock skew too great while getting initial credentials", correct me if I am wrong, but the date output on your sandbox translates to 09/05/2018 at 09:44:
# date
Wed May 9 09:44:22 +03 2018
The screenshot of your Windows time that you attached, however, translates to 02/05/2018 at 09:44, which is a 7-day difference. Please set your Windows 2012 R2 date to the same date as the Sandbox and it should work! Please let me know.
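If setting the clock by hand is inconvenient, one option (not from the original thread) is to sync both machines against NTP, for example:
ntpdate -u pool.ntp.org    # on the sandbox (Linux): one-shot NTP sync
w32tm /resync              # on Windows 2012 R2, from an elevated command prompt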
05-08-2018 10:48 AM
I solved it. I found the mysqld PID with top, killed it with kill 24678, then started mysqld with service mysqld start and retried from Ambari. It worked. The problem was that service mysqld stop couldn't stop mysqld.
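Roughly the same recovery as a short shell sequence (a sketch of the steps above; pgrep just replaces reading the PID off top, and 24678 is the PID from this case):
pgrep mysqld           # find the PID of the stuck mysqld process
kill 24678             # terminate it (use kill -9 only if it refuses to exit)
service mysqld start   # start mysqld again, then retry the operation from Ambari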
03-07-2018 04:03 PM
And after installing Python 2.7, don't forget to change the Zeppelin Spark interpreter setting accordingly: zeppelin.pyspark.python = python2.7
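To make sure the binary the setting points at is actually available to the Zeppelin user, a quick check on the Zeppelin host (just a verification step, not from the original post):
which python2.7        # should print the path to the Python 2.7 binary
python2.7 --version    # should report Python 2.7.x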
12-07-2017 01:44 PM
It is because of the way the NameNode works. It periodically merges the namespace image (fsimage) with the edit log. To prevent namespace inconsistency, it stops accepting namespace changes by entering safe mode. hdfs dfsadmin -safemode leave should work.
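For completeness, the related dfsadmin commands (standard HDFS CLI, shown here as a quick reference):
hdfs dfsadmin -safemode get     # check whether the NameNode is currently in safe mode
hdfs dfsadmin -safemode leave   # force the NameNode out of safe mode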