Member since
02-29-2016
108
Posts
213
Kudos Received
14
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2009 | 08-18-2017 02:09 PM |
| | 3508 | 06-16-2017 08:04 PM |
| | 3250 | 01-20-2017 03:36 AM |
| | 8807 | 01-04-2017 03:06 AM |
| | 4395 | 12-09-2016 08:27 PM |
01-03-2017
03:32 AM
2 Kudos
So I updated the Zeppelin interpreter settings based on the feedback. Now I am getting a "cannot start Spark" error:
%livy
sc.version
Cannot start spark.
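One way to narrow this down is to check whether the Livy server itself is reachable from the Zeppelin host before blaming the interpreter config. A minimal sketch; the host `node1` and default port 8998 are assumptions, so substitute the value from your Livy interpreter URL setting:

```python
# Quick reachability check for the Livy REST API (GET /sessions).
# Host "node1" and port 8998 are assumptions; use the Livy URL
# configured in the Zeppelin interpreter settings.
from urllib.request import urlopen
from urllib.error import URLError


def livy_reachable(url="http://node1:8998/sessions", timeout=5):
    """Return True if the Livy sessions endpoint answers with HTTP 200."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.getcode() == 200
    except (URLError, OSError):
        return False
```

If this returns False from the Zeppelin node, the problem is network- or service-level (Livy down, wrong host/port, firewall), not the Spark session itself.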
12-30-2016
03:59 AM
5 Kudos
HDP 2.5.3 cluster, Kerberized with an MIT KDC; OpenLDAP is the directory service. Ranger and Atlas are installed and working properly. Zeppelin is installed on node5, the Livy server on node1. Enabled LDAP for user authentication with the following shiro config; login with an LDAP user works fine. [users]
# List of users with their password allowed to access Zeppelin.
# To use a different strategy (LDAP / Database / ...) check the shiro doc at http://shiro.apache.org/configuration.html#Configuration-INISections
#admin = password1
#user1 = password2, role1, role2
#user2 = password3, role3
#user3 = password4, role2
# Sample LDAP configuration, for user Authentication, currently tested for single Realm
[main]
#activeDirectoryRealm = org.apache.zeppelin.server.ActiveDirectoryGroupRealm
#activeDirectoryRealm.systemUsername = CN=Administrator,CN=Users,DC=HW,DC=EXAMPLE,DC=COM
#activeDirectoryRealm.systemPassword = Password1!
#activeDirectoryRealm.hadoopSecurityCredentialPath = jceks://user/zeppelin/zeppelin.jceks
#activeDirectoryRealm.searchBase = CN=Users,DC=HW,DC=TEST,DC=COM
#activeDirectoryRealm.url = ldap://ad-nano.test.example.com:389
#activeDirectoryRealm.groupRolesMap = ""
#activeDirectoryRealm.authorizationCachingEnabled = true
ldapRealm = org.apache.shiro.realm.ldap.JndiLdapRealm
ldapRealm.userDnTemplate = uid={0},ou=Users,dc=field,dc=hortonworks,dc=com
ldapRealm.contextFactory.url = ldap://node5:389
ldapRealm.contextFactory.authenticationMechanism = SIMPLE
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
# 86,400,000 milliseconds = 24 hours
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
[urls]
# anon means the access is anonymous.
# authcBasic means Basic Auth Security
# To enforce security, comment the line below and uncomment the next one
/api/version = anon
#/** = anon
/** = authc
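For reference, `JndiLdapRealm` does not search the directory: it substitutes the login name into `userDnTemplate` and attempts a direct bind with the resulting DN, so the template must match the users' real DNs exactly. A sketch of that substitution (the username below is hypothetical):

```python
# How Shiro's JndiLdapRealm builds the bind DN: it replaces {0} in
# userDnTemplate with the login name and binds with that DN directly.
USER_DN_TEMPLATE = "uid={0},ou=Users,dc=field,dc=hortonworks,dc=com"


def bind_dn(username: str) -> str:
    """Expand the template the way the realm does before binding."""
    return USER_DN_TEMPLATE.replace("{0}", username)
```

If logins fail for some users only, comparing this expanded DN against their actual entry in OpenLDAP is a quick sanity check.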
Then I tried to configure Livy on Zeppelin with the following settings. Please note the Kerberos keytab is copied from node1, and the principal has the hostname from node1 as well. However, I keep getting a "Connection refused" error on Zeppelin while running sc.version:
%livy
sc.version
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.thrift.transport.TSocket.open(TSocket.java:182)
at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:51)
at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:37)
at org.apache.commons.pool2.BasePooledObjectFactory.makeObject(BasePooledObjectFactory.java:60)
at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:861)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:435)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.getClient(RemoteInterpreterProcess.java:189)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.init(RemoteInterpreter.java:173)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:338)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.getFormType(LazyOpenInterpreter.java:105)
at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:262)
at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:328)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Does the error mean SSL must be enabled?
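"Connection refused" is a plain TCP-level failure (no process is listening at the target host and port), so enabling SSL would not change it; the first thing to verify is that the expected port is actually open from the Zeppelin host. A minimal probe, with host and port as placeholders:

```python
# TCP reachability probe: "Connection refused" at this level means
# there is no listener on the port, not a TLS/SSL problem.
import socket


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running this against the interpreter/Livy host and port from the error quickly separates "service not running or wrong port" from configuration problems inside Zeppelin.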
Labels:
- Apache Spark
- Apache Zeppelin
12-28-2016
08:36 PM
2 Kudos
Make sure you have the Kafka policies created in Ranger; they are not created automatically as part of the Atlas installation and need to be added manually.

ATLAS_HOOK topic:
- User: atlas, Privileges: Consume, Create
- Group: public, Privileges: Publish, Create

ATLAS_ENTITIES topic:
- User: atlas, Privileges: Publish, Create
- Group: public, Privileges: Consume, Create

Also check the HBase policies and make sure you see two policies, for the atlas-titan and ATLAS_ENTITY_AUDIT_EVENTS tables (these should get created automatically). Once the HBase and Kafka policies are in place in Ranger, Atlas should be all set.
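Policies like these can also be created through Ranger's public REST API instead of the UI. Below is a sketch of the payload shape for the ATLAS_HOOK topic; the service name "cluster_kafka" is hypothetical and the exact field names are from memory, so verify both against your Ranger version before POSTing to `/service/public/v2/api/policy`:

```python
# Sketch of a Ranger v2 policy payload for the ATLAS_HOOK Kafka topic.
# The service name "cluster_kafka" and exact field names are assumptions;
# check them against your Ranger admin's API docs before use.
import json

policy = {
    "service": "cluster_kafka",  # hypothetical Kafka repo/service name
    "name": "atlas_hook_topic",
    "resources": {"topic": {"values": ["ATLAS_HOOK"], "isExcludes": False}},
    "policyItems": [
        # User atlas: Consume + Create
        {"users": ["atlas"],
         "accesses": [{"type": "consume", "isAllowed": True},
                      {"type": "create", "isAllowed": True}]},
        # Group public: Publish + Create
        {"groups": ["public"],
         "accesses": [{"type": "publish", "isAllowed": True},
                      {"type": "create", "isAllowed": True}]},
    ],
}
payload = json.dumps(policy)
```

An equivalent payload for ATLAS_ENTITIES would swap the Consume/Publish privileges between the atlas user and the public group, per the list above.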
12-23-2016
04:51 PM
3 Kudos
There was a failure during "Configure Ambari Identity", but the retry passed, so I thought it was not really a problem. I am now sure the sudo rule is the problem; I will try again and let you know the outcome.

Update: fixed the sudo permissions and got another error: "you must have a tty to run sudo". This turns out to be related to the sudo settings; using visudo to comment out requiretty fixed the problem:

visudo
#Defaults requiretty
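Instead of commenting out the global requiretty default, sudo also supports a per-user override, which keeps the tty requirement for everyone else. A sketch assuming the ambari-user account from this setup:

```
# /etc/sudoers.d/ambari-server  (edit with visudo -f for syntax checking)
Defaults:ambari-user !requiretty
```

This scopes the relaxation to the account Ambari actually runs under.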
12-23-2016
02:35 AM
2 Kudos
I reproduced the same problem again.
1. Changed Ambari to run as non-root before Kerberos (also changed it to run HTTPS, encrypted the Ambari passwords, and synced LDAP users).
2. Ran the Kerberos wizard with the MIT KDC.

In the log I found:

23 Dec 2016 01:45:51,007 INFO [Server Action Executor Worker 333] CreateKeytabFilesServerAction:193 - Creating keytab file for ambari-server@FIELD.HORTONWORKS.COM on host ambari_server

So it looks like the process did try to create the keytab, but under /etc/security/keytab there is no ambari.server.keytab. I also tried to find any warning or error in ambari-server.log that indicates something went wrong, but saw nothing related.

Also including the commands used to create the non-root user for ambari-server (ambari-agent is still running as root):

useradd -d /var/lib/ambari-server -G hadoop -M -r -s /sbin/nologin ambari-user
echo 'ambari-user ALL=(ALL) NOPASSWD:SETENV: /bin/mkdir, /bin/cp, /bin/chmod, /bin/rm' > /etc/sudoers.d/ambari-server
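After re-running the wizard, a quick scripted check for whether the keytab actually landed saves digging through logs. The path below follows the post; note that stock HDP typically puts keytabs under /etc/security/keytabs, so adjust as needed:

```python
# Check that the Ambari server keytab exists and is non-empty.
# Path follows the post; the stock HDP directory is /etc/security/keytabs.
import os


def keytab_present(path="/etc/security/keytab/ambari.server.keytab"):
    """Return True if the keytab file exists and has content."""
    return os.path.isfile(path) and os.path.getsize(path) > 0
```

A zero-byte file here would also explain the views failing even though the wizard reported success.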
12-22-2016
09:56 PM
1 Kudo
Even if the non-root user is missing permissions, it still gets all the keytabs for the other users copied and chowned. Not sure why it only failed on Ambari itself. I will try this again and look at the log to see if there is anything abnormal.
12-22-2016
09:10 PM
1 Kudo
Look at the note in the 2.5.3 doc: "If you performed the Automated Kerberos Setup, these steps are performed automatically (and therefore, you do not need to perform the steps below)."
12-22-2016
06:46 PM
3 Kudos
Ambari 2.4.2.0 and HDP 2.5.3.0-37. Running the Kerberos wizard to Kerberize the cluster, all HDP components are Kerberized successfully, but Ambari itself is not. I can see the ambari-server principal being added to the MIT KDC, but there is no keytab for that principal under /etc/security/keytab on the Ambari server. This causes all the views to stop working. I did the manual steps to Kerberize Ambari and everything is fine afterward. I am pretty sure that back in the last version of Ambari, it was Kerberized by the wizard; not sure if this is a change of behavior or something else caused it. The Ambari server was running under the ambari-server account rather than root before the Kerberos wizard, not sure if that changes anything.
Labels:
- Apache Ambari
12-22-2016
04:19 PM
@Madhan Neethiraj That makes sense. Thanks for your answer.
12-13-2016
11:19 PM
Why is Oozie needed? I don't have it installed in the cluster, and I don't see that Atlas has a dependency on Oozie. The other services are up and running, and all restarted successfully.