Member since: 05-25-2016
Posts: 26
Kudos Received: 4
Solutions: 1

My Accepted Solutions

Title | Views | Posted
---|---|---
 | 2026 | 11-09-2016 11:11 AM
11-28-2017 08:58 AM
Hi all, I have the same problem: HDP 2.5 with Ranger, where policies only work when applied to users, not to groups. Users and groups are managed with AD and SSSD on the Linux side. Although all the users and groups are correctly mapped in Ranger and on Linux, and the group permissions even work fine with Ranger encryption, they do not work with the policies. I tried all the suggestions, such as the lowercase conversion, but it is still not working for me. Any other ideas? Thanks in advance.
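To narrow this down, it can help to compare the groups SSSD reports on the Linux side with the groups Hadoop itself resolves, since the Ranger plugins evaluate group policies against Hadoop's group mapping. A minimal sketch, assuming the hdfs CLI is on the PATH and using "alice" as a placeholder AD user:

import subprocess

user = "alice"  # hypothetical user; replace with a real AD account

# Groups as seen by Linux/SSSD on this node
os_groups = set(subprocess.run(["id", "-Gn", user],
                               capture_output=True, text=True).stdout.split())

# Groups as resolved by Hadoop's group mapping (what the Ranger plugin checks)
out = subprocess.run(["hdfs", "groups", user],
                     capture_output=True, text=True).stdout
hadoop_groups = set(out.split(":", 1)[1].split()) if ":" in out else set()

print("SSSD groups  :", sorted(os_groups))
print("Hadoop groups:", sorted(hadoop_groups))
print("Missing on the Hadoop side:", sorted(os_groups - hadoop_groups))

If the two sets differ, even only by case, a group policy will never match for that user no matter what Ranger's user sync shows.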
06-20-2017 04:59 AM
Hi @Colton Rodgers, I have the same problem as you. Please let me know if you find a solution. Thanks.
12-13-2016 08:19 AM
Hi @bikas, OK, understood. So I shouldn't worry about it. Thanks.
12-12-2016 11:54 AM
Hi @Kuldeep Kulkarni, yes, ResourceManager HA is configured, and both are working fine; rm1 is just in standby mode and rm2 is active.
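For reference, the HA state of each ResourceManager can be confirmed with yarn rmadmin -getServiceState rm1 (and rm2), or over the RM web services. A minimal sketch, assuming placeholder hostnames and the default web port 8088:

import json
from urllib.request import urlopen

# Hypothetical RM addresses; replace with the real rm1/rm2 hosts
for rm in ("http://rm1.example.com:8088", "http://rm2.example.com:8088"):
    try:
        info = json.load(urlopen(rm + "/ws/v1/cluster/info"))["clusterInfo"]
        print(rm, "->", info.get("haState", "haState not reported"))
    except OSError as exc:
        print(rm, "-> unreachable:", exc)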
12-12-2016 10:28 AM
Hi all, I am using HDP 2.5. When I try to run a Spark job or create a Spark context (using a Jupyter notebook or the pyspark shell), I always get the following warning: WARN Client: Failed to connect to server: mycluster.at/111.11.11.11:8032: retries get failed due to exceeded maximum allowed retries number: 0
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:650)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:745)
at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1618)
at org.apache.hadoop.ipc.Client.call(Client.java:1449)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy15.getNewApplication(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getNewApplication(ApplicationClientProtocolPBClientImpl.java:221)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy16.getNewApplication(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getNewApplication(YarnClientImpl.java:225)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.createApplication(YarnClientImpl.java:233)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:157)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:149)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:240)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:236)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:211)
at java.lang.Thread.run(Thread.java:745)
The job then runs fine, but the warning is always there. I have another cluster with HDP 2.4 where I don't see this warning. Any ideas? Thanks in advance.
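In an HA setup this warning is usually harmless: the client tries the first ResourceManager in the configured list (here rm1, which is in standby), gets Connection refused on port 8032, logs the warning, and then fails over to the active rm2, which is why the job still runs. A minimal sketch to confirm which HA settings the client actually sees, assuming the usual HDP client config path and the conventional rm1/rm2 ids:

import xml.etree.ElementTree as ET

# Usual HDP client config location; adjust if your installation differs
conf = {p.findtext("name"): p.findtext("value")
        for p in ET.parse("/etc/hadoop/conf/yarn-site.xml").iter("property")}

for key in ("yarn.resourcemanager.ha.enabled",
            "yarn.resourcemanager.ha.rm-ids",
            "yarn.resourcemanager.hostname.rm1",
            "yarn.resourcemanager.hostname.rm2"):
    print(key, "=", conf.get(key))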
11-09-2016 11:11 AM
I solved the problem following this guide: https://www.ibm.com/support/knowledgecenter/SSPT3X_4.2.0/com.ibm.swg.im.infosphere.biginsights.admin.doc/doc/admin_kerb_activedir.html
11-09-2016 10:13 AM
Hi @Sagar Shimpi, yes, the test connection was successful, and I also have the krb5.conf file in /etc.
11-09-2016 09:06 AM
Hi all, I recently configured a cluster with HDP 2.5 and Ambari 2.4.1. Now I am trying to configure Kerberos using an existing AD that is still used by another cluster running HDP 2.4 (I want to have both clusters running at the same time). I am following this guide: https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.0/bk_Ambari_Security_Guide/content/_launching_the_kerberos_wizard_automated_setup.html But I always get the following error during installation: Failed to connect to KDC - Failed to communicate with the Active Directory at ldap://192.168.0.2: simple bind failed: 192.168.0.2:389
Update the KDC settings in the krb5-conf and kerberos-env configurations to correct this issue. Any ideas? Thanks in advance.
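One thing worth checking: for an Active Directory KDC, Ambari needs an LDAPS URL (ldaps:// on port 636, with the AD certificate trusted on the Ambari server), because AD will not set principal passwords over a plain ldap:// bind on 389, which matches the simple-bind failure above. A minimal sketch to probe whether LDAPS is reachable at all, using the host from the error message (certificate validation is deliberately skipped since this only tests connectivity):

import socket
import ssl

ad_host = "192.168.0.2"  # from the error message; 636 is the LDAPS port

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # reachability probe only, not a trust check

with socket.create_connection((ad_host, 636), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=ad_host) as tls:
        print("LDAPS reachable, negotiated", tls.version())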
11-02-2016 09:27 AM
Hi @dbaev, I would like to have the same scenario: two clusters using the same AD, also with Kerberos. How was your experience? Did you run into any problems? Thanks.