Member since 12-08-2016
88 Posts
1 Kudos Received
1 Solution

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3943 | 12-10-2016 03:00 AM
01-22-2017 10:06 AM
After I "Enable Kerberos", I run kinit and it prompts me to input a password, but I never set a password for that principal.
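In case it helps anyone with the same symptom: principals generated by the "Enable Kerberos" wizard get random keys stored in keytabs rather than passwords, so kinit should be pointed at the keytab instead of run interactively (the keytab path and principal below are from my cluster; adjust them to yours):

```shell
# A wizard-generated principal has no password you can type; -kt authenticates
# non-interactively using the key stored in the keytab file instead.
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-hdpcluster@EXAMPLE.COM
# verify the ticket cache now holds a TGT for that principal
klist
```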
Labels:
- Kerberos
01-03-2017 05:28 AM
Oh, I modified /etc/ambari-server/conf/krb5JAASLogin.conf to match yours, and it works now. Thank you.
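For anyone who lands here with the same error: a working krb5JAASLogin.conf typically looks roughly like the fragment below, with useKeyTab enabled so Ambari never prompts for a password. The keytab path and principal here are placeholders from my setup, not something you can copy verbatim:

```
com.sun.security.jgss.krb5.initiate {
    com.sun.security.auth.module.Krb5LoginModule required
    renewTGT=false
    doNotPrompt=true
    useKeyTab=true
    keyTab="/etc/security/keytabs/ambari.keytab"
    principal="ambari@EXAMPLE.COM"
    storeKey=true
    useTicketCache=false;
};
```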
01-03-2017 05:02 AM
After running ambari-server setup-security, ambari-server.log shows:
03 Jan 2017 12:54:26,153 ERROR [main] KerberosChecker:115 - Unable to obtain password from user
03 Jan 2017 12:54:26,154 ERROR [main] AmbariServer:927 - Failed to run the Ambari Server
org.apache.ambari.server.AmbariException: Ambari Server Kerberos credentials check failed.
Check KDC availability and JAAS configuration in /etc/ambari-server/conf/krb5JAASLogin.conf
at org.apache.ambari.server.controller.utilities.KerberosChecker.checkJaasConfiguration(KerberosChecker.java:116)
at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:922)
Labels:
- Apache Ambari
- Kerberos
- Security
12-28-2016 10:19 AM
After I reinstalled Kerberos, everything looks OK: NameNode starts and the other services install fine too. But I still cannot find why it failed before. Thank you for your help.
12-28-2016 10:10 AM
Keytab name: FILE:/etc/security/keytabs/hdfs.headless.keytab
KVNO Principal
---- --------------------------------------------------------------------------
   1 hdfs-hdpcluster@EXAMPLE.COM (des3-cbc-sha1)
   1 hdfs-hdpcluster@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
   1 hdfs-hdpcluster@EXAMPLE.COM (arcfour-hmac)
   1 hdfs-hdpcluster@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
   1 hdfs-hdpcluster@EXAMPLE.COM (des-cbc-md5)
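For reference, a listing like the one above comes from MIT Kerberos klist with the keytab and encryption-type flags:

```shell
# -k lists keytab entries, -t adds timestamps, -e shows each entry's enctype
klist -ket /etc/security/keytabs/hdfs.headless.keytab
```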
12-28-2016 10:09 AM
NameNode is in safe mode and cannot come up.
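As a side note for others: safe mode can be inspected, and left manually once the underlying problem is fixed, with the standard dfsadmin commands (run as the hdfs user):

```shell
# check the current safe-mode state
hdfs dfsadmin -safemode get
# force the NameNode out of safe mode; only do this after the root cause
# (e.g. missing block reports or the Kerberos failure below) is resolved
hdfs dfsadmin -safemode leave
```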
12-28-2016 09:17 AM
No, I installed HDP and Ambari just a minute ago. After the installation, I ran "Enable Kerberos" and hit this issue. HDP version: HDP-2.5.0.0; Ambari version: 2.4.1.0. Of course, every service encountered this issue. I saw your reply in my other question. After installing JCE, I encountered "App Timeline Server start failed". The log is:
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 155, in <module>
  ApplicationTimelineServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
  method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 44, in start
  self.configure(env) # FOR SECURITY
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 55, in configure
  yarn(name='apptimelineserver')
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
  return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py", line 337, in yarn
  mode=0755
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
  self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
  self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
  provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 459, in action_create_on_execute
  self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 456, in action_delayed
  self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 247, in action_delayed
  self._assert_valid()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 231, in _assert_valid
  self.target_status = self._get_file_status(target)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 292, in _get_file_status
  list_status = self.util.run_command(target, 'GETFILESTATUS', method='GET', ignore_status_codes=['404'], assertable_result=False)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 192, in run_command
  raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X GET --negotiate -u : 'http://bigdata013.example.com:50070/webhdfs/v1/ats/done?op=GETFILESTATUS&user.name=hdfs'' returned status_code=403.
HTTP ERROR 403. Problem accessing /webhdfs/v1/ats/done. Reason:
GSSException: Failure unspecified at GSS-API level (Mechanism level: Encryption type AES256 CTS mode with HMAC SHA1-96 is not supported/enabled)
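The "AES256 CTS mode with HMAC SHA1-96 is not supported/enabled" part usually means the JVM is still running with the default crypto policy. A quick way to confirm whether the unlimited-strength JCE policy is actually active is to ask the JVM for the maximum allowed AES key length (jrunscript ships with the JDK; this assumes it is on your PATH):

```shell
# Prints 128 under the default policy; a much larger value (>= 256) once the
# unlimited-strength JCE policy jars are installed in $JAVA_HOME/jre/lib/security
jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'
```

Remember that every host running a JVM that talks Kerberos needs the policy jars, not just the Ambari server.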
12-28-2016 08:58 AM
Yes, I have installed JCE manually, and running the "kinit" command to test the ticket works fine. One question: the KDC and ambari-server are on the same host; is that OK?
12-28-2016 02:54 AM
16/12/28 10:45:26 WARN retry.RetryInvocationHandler: Exception while invoking ClientNamenodeProtocolTranslatorPB.setSafeMode over null. Not retrying because try once and fail. java.io.IOException: Failed on local exception: java.io.IOException: Couldn't setup connection for hdfs-hdpcluster@EXAMPLE.COM to bigdata013.example.com/<ip-address>:8020; Host Details : local host is: "bigdata013.example.com/<ip-address>"; destination host is: "bigdata013.example.com":8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:782)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1556)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy10.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:711)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy11.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2657)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1340)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1324)
at org.apache.hadoop.hdfs.tools.DFSAdmin.setSafeMode(DFSAdmin.java:611)
at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:1916)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2107)
Caused by: java.io.IOException: Couldn't setup connection for hdfs-hdpcluster@EXAMPLE.COM to bigdata013.example.com/<ip-address>:8020
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:712)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:683)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:770)
at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1618)
at org.apache.hadoop.ipc.Client.call(Client.java:1449)
... 20 more
Caused by: org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:375)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:595)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:397)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:762)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:758)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:757)
... 23 more
safemode: Failed on local exception: java.io.IOException: Couldn't setup connection for hdfs-hdpcluster@EXAMPLE.COM to bigdata013.example.com/<ip-address>:8020; Host Details : local host is: "bigdata013.example.com/<ip-address>"; destination host is: "bigdata013.example.com":8020;
16/12/28 10:45:40 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
16/12/28 10:45:43 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
16/12/28 10:45:44 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
16/12/28 10:45:48 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
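When chasing a "GSS initiate failed" like the one above, turning on the JVM's Kerberos debugging before re-running the client usually shows exactly which step of the handshake fails (this is a generic JVM flag, not HDP-specific):

```shell
# sun.security.krb5.debug makes the JVM print each Kerberos exchange to stdout
export HADOOP_OPTS="$HADOOP_OPTS -Dsun.security.krb5.debug=true"
# then re-run the failing command, e.g.: hdfs dfsadmin -safemode get
```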
Labels:
- Apache Hadoop
12-10-2016 03:00 AM
Thank you all. I have fixed the bug in my setup: I customized my stack but did not update the stack_advisor.py that corresponds to that stack.
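For anyone else building a custom stack: each stack version the Ambari server knows about carries its own stack_advisor.py, which Ambari consults for configuration recommendations and validations, so a customized stack needs its own copy (or must inherit one). The paths below follow the usual Ambari server layout; substitute your stack name and version:

```shell
# the per-stack advisor lives under the stack definition on the Ambari server;
# HDP/2.5 here is an example, use your custom stack's name and version
ls /var/lib/ambari-server/resources/stacks/HDP/2.5/services/stack_advisor.py
```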