Good afternoon, everyone. I am having problems with Kerberos again; I have already re-created my lab cluster and I no longer know what to do. The problem is the following: I was able to join Ambari to the domain, and all of the Kerberos wizard options executed correctly. But when Ambari kerberizes the Hadoop services, they stop working and do not come back up. I ran a rollback of the server settings, which returned the services to the state they were in at deployment time. I need help getting Kerberos running correctly; I do not know what else to do. Could someone help me in detail? I am still new to the Hadoop solution, and I have Hadoop 126.96.36.199 and Ambari 188.8.131.52.
Rui Ornellas Junior
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/184.108.40.206.0/package/scripts/datanode.py", line 161, in <module>
    DataNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
    method(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 850, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/220.127.116.11.0/package/scripts/datanode.py", line 67, in start
    datanode(action="start")
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/18.104.22.168.0/package/scripts/hdfs_datanode.py", line 68, in datanode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/22.214.171.124.0/package/scripts/utils.py", line 274, in service
    Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh -H -E /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode' returned 1. starting datanode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-datanode-hadoop-server01.cetax.corp.out
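The .out file at the end of that message only captures the startup banner; the actual failure reason normally lands in the matching .log file in the same directory. Here is a minimal sketch for pulling the suspicious lines out of it (the log path below is assumed from the message above, so adjust it if your layout differs):

#!/usr/bin/env python
# Pull the most recent Kerberos/startup errors out of the DataNode log.
# The path is the HDP default inferred from the error message above --
# an assumption; adjust it if your cluster logs somewhere else.
import re

LOG = "/var/log/hadoop/hdfs/hadoop-hdfs-datanode-hadoop-server01.cetax.corp.log"

# Lines mentioning Kerberos/GSS failures or generic exceptions are
# usually enough to see why the daemon refused to start.
pattern = re.compile(r"ERROR|FATAL|Exception|GSS|[Kk]erberos")

with open(LOG) as fh:
    matches = [line.rstrip() for line in fh if pattern.search(line)]

# Print only the tail so old restarts do not drown out the new failure.
for line in matches[-40:]:
    print(line)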
I am not sure I understand your setup.
Are you trying to integrate an MIT KDC with AD?
What documentation did you follow for the Kerberos setup?
Can you describe your cluster setup?
Number of nodes and their distribution (master, slave, etc.)
It would also be helpful to have the HDP and Ambari versions.
Which Kerberos setup option did you execute, as mentioned in your initial post? Please help me understand so I can help you better.
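In the meantime, one quick sanity check you can run on the DataNode host is to confirm that the service keytab Ambari generated can actually obtain a ticket. A minimal sketch, assuming the HDP default keytab path and a dn/<host>@REALM principal; the realm below is only a guess based on your hostname, so adjust it for your environment:

#!/usr/bin/env python
# Sanity-check a service keytab by requesting a ticket with it.
# Keytab path and principal follow HDP defaults -- both are
# assumptions, adjust them for your cluster and realm.
import socket
import subprocess

KEYTAB = "/etc/security/keytabs/dn.service.keytab"       # HDP default
PRINCIPAL = "dn/%s@CETAX.CORP" % socket.getfqdn()        # placeholder realm

# List the principals actually stored in the keytab.
subprocess.check_call(["klist", "-kt", KEYTAB])

# Try to obtain a ticket; a non-zero exit code points at the keytab
# or the KDC configuration (krb5.conf) rather than at HDFS itself.
try:
    subprocess.check_call(["kinit", "-kt", KEYTAB, PRINCIPAL])
    print("OK: got a ticket for %s" % PRINCIPAL)
except subprocess.CalledProcessError as err:
    print("kinit failed (exit %d) - check krb5.conf, time skew, and the KDC" % err.returncode)

If kinit fails here, the problem sits between the host and the KDC (krb5.conf, clock skew, or the keytab itself) rather than inside HDFS.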