Support Questions

I have HA enabled in my cluster, but my second NameNode (NN02) is stopped while ZKFC is running. When I try to start NN02, I get the following error:

(attached screenshot: ambari-dashboard-image.jpg)

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 408, in <module>
    NameNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 103, in start
    upgrade_suspended=params.upgrade_suspended, env=env)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 155, in namenode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 267, in service
    Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-ansari3.hashmap.net.out
stdout: /var/lib/ambari-agent/data/output-2623.txt
2016-07-22 05:19:18,007 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.4.7-4
2016-07-22 05:19:18,008 - Checking if need to create versioned conf dir /etc/hadoop/2.3.4.7-4/0
2016-07-22 05:19:18,008 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.4.7-4 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-07-22 05:19:18,031 - call returned (1, '/etc/hadoop/2.3.4.7-4/0 exist already', '')
2016-07-22 05:19:18,031 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.4.7-4 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-07-22 05:19:18,054 - checked_call returned (0, '/usr/hdp/2.3.4.7-4/hadoop/conf -> /etc/hadoop/2.3.4.7-4/0')
2016-07-22 05:19:18,054 - Ensuring that hadoop has the correct symlink structure
2016-07-22 05:19:18,055 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-07-22 05:19:18,223 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.4.7-4
2016-07-22 05:19:18,224 - Checking if need to create versioned conf dir /etc/hadoop/2.3.4.7-4/0
2016-07-22 05:19:18,224 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.4.7-4 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-07-22 05:19:18,262 - call returned (1, '/etc/hadoop/2.3.4.7-4/0 exist already', '')
2016-07-22 05:19:18,263 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.4.7-4 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-07-22 05:19:18,301 - checked_call returned (0, '/usr/hdp/2.3.4.7-4/hadoop/conf -> /etc/hadoop/2.3.4.7-4/0')
2016-07-22 05:19:18,301 - Ensuring that hadoop has the correct symlink structure
2016-07-22 05:19:18,301 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-07-22 05:19:18,304 - Group['spark'] {}
2016-07-22 05:19:18,306 - Group['ranger'] {}
2016-07-22 05:19:18,307 - Group['hadoop'] {}
2016-07-22 05:19:18,307 - Group['users'] {}
2016-07-22 05:19:18,307 - Group['knox'] {}
2016-07-22 05:19:18,308 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-22 05:19:18,309 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-22 05:19:18,310 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger']}
2016-07-22 05:19:18,311 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-07-22 05:19:18,313 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-22 05:19:18,314 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-07-22 05:19:18,315 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-22 05:19:18,316 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-22 05:19:18,317 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-22 05:19:18,318 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-22 05:19:18,319 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-22 05:19:18,320 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-22 05:19:18,321 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-22 05:19:18,322 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-22 05:19:18,324 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-07-22 05:19:18,325 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-07-22 05:19:18,328 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-07-22 05:19:18,334 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-07-22 05:19:18,335 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-07-22 05:19:18,336 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-07-22 05:19:18,338 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-07-22 05:19:18,343 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-07-22 05:19:18,344 - Group['hdfs'] {}
2016-07-22 05:19:18,344 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2016-07-22 05:19:18,345 - FS Type: 
2016-07-22 05:19:18,346 - Directory['/etc/hadoop'] {'mode': 0755}
2016-07-22 05:19:18,378 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-07-22 05:19:18,380 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-07-22 05:19:18,398 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-07-22 05:19:18,407 - Skipping Execute[('setenforce', '0')] due to not_if
2016-07-22 05:19:18,408 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-07-22 05:19:18,412 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-07-22 05:19:18,413 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-07-22 05:19:18,421 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-07-22 05:19:18,424 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-07-22 05:19:18,426 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-07-22 05:19:18,455 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-07-22 05:19:18,457 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-07-22 05:19:18,458 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-07-22 05:19:18,468 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-07-22 05:19:18,473 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-07-22 05:19:18,709 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.4.7-4
2016-07-22 05:19:18,709 - Checking if need to create versioned conf dir /etc/hadoop/2.3.4.7-4/0
2016-07-22 05:19:18,710 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.4.7-4 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-07-22 05:19:18,746 - call returned (1, '/etc/hadoop/2.3.4.7-4/0 exist already', '')
2016-07-22 05:19:18,746 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.4.7-4 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-07-22 05:19:18,782 - checked_call returned (0, '/usr/hdp/2.3.4.7-4/hadoop/conf -> /etc/hadoop/2.3.4.7-4/0')
2016-07-22 05:19:18,782 - Ensuring that hadoop has the correct symlink structure
2016-07-22 05:19:18,783 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-07-22 05:19:18,785 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.4.7-4
2016-07-22 05:19:18,786 - Checking if need to create versioned conf dir /etc/hadoop/2.3.4.7-4/0
2016-07-22 05:19:18,786 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.4.7-4 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-07-22 05:19:18,822 - call returned (1, '/etc/hadoop/2.3.4.7-4/0 exist already', '')
2016-07-22 05:19:18,823 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.4.7-4 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-07-22 05:19:18,858 - checked_call returned (0, '/usr/hdp/2.3.4.7-4/hadoop/conf -> /etc/hadoop/2.3.4.7-4/0')
2016-07-22 05:19:18,859 - Ensuring that hadoop has the correct symlink structure
2016-07-22 05:19:18,859 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-07-22 05:19:18,875 - Directory['/etc/security/limits.d'] {'owner': 'root', 'group': 'root', 'recursive': True}
2016-07-22 05:19:18,886 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2016-07-22 05:19:18,887 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-07-22 05:19:18,907 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2016-07-22 05:19:18,908 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-07-22 05:19:18,925 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-07-22 05:19:18,943 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2016-07-22 05:19:18,943 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-07-22 05:19:18,955 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-07-22 05:19:18,958 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2016-07-22 05:19:18,976 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2016-07-22 05:19:18,976 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-07-22 05:19:18,988 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-07-22 05:19:19,005 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2016-07-22 05:19:19,006 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-07-22 05:19:19,019 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-07-22 05:19:19,036 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2016-07-22 05:19:19,037 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-07-22 05:19:19,109 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2016-07-22 05:19:19,120 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2016-07-22 05:19:19,120 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-07-22 05:19:19,147 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2016-07-22 05:19:19,149 - Directory['/data01/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'cd_access': 'a', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
2016-07-22 05:19:19,149 - Directory['/var/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'recursive': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2016-07-22 05:19:19,150 - Called service start with upgrade_type: None
2016-07-22 05:19:19,151 - HDFS: Setup ranger: command retry not enabled thus skipping if ranger admin is down !
2016-07-22 05:19:19,151 - File['/var/lib/ambari-agent/tmp/mysql-connector-java.jar'] {'content': DownloadSource('http://ansari1.hashmap.net:8080/resources//mysql-jdbc-driver.jar'), 'mode': 0644}
2016-07-22 05:19:19,151 - Not downloading the file from http://ansari1.hashmap.net:8080/resources//mysql-jdbc-driver.jar, because /var/lib/ambari-agent/tmp/mysql-jdbc-driver.jar already exists
2016-07-22 05:19:19,153 - Execute[('cp', '--remove-destination', '/var/lib/ambari-agent/tmp/mysql-connector-java.jar', '/usr/hdp/current/hadoop-client/lib/mysql-connector-java.jar')] {'path': ['/bin', '/usr/bin/'], 'sudo': True}
2016-07-22 05:19:19,159 - File['/usr/hdp/current/hadoop-client/lib/mysql-connector-java.jar'] {'mode': 0644}
2016-07-22 05:19:19,183 - Rangeradmin: Skip ranger admin if it's down !
2016-07-22 05:19:19,434 - amb_ranger_admin user already exists.
2016-07-22 05:19:20,975 - Repository created Successfully
2016-07-22 05:19:22,224 - Policy updated Successfully
2016-07-22 05:19:22,225 - Ranger Repository created successfully and policies updated successfully providing ambari-qa user all permissions
2016-07-22 05:19:22,225 - Hdfs Repository created in Ranger admin
2016-07-22 05:19:22,227 - File['/usr/hdp/current/hadoop-client/conf/ranger-security.xml'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-07-22 05:19:22,228 - Writing File['/usr/hdp/current/hadoop-client/conf/ranger-security.xml'] because contents don't match
2016-07-22 05:19:22,229 - Directory['/etc/ranger/hdpground_hadoop'] {'owner': 'hdfs', 'cd_access': 'a', 'group': 'hadoop', 'recursive': True, 'mode': 0775}
2016-07-22 05:19:22,229 - Creating directory Directory['/etc/ranger/hdpground_hadoop'] since it doesn't exist.
2016-07-22 05:19:22,240 - Changing owner for /etc/ranger/hdpground_hadoop from 0 to hdfs
2016-07-22 05:19:22,241 - Changing group for /etc/ranger/hdpground_hadoop from 0 to hadoop
2016-07-22 05:19:22,241 - Changing permission for /etc/ranger/hdpground_hadoop from 755 to 775
2016-07-22 05:19:22,241 - Directory['/etc/ranger/hdpground_hadoop/policycache'] {'owner': 'hdfs', 'recursive': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2016-07-22 05:19:22,242 - Creating directory Directory['/etc/ranger/hdpground_hadoop/policycache'] since it doesn't exist.
2016-07-22 05:19:22,242 - Changing owner for /etc/ranger/hdpground_hadoop/policycache from 0 to hdfs
2016-07-22 05:19:22,242 - Changing group for /etc/ranger/hdpground_hadoop/policycache from 0 to hadoop
2016-07-22 05:19:22,243 - Changing permission for /etc/ranger/hdpground_hadoop/policycache from 755 to 775
2016-07-22 05:19:22,243 - File['/etc/ranger/hdpground_hadoop/policycache/hdfs_hdpground_hadoop.json'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-07-22 05:19:22,244 - Writing File['/etc/ranger/hdpground_hadoop/policycache/hdfs_hdpground_hadoop.json'] because it doesn't exist
2016-07-22 05:19:22,244 - Changing owner for /etc/ranger/hdpground_hadoop/policycache/hdfs_hdpground_hadoop.json from 0 to hdfs
2016-07-22 05:19:22,244 - Changing group for /etc/ranger/hdpground_hadoop/policycache/hdfs_hdpground_hadoop.json from 0 to hadoop
2016-07-22 05:19:22,245 - XmlConfig['ranger-hdfs-audit.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0744, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2016-07-22 05:19:22,265 - Generating config: /usr/hdp/current/hadoop-client/conf/ranger-hdfs-audit.xml
2016-07-22 05:19:22,265 - File['/usr/hdp/current/hadoop-client/conf/ranger-hdfs-audit.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0744, 'encoding': 'UTF-8'}
2016-07-22 05:19:22,293 - Writing File['/usr/hdp/current/hadoop-client/conf/ranger-hdfs-audit.xml'] because contents don't match
2016-07-22 05:19:22,294 - XmlConfig['ranger-hdfs-security.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0744, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2016-07-22 05:19:22,314 - Generating config: /usr/hdp/current/hadoop-client/conf/ranger-hdfs-security.xml
2016-07-22 05:19:22,314 - File['/usr/hdp/current/hadoop-client/conf/ranger-hdfs-security.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0744, 'encoding': 'UTF-8'}
2016-07-22 05:19:22,328 - Writing File['/usr/hdp/current/hadoop-client/conf/ranger-hdfs-security.xml'] because contents don't match
2016-07-22 05:19:22,329 - XmlConfig['ranger-policymgr-ssl.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0744, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2016-07-22 05:19:22,348 - Generating config: /usr/hdp/current/hadoop-client/conf/ranger-policymgr-ssl.xml
2016-07-22 05:19:22,348 - File['/usr/hdp/current/hadoop-client/conf/ranger-policymgr-ssl.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0744, 'encoding': 'UTF-8'}
2016-07-22 05:19:22,360 - Writing File['/usr/hdp/current/hadoop-client/conf/ranger-policymgr-ssl.xml'] because contents don't match
2016-07-22 05:19:22,361 - Execute[('/usr/hdp/2.3.4.7-4/ranger-hdfs-plugin/ranger_credential_helper.py', '-l', '/usr/hdp/2.3.4.7-4/ranger-hdfs-plugin/install/lib/*', '-f', '/etc/ranger/hdpground_hadoop/cred.jceks', '-k', 'auditDBCred', '-v', [PROTECTED], '-c', '1')] {'logoutput': True, 'environment': {'JAVA_HOME': '/usr/jdk64/jdk1.8.0_60'}, 'sudo': True}
Using Java:/usr/jdk64/jdk1.8.0_60/bin/java
Alias auditDBCred created successfully!
2016-07-22 05:19:24,285 - Execute[('/usr/hdp/2.3.4.7-4/ranger-hdfs-plugin/ranger_credential_helper.py', '-l', '/usr/hdp/2.3.4.7-4/ranger-hdfs-plugin/install/lib/*', '-f', '/etc/ranger/hdpground_hadoop/cred.jceks', '-k', 'sslKeyStore', '-v', [PROTECTED], '-c', '1')] {'logoutput': True, 'environment': {'JAVA_HOME': '/usr/jdk64/jdk1.8.0_60'}, 'sudo': True}
Using Java:/usr/jdk64/jdk1.8.0_60/bin/java
Alias sslKeyStore created successfully!
2016-07-22 05:19:25,936 - Execute[('/usr/hdp/2.3.4.7-4/ranger-hdfs-plugin/ranger_credential_helper.py', '-l', '/usr/hdp/2.3.4.7-4/ranger-hdfs-plugin/install/lib/*', '-f', '/etc/ranger/hdpground_hadoop/cred.jceks', '-k', 'sslTrustStore', '-v', [PROTECTED], '-c', '1')] {'logoutput': True, 'environment': {'JAVA_HOME': '/usr/jdk64/jdk1.8.0_60'}, 'sudo': True}
Using Java:/usr/jdk64/jdk1.8.0_60/bin/java
Alias sslTrustStore created successfully!
2016-07-22 05:19:27,590 - File['/etc/ranger/hdpground_hadoop/cred.jceks'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0640}
2016-07-22 05:19:27,590 - Changing owner for /etc/ranger/hdpground_hadoop/cred.jceks from 0 to hdfs
2016-07-22 05:19:27,590 - Changing group for /etc/ranger/hdpground_hadoop/cred.jceks from 0 to hadoop
2016-07-22 05:19:27,591 - Changing permission for /etc/ranger/hdpground_hadoop/cred.jceks from 700 to 640
2016-07-22 05:19:27,593 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2016-07-22 05:19:27,594 - Options for start command are: 
2016-07-22 05:19:27,594 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2016-07-22 05:19:27,595 - Changing owner for /var/run/hadoop from 0 to hdfs
2016-07-22 05:19:27,595 - Changing group for /var/run/hadoop from 0 to hadoop
2016-07-22 05:19:27,595 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
2016-07-22 05:19:27,595 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
2016-07-22 05:19:27,596 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2016-07-22 05:19:27,605 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] 

2016-07-22 05:19:27,605 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
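For reference, the failing start can be reproduced outside Ambari to surface the underlying error. A minimal sketch (run as root), with the command and paths taken verbatim from the Execute line above:

# Re-run the same daemon start that Ambari issues, as the hdfs user
su - hdfs -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'

# hadoop-daemon.sh redirects startup errors to the .out file named in the error above
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-ansari3.hashmap.net.out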

3 Replies

@ANSARI FAHEEM AHMED

Please share /var/log/hadoop/hdfs/hadoop-hdfs-namenode-ansari3.hashmap.net.out and /var/log/hadoop/hdfs/hadoop-hdfs-namenode-ansari3.hashmap.net.log
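For example, to capture the tail of each (a sketch; paths are taken from the error output above, adjust the line count as needed):

tail -n 200 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-ansari3.hashmap.net.out
tail -n 200 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-ansari3.hashmap.net.log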


(attached screenshot: logwindow.jpg) This is a screenshot of the ansari3.hashmap.net.out log.

This looks good. Please share the other log as well.
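A quick HA health check may also help while gathering the logs. A sketch, assuming the logical NameNode IDs are nn1 and nn2 (the real IDs are whatever dfs.ha.namenodes.<nameservice> lists in hdfs-site.xml):

# Report the active/standby state of each NameNode
sudo -u hdfs hdfs haadmin -getServiceState nn1
sudo -u hdfs hdfs haadmin -getServiceState nn2

# If the .log shows an uninitialized or empty name directory on the second NameNode,
# re-syncing it from the active is the usual fix (run only on the standby host):
sudo -u hdfs hdfs namenode -bootstrapStandby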