stderr: /var/lib/ambari-agent/data/errors-279.txt

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 167, in <module>
    DataNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 62, in start
    datanode(action="start")
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py", line 72, in datanode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 267, in service
    Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 1. starting datanode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-datanode-whqlshdp01-n01.out
JVMJ9VM007E Command-line option unrecognised: -Xloggc:/var/log/hadoop/hdfs/gc.log-201608121113
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
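Note: JVMJ9VM007E is an IBM J9 JVM error code, and -Xloggc is a HotSpot-specific option (the gc.log-<timestamp> filename matches the -Xloggc entry that the stock HDP hadoop-env.sh template places in HADOOP_DATANODE_OPTS), so the DataNode JVM aborts before it can start. J9 generally tolerates unknown -XX: options but rejects unknown -X options, which is why this single flag is fatal. The usual remedy is to point JAVA_HOME at a HotSpot (Oracle/OpenJDK) build; if J9 must stay, a minimal hadoop-env.sh guard might look like the sketch below. This is a sketch under assumptions, not a verified fix: the variable name assumes the stock HDP template, and -Xverbosegclog is J9's own GC-log switch.

    # Sketch only: translate the HotSpot GC-log flag when the active JVM is IBM J9.
    # Assumes HADOOP_DATANODE_OPTS already contains "-Xloggc:<file>" as in the
    # stock HDP hadoop-env.sh template; verify against your own template first.
    if "$JAVA_HOME/bin/java" -version 2>&1 | grep -qi 'IBM J9'; then
      # J9 rejects -Xloggc (JVMJ9VM007E); its GC-log option is -Xverbosegclog
      HADOOP_DATANODE_OPTS="${HADOOP_DATANODE_OPTS//-Xloggc:/-Xverbosegclog:}"
    fi
    export HADOOP_DATANODE_OPTS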
stdout: /var/lib/ambari-agent/data/output-279.txt

2016-08-12 11:13:17,740 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.2.0-258
2016-08-12 11:13:17,740 - Checking if need to create versioned conf dir /etc/hadoop/2.4.2.0-258/0
2016-08-12 11:13:17,740 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-08-12 11:13:17,778 - call returned (1, '/etc/hadoop/2.4.2.0-258/0 exist already', '')
2016-08-12 11:13:17,778 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-08-12 11:13:17,817 - checked_call returned (0, '')
2016-08-12 11:13:17,817 - Ensuring that hadoop has the correct symlink structure
2016-08-12 11:13:17,817 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-08-12 11:13:17,906 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.2.0-258
2016-08-12 11:13:17,906 - Checking if need to create versioned conf dir /etc/hadoop/2.4.2.0-258/0
2016-08-12 11:13:17,907 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-08-12 11:13:17,939 - call returned (1, '/etc/hadoop/2.4.2.0-258/0 exist already', '')
2016-08-12 11:13:17,940 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-08-12 11:13:17,976 - checked_call returned (0, '')
2016-08-12 11:13:17,976 - Ensuring that hadoop has the correct symlink structure
2016-08-12 11:13:17,976 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-08-12 11:13:17,977 - Group['hadoop'] {}
2016-08-12 11:13:17,979 - Group['users'] {}
2016-08-12 11:13:17,979 - Group['knox'] {}
2016-08-12 11:13:17,980 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-08-12 11:13:17,981 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-08-12 11:13:17,982 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-08-12 11:13:17,982 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-08-12 11:13:17,983 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-08-12 11:13:17,983 - Modifying user hdfs
2016-08-12 11:13:18,001 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-08-12 11:13:18,003 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-08-12 11:13:18,004 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-08-12 11:13:18,005 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-08-12 11:13:18,007 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-08-12 11:13:18,011 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-08-12 11:13:18,021 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-08-12 11:13:18,021 - Group['hdfs'] {}
2016-08-12 11:13:18,022 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs', 'hdfs']}
2016-08-12 11:13:18,023 - Modifying user hdfs
2016-08-12 11:13:18,045 - FS Type:
2016-08-12 11:13:18,046 - Directory['/etc/hadoop'] {'mode': 0755}
2016-08-12 11:13:18,063 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-08-12 11:13:18,064 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-08-12 11:13:18,079 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-08-12 11:13:18,088 - Skipping Execute[('setenforce', '0')] due to not_if
2016-08-12 11:13:18,089 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-08-12 11:13:18,094 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-08-12 11:13:18,096 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-08-12 11:13:18,103 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-08-12 11:13:18,105 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-08-12 11:13:18,105 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-08-12 11:13:18,118 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-08-12 11:13:18,118 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-08-12 11:13:18,119 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-08-12 11:13:18,124 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-08-12 11:13:18,131 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-08-12 11:13:18,273 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.2.0-258
2016-08-12 11:13:18,273 - Checking if need to create versioned conf dir /etc/hadoop/2.4.2.0-258/0
2016-08-12 11:13:18,273 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-08-12 11:13:18,309 - call returned (1, '/etc/hadoop/2.4.2.0-258/0 exist already', '')
2016-08-12 11:13:18,309 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-08-12 11:13:18,348 - checked_call returned (0, '')
2016-08-12 11:13:18,348 - Ensuring that hadoop has the correct symlink structure
2016-08-12 11:13:18,348 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-08-12 11:13:18,353 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.2.0-258
2016-08-12 11:13:18,353 - Checking if need to create versioned conf dir /etc/hadoop/2.4.2.0-258/0
2016-08-12 11:13:18,353 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-08-12 11:13:18,388 - call returned (1, '/etc/hadoop/2.4.2.0-258/0 exist already', '')
2016-08-12 11:13:18,389 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-08-12 11:13:18,428 - checked_call returned (0, '')
2016-08-12 11:13:18,428 - Ensuring that hadoop has the correct symlink structure
2016-08-12 11:13:18,428 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-08-12 11:13:18,436 - Directory['/etc/security/limits.d'] {'owner': 'root', 'group': 'root', 'recursive': True}
2016-08-12 11:13:18,444 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2016-08-12 11:13:18,445 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-08-12 11:13:18,455 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2016-08-12 11:13:18,455 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-08-12 11:13:18,463 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-08-12 11:13:18,471 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2016-08-12 11:13:18,472 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-08-12 11:13:18,477 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-08-12 11:13:18,478 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2016-08-12 11:13:18,486 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2016-08-12 11:13:18,486 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-08-12 11:13:18,492 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-08-12 11:13:18,501 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2016-08-12 11:13:18,501 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-08-12 11:13:18,507 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-08-12 11:13:18,515 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2016-08-12 11:13:18,515 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-08-12 11:13:18,556 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2016-08-12 11:13:18,564 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2016-08-12 11:13:18,564 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-08-12 11:13:18,579 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2016-08-12 11:13:18,580 - Directory['/var/lib/hadoop-hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0751, 'recursive': True}
2016-08-12 11:13:18,587 - Host contains mounts: ['/', '/proc', '/sys', '/sys/kernel/debug', '/dev', '/dev/shm', '/dev/pts', '/boot', '/dzl41', '/home', '/home/redwood', '/opt', '/opt/splunk', '/storix', '/tmp', '/usr', '/var', '/sys/fs/fuse/connections', '/sys/kernel/security', '/proc/sys/fs/binfmt_misc', '/hadoop', '/snapshottable'].
2016-08-12 11:13:18,588 - Mount point for directory /hadoop/hadoop/hdfs/data is /hadoop
2016-08-12 11:13:18,589 - File['/var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist'] {'content': '\n# This file keeps track of the last known mount-point for each DFS data dir.\n# It is safe to delete, since it will get regenerated the next time that the DataNode starts.\n# However, it is not advised to delete this file since Ambari may\n# re-create a DFS data dir that used to be mounted on a drive but is now mounted on the root.\n# Comments begin with a hash (#) symbol\n# data_dir,mount_point\n/hadoop/hadoop/hdfs/data,/hadoop\n', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-08-12 11:13:18,592 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2016-08-12 11:13:18,592 - Changing owner for /var/run/hadoop from 0 to hdfs
2016-08-12 11:13:18,592 - Changing group for /var/run/hadoop from 0 to hadoop
2016-08-12 11:13:18,592 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
2016-08-12 11:13:18,592 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
2016-08-12 11:13:18,593 - File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'}
2016-08-12 11:13:18,613 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid']
2016-08-12 11:13:18,614 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'}
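To confirm the diagnosis before changing anything, one could check which JVM the hdfs user actually launches and where the rejected flag comes from. The paths below are taken from the log above; the one assumption is that the `java` on hdfs's PATH is the same binary hadoop-daemon.sh picks up:

    # Expect an "IBM J9 VM" banner if the JVMJ9VM007E diagnosis is correct
    ambari-sudo.sh su hdfs -l -s /bin/bash -c 'java -version'
    # Locate the HotSpot-only GC-logging flag in the Ambari-managed config
    grep -n 'Xloggc' /usr/hdp/current/hadoop-client/conf/hadoop-env.sh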