Member since: 04-08-2016
Posts: 25
Kudos Received: 2
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 14929 | 07-18-2016 03:42 PM |
11-30-2016 03:10 PM
Hi @Avijeet Dash, the problem is that I have NiFi only on my laptop and not on my cluster. How does NiFi on my local Mac get access to the conf files? Here is the error...
11-30-2016 03:08 PM
@Greg Keys I tried to change it, but it didn't work. Look at the comment I left above.
11-30-2016 05:59 AM
I want to stream data into HDFS. I have NiFi running on my laptop, and I have files that I want to transfer to HDFS on HDP running on AWS. What is the correct process? I was trying to use the GetFile processor and PutHDFS to move them, but I couldn't get PutHDFS to work. I don't have Kerberos, so how do I log into HDFS?
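For reference, PutHDFS on a laptop needs the cluster's core-site.xml and hdfs-site.xml copied locally and listed in the processor's Hadoop Configuration Resources property; without Kerberos, HDFS falls back to simple authentication, where the client just declares a user name. Below is a minimal sketch of that idea, checking connectivity from the laptop over WebHDFS rather than through NiFi (the host, paths, file names, and user are assumptions, and the NameNode and DataNode ports must be reachable from the laptop):

```python
# Hypothetical connectivity check: write a file to HDFS over WebHDFS using simple
# (non-Kerberos) authentication, i.e. just passing user.name on the request.
import requests

NAMENODE = "http://<namenode-public-dns>:50070"   # assumed WebHDFS endpoint on HDP
HDFS_PATH = "/user/ec2-user/test.txt"             # assumed target path in HDFS
USER = "ec2-user"                                 # assumed simple-auth user name

# Step 1: the NameNode answers the CREATE with a redirect to a DataNode.
r = requests.put(
    NAMENODE + "/webhdfs/v1" + HDFS_PATH,
    params={"op": "CREATE", "user.name": USER, "overwrite": "true"},
    allow_redirects=False,
)
datanode_url = r.headers["Location"]

# Step 2: send the file contents to the DataNode URL from the redirect.
with open("local-file.txt", "rb") as f:
    resp = requests.put(datanode_url, data=f)
print(resp.status_code)  # 201 means the file was written
```

If this works, pointing PutHDFS at the same core-site.xml/hdfs-site.xml should let it write from the laptop as well.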
Labels:
- Apache NiFi
11-29-2016 07:41 PM
@Scott Shaw It's still not working...
11-29-2016 05:11 PM
hive-error.txt I installed HDP 2.5 and tried to run Hive, but I'm getting the error attached. After that I ran the following commands to grant permissions, but it still isn't working. What should I do?
sudo -u hdfs hdfs dfs -mkdir /user/ec2-user
sudo -u hdfs hdfs dfs -chown -R ec2-user:hdfs /user/ec2-user
Labels:
- Apache Hive
11-23-2016 07:11 PM
Hi @Brandon Wilson, Thank you!
11-23-2016 05:18 PM
After shutting down the servers, the DataNode data directories become unmounted and I get the error below. stderr:
2016-11-23 14:37:57,967 -
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
Directory /hadoopfs/fs1/hdfs/datanode became unmounted from /hadoopfs/fs1 . Current mount point: / . Directory /hadoopfs/fs2/hdfs/datanode became unmounted from /hadoopfs/fs2 . Current mount point: / . Please ensure that mounts are healthy. If the mount change was intentional, you can update the contents of /var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist.
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 174, in <module>
DataNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 61, in start
datanode(action="start")
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py", line 68, in datanode
create_log_dir=True
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 269, in service
Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 1. starting datanode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-10-0-1-104.out
stdout:
2016-11-23 14:37:57,213 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-23 14:37:57,214 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-23 14:37:57,214 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-23 14:37:57,235 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-23 14:37:57,235 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-23 14:37:57,255 - checked_call returned (0, '')
2016-11-23 14:37:57,255 - Ensuring that hadoop has the correct symlink structure
2016-11-23 14:37:57,255 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-23 14:37:57,379 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-23 14:37:57,379 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-23 14:37:57,380 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-23 14:37:57,400 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-23 14:37:57,400 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-23 14:37:57,420 - checked_call returned (0, '')
2016-11-23 14:37:57,421 - Ensuring that hadoop has the correct symlink structure
2016-11-23 14:37:57,421 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-23 14:37:57,422 - Group['hadoop'] {}
2016-11-23 14:37:57,423 - Group['users'] {}
2016-11-23 14:37:57,424 - Group['spark'] {}
2016-11-23 14:37:57,424 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 14:37:57,424 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 14:37:57,425 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 14:37:57,425 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-23 14:37:57,426 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 14:37:57,427 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-23 14:37:57,427 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 14:37:57,428 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 14:37:57,428 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 14:37:57,429 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 14:37:57,429 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 14:37:57,430 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-11-23 14:37:57,431 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-11-23 14:37:57,435 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-11-23 14:37:57,435 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2016-11-23 14:37:57,436 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-11-23 14:37:57,437 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-11-23 14:37:57,440 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-11-23 14:37:57,440 - Group['hdfs'] {}
2016-11-23 14:37:57,440 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2016-11-23 14:37:57,441 - FS Type:
2016-11-23 14:37:57,441 - Directory['/etc/hadoop'] {'mode': 0755}
2016-11-23 14:37:57,456 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-11-23 14:37:57,456 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2016-11-23 14:37:57,468 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-11-23 14:37:57,472 - Skipping Execute[('setenforce', '0')] due to not_if
2016-11-23 14:37:57,473 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2016-11-23 14:37:57,474 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2016-11-23 14:37:57,475 - Changing owner for /var/run/hadoop from 508 to root
2016-11-23 14:37:57,475 - Changing group for /var/run/hadoop from 503 to root
2016-11-23 14:37:57,475 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2016-11-23 14:37:57,479 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-11-23 14:37:57,481 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-11-23 14:37:57,482 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-11-23 14:37:57,495 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-11-23 14:37:57,495 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-11-23 14:37:57,497 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-11-23 14:37:57,501 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-11-23 14:37:57,504 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-11-23 14:37:57,685 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-23 14:37:57,685 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-23 14:37:57,686 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-23 14:37:57,706 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-23 14:37:57,706 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-23 14:37:57,727 - checked_call returned (0, '')
2016-11-23 14:37:57,727 - Ensuring that hadoop has the correct symlink structure
2016-11-23 14:37:57,728 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-23 14:37:57,729 - Stack Feature Version Info: stack_version=2.5, version=2.5.0.0-1245, current_cluster_version=2.5.0.0-1245 -> 2.5.0.0-1245
2016-11-23 14:37:57,731 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-23 14:37:57,731 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-23 14:37:57,731 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-23 14:37:57,751 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-23 14:37:57,751 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-23 14:37:57,771 - checked_call returned (0, '')
2016-11-23 14:37:57,772 - Ensuring that hadoop has the correct symlink structure
2016-11-23 14:37:57,772 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-23 14:37:57,778 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1}
2016-11-23 14:37:57,801 - checked_call returned (0, '2.5.0.0-1245', '')
2016-11-23 14:37:57,805 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2016-11-23 14:37:57,811 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2016-11-23 14:37:57,812 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-11-23 14:37:57,822 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2016-11-23 14:37:57,823 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-23 14:37:57,831 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-11-23 14:37:57,840 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2016-11-23 14:37:57,840 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-23 14:37:57,846 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2016-11-23 14:37:57,846 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2016-11-23 14:37:57,855 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2016-11-23 14:37:57,855 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-23 14:37:57,861 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-11-23 14:37:57,869 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2016-11-23 14:37:57,869 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-23 14:37:57,875 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {'final': {'dfs.datanode.failed.volumes.tolerated': 'true', 'dfs.namenode.http-address': 'true', 'dfs.namenode.name.dir': 'true', 'dfs.support.append': 'true', 'dfs.webhdfs.enabled': 'true'}}, 'configurations': ...}
2016-11-23 14:37:57,884 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2016-11-23 14:37:57,884 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-23 14:37:57,927 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {'final': {'fs.defaultFS': 'true'}}, 'owner': 'hdfs', 'configurations': ...}
2016-11-23 14:37:57,935 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2016-11-23 14:37:57,935 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-11-23 14:37:57,959 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2016-11-23 14:37:57,961 - Directory['/var/lib/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'group': 'hadoop', 'mode': 0751}
2016-11-23 14:37:57,961 - Directory['/var/lib/ambari-agent/data/datanode'] {'create_parents': True, 'mode': 0755}
2016-11-23 14:37:57,964 - Host contains mounts: ['/proc', '/sys', '/', '/dev', '/dev/pts', '/dev/shm', '/proc/sys/fs/binfmt_misc'].
2016-11-23 14:37:57,964 - Mount point for directory /hadoopfs/fs1/hdfs/datanode is /
2016-11-23 14:37:57,964 - Directory /hadoopfs/fs1/hdfs/datanode became unmounted from /hadoopfs/fs1 . Current mount point: / .
2016-11-23 14:37:57,965 - Mount point for directory /hadoopfs/fs2/hdfs/datanode is /
2016-11-23 14:37:57,965 - Directory /hadoopfs/fs2/hdfs/datanode became unmounted from /hadoopfs/fs2 . Current mount point: / .
2016-11-23 14:37:57,967 - Host contains mounts: ['/proc', '/sys', '/', '/dev', '/dev/pts', '/dev/shm', '/proc/sys/fs/binfmt_misc'].
2016-11-23 14:37:57,967 -
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
Directory /hadoopfs/fs1/hdfs/datanode became unmounted from /hadoopfs/fs1 . Current mount point: / . Directory /hadoopfs/fs2/hdfs/datanode became unmounted from /hadoopfs/fs2 . Current mount point: / . Please ensure that mounts are healthy. If the mount change was intentional, you can update the contents of /var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist.
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
2016-11-23 14:37:57,968 - File['/var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist'] {'content': '\n# This file keeps track of the last known mount-point for each dir.\n# It is safe to delete, since it will get regenerated the next time that the component of the service starts.\n# However, it is not advised to delete this file since Ambari may\n# re-create a dir that used to be mounted on a drive but is now mounted on the root.\n# Comments begin with a hash (#) symbol\n# dir,mount_point\n/hadoopfs/fs1/hdfs/datanode,/hadoopfs/fs1\n/hadoopfs/fs2/hdfs/datanode,/hadoopfs/fs2\n', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-11-23 14:37:57,969 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2016-11-23 14:37:57,969 - Changing owner for /var/run/hadoop from 0 to hdfs
2016-11-23 14:37:57,969 - Changing group for /var/run/hadoop from 0 to hadoop
2016-11-23 14:37:57,970 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2016-11-23 14:37:57,970 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2016-11-23 14:37:57,970 - File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'}
2016-11-23 14:37:57,977 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid']
2016-11-23 14:37:57,977 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'}
2016-11-23 14:38:02,037 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'hdfs'}
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-10-0-1-104.out.5 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 60086
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201611211702 <==
2016-11-21T21:04:52.157+0000: 14542.730: [GC2016-11-21T21:04:52.157+0000: 14542.730: [ParNew: 166166K->3490K(184320K), 0.0061870 secs] 175603K->12970K(1028096K), 0.0062860 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
2016-11-21T22:09:57.667+0000: 18448.240: [GC2016-11-21T22:09:57.668+0000: 18448.240: [ParNew: 167330K->1563K(184320K), 0.0055350 secs] 176810K->11047K(1028096K), 0.0056220 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2016-11-21T23:07:31.170+0000: 21901.743: [GC2016-11-21T23:07:31.170+0000: 21901.743: [ParNew: 165403K->2098K(184320K), 0.0054260 secs] 174887K->12135K(1028096K), 0.0055120 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
2016-11-22T00:11:31.862+0000: 25742.434: [GC2016-11-22T00:11:31.862+0000: 25742.434: [ParNew: 165938K->2272K(184320K), 0.0043610 secs] 175975K->12337K(1028096K), 0.0044360 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2016-11-22T01:04:12.388+0000: 28902.960: [GC2016-11-22T01:04:12.388+0000: 28902.960: [ParNew: 166112K->3356K(184320K), 0.0047310 secs] 176177K->13502K(1028096K), 0.0048500 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2016-11-22T02:04:01.698+0000: 32492.271: [GC2016-11-22T02:04:01.698+0000: 32492.271: [ParNew: 167196K->3023K(184320K), 0.0042870 secs] 177342K->13179K(1028096K), 0.0043850 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
2016-11-22T03:13:01.719+0000: 36632.291: [GC2016-11-22T03:13:01.719+0000: 36632.291: [ParNew: 166863K->1551K(184320K), 0.0041630 secs] 177019K->11740K(1028096K), 0.0042590 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2016-11-22T04:16:11.165+0000: 40421.738: [GC2016-11-22T04:16:11.165+0000: 40421.738: [ParNew: 165391K->3437K(184320K), 0.0049510 secs] 175580K->13647K(1028096K), 0.0050430 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2016-11-22T05:10:22.619+0000: 43673.192: [GC2016-11-22T05:10:22.619+0000: 43673.192: [ParNew: 167277K->3095K(184320K), 0.0043240 secs] 177487K->13315K(1028096K), 0.0044160 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2016-11-22T06:14:21.734+0000: 47512.307: [GC2016-11-22T06:14:21.734+0000: 47512.307: [ParNew: 166935K->1890K(184320K), 0.0040030 secs] 177155K->12122K(1028096K), 0.0040920 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
2016-11-22T07:09:31.787+0000: 50822.360: [GC2016-11-22T07:09:31.787+0000: 50822.360: [ParNew: 165730K->3477K(184320K), 0.0043660 secs] 175962K->13732K(1028096K), 0.0044580 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
2016-11-22T08:19:11.991+0000: 55002.564: [GC2016-11-22T08:19:11.991+0000: 55002.564: [ParNew: 167317K->801K(184320K), 0.0037820 secs] 177572K->11068K(1028096K), 0.0038690 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2016-11-22T09:28:12.673+0000: 59143.245: [GC2016-11-22T09:28:12.673+0000: 59143.245: [ParNew: 164641K->744K(184320K), 0.0036110 secs] 174908K->11013K(1028096K), 0.0036950 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
2016-11-22T10:36:11.161+0000: 63221.734: [GC2016-11-22T10:36:11.161+0000: 63221.734: [ParNew: 164584K->808K(184320K), 0.0038280 secs] 174853K->11086K(1028096K), 0.0039210 secs] [Times: user=0.01 sys=0.01, real=0.01 secs]
2016-11-22T11:44:01.722+0000: 67292.294: [GC2016-11-22T11:44:01.722+0000: 67292.294: [ParNew: 164648K->798K(184320K), 0.0038170 secs] 174926K->11093K(1028096K), 0.0039030 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
2016-11-22T12:49:49.635+0000: 71240.208: [GC2016-11-22T12:49:49.635+0000: 71240.208: [ParNew: 164638K->704K(184320K), 0.0035240 secs] 174933K->11015K(1028096K), 0.0036170 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2016-11-22T13:57:23.106+0000: 75293.679: [GC2016-11-22T13:57:23.107+0000: 75293.679: [ParNew: 164544K->644K(184320K), 0.0035160 secs] 174855K->11034K(1028096K), 0.0035990 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2016-11-22T15:03:01.728+0000: 79232.300: [GC2016-11-22T15:03:01.728+0000: 79232.300: [ParNew: 164484K->1268K(184320K), 0.0037900 secs] 174874K->11659K(1028096K), 0.0038790 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2016-11-22T15:59:01.931+0000: 82592.504: [GC2016-11-22T15:59:01.931+0000: 82592.504: [ParNew: 165108K->2462K(184320K), 0.0039180 secs] 175499K->12853K(1028096K), 0.0039920 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2016-11-22T17:00:41.326+0000: 86291.899: [GC2016-11-22T17:00:41.326+0000: 86291.899: [ParNew: 166302K->2193K(184320K), 0.0097560 secs] 176693K->12585K(1028096K), 0.0098640 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
2016-11-22T18:05:41.340+0000: 90191.913: [GC2016-11-22T18:05:41.340+0000: 90191.913: [ParNew: 166033K->1246K(184320K), 0.0057410 secs] 176425K->11638K(1028096K), 0.0058260 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2016-11-22T19:03:37.952+0000: 93668.524: [GC2016-11-22T19:03:37.952+0000: 93668.524: [ParNew: 165086K->2299K(184320K), 0.0039490 secs] 175478K->12691K(1028096K), 0.0040400 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
2016-11-22T19:57:31.998+0000: 96902.571: [GC2016-11-22T19:57:31.999+0000: 96902.571: [ParNew: 166139K->2581K(184320K), 0.0046020 secs] 176531K->12974K(1028096K), 0.0047090 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2016-11-22T21:02:41.329+0000: 100811.902: [GC2016-11-22T21:02:41.329+0000: 100811.902: [ParNew: 166403K->1224K(184320K), 0.0053900 secs] 176796K->11618K(1028096K), 0.0054850 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
2016-11-22T22:11:02.035+0000: 104912.607: [GC2016-11-22T22:11:02.035+0000: 104912.608: [ParNew: 165064K->762K(184320K), 0.0047510 secs] 175458K->11160K(1028096K), 0.0048310 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2016-11-22T23:13:53.628+0000: 108684.200: [GC2016-11-22T23:13:53.628+0000: 108684.200: [ParNew: 164602K->1939K(184320K), 0.0039460 secs] 175000K->12343K(1028096K), 0.0040390 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
2016-11-22T23:53:09.707+0000: 111040.279: [GC2016-11-22T23:53:09.707+0000: 111040.279: [ParNew: 165779K->6475K(184320K), 0.0049720 secs] 176183K->16881K(1028096K), 0.0050680 secs] [Times: user=0.01 sys=0.01, real=0.00 secs]
2016-11-23T00:39:37.378+0000: 113827.951: [GC2016-11-23T00:39:37.378+0000: 113827.951: [ParNew: 170315K->6100K(184320K), 0.0049640 secs] 180721K->16519K(1028096K), 0.0050740 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
2016-11-23T01:32:51.171+0000: 117021.744: [GC2016-11-23T01:32:51.171+0000: 117021.744: [ParNew: 169940K->3884K(184320K), 0.0042920 secs] 180359K->14317K(1028096K), 0.0043760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2016-11-23T02:19:31.162+0000: 119821.734: [GC2016-11-23T02:19:31.162+0000: 119821.734: [ParNew: 167724K->6511K(184320K), 0.0047890 secs] 178157K->16945K(1028096K), 0.0048810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2016-11-23T03:07:12.248+0000: 122682.820: [GC2016-11-23T03:07:12.248+0000: 122682.820: [ParNew: 170351K->5368K(184320K), 0.0053940 secs] 180785K->15805K(1028096K), 0.0054970 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2016-11-23T04:18:41.329+0000: 126971.902: [GC2016-11-23T04:18:41.329+0000: 126971.902: [ParNew: 169208K->977K(184320K), 0.0064180 secs] 179645K->11416K(1028096K), 0.0064970 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
2016-11-23T05:28:47.965+0000: 131178.537: [GC2016-11-23T05:28:47.965+0000: 131178.537: [ParNew: 164817K->967K(184320K), 0.0041370 secs] 175256K->11414K(1028096K), 0.0042250 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
Heap
par new generation total 184320K, used 67749K [0x00000000b0000000, 0x00000000bc800000, 0x00000000bc800000)
eden space 163840K, 40% used [0x00000000b0000000, 0x00000000b41375f0, 0x00000000ba000000)
from space 20480K, 4% used [0x00000000ba000000, 0x00000000ba0f1fe0, 0x00000000bb400000)
to space 20480K, 0% used [0x00000000bb400000, 0x00000000bb400000, 0x00000000bc800000)
concurrent mark-sweep generation total 843776K, used 10446K [0x00000000bc800000, 0x00000000f0000000, 0x00000000f0000000)
concurrent-mark-sweep perm gen total 131072K, used 40387K [0x00000000f0000000, 0x00000000f8000000, 0x0000000100000000)
==> /var/log/hadoop/hdfs/gc.log-201611231430 <==
OpenJDK 64-Bit Server VM (24.121-b00) for linux-amd64 JRE (1.7.0_121-b00), built on Nov 18 2016 00:22:36 by "mockbuild" with gcc 4.8.3 20140911 (Red Hat 4.8.3-9)
Memory: 4k page, physical 15403944k(15060448k free), swap 0k(0k free)
CommandLine flags: -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=209715200 -XX:MaxPermSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:PermSize=134217728 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
Heap
par new generation total 184320K, used 108311K [0x00000000b0000000, 0x00000000bc800000, 0x00000000bc800000)
eden space 163840K, 66% used [0x00000000b0000000, 0x00000000b69c5c00, 0x00000000ba000000)
from space 20480K, 0% used [0x00000000ba000000, 0x00000000ba000000, 0x00000000bb400000)
to space 20480K, 0% used [0x00000000bb400000, 0x00000000bb400000, 0x00000000bc800000)
concurrent mark-sweep generation total 843776K, used 0K [0x00000000bc800000, 0x00000000f0000000, 0x00000000f0000000)
concurrent-mark-sweep perm gen total 131072K, used 14021K [0x00000000f0000000, 0x00000000f8000000, 0x0000000100000000)
==> /var/log/hadoop/hdfs/gc.log-201611231429 <==
OpenJDK 64-Bit Server VM (24.121-b00) for linux-amd64 JRE (1.7.0_121-b00), built on Nov 18 2016 00:22:36 by "mockbuild" with gcc 4.8.3 20140911 (Red Hat 4.8.3-9)
Memory: 4k page, physical 15403944k(15118720k free), swap 0k(0k free)
CommandLine flags: -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=209715200 -XX:MaxPermSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:PermSize=134217728 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
Heap
par new generation total 184320K, used 108295K [0x00000000b0000000, 0x00000000bc800000, 0x00000000bc800000)
eden space 163840K, 66% used [0x00000000b0000000, 0x00000000b69c1e90, 0x00000000ba000000)
from space 20480K, 0% used [0x00000000ba000000, 0x00000000ba000000, 0x00000000bb400000)
to space 20480K, 0% used [0x00000000bb400000, 0x00000000bb400000, 0x00000000bc800000)
concurrent mark-sweep generation total 843776K, used 0K [0x00000000bc800000, 0x00000000f0000000, 0x00000000f0000000)
concurrent-mark-sweep perm gen total 131072K, used 14012K [0x00000000f0000000, 0x00000000f8000000, 0x0000000100000000)
==> /var/log/hadoop/hdfs/gc.log-201611231434 <==
OpenJDK 64-Bit Server VM (24.121-b00) for linux-amd64 JRE (1.7.0_121-b00), built on Nov 18 2016 00:22:36 by "mockbuild" with gcc 4.8.3 20140911 (Red Hat 4.8.3-9)
Memory: 4k page, physical 15403944k(14744240k free), swap 0k(0k free)
CommandLine flags: -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=209715200 -XX:MaxPermSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:PermSize=134217728 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
Heap
par new generation total 184320K, used 108310K [0x00000000b0000000, 0x00000000bc800000, 0x00000000bc800000)
eden space 163840K, 66% used [0x00000000b0000000, 0x00000000b69c58c8, 0x00000000ba000000)
from space 20480K, 0% used [0x00000000ba000000, 0x00000000ba000000, 0x00000000bb400000)
to space 20480K, 0% used [0x00000000bb400000, 0x00000000bb400000, 0x00000000bc800000)
concurrent mark-sweep generation total 843776K, used 0K [0x00000000bc800000, 0x00000000f0000000, 0x00000000f0000000)
concurrent-mark-sweep perm gen total 131072K, used 14019K [0x00000000f0000000, 0x00000000f8000000, 0x0000000100000000)
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-10-0-1-104.out.4 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 60086
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201611231437 <==
OpenJDK 64-Bit Server VM (24.121-b00) for linux-amd64 JRE (1.7.0_121-b00), built on Nov 18 2016 00:22:36 by "mockbuild" with gcc 4.8.3 20140911 (Red Hat 4.8.3-9)
Memory: 4k page, physical 15403944k(14739724k free), swap 0k(0k free)
CommandLine flags: -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=209715200 -XX:MaxPermSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:PermSize=134217728 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
Heap
par new generation total 184320K, used 108295K [0x00000000b0000000, 0x00000000bc800000, 0x00000000bc800000)
eden space 163840K, 66% used [0x00000000b0000000, 0x00000000b69c1ca8, 0x00000000ba000000)
from space 20480K, 0% used [0x00000000ba000000, 0x00000000ba000000, 0x00000000bb400000)
to space 20480K, 0% used [0x00000000bb400000, 0x00000000bb400000, 0x00000000bc800000)
concurrent mark-sweep generation total 843776K, used 0K [0x00000000bc800000, 0x00000000f0000000, 0x00000000f0000000)
concurrent-mark-sweep perm gen total 131072K, used 14021K [0x00000000f0000000, 0x00000000f8000000, 0x0000000100000000)
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-10-0-1-104.out <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 60086
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/SecurityAuth.audit <==
2016-11-23 02:10:46,450 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:10:46,457 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:11:46,454 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:11:46,458 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:12:46,443 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:16:46,443 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:18:46,459 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:19:46,438 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:19:46,456 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:19:46,456 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:20:46,443 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:20:46,444 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:20:46,445 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:22:46,439 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:23:46,455 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:24:46,437 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:24:46,447 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:25:46,449 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:26:46,437 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:27:46,459 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:28:46,438 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:28:46,449 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:29:46,437 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:31:46,440 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:32:46,456 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:33:46,438 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:34:46,445 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:35:46,435 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:36:46,437 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:39:46,435 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:41:46,447 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:42:46,435 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:43:46,438 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:43:46,447 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:44:46,437 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:45:46,436 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:46:46,438 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 02:46:46,451 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 06:33:25,344 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2016-11-23 06:34:25,322 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-10-0-1-104.out.3 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 60086
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-10-0-1-104.out.1 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 60086
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-10-0-1-104.out.2 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 60086
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-10-0-1-104.log <==
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:850)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:614)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:422)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:139)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2479)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2521)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2503)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2395)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2442)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2623)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2647)
2016-11-23 14:37:59,749 WARN datanode.DataNode (DataNode.java:checkStorageLocations(2524)) - Invalid dfs.datanode.data.dir /hadoopfs/fs2/hdfs/datanode :
java.io.FileNotFoundException: File file:/hadoopfs/fs2/hdfs/datanode does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:624)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:850)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:614)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:422)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:139)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2479)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2521)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2503)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2395)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2442)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2623)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2647)
2016-11-23 14:37:59,749 ERROR datanode.DataNode (DataNode.java:secureMain(2630)) - Exception in secureMain
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/hadoopfs/fs1/hdfs/datanode" "/hadoopfs/fs2/hdfs/datanode"
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2530)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2503)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2395)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2442)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2623)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2647)
2016-11-23 14:37:59,751 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2016-11-23 14:37:59,753 INFO datanode.DataNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ip-10-0-1-104.us-west-2.compute.internal/10.0.1.104
************************************************************/
==> /var/log/hadoop/hdfs/hdfs-audit.log <==
==> /var/log/hadoop/hdfs/gc.log-201611230628 <==
OpenJDK 64-Bit Server VM (24.121-b00) for linux-amd64 JRE (1.7.0_121-b00), built on Nov 18 2016 00:22:36 by "mockbuild" with gcc 4.8.3 20140911 (Red Hat 4.8.3-9)
Memory: 4k page, physical 15403944k(10532916k free), swap 0k(0k free)
CommandLine flags: -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=209715200 -XX:MaxPermSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:PermSize=134217728 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2016-11-23T06:28:20.553+0000: 2.298: [GC2016-11-23T06:28:20.553+0000: 2.299: [ParNew: 163840K->13323K(184320K), 0.0193080 secs] 163840K->13323K(1028096K), 0.0194190 secs] [Times: user=0.05 sys=0.00, real=0.02 secs]
2016-11-23T06:29:40.990+0000: 82.735: [GC2016-11-23T06:29:40.990+0000: 82.735: [ParNew: 177163K->14141K(184320K), 0.0336240 secs] 177163K->18391K(1028096K), 0.0337210 secs] [Times: user=0.10 sys=0.01, real=0.03 secs]
Heap
par new generation total 184320K, used 86126K [0x00000000b0000000, 0x00000000bc800000, 0x00000000bc800000)
eden space 163840K, 43% used [0x00000000b0000000, 0x00000000b464c658, 0x00000000ba000000)
from space 20480K, 69% used [0x00000000ba000000, 0x00000000badcf498, 0x00000000bb400000)
to space 20480K, 0% used [0x00000000bb400000, 0x00000000bb400000, 0x00000000bc800000)
concurrent mark-sweep generation total 843776K, used 4250K [0x00000000bc800000, 0x00000000f0000000, 0x00000000f0000000)
concurrent-mark-sweep perm gen total 131072K, used 34364K [0x00000000f0000000, 0x00000000f8000000, 0x0000000100000000)
==> /var/log/hadoop/hdfs/gc.log-201611230552 <==
OpenJDK 64-Bit Server VM (24.121-b00) for linux-amd64 JRE (1.7.0_121-b00), built on Nov 18 2016 00:22:36 by "mockbuild" with gcc 4.8.3 20140911 (Red Hat 4.8.3-9)
Memory: 4k page, physical 15403944k(9388860k free), swap 0k(0k free)
CommandLine flags: -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=209715200 -XX:MaxPermSize=268435456 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:PermSize=134217728 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2016-11-23T05:52:31.904+0000: 2.287: [GC2016-11-23T05:52:31.904+0000: 2.287: [ParNew: 163840K->13341K(184320K), 0.0192840 secs] 163840K->13341K(1028096K), 0.0193830 secs] [Times: user=0.04 sys=0.02, real=0.02 secs]
2016-11-23T05:54:12.618+0000: 103.001: [GC2016-11-23T05:54:12.618+0000: 103.001: [ParNew: 177181K->13401K(184320K), 0.0376900 secs] 177181K->17652K(1028096K), 0.0378070 secs] [Times: user=0.10 sys=0.01, real=0.04 secs]
Heap
par new generation total 184320K, used 137569K [0x00000000b0000000, 0x00000000bc800000, 0x00000000bc800000)
eden space 163840K, 75% used [0x00000000b0000000, 0x00000000b7941db8, 0x00000000ba000000)
from space 20480K, 65% used [0x00000000ba000000, 0x00000000bad16658, 0x00000000bb400000)
to space 20480K, 0% used [0x00000000bb400000, 0x00000000bb400000, 0x00000000bc800000)
concurrent mark-sweep generation total 843776K, used 4250K [0x00000000bc800000, 0x00000000f0000000, 0x00000000f0000000)
concurrent-mark-sweep perm gen total 131072K, used 32915K [0x00000000f0000000, 0x00000000f8000000, 0x0000000100000000)
Command failed after 1 tries
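The repeated warnings above come from Ambari comparing each dfs.datanode.data.dir with the mount point recorded in /var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist; since the volumes were not remounted after the shutdown, both directories now resolve to the root mount. A rough sketch of that check, based on the dir,mount_point format shown in the log (the helper function is hypothetical, not Ambari's actual code):

```python
# Hypothetical re-creation of the mount check behind the warning above: read the
# last-known mount point for each DataNode dir and compare it with the current one.
import os

HIST_FILE = "/var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist"

def current_mount_point(path):
    # Walk up from the path until os.path.ismount reports a mount point;
    # a missing directory ends up resolving to "/", as in the log above.
    path = os.path.abspath(path)
    while not os.path.ismount(path):
        path = os.path.dirname(path)
    return path

with open(HIST_FILE) as hist:
    for line in hist:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # comments begin with a hash, per the file header
        data_dir, last_mount = line.split(",")
        now = current_mount_point(data_dir)
        if now != last_mount:
            print("WARNING: %s was on %s, is now on %s" % (data_dir, last_mount, now))
```

Remounting /hadoopfs/fs1 and /hadoopfs/fs2 (or, if the change was intentional, recreating the datanode directories and updating the hist file as the warning suggests) is what clears this check.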
Labels:
- Apache Hadoop
07-25-2016 03:18 PM
I want to make sure all duplicate values in a certain column get the same primary key assigned to them; zipWithIndex doesn't guarantee that.
07-25-2016 02:40 PM
What is the best way to assign a sequence number (surrogate key) in PySpark to a Hive table that is inserted into all the time from various data sources after transformations? This key will be used as a primary key. Can I use an accumulator, or is there a better way?
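One common approach, sketched below under assumed table and column names: number the distinct values of the business column once and join those keys back, so every duplicate value carries the same surrogate key, which zipWithIndex over the full table does not guarantee.

```python
# Hypothetical sketch: give every row that shares the same value in `customer_id`
# the same surrogate key by keying the distinct values rather than the raw rows.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

df = spark.table("staging_table")      # assumed incoming Hive table
business_col = "customer_id"           # assumed natural-key column

# Number each distinct value once; duplicates of a value all map to this one index.
keys = (
    df.select(business_col).distinct().rdd
      .zipWithIndex()
      .map(lambda pair: (pair[0][0], pair[1]))
      .toDF([business_col, "surrogate_key"])
)

# Join the keys back so duplicate business values share the same surrogate key.
result = df.join(keys, on=business_col, how="left")
result.write.mode("append").saveAsTable("target_table")   # assumed target table
```

For ongoing loads you would typically offset the new indexes by the current maximum key already in the target table so the sequence keeps growing instead of restarting at zero.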
Labels:
- Apache Spark