Member since: 08-17-2016
Posts: 19
Kudos Received: 0
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 12034 | 04-18-2017 10:01 AM
 | 3580 | 03-01-2017 08:27 AM
07-19-2018
10:01 AM
Thanks for your reply. I used the root account to connect to the Oozie DB; this account has full privileges to execute everything in the DB. The strange thing is that Oozie works fine without any problem.
07-19-2018
09:59 AM
Thanks for your information. Yes, I totally agree with what you said, and I had already executed these commands before the installation:
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '***' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY '***' WITH GRANT OPTION;
FLUSH PRIVILEGES;
The problem still exists, but the functionality is OK.
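For reference, a quick way to double-check which grants MySQL actually holds for those accounts (a minimal sketch, assuming the mysql client is available on the Oozie server and root can log in):
mysql -uroot -p -e "SELECT user, host FROM mysql.user WHERE user='root';"
mysql -uroot -p -e "SHOW GRANTS FOR 'root'@'%';"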
07-19-2018
09:56 AM
Thanks for your reply. I am using the root account, which has full privileges, to connect to the Oozie database. I tried connecting to the database from the command line and everything is OK. I also checked the DB configuration to verify that the tables were generated during the installation; everything is fine there too. The Oozie service check passes and jobs submitted from Hue run fine. I don't know why this error appears in the log file.
07-19-2018
01:55 AM
The Ambari service check reports everything is OK, but when I add the SMTP mail server it throws some error information.
07-19-2018
01:47 AM
After installing HDP 2.6.3 and Ambari 2.6.2.2, I found some strange information about a database connection problem in oozie-error.log:
-----------------
2018-07-19 09:29:17,244 ERROR SchemaCheckXCommand:517 - SERVER[hadoopS4] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] An Exception occured while talking to the database: Access denied for user 'root'@'hadoopS4' (using password: YES)
java.sql.SQLException: Access denied for user 'root'@'hadoopS4' (using password: YES)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:959)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3870)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3806)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:871)
at com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1686)
at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1207)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2254)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2285)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2084)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:795)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:44)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:400)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:327)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.oozie.command.SchemaCheckXCommand.execute(SchemaCheckXCommand.java:88)
at org.apache.oozie.command.SchemaCheckXCommand.execute(SchemaCheckXCommand.java:63)
at org.apache.oozie.command.XCommand.call(XCommand.java:287)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:178)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
-----------------
I am sure the following things are fine:
1. jdbc-mysql.jar has been added.
2. I can connect to the Oozie database from the Oozie server with the command "mysql -uroot -pxxxx -hhadoops4".
3. Oozie was installed from Ambari, and the DB connection check there was fine.
4. All of the Oozie tables were created automatically during the Oozie installation.
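For completeness, a minimal sketch of how to confirm the exact JDBC settings Oozie is using and whether the connecting host is covered by a grant (the config path assumes the default HDP layout under /etc/oozie/conf; adjust if yours differs):
grep -A1 'oozie.service.JPAService.jdbc' /etc/oozie/conf/oozie-site.xml
mysql -uroot -p -hhadoops4 -e "SHOW GRANTS FOR 'root'@'%';"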
04-18-2017
10:01 AM
Finally, this problem was solved by changing the hostname format: the hostname must not contain an underscore ("_") character.
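For anyone hitting the same thing, a minimal sketch of the rename on a systemd-based node (the hyphenated name below is just an example; older releases set HOSTNAME in /etc/sysconfig/network instead):
hostnamectl set-hostname eastchina2-ops-exactdata1
# then update /etc/hosts on every node accordingly, e.g.
172.19.16.4   eastchina2-ops-exactdata1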
04-12-2017
01:00 AM
Thanks for your suggestion, but the problem still exists after changing the configuration as in your answer. The hosts file looks like this:
eastchina2_ops_exactdata1 172.xx.xxx.4
172.xx.xxx.4 eastchina2_ops_exactdata1
04-11-2017
02:34 PM
I can ping the hostname and get the right result, but nslookup returns the following:
[root@xxxx hdfs]# nslookup eastchina2_ops_exactdata1
Server: xx.xx.2.136
Address: xx.xx.2.136#53
** server can't find eastchina2_ops_exactdata1: NXDOMAIN
This is a cloud server, so this hostname cannot be added to the DNS server.
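As a side check, getent consults /etc/hosts as well as DNS (per nsswitch.conf), so it shows whether local resolution works even when the name is missing from the DNS server; a minimal check:
getent hosts eastchina2_ops_exactdata1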
04-11-2017
02:34 PM
I have configured the hosts file on each node and the /etc/sysconfig/network file as described in the documentation, but the problem still exists.
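For reference, the standard layout of those two files on a RHEL/CentOS node looks roughly like this (a sketch with placeholder values):
# /etc/hosts (one entry per node)
172.xx.xxx.4   eastchina2_ops_exactdata1
# /etc/sysconfig/network
HOSTNAME=eastchina2_ops_exactdata1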
04-11-2017
02:34 PM
This problem occurs when starting the services after installation. The HDP version is 2.4.2 and the Ambari version is 2.4.1. Below is the log from when I start the service:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 174, in <module>
DataNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 61, in start
datanode(action="start")
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py", line 68, in datanode
create_log_dir=True
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 269, in service
Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 1. starting datanode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-datanode-eastchina2_ops_exactdata1.out
stdout: /var/lib/ambari-agent/data/output-218.txt
2017-04-11 18:22:46,406 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.2.0-258
2017-04-11 18:22:46,408 - Checking if need to create versioned conf dir /etc/hadoop/2.4.2.0-258/0
2017-04-11 18:22:46,410 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.2.0-258', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-04-11 18:22:46,442 - call returned (1, '/etc/hadoop/2.4.2.0-258/0 exist already', '')
2017-04-11 18:22:46,442 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.2.0-258', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-04-11 18:22:46,473 - checked_call returned (0, '')
2017-04-11 18:22:46,474 - Ensuring that hadoop has the correct symlink structure
2017-04-11 18:22:46,474 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-04-11 18:22:46,612 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.2.0-258
2017-04-11 18:22:46,614 - Checking if need to create versioned conf dir /etc/hadoop/2.4.2.0-258/0
2017-04-11 18:22:46,616 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.2.0-258', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-04-11 18:22:46,648 - call returned (1, '/etc/hadoop/2.4.2.0-258/0 exist already', '')
2017-04-11 18:22:46,648 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.2.0-258', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-04-11 18:22:46,679 - checked_call returned (0, '')
2017-04-11 18:22:46,680 - Ensuring that hadoop has the correct symlink structure
2017-04-11 18:22:46,680 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-04-11 18:22:46,681 - Group['hadoop'] {}
2017-04-11 18:22:46,682 - Group['users'] {}
2017-04-11 18:22:46,683 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-04-11 18:22:46,683 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-11 18:22:46,684 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-11 18:22:46,685 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-04-11 18:22:46,685 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-11 18:22:46,686 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-04-11 18:22:46,686 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-11 18:22:46,687 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-11 18:22:46,687 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-11 18:22:46,688 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-11 18:22:46,688 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-04-11 18:22:46,689 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-04-11 18:22:46,691 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-04-11 18:22:46,702 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-04-11 18:22:46,703 - Group['hdfs'] {}
2017-04-11 18:22:46,703 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-04-11 18:22:46,704 - FS Type:
2017-04-11 18:22:46,704 - Directory['/etc/hadoop'] {'mode': 0755}
2017-04-11 18:22:46,717 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-04-11 18:22:46,717 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-04-11 18:22:46,732 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-04-11 18:22:46,744 - Skipping Execute[('setenforce', '0')] due to not_if
2017-04-11 18:22:46,745 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-04-11 18:22:46,747 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-04-11 18:22:46,747 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-04-11 18:22:46,751 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2017-04-11 18:22:46,753 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2017-04-11 18:22:46,753 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-04-11 18:22:46,765 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2017-04-11 18:22:46,765 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-04-11 18:22:46,766 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-04-11 18:22:46,770 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2017-04-11 18:22:46,781 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-04-11 18:22:46,953 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.2.0-258
2017-04-11 18:22:46,955 - Checking if need to create versioned conf dir /etc/hadoop/2.4.2.0-258/0
2017-04-11 18:22:46,957 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.2.0-258', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-04-11 18:22:46,988 - call returned (1, '/etc/hadoop/2.4.2.0-258/0 exist already', '')
2017-04-11 18:22:46,988 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.2.0-258', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-04-11 18:22:47,019 - checked_call returned (0, '')
2017-04-11 18:22:47,020 - Ensuring that hadoop has the correct symlink structure
2017-04-11 18:22:47,020 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-04-11 18:22:47,022 - Stack Feature Version Info: stack_version=2.4, version=2.4.2.0-258, current_cluster_version=2.4.2.0-258 -> 2.4.2.0-258
2017-04-11 18:22:47,037 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.2.0-258
2017-04-11 18:22:47,039 - Checking if need to create versioned conf dir /etc/hadoop/2.4.2.0-258/0
2017-04-11 18:22:47,041 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.2.0-258', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-04-11 18:22:47,072 - call returned (1, '/etc/hadoop/2.4.2.0-258/0 exist already', '')
2017-04-11 18:22:47,072 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.2.0-258', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-04-11 18:22:47,103 - checked_call returned (0, '')
2017-04-11 18:22:47,104 - Ensuring that hadoop has the correct symlink structure
2017-04-11 18:22:47,104 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-04-11 18:22:47,115 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1}
2017-04-11 18:22:47,152 - checked_call returned (0, '2.4.2.0-258', '')
2017-04-11 18:22:47,156 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2017-04-11 18:22:47,162 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2017-04-11 18:22:47,163 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-04-11 18:22:47,172 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2017-04-11 18:22:47,172 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-04-11 18:22:47,181 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-04-11 18:22:47,189 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2017-04-11 18:22:47,189 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-04-11 18:22:47,195 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-04-11 18:22:47,196 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2017-04-11 18:22:47,203 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2017-04-11 18:22:47,203 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-04-11 18:22:47,209 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-04-11 18:22:47,217 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2017-04-11 18:22:47,217 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-04-11 18:22:47,223 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2017-04-11 18:22:47,231 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2017-04-11 18:22:47,231 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-04-11 18:22:47,273 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-04-11 18:22:47,280 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2017-04-11 18:22:47,281 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-04-11 18:22:47,302 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2017-04-11 18:22:47,303 - Directory['/var/lib/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'group': 'hadoop', 'mode': 0751}
2017-04-11 18:22:47,303 - Directory['/var/lib/ambari-agent/data/datanode'] {'create_parents': True, 'mode': 0755}
2017-04-11 18:22:47,308 - Host contains mounts: ['/sys', '/proc', '/dev', '/sys/kernel/security', '/dev/shm', '/dev/pts', '/run', '/sys/fs/cgroup', '/sys/fs/cgroup/systemd', '/sys/fs/pstore', '/sys/fs/cgroup/cpu,cpuacct', '/sys/fs/cgroup/blkio', '/sys/fs/cgroup/devices', '/sys/fs/cgroup/net_cls,net_prio', '/sys/fs/cgroup/hugetlb', '/sys/fs/cgroup/cpuset', '/sys/fs/cgroup/pids', '/sys/fs/cgroup/freezer', '/sys/fs/cgroup/perf_event', '/sys/fs/cgroup/memory', '/sys/kernel/config', '/', '/proc/sys/fs/binfmt_misc', '/dev/mqueue', '/sys/kernel/debug', '/dev/hugepages', '/data', '/run/user/1001', '/run/user/0'].
2017-04-11 18:22:47,308 - Mount point for directory /data/hadoop/hdfs/data is /data
2017-04-11 18:22:47,308 - Mount point for directory /data/hadoop/hdfs/data is /data
2017-04-11 18:22:47,308 - Forcefully ensuring existence and permissions of the directory: /data/hadoop/hdfs/data
2017-04-11 18:22:47,308 - Directory['/data/hadoop/hdfs/data'] {'group': 'hadoop', 'cd_access': 'a', 'create_parents': True, 'ignore_failures': True, 'mode': 0755, 'owner': 'hdfs'}
2017-04-11 18:22:47,309 - Changing permission for /data/hadoop/hdfs/data from 750 to 755
2017-04-11 18:22:47,312 - Host contains mounts: ['/sys', '/proc', '/dev', '/sys/kernel/security', '/dev/shm', '/dev/pts', '/run', '/sys/fs/cgroup', '/sys/fs/cgroup/systemd', '/sys/fs/pstore', '/sys/fs/cgroup/cpu,cpuacct', '/sys/fs/cgroup/blkio', '/sys/fs/cgroup/devices', '/sys/fs/cgroup/net_cls,net_prio', '/sys/fs/cgroup/hugetlb', '/sys/fs/cgroup/cpuset', '/sys/fs/cgroup/pids', '/sys/fs/cgroup/freezer', '/sys/fs/cgroup/perf_event', '/sys/fs/cgroup/memory', '/sys/kernel/config', '/', '/proc/sys/fs/binfmt_misc', '/dev/mqueue', '/sys/kernel/debug', '/dev/hugepages', '/data', '/run/user/1001', '/run/user/0'].
2017-04-11 18:22:47,312 - Mount point for directory /data/hadoop/hdfs/data is /data
2017-04-11 18:22:47,313 - File['/var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist'] {'content': '\n# This file keeps track of the last known mount-point for each dir.\n# It is safe to delete, since it will get regenerated the next time that the component of the service starts.\n# However, it is not advised to delete this file since Ambari may\n# re-create a dir that used to be mounted on a drive but is now mounted on the root.\n# Comments begin with a hash (#) symbol\n# dir,mount_point\n/data/hadoop/hdfs/data,/data\n', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-04-11 18:22:47,314 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2017-04-11 18:22:47,314 - Changing owner for /var/run/hadoop from 0 to hdfs
2017-04-11 18:22:47,314 - Changing group for /var/run/hadoop from 0 to hadoop
2017-04-11 18:22:47,315 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2017-04-11 18:22:47,315 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2017-04-11 18:22:47,315 - File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'}
2017-04-11 18:22:47,331 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid']
2017-04-11 18:22:47,332 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'}
2017-04-11 18:22:51,429 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'hdfs'}
==> /var/log/hadoop/hdfs/gc.log-201704111625 <==
2017-04-11T16:25:51.805+0800: 1.744: [GC2017-04-11T16:25:51.805+0800: 1.744: [ParNew: 96945K->8851K(118016K), 0.0520950 secs] 96945K->8851K(1035520K), 0.0522430 secs] [Times: user=0.09 sys=0.01, real=0.05 secs]
Heap
par new generation total 118016K, used 23569K [0x00000000b0000000, 0x00000000b8000000, 0x00000000b8000000)
eden space 104960K, 14% used [0x00000000b0000000, 0x00000000b0e5f830, 0x00000000b6680000)
from space 13056K, 67% used [0x00000000b7340000, 0x00000000b7be4f18, 0x00000000b8000000)
to space 13056K, 0% used [0x00000000b6680000, 0x00000000b6680000, 0x00000000b7340000)
concurrent mark-sweep generation total 917504K, used 0K [0x00000000b8000000, 0x00000000f0000000, 0x00000000f0000000)
concurrent-mark-sweep perm gen total 131072K, used 12223K [0x00000000f0000000, 0x00000000f8000000, 0x0000000100000000)
==> /var/log/hadoop/hdfs/gc.log-201704111822 <==
2017-04-11T18:22:49.922+0800: 2.475: [GC2017-04-11T18:22:49.922+0800: 2.475: [ParNew: 163840K->13354K(184320K), 0.0331160 secs] 163840K->13354K(1028096K), 0.0332570 secs] [Times: user=0.05 sys=0.01, real=0.03 secs]
Heap
par new generation total 184320K, used 63274K [0x00000000b0000000, 0x00000000bc800000, 0x00000000bc800000)
eden space 163840K, 30% used [0x00000000b0000000, 0x00000000b30bfe10, 0x00000000ba000000)
from space 20480K, 65% used [0x00000000bb400000, 0x00000000bc10aa18, 0x00000000bc800000)
to space 20480K, 0% used [0x00000000ba000000, 0x00000000ba000000, 0x00000000bb400000)
concurrent mark-sweep generation total 843776K, used 0K [0x00000000bc800000, 0x00000000f0000000, 0x00000000f0000000)
concurrent-mark-sweep perm gen total 131072K, used 23241K [0x00000000f0000000, 0x00000000f8000000, 0x0000000100000000)
==> /var/log/hadoop/hdfs/gc.log-201704111544 <==
2017-04-11T15:44:43.619+0800: 3.792: [GC2017-04-11T15:44:43.619+0800: 3.792: [ParNew: 163840K->13549K(184320K), 0.0390570 secs] 163840K->13549K(1028096K), 0.0392160 secs] [Times: user=0.05 sys=0.02, real=0.04 secs]
Heap
par new generation total 184320K, used 59399K [0x00000000b0000000, 0x00000000bc800000, 0x00000000bc800000)
eden space 163840K, 27% used [0x00000000b0000000, 0x00000000b2cc6858, 0x00000000ba000000)
from space 20480K, 66% used [0x00000000bb400000, 0x00000000bc13b770, 0x00000000bc800000)
to space 20480K, 0% used [0x00000000ba000000, 0x00000000ba000000, 0x00000000bb400000)
concurrent mark-sweep generation total 843776K, used 0K [0x00000000bc800000, 0x00000000f0000000, 0x00000000f0000000)
concurrent-mark-sweep perm gen total 131072K, used 23238K [0x00000000f0000000, 0x00000000f8000000, 0x0000000100000000)
==> /var/log/hadoop/hdfs/gc.log-201704111811 <==
2017-04-11T18:11:28.888+0800: 1.595: [GC2017-04-11T18:11:28.888+0800: 1.595: [ParNew: 96944K->8843K(118016K), 0.0690840 secs] 96944K->8843K(1035520K), 0.0692120 secs] [Times: user=0.12 sys=0.01, real=0.07 secs]
Heap
par new generation total 118016K, used 23565K [0x00000000b0000000, 0x00000000b8000000, 0x00000000b8000000)
eden space 104960K, 14% used [0x00000000b0000000, 0x00000000b0e60598, 0x00000000b6680000)
from space 13056K, 67% used [0x00000000b7340000, 0x00000000b7be2f18, 0x00000000b8000000)
to space 13056K, 0% used [0x00000000b6680000, 0x00000000b6680000, 0x00000000b7340000)
concurrent mark-sweep generation total 917504K, used 0K [0x00000000b8000000, 0x00000000f0000000, 0x00000000f0000000)
concurrent-mark-sweep perm gen total 131072K, used 12223K [0x00000000f0000000, 0x00000000f8000000, 0x0000000100000000)
==> /var/log/hadoop/hdfs/gc.log-201704111610 <==
2017-04-11T16:10:32.513+0800: 1.692: [GC2017-04-11T16:10:32.513+0800: 1.692: [ParNew: 96917K->8856K(118016K), 0.0526200 secs] 96917K->8856K(1035520K), 0.0527410 secs] [Times: user=0.10 sys=0.00, real=0.05 secs]
Heap
par new generation total 118016K, used 23574K [0x00000000b0000000, 0x00000000b8000000, 0x00000000b8000000)
eden space 104960K, 14% used [0x00000000b0000000, 0x00000000b0e5fa78, 0x00000000b6680000)
from space 13056K, 67% used [0x00000000b7340000, 0x00000000b7be6100, 0x00000000b8000000)
to space 13056K, 0% used [0x00000000b6680000, 0x00000000b6680000, 0x00000000b7340000)
concurrent mark-sweep generation total 917504K, used 0K [0x00000000b8000000, 0x00000000f0000000, 0x00000000f0000000)
concurrent-mark-sweep perm gen total 131072K, used 12223K [0x00000000f0000000, 0x00000000f8000000, 0x0000000100000000)
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-eastchina2_ops_exactdata1.out.2 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63478
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201704111619 <==
2017-04-11T16:19:55.931+0800: 1.586: [GC2017-04-11T16:19:55.931+0800: 1.586: [ParNew: 96925K->8852K(118016K), 0.0323970 secs] 96925K->8852K(1035520K), 0.0325450 secs] [Times: user=0.06 sys=0.00, real=0.04 secs]
Heap
par new generation total 118016K, used 23574K [0x00000000b0000000, 0x00000000b8000000, 0x00000000b8000000)
eden space 104960K, 14% used [0x00000000b0000000, 0x00000000b0e60730, 0x00000000b6680000)
from space 13056K, 67% used [0x00000000b7340000, 0x00000000b7be51d8, 0x00000000b8000000)
to space 13056K, 0% used [0x00000000b6680000, 0x00000000b6680000, 0x00000000b7340000)
concurrent mark-sweep generation total 917504K, used 0K [0x00000000b8000000, 0x00000000f0000000, 0x00000000f0000000)
concurrent-mark-sweep perm gen total 131072K, used 12223K [0x00000000f0000000, 0x00000000f8000000, 0x0000000100000000)
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-eastchina2_ops_exactdata1.out.1 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63478
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201704111558 <==
2017-04-11T15:59:00.978+0800: 1.901: [GC2017-04-11T15:59:00.979+0800: 1.901: [ParNew: 96952K->8852K(118016K), 0.0288730 secs] 96952K->8852K(1035520K), 0.0290180 secs] [Times: user=0.05 sys=0.01, real=0.03 secs]
Heap
par new generation total 118016K, used 23570K [0x00000000b0000000, 0x00000000b8000000, 0x00000000b8000000)
eden space 104960K, 14% used [0x00000000b0000000, 0x00000000b0e5f790, 0x00000000b6680000)
from space 13056K, 67% used [0x00000000b7340000, 0x00000000b7be5208, 0x00000000b8000000)
to space 13056K, 0% used [0x00000000b6680000, 0x00000000b6680000, 0x00000000b7340000)
concurrent mark-sweep generation total 917504K, used 0K [0x00000000b8000000, 0x00000000f0000000, 0x00000000f0000000)
concurrent-mark-sweep perm gen total 131072K, used 12223K [0x00000000f0000000, 0x00000000f8000000, 0x0000000100000000)
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-eastchina2_ops_exactdata1.out.4 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63478
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-eastchina2_ops_exactdata1.out.5 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63478
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201704111804 <==
2017-04-11T18:04:19.626+0800: 1.730: [GC2017-04-11T18:04:19.626+0800: 1.730: [ParNew: 96917K->8881K(118016K), 0.0264270 secs] 96917K->8881K(1035520K), 0.0265560 secs] [Times: user=0.04 sys=0.01, real=0.03 secs]
Heap
par new generation total 118016K, used 23603K [0x00000000b0000000, 0x00000000b8000000, 0x00000000b8000000)
eden space 104960K, 14% used [0x00000000b0000000, 0x00000000b0e607d0, 0x00000000b6680000)
from space 13056K, 68% used [0x00000000b7340000, 0x00000000b7bec4e8, 0x00000000b8000000)
to space 13056K, 0% used [0x00000000b6680000, 0x00000000b6680000, 0x00000000b7340000)
concurrent mark-sweep generation total 917504K, used 0K [0x00000000b8000000, 0x00000000f0000000, 0x00000000f0000000)
concurrent-mark-sweep perm gen total 131072K, used 12223K [0x00000000f0000000, 0x00000000f8000000, 0x0000000100000000)
==> /var/log/hadoop/hdfs/SecurityAuth.audit <==
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-eastchina2_ops_exactdata1.log <==
2017-04-11 18:22:49,621 INFO mortbay.log (Slf4jLog.java:info(67)) - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-04-11 18:22:49,631 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(294)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-04-11 18:22:49,636 INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.datanode is not defined
2017-04-11 18:22:49,642 INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(710)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-04-11 18:22:49,653 INFO http.HttpServer2 (HttpServer2.java:addFilter(685)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2017-04-11 18:22:49,654 INFO http.HttpServer2 (HttpServer2.java:addFilter(693)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-04-11 18:22:49,654 INFO http.HttpServer2 (HttpServer2.java:addFilter(693)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-04-11 18:22:49,654 INFO security.HttpCrossOriginFilterInitializer (HttpCrossOriginFilterInitializer.java:initFilter(49)) - CORS filter not enabled. Please set hadoop.http.cross-origin.enabled to 'true' to enable it
2017-04-11 18:22:49,676 INFO http.HttpServer2 (HttpServer2.java:openListeners(915)) - Jetty bound to port 45688
2017-04-11 18:22:49,676 INFO mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26.hwx
2017-04-11 18:22:50,006 INFO mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45688
2017-04-11 18:22:50,211 INFO web.DatanodeHttpServer (DatanodeHttpServer.java:start(223)) - Listening HTTP traffic on /0.0.0.0:50075
2017-04-11 18:22:50,214 INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(179)) - Starting JVM pause monitor
2017-04-11 18:22:50,328 INFO datanode.DataNode (DataNode.java:startDataNode(1147)) - dnUserName = hdfs
2017-04-11 18:22:50,328 INFO datanode.DataNode (DataNode.java:startDataNode(1148)) - supergroup = hdfs
2017-04-11 18:22:50,367 INFO ipc.CallQueueManager (CallQueueManager.java:<init>(57)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
2017-04-11 18:22:50,394 INFO ipc.Server (Server.java:run(715)) - Starting Socket Reader #1 for port 8010
2017-04-11 18:22:50,421 INFO datanode.DataNode (DataNode.java:initIpcServer(839)) - Opened IPC server at /0.0.0.0:8010
2017-04-11 18:22:50,435 INFO datanode.DataNode (BlockPoolManager.java:refreshNamenodes(152)) - Refresh request received for nameservices: null
2017-04-11 18:22:50,453 ERROR datanode.DataNode (DataNode.java:secureMain(2545)) - Exception in secureMain
java.lang.IllegalArgumentException: Does not contain a valid host:port authority: eastchina2_ops_exactdata1:8020
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:213)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
at org.apache.hadoop.hdfs.DFSUtil.getAddressesForNameserviceId(DFSUtil.java:687)
at org.apache.hadoop.hdfs.DFSUtil.getAddressesForNsIds(DFSUtil.java:655)
at org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:872)
at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.refreshNamenodes(BlockPoolManager.java:155)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1155)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:432)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2423)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2310)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2357)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2538)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2562)
2017-04-11 18:22:50,454 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2017-04-11 18:22:50,455 INFO datanode.DataNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at eastchina2_ops_exactdata1/172.19.16.4
************************************************************/
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-eastchina2_ops_exactdata1.out.3 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63478
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hdfs-audit.log <==
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-eastchina2_ops_exactdata1.out <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63478
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Command failed after 1 tries