Member since: 03-16-2020
Posts: 13
Kudos Received: 0
Solutions: 0
06-28-2021
10:54 PM
Hello Shifu, Thanks for your response. We tried all the possibilities in the Ambari 2.7.4 cluster, but a simple query on a managed table in ORC file format still does not return in under 4-5 seconds. It would be great if you could elaborate further. Thanks,
06-23-2021
07:31 AM
Dear All, Can MapReduce2 or Tez provide query output in less than four seconds? Before going into a detailed explanation, here are the environment versions: HDFS - 3.1.1.3.1, YARN - 3.1.1, MapReduce2 - 3.1.1, Tez - 0.9.1, Hive - 3.1.0. The data is in ORC file format, and assume the hardware infrastructure is sufficient. Can we expect output from any data query in less than five seconds? Assume the table is organized in the optimum way. Thanks in advance for your analysis.
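Whether a given query fits the latency budget is easiest to settle by measuring it. Below is a minimal sketch for timing end-to-end query latency from a Python client; the commented PyHive usage is hypothetical (host, port, and table name are placeholders), while the `time_query` helper itself works with any zero-argument callable:

```python
import time

def time_query(run_query, budget_s=5.0):
    """Run a zero-argument callable and report (rows, elapsed_seconds, within_budget)."""
    start = time.perf_counter()
    rows = run_query()
    elapsed = time.perf_counter() - start
    return rows, elapsed, elapsed <= budget_s

# Hypothetical usage against HiveServer2 (placeholders, not a confirmed setup):
#   from pyhive import hive
#   con = hive.Connection(host="X.X.X.X", port=10000)
#   cur = con.cursor()
#   def run():
#       cur.execute("SELECT count(*) FROM my_orc_table")
#       return cur.fetchall()
#   rows, elapsed, ok = time_query(run)
#   print("%.2fs, within budget: %s" % (elapsed, ok))
```

Note that the first query on a fresh session typically includes Tez ApplicationMaster and container startup, so measuring a warm session separately from a cold one gives a fairer picture; Hive LLAP is the HDP component specifically aimed at interactive (sub-second to few-second) latencies.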
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Hive
10-28-2020
11:15 PM
Hello GangWar, Thanks for your response. But the issue is not related to the Kerberos connection. It is a four-node Ambari (2.7.4) cluster. The Python connection works fine from the RHEL environment, but the same connection does not work from the Windows (Anaconda) environment. Hope this clarifies our issue. We would appreciate your help on this. Thanks, KK
10-23-2020
05:38 AM
Trust you are doing fine! We came across an issue connecting Anaconda to the Hive environment. The Hive version is 3.1.0 on Ambari 2.7.4, which is a multi-node cluster. The Python connection to Hive works fine from the RHEL server, but the same environment cannot connect via Anaconda from Windows 10. The Conda version is 4.9.0. Please find the exact error below:

D:\ProgramData\Anaconda3\lib\site-packages\thrift_sasl\__init__.py in open(self)
     83         if not ret:
     84             raise TTransportException(type=TTransportException.NOT_OPEN,
---> 85                 message=("Could not start SASL: %s" % self.sasl.getError()))
     86
     87         # Send initial response
TTransportException: Could not start SASL: b'Error in sasl_client_start (-4) SASL(-4): no mechanism available: Unable to find a callback: 2'

Python Code
============
import pandas as pd
import sasl
from pyhive import hive
con = hive.Connection(host="X.X.X.X", port=10000)
cur = con.cursor()
sq_str = "select * from table"
df = pd.read_sql(sq_str, con)
print(df)

Appreciate your quick help please. Thanks
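This "Could not start SASL ... no mechanism available" error usually means the cyrus-sasl C bindings behind the `sasl` package cannot find their mechanism plugins, which is a common failure on Windows even when the same code works on RHEL. One frequently suggested workaround (an assumption, not a confirmed fix for this cluster) is to avoid the C SASL path on Windows, e.g. by using `auth="NOSASL"` if HiveServer2 is configured with `hive.server2.authentication=NOSASL`, or by installing `pure-sasl` with a compatible `thrift-sasl`. A sketch that picks PyHive connection keyword arguments by platform; `host`, `port`, and `username` are placeholders:

```python
import sys

def hive_conn_kwargs(host, port=10000, username=None):
    """Build pyhive.hive.Connection kwargs, avoiding the C sasl library on Windows.

    Assumes HiveServer2 allows NOSASL when connecting from Windows; adjust the
    auth value to match your cluster's hive.server2.authentication setting.
    """
    kwargs = {"host": host, "port": port}
    if sys.platform.startswith("win"):
        # cyrus-sasl bindings are unreliable on Windows; skip SASL entirely.
        kwargs["auth"] = "NOSASL"
    else:
        kwargs["username"] = username
    return kwargs

# Hypothetical usage:
#   from pyhive import hive
#   con = hive.Connection(**hive_conn_kwargs("X.X.X.X", username="kk"))
```

If the server requires an authenticated mechanism, the `pure-sasl` route (a pure-Python SASL implementation) is the usual alternative to making the C extension work under Anaconda on Windows.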
08-24-2020
01:01 AM
Hello Madhur, Thanks for your response. But we see the same issue in Ambari 2.7.4, which we installed just last week to overcome this issue. Can you please help on this? Thanks, KK
08-24-2020
12:58 AM
Dear All, Trust you are doing fine. We have a four-node Ambari 2.7.4 environment. The YARN Registry DNS service gives an error on start. Please find the complete log below. Can you please help on this?

stderr:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/yarn_registry_dns.py", line 93, in <module>
    RegistryDNS().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/yarn_registry_dns.py", line 53, in start
    service('registrydns', action='start')
  File "/usr/lib/ambari-agent/lib/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/service.py", line 110, in service
    Execute(daemon_cmd, not_if=check_process, environment=hadoop_env_exports)
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
    returns=self.resource.returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
    raise ExecutionFailed(err_msg,
code, out, err) resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh -H -E /usr/hdp/3.1.4.0-315/hadoop-yarn/bin/yarn --config /usr/hdp/3.1.4.0-315/hadoop/conf --daemon start registrydns' returned 1. ERROR: Cannot set priority of registrydns process 16836 stdout: 2020-08-24 13:16:25,865 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.4.0-315 -> 3.1.4.0-315 2020-08-24 13:16:25,884 - Using hadoop conf dir: /usr/hdp/3.1.4.0-315/hadoop/conf 2020-08-24 13:16:26,056 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.4.0-315 -> 3.1.4.0-315 2020-08-24 13:16:26,061 - Using hadoop conf dir: /usr/hdp/3.1.4.0-315/hadoop/conf 2020-08-24 13:16:26,063 - Group['livy'] {} 2020-08-24 13:16:26,064 - Group['spark'] {} 2020-08-24 13:16:26,064 - Group['hdfs'] {} 2020-08-24 13:16:26,065 - Group['hadoop'] {} 2020-08-24 13:16:26,065 - Group['users'] {} 2020-08-24 13:16:26,065 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2020-08-24 13:16:26,067 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2020-08-24 13:16:26,068 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2020-08-24 13:16:26,069 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2020-08-24 13:16:26,069 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2020-08-24 13:16:26,071 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None} 2020-08-24 13:16:26,072 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None} 2020-08-24 13:16:26,072 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2020-08-24 
13:16:26,073 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2020-08-24 13:16:26,074 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None} 2020-08-24 13:16:26,076 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2020-08-24 13:16:26,076 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2020-08-24 13:16:26,077 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2020-08-24 13:16:26,078 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2020-08-24 13:16:26,079 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2020-08-24 13:16:26,081 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2020-08-24 13:16:26,088 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if 2020-08-24 13:16:26,088 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'} 2020-08-24 13:16:26,089 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2020-08-24 13:16:26,091 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2020-08-24 13:16:26,092 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {} 2020-08-24 13:16:26,103 - call returned (0, '1017') 2020-08-24 13:16:26,104 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase 
/home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'} 2020-08-24 13:16:26,110 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] due to not_if 2020-08-24 13:16:26,110 - Group['hdfs'] {} 2020-08-24 13:16:26,111 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']} 2020-08-24 13:16:26,111 - FS Type: HDFS 2020-08-24 13:16:26,112 - Directory['/etc/hadoop'] {'mode': 0755} 2020-08-24 13:16:26,128 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2020-08-24 13:16:26,129 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777} 2020-08-24 13:16:26,147 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'} 2020-08-24 13:16:26,156 - Skipping Execute[('setenforce', '0')] due to not_if 2020-08-24 13:16:26,157 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'} 2020-08-24 13:16:26,159 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'} 2020-08-24 13:16:26,160 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'} 2020-08-24 13:16:26,160 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'} 2020-08-24 13:16:26,164 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'} 2020-08-24 13:16:26,165 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'} 2020-08-24 13:16:26,171 - 
File['/usr/hdp/3.1.4.0-315/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644} 2020-08-24 13:16:26,181 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2020-08-24 13:16:26,181 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755} 2020-08-24 13:16:26,183 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'} 2020-08-24 13:16:26,186 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644} 2020-08-24 13:16:26,191 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755} 2020-08-24 13:16:26,196 - Skipping unlimited key JCE policy check and setup since the Java VM is not managed by Ambari 2020-08-24 13:16:26,504 - Using hadoop conf dir: /usr/hdp/3.1.4.0-315/hadoop/conf 2020-08-24 13:16:26,505 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.4.0-315 -> 3.1.4.0-315 2020-08-24 13:16:26,548 - Using hadoop conf dir: /usr/hdp/3.1.4.0-315/hadoop/conf 2020-08-24 13:16:26,565 - Directory['/var/log/hadoop-yarn'] {'group': 'hadoop', 'cd_access': 'a', 'create_parents': True, 'ignore_failures': True, 'mode': 0775, 'owner': 'yarn'} 2020-08-24 13:16:26,567 - Directory['/var/run/hadoop-yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2020-08-24 13:16:26,567 - Directory['/var/run/hadoop-yarn/yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2020-08-24 13:16:26,568 - Directory['/var/log/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'} 2020-08-24 
13:16:26,568 - Directory['/var/run/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2020-08-24 13:16:26,569 - Directory['/var/run/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2020-08-24 13:16:26,569 - Directory['/var/log/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2020-08-24 13:16:26,570 - Directory['/var/log/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'} 2020-08-24 13:16:26,570 - Directory['/usr/hdp/3.1.4.0-315/hadoop/conf/embedded-yarn-ats-hbase'] {'owner': 'yarn-ats', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'} 2020-08-24 13:16:26,579 - Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2020-08-24 13:16:26,580 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-yarn.json 2020-08-24 13:16:26,580 - File['/usr/lib/ambari-logsearch-logfeeder/conf/input.config-yarn.json'] {'content': Template('input.config-yarn.json.j2'), 'mode': 0644} 2020-08-24 13:16:26,581 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...} 2020-08-24 13:16:26,590 - Generating config: /usr/hdp/3.1.4.0-315/hadoop/conf/core-site.xml 2020-08-24 13:16:26,590 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2020-08-24 13:16:26,619 - Writing File['/usr/hdp/3.1.4.0-315/hadoop/conf/core-site.xml'] because contents don't match 2020-08-24 13:16:26,620 - XmlConfig['hdfs-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'mode': 0644, 'configuration_attributes': 
{u'final': {u'dfs.datanode.failed.volumes.tolerated': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true'}}, 'owner': 'hdfs', 'configurations': ...} 2020-08-24 13:16:26,628 - Generating config: /usr/hdp/3.1.4.0-315/hadoop/conf/hdfs-site.xml 2020-08-24 13:16:26,628 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2020-08-24 13:16:26,673 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...} 2020-08-24 13:16:26,680 - Generating config: /usr/hdp/3.1.4.0-315/hadoop/conf/mapred-site.xml 2020-08-24 13:16:26,680 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/mapred-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2020-08-24 13:16:26,719 - Changing owner for /usr/hdp/3.1.4.0-315/hadoop/conf/mapred-site.xml from 986 to yarn 2020-08-24 13:16:26,719 - XmlConfig['yarn-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'hidden': {u'hadoop.registry.dns.bind-port': u'true'}}, 'owner': 'yarn', 'configurations': ...} 2020-08-24 13:16:26,727 - Generating config: /usr/hdp/3.1.4.0-315/hadoop/conf/yarn-site.xml 2020-08-24 13:16:26,727 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/yarn-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2020-08-24 13:16:26,847 - XmlConfig['capacity-scheduler.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...} 2020-08-24 13:16:26,854 - Generating config: /usr/hdp/3.1.4.0-315/hadoop/conf/capacity-scheduler.xml 2020-08-24 
13:16:26,855 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/capacity-scheduler.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2020-08-24 13:16:26,869 - Changing owner for /usr/hdp/3.1.4.0-315/hadoop/conf/capacity-scheduler.xml from 1009 to yarn 2020-08-24 13:16:26,869 - XmlConfig['hbase-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf/embedded-yarn-ats-hbase', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn-ats', 'configurations': ...} 2020-08-24 13:16:26,876 - Generating config: /usr/hdp/3.1.4.0-315/hadoop/conf/embedded-yarn-ats-hbase/hbase-site.xml 2020-08-24 13:16:26,877 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/embedded-yarn-ats-hbase/hbase-site.xml'] {'owner': 'yarn-ats', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2020-08-24 13:16:26,910 - XmlConfig['resource-types.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': {u'yarn.resource-types.yarn.io_gpu.maximum-allocation': u'8', u'yarn.resource-types': u''}} 2020-08-24 13:16:26,917 - Generating config: /usr/hdp/3.1.4.0-315/hadoop/conf/resource-types.xml 2020-08-24 13:16:26,918 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/resource-types.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2020-08-24 13:16:26,921 - File['/etc/security/limits.d/yarn.conf'] {'content': Template('yarn.conf.j2'), 'mode': 0644} 2020-08-24 13:16:26,923 - File['/etc/security/limits.d/mapreduce.conf'] {'content': Template('mapreduce.conf.j2'), 'mode': 0644} 2020-08-24 13:16:26,931 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/yarn-env.sh'] {'content': InlineTemplate(...), 'owner': 'yarn', 'group': 'hadoop', 'mode': 0755} 2020-08-24 13:16:26,932 - File['/usr/hdp/3.1.4.0-315/hadoop-yarn/bin/container-executor'] {'group': 'hadoop', 'mode': 02050} 
2020-08-24 13:16:26,936 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/container-executor.cfg'] {'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644} 2020-08-24 13:16:26,937 - Directory['/cgroups_test/cpu'] {'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2020-08-24 13:16:26,940 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/mapred-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'mode': 0755} 2020-08-24 13:16:26,941 - Directory['/var/log/hadoop-yarn/nodemanager/recovery-state'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2020-08-24 13:16:26,943 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/taskcontroller.cfg'] {'content': Template('taskcontroller.cfg.j2'), 'owner': 'hdfs'} 2020-08-24 13:16:26,944 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'mapred', 'configurations': ...} 2020-08-24 13:16:26,951 - Generating config: /usr/hdp/3.1.4.0-315/hadoop/conf/mapred-site.xml 2020-08-24 13:16:26,951 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/mapred-site.xml'] {'owner': 'mapred', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2020-08-24 13:16:26,991 - Changing owner for /usr/hdp/3.1.4.0-315/hadoop/conf/mapred-site.xml from 987 to mapred 2020-08-24 13:16:26,991 - XmlConfig['capacity-scheduler.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...} 2020-08-24 13:16:26,998 - Generating config: /usr/hdp/3.1.4.0-315/hadoop/conf/capacity-scheduler.xml 2020-08-24 13:16:26,999 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/capacity-scheduler.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2020-08-24 13:16:27,013 - Changing owner for /usr/hdp/3.1.4.0-315/hadoop/conf/capacity-scheduler.xml from 987 to hdfs 
2020-08-24 13:16:27,013 - XmlConfig['ssl-client.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...} 2020-08-24 13:16:27,020 - Generating config: /usr/hdp/3.1.4.0-315/hadoop/conf/ssl-client.xml 2020-08-24 13:16:27,021 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2020-08-24 13:16:27,026 - Directory['/usr/hdp/3.1.4.0-315/hadoop/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2020-08-24 13:16:27,027 - XmlConfig['ssl-client.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf/secure', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...} 2020-08-24 13:16:27,035 - Generating config: /usr/hdp/3.1.4.0-315/hadoop/conf/secure/ssl-client.xml 2020-08-24 13:16:27,035 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2020-08-24 13:16:27,041 - XmlConfig['ssl-server.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.4.0-315/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...} 2020-08-24 13:16:27,048 - Generating config: /usr/hdp/3.1.4.0-315/hadoop/conf/ssl-server.xml 2020-08-24 13:16:27,048 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2020-08-24 13:16:27,055 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/ssl-client.xml.example'] {'owner': 'mapred', 'group': 'hadoop', 'mode': 0644} 2020-08-24 13:16:27,055 - File['/usr/hdp/3.1.4.0-315/hadoop/conf/ssl-server.xml.example'] {'owner': 'mapred', 'group': 'hadoop', 'mode': 0644} 2020-08-24 13:16:27,056 - checking any existing dns pid file = 
'/var/run/hadoop-yarn/yarn/hadoop-yarn-root-registrydns.pid' dns user 'root' 2020-08-24 13:16:27,057 - Execute['ambari-sudo.sh -H -E /usr/hdp/3.1.4.0-315/hadoop-yarn/bin/yarn --config /usr/hdp/3.1.4.0-315/hadoop/conf --daemon stop registrydns'] {'environment': {'HADOOP_SECURE_PID_DIR': u'/var/run/hadoop-yarn/yarn', 'HADOOP_SECURE_LOG_DIR': u'/var/log/hadoop-yarn/yarn', 'HADOOP_LOG_DIR': u'/var/log/hadoop-yarn/yarn', 'HADOOP_LIBEXEC_DIR': '/usr/hdp/3.1.4.0-315/hadoop/libexec', 'HADOOP_PID_DIR': u'/var/run/hadoop-yarn/yarn'}, 'only_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop-yarn/yarn/hadoop-yarn-root-registrydns.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop-yarn/yarn/hadoop-yarn-root-registrydns.pid'} 2020-08-24 13:16:27,063 - Skipping Execute['ambari-sudo.sh -H -E /usr/hdp/3.1.4.0-315/hadoop-yarn/bin/yarn --config /usr/hdp/3.1.4.0-315/hadoop/conf --daemon stop registrydns'] due to only_if 2020-08-24 13:16:27,064 - call['! ( ambari-sudo.sh -H -E test -f /var/run/hadoop-yarn/yarn/hadoop-yarn-root-registrydns.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop-yarn/yarn/hadoop-yarn-root-registrydns.pid )'] {'tries': 5, 'try_sleep': 5, 'env': {'HADOOP_SECURE_PID_DIR': u'/var/run/hadoop-yarn/yarn', 'HADOOP_SECURE_LOG_DIR': u'/var/log/hadoop-yarn/yarn', 'HADOOP_LOG_DIR': u'/var/log/hadoop-yarn/yarn', 'HADOOP_LIBEXEC_DIR': '/usr/hdp/3.1.4.0-315/hadoop/libexec', 'HADOOP_PID_DIR': u'/var/run/hadoop-yarn/yarn'}} 2020-08-24 13:16:27,071 - call returned (0, '') 2020-08-24 13:16:27,071 - File['/var/run/hadoop-yarn/yarn/hadoop-yarn-root-registrydns.pid'] {'action': ['delete']} 2020-08-24 13:16:27,071 - checking any existing dns pid file = '/var/run/hadoop-yarn/yarn/hadoop-yarn-registrydns.pid' dns user 'yarn' 2020-08-24 13:16:27,072 - Execute['ambari-sudo.sh su yarn -l -s /bin/bash -c '/usr/hdp/3.1.4.0-315/hadoop-yarn/bin/yarn --config /usr/hdp/3.1.4.0-315/hadoop/conf --daemon stop registrydns''] {'environment': {'HADOOP_LIBEXEC_DIR': 
'/usr/hdp/3.1.4.0-315/hadoop/libexec'}} 2020-08-24 13:16:27,214 - Execute['ambari-sudo.sh -H -E /usr/hdp/3.1.4.0-315/hadoop-yarn/bin/yarn --config /usr/hdp/3.1.4.0-315/hadoop/conf --daemon start registrydns'] {'environment': {'HADOOP_SECURE_PID_DIR': u'/var/run/hadoop-yarn/yarn', 'HADOOP_SECURE_LOG_DIR': u'/var/log/hadoop-yarn/yarn', 'HADOOP_LOG_DIR': u'/var/log/hadoop-yarn/yarn', 'HADOOP_LIBEXEC_DIR': '/usr/hdp/3.1.4.0-315/hadoop/libexec', 'HADOOP_PID_DIR': u'/var/run/hadoop-yarn/yarn'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop-yarn/yarn/hadoop-yarn-root-registrydns.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop-yarn/yarn/hadoop-yarn-root-registrydns.pid'} 2020-08-24 13:16:34,375 - Execute['find /var/log/hadoop-yarn/yarn -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'root'} ==> /var/log/hadoop-yarn/yarn/hadoop-yarn-root-registrydns-namenode.optix.com.log <== at sun.nio.ch.DatagramChannelImpl.bind(DatagramChannelImpl.java:691) at sun.nio.ch.DatagramSocketAdaptor.bind(DatagramSocketAdaptor.java:91) at org.apache.hadoop.registry.server.dns.RegistryDNS.openUDPChannel(RegistryDNS.java:1014) ... 
8 more 2020-08-24 13:16:27,852 INFO dns.PrivilegedRegistryDNSStarter (LogAdapter.java:info(51)) - STARTUP_MSG: /************************************************************ STARTUP_MSG: Starting RegistryDNSServer STARTUP_MSG: host = namenode.optix.com/10.180.40.62 STARTUP_MSG: args = [] STARTUP_MSG: version = 3.1.1.3.1.4.0-315 STARTUP_MSG: classpath = /usr/hdp/3.1.4.0-315/hadoop/conf:/usr/hdp/3.1.4.0-315/hadoop/lib/accessors-smart-1.2.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/animal-sniffer-annotations-1.17.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/asm-5.0.4.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/avro-1.7.7.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/checker-qual-2.8.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/commons-beanutils-1.9.3.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/commons-codec-1.11.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/commons-configuration2-2.1.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/commons-io-2.5.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/commons-lang3-3.4.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/commons-net-3.6.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/curator-client-2.12.0.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/curator-framework-2.12.0.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/curator-recipes-2.12.0.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/error_prone_annotations-2.3.2.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/failureaccess-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/gson-2.2.4.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/guava-28.0-jre.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/htrace-core4-4.1.0-incubating.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/httpclient-4.5.2.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/httpcore-4.4.4.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/j2objc-annotations-1.3.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jacks
on-annotations-2.9.9.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jackson-core-2.9.9.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jackson-databind-2.9.9.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/javax.servlet-api-3.1.0.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jaxb-api-2.2.11.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jcip-annotations-1.0-1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jersey-core-1.19.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jersey-json-1.19.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jersey-server-1.19.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jersey-servlet-1.19.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jettison-1.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jetty-http-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jetty-io-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jetty-security-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jetty-xml-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jsch-0.1.54.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/json-smart-2.3.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jsr311-api-1.1.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/jul-to-slf4j-1.7.25.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/kerb-admin-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/kerb-client-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/kerb-common-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/kerb-core-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/kerb-crypto-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/kerb-identi
ty-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/kerb-server-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/kerb-simplekdc-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/kerb-util-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/kerby-asn1-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/kerby-config-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/kerby-pkix-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/kerby-util-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/kerby-xdr-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/metrics-core-3.2.4.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/netty-3.10.5.Final.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/paranamer-2.3.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/ranger-hdfs-plugin-shim-1.2.0.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/ranger-plugin-classloader-1.2.0.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/ranger-yarn-plugin-shim-1.2.0.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/re2j-1.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/slf4j-api-1.7.25.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/slf4j-log4j12-1.7.25.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/snappy-java-1.0.5.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/stax2-api-3.1.4.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/token-provider-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/woodstox-core-5.0.3.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/xz-1.0.jar:/usr/hdp/3.1.4.0-315/hadoop/lib/zookeeper-3.4.6.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop/.//azure-data-lake-store-sdk-2.3.3.jar:/usr/hdp/3.1.4.0-315/hadoop/.//azure-keyvault-core-1.0.0.jar:/usr/hdp/3.1.4.0-315/hadoop/.//azure-storage-7.0.0.jar:/usr/hdp/3.1.4.0-315/hadoop/.//hadoop-annotations-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop/.//hadoop-annotations.jar:/usr/hdp/3.1.4.0-315/hadoop/.//hadoop-auth-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop/.//hadoop-auth.jar:/
usr/hdp/3.1.4.0-315/hadoop/.//hadoop-azure-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop/.//hadoop-azure-datalake-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop/.//hadoop-azure-datalake.jar:/usr/hdp/3.1.4.0-315/hadoop/.//hadoop-azure.jar:/usr/hdp/3.1.4.0-315/hadoop/.//hadoop-common-3.1.1.3.1.4.0-315-tests.jar:/usr/hdp/3.1.4.0-315/hadoop/.//hadoop-common-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop/.//hadoop-common-tests.jar:/usr/hdp/3.1.4.0-315/hadoop/.//hadoop-common.jar:/usr/hdp/3.1.4.0-315/hadoop/.//hadoop-kms-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop/.//hadoop-kms.jar:/usr/hdp/3.1.4.0-315/hadoop/.//hadoop-nfs-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop/.//hadoop-nfs.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/./:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/accessors-smart-1.2.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/animal-sniffer-annotations-1.17.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/asm-5.0.4.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/avro-1.7.7.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/checker-qual-2.8.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/commons-beanutils-1.9.3.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/commons-codec-1.11.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/commons-compress-1.4.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/commons-configuration2-2.1.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/commons-io-2.5.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/commons-lang3-3.4.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/commons-net-3.6.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/curator-client-2.12.0.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/curator-framework-2.12.0.jar:/usr/hdp/3.1.4.0-315/hadoop-h
dfs/lib/curator-recipes-2.12.0.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/error_prone_annotations-2.3.2.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/failureaccess-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/gson-2.2.4.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/guava-28.0-jre.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/htrace-core4-4.1.0-incubating.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/httpclient-4.5.2.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/httpcore-4.4.4.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/j2objc-annotations-1.3.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jackson-annotations-2.9.9.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jackson-core-2.9.9.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jackson-databind-2.9.9.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jackson-xc-1.9.13.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/javax.servlet-api-3.1.0.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jaxb-api-2.2.11.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jcip-annotations-1.0-1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jersey-core-1.19.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jersey-json-1.19.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jersey-server-1.19.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jersey-servlet-1.19.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jettison-1.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jetty-http-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jetty-io-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jetty-security-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jetty-util-ajax-9.3.24.v20180605.jar
:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jsch-0.1.54.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/json-simple-1.1.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/json-smart-2.3.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/jsr311-api-1.1.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/kerb-admin-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/kerb-client-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/kerb-common-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/kerb-core-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/kerb-crypto-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/kerb-identity-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/kerb-server-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/kerb-simplekdc-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/kerb-util-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/kerby-asn1-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/kerby-config-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/kerby-pkix-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/kerby-util-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/kerby-xdr-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/netty-3.10.5.Final.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/netty-all-4.0.52.Final.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/okhttp-2.7.5.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/okio-1.6.0.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/paranamer-2.3.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/re2j-1.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/snappy-java-1.0.5.jar:/usr/hdp/3.1.4.0-315/hado
op-hdfs/lib/stax2-api-3.1.4.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/token-provider-1.0.1.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/woodstox-core-5.0.3.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/xz-1.0.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/lib/zookeeper-3.4.6.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-3.1.1.3.1.4.0-315-tests.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-client-3.1.1.3.1.4.0-315-tests.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-client-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-client-tests.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-client.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-httpfs-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-httpfs.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-native-client-3.1.1.3.1.4.0-315-tests.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-native-client-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-native-client-tests.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-native-client.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-nfs-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-rbf-3.1.1.3.1.4.0-315-tests.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-rbf-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-rbf-tests.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-rbf.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/usr/hdp/3.1.4.0-315/hadoop-hdfs/.//hadoop-hdfs.jar::/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//aliyun-sdk-oss-2.8.3.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//aws-java-sdk-bundle-1.11.375.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//azure-data-lake-store-sdk-2.3.3.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//azure-keyvault-core-1.0.0.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//azure-st
orage-7.0.0.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//flogger-0.3.1.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//flogger-log4j-backend-0.3.1.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//flogger-system-backend-0.3.1.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//gcs-connector-1.9.10.3.1.4.0-315-shaded.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//google-extensions-0.3.1.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-aliyun-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-aliyun.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-archive-logs-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-archive-logs.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-archives-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-archives.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-aws-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-aws.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-azure-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-azure-datalake-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-azure-datalake.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-azure.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-datajoin-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-distcp-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-extras-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-extras.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-fs2img-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-fs2img.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-gridmix-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-kafka-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/h
adoop-mapreduce/.//hadoop-kafka.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-app-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-common-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-core-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-3.1.1.3.1.4.0-315-tests.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-uploader-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-client-uploader.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-examples-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-openstack-3.1.1.3.1.4.0-315.
jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-resourceestimator-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-resourceestimator.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-rumen-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-sls-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-sls.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-streaming-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//jdom-1.1.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//kafka-clients-0.8.2.1.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//lz4-1.2.0.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//ojalgo-43.0.jar:/usr/hdp/3.1.4.0-315/hadoop-mapreduce/.//wildfly-openssl-1.0.4.Final.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/./:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/HikariCP-java7-2.4.12.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/bcpkix-jdk15on-1.60.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/bcprov-jdk15on-1.60.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/dnsjava-2.1.7.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/ehcache-3.3.1.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/fst-2.50.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/guice-4.0.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/guice-servlet-4.0.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/jackson-jaxrs-base-2.9.9.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/jackson-jaxrs-json-provider-2.9.9.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/jackson-module-jaxb-annotations-2.9.9.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/java-util-1.9.0.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/javax.inject-1.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/jersey-client-1.19.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/jersey-g
uice-1.19.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/json-io-2.5.1.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/metrics-core-3.2.4.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/objenesis-1.0.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/snakeyaml-1.16.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/lib/swagger-annotations-1.5.4.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-api-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-client-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-common-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-registry-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-server-common-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-server-nodemanager-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-
server-router-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-server-router.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-server-tests-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-server-web-proxy-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-services-api-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-services-api.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-services-core-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/hadoop-yarn/.//hadoop-yarn-services-core.jar:/usr/hdp/3.1.4.0-315/tez/hadoop-shim-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/hadoop-shim-2.8-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-api-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-common-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-dag-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-examples-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-history-parser-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-javadoc-tools-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-job-analyzer-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-mapreduce-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-protobuf-history-plugin-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-runtime-internals-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-runtime-library-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-tests-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-yarn-timeline-cache-plugin-0.9.1.3.1.
4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-yarn-timeline-history-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-yarn-timeline-history-with-acls-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-yarn-timeline-history-with-fs-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/lib/RoaringBitmap-0.4.9.jar:/usr/hdp/3.1.4.0-315/tez/lib/async-http-client-1.9.40.jar:/usr/hdp/3.1.4.0-315/tez/lib/commons-cli-1.2.jar:/usr/hdp/3.1.4.0-315/tez/lib/commons-codec-1.4.jar:/usr/hdp/3.1.4.0-315/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.4.0-315/tez/lib/commons-collections4-4.1.jar:/usr/hdp/3.1.4.0-315/tez/lib/commons-io-2.4.jar:/usr/hdp/3.1.4.0-315/tez/lib/commons-lang-2.6.jar:/usr/hdp/3.1.4.0-315/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.4.0-315/tez/lib/gcs-connector-1.9.10.3.1.4.0-315-shaded.jar:/usr/hdp/3.1.4.0-315/tez/lib/guava-28.0-jre.jar:/usr/hdp/3.1.4.0-315/tez/lib/hadoop-aws-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/lib/hadoop-azure-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/lib/hadoop-azure-datalake-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/lib/hadoop-hdfs-client-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.4.0-315/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.4.0-315/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.4.0-315/tez/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/tez/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.4.0-315/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.4.0-315/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.4.0-315/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.4.0-315/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.4.0-315/tez/conf:/usr/hdp/3.1.4.0-315/tez/conf_llap:/usr/hdp/3.1.4.0-315/tez/doc:/usr/hdp/
3.1.4.0-315/tez/hadoop-shim-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/hadoop-shim-2.8-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/lib:/usr/hdp/3.1.4.0-315/tez/man:/usr/hdp/3.1.4.0-315/tez/tez-api-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-common-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-dag-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-examples-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-history-parser-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-javadoc-tools-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-job-analyzer-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-mapreduce-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-protobuf-history-plugin-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-runtime-internals-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-runtime-library-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-tests-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-yarn-timeline-cache-plugin-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-yarn-timeline-history-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-yarn-timeline-history-with-acls-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/tez-yarn-timeline-history-with-fs-0.9.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/ui:/usr/hdp/3.1.4.0-315/tez/lib/RoaringBitmap-0.4.9.jar:/usr/hdp/3.1.4.0-315/tez/lib/async-http-client-1.9.40.jar:/usr/hdp/3.1.4.0-315/tez/lib/commons-cli-1.2.jar:/usr/hdp/3.1.4.0-315/tez/lib/commons-codec-1.4.jar:/usr/hdp/3.1.4.0-315/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.4.0-315/tez/lib/commons-collections4-4.1.jar:/usr/hdp/3.1.4.0-315/tez/lib/commons-io-2.4.jar:/usr/hdp/3.1.4.0-315/tez/lib/commons-lang-2.6.jar:/usr/hdp/3.1.4.0-315/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.4.0-315/tez/lib/gcs-connector-1.9.10.3.1.4.0-315-shaded.jar:/usr/hdp/3.1.4.0-315/tez/lib/guava-28.0-jre.jar:/usr/hdp/3.1.4.0-315/tez/lib/hadoop-aws-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/lib/hadoop-azure-3.1.1.3.1.4.0-315.jar:/usr/hdp
/3.1.4.0-315/tez/lib/hadoop-azure-datalake-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/lib/hadoop-hdfs-client-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.4.0-315.jar:/usr/hdp/3.1.4.0-315/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.4.0-315/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.4.0-315/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.4.0-315/tez/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/tez/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.4.0-315/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.4.0-315/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.4.0-315/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.4.0-315/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.4.0-315/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.4.0-315/tez/lib/tez.tar.gz STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r 58d0fd3d8ce58b10149da3c717c45e5e57a60d14; compiled by 'jenkins' on 2019-08-23T05:15Z STARTUP_MSG: java = 1.8.0_222-ea ************************************************************/ 2020-08-24 13:16:27,864 INFO dns.PrivilegedRegistryDNSStarter (LogAdapter.java:info(51)) - registered UNIX signal handlers for [TERM, HUP, INT] 2020-08-24 13:16:28,039 INFO dns.RegistryDNS (RegistryDNS.java:initializeChannels(195)) - Opening TCP and UDP channels on /0.0.0.0 port 53 2020-08-24 13:16:28,047 ERROR dns.PrivilegedRegistryDNSStarter (PrivilegedRegistryDNSStarter.java:init(61)) - Error initializing Registry DNS java.net.BindException: Problem binding to [namenode.optix.com:53] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:736) at org.apache.hadoop.registry.server.dns.RegistryDNS.openUDPChannel(RegistryDNS.java:1016) at org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOUDP(RegistryDNS.java:925) at org.apache.hadoop.registry.server.dns.RegistryDNS.initializeChannels(RegistryDNS.java:196) at org.apache.hadoop.registry.server.dns.PrivilegedRegistryDNSStarter.init(PrivilegedRegistryDNSStarter.java:59) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:433) at sun.nio.ch.DatagramChannelImpl.bind(DatagramChannelImpl.java:691) at sun.nio.ch.DatagramSocketAdaptor.bind(DatagramSocketAdaptor.java:91) at org.apache.hadoop.registry.server.dns.RegistryDNS.openUDPChannel(RegistryDNS.java:1014) ... 
8 more ==> /var/log/hadoop-yarn/yarn/privileged-root-registrydns-namenode.optix.com.out.3 <== core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 91623 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 32768 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 65536 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited ==> /var/log/hadoop-yarn/yarn/privileged-root-registrydns-namenode.optix.com.out.2 <== core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 91623 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 32768 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 65536 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited ==> /var/log/hadoop-yarn/yarn/privileged-root-registrydns-namenode.optix.com.out.1 <== core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 91623 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 32768 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 65536 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited ==> /var/log/hadoop-yarn/yarn/privileged-root-registrydns-namenode.optix.com.out <== core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 
91623 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 32768 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 65536 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited ==> /var/log/hadoop-yarn/yarn/hadoop-yarn-root-registrydns-namenode.optix.com.out.3 <== ==> /var/log/hadoop-yarn/yarn/hadoop-yarn-root-registrydns-namenode.optix.com.out.2 <== ==> /var/log/hadoop-yarn/yarn/hadoop-yarn-root-registrydns-namenode.optix.com.out.1 <== ==> /var/log/hadoop-yarn/yarn/privileged-root-registrydns-namenode.optix.com.err.3 <== java.net.BindException: Problem binding to [namenode.optix.com:53] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:736) at org.apache.hadoop.registry.server.dns.RegistryDNS.openUDPChannel(RegistryDNS.java:1016) at org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOUDP(RegistryDNS.java:925) at org.apache.hadoop.registry.server.dns.RegistryDNS.initializeChannels(RegistryDNS.java:196) at org.apache.hadoop.registry.server.dns.PrivilegedRegistryDNSStarter.init(PrivilegedRegistryDNSStarter.java:59) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:433) at sun.nio.ch.DatagramChannelImpl.bind(DatagramChannelImpl.java:691) at sun.nio.ch.DatagramSocketAdaptor.bind(DatagramSocketAdaptor.java:91) at org.apache.hadoop.registry.server.dns.RegistryDNS.openUDPChannel(RegistryDNS.java:1014) ... 8 more Cannot load daemon Service exit with a return value of 3 ==> /var/log/hadoop-yarn/yarn/privileged-root-registrydns-namenode.optix.com.err.2 <== java.net.BindException: Problem binding to [namenode.optix.com:53] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:736) at org.apache.hadoop.registry.server.dns.RegistryDNS.openUDPChannel(RegistryDNS.java:1016) at org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOUDP(RegistryDNS.java:925) at org.apache.hadoop.registry.server.dns.RegistryDNS.initializeChannels(RegistryDNS.java:196) at org.apache.hadoop.registry.server.dns.PrivilegedRegistryDNSStarter.init(PrivilegedRegistryDNSStarter.java:59) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.DatagramChannelImpl.bind(DatagramChannelImpl.java:691)
        at sun.nio.ch.DatagramSocketAdaptor.bind(DatagramSocketAdaptor.java:91)
        at org.apache.hadoop.registry.server.dns.RegistryDNS.openUDPChannel(RegistryDNS.java:1014)
        ... 8 more
Cannot load daemon
Service exit with a return value of 3

==> /var/log/hadoop-yarn/yarn/privileged-root-registrydns-namenode.optix.com.err.1 <==
java.net.BindException: Problem binding to [namenode.optix.com:53] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:736)
        at org.apache.hadoop.registry.server.dns.RegistryDNS.openUDPChannel(RegistryDNS.java:1016)
        at org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOUDP(RegistryDNS.java:925)
        at org.apache.hadoop.registry.server.dns.RegistryDNS.initializeChannels(RegistryDNS.java:196)
        at org.apache.hadoop.registry.server.dns.PrivilegedRegistryDNSStarter.init(PrivilegedRegistryDNSStarter.java:59)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
Caused by: java.net.BindException: Address already in use
        [same "Caused by" frames as above, ending in "... 8 more"]
Cannot load daemon
Service exit with a return value of 3

==> /var/log/hadoop-yarn/yarn/hadoop-yarn-root-registrydns-namenode.optix.com.out <==

==> /var/log/hadoop-yarn/yarn/privileged-root-registrydns-namenode.optix.com.err <==
java.net.BindException: Problem binding to [namenode.optix.com:53] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
        [same stack trace and "Caused by" frames as above]
Cannot load daemon
Service exit with a return value of 3
Command failed after 1 tries
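The repeated BindException on namenode.optix.com:53 means some other process already holds UDP port 53 on that host (on RHEL this is commonly dnsmasq, named, or systemd-resolved); freeing that port, or moving the Registry DNS service to a different port via `hadoop.registry.dns.bind-port` in the YARN configs, should let the daemon start. As a sanity check, the same "Address already in use" condition can be detected with a small standalone probe. This is a sketch of my own, not part of any Hadoop API; the class name `UdpPortProbe` is hypothetical, and binding port 53 requires root:

```java
import java.net.BindException;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;

public class UdpPortProbe {
    // Returns true if the UDP port can be bound (i.e. it is free),
    // false if another process already holds it.
    static boolean isUdpPortFree(String host, int port) {
        try (DatagramSocket s = new DatagramSocket(null)) {
            s.setReuseAddress(false);
            s.bind(new InetSocketAddress(host, port));
            return true;
        } catch (BindException e) {
            // "Address already in use" -- the same error RegistryDNS hit
            return false;
        } catch (Exception e) {
            // Treat other failures (e.g. missing privileges) as "not free"
            return false;
        }
    }

    public static void main(String[] args) {
        // Run as the same user RegistryDNS runs as; port 53 usually needs root.
        System.out.println("UDP 53 free: " + isUdpPortFree("0.0.0.0", 53));
    }
}
```

If the probe reports the port is taken, `lsof` or `ss` on the host will identify the owning process.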
08-19-2020
11:07 PM
Trust you are doing fine!! We have an Ambari cluster (ambari-2.5.1.0, HDP-2.6.4.0, HDP-UTILS-1.1.0.21) on RHEL 7.7. It is a four-node cluster. We have noticed that the HiveServer2 authentication setting does not list the NOSASL option. Can you please advise on this?
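Even when the Ambari UI dropdown does not expose NOSASL, the underlying Hive property can usually still be set by hand through a custom hive-site entry. A sketch, assuming the standard `hive.server2.authentication` property; verify the accepted values against your HDP version before applying:

```xml
<!-- hive-site.xml (in Ambari: Hive > Configs > Custom hive-site) -->
<property>
  <name>hive.server2.authentication</name>
  <value>NOSASL</value>
</property>
```

With this setting, JDBC clients must also add `;auth=noSasl` to their connection URL, or connections will hang.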
08-19-2020
10:53 PM
Hello Asish, Thanks for your help. It's working now with concurrent users. Thanks, KK
08-10-2020
09:26 AM
We have a Java-based user interface that connects to Hive as its data source. Please find the product versions below:

apache-hive-2.3.7
db-derby-10.14.2.0
hadoop-2.10.0

The application works fine when a single user is logged in and reads data from Hive, but it fails when concurrent users log in to the same application and access Hive data.

Further investigation: the authentication mechanism of Hive is NOSASL. We wrote a small multi-threaded Java application for debugging. Please find the Java code and log for your analysis. Thanks in advance for your help!!

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class MapReduceThreadClass implements Runnable {
    String name;
    Thread t;
    String query;

    MapReduceThreadClass(String thread, String query) {
        name = thread;
        this.query = query;
        t = new Thread(this, name);
        System.out.println("New thread: " + t);
        t.start();
    }

    @Override
    public void run() {
        Connection conn = null;
        try {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            String url = "jdbc:hive2://XX.XX.XX.XX:10000/test1;auth=noSasl";
            conn = DriverManager.getConnection(url);
            PreparedStatement ps = conn.prepareStatement(query);
            System.out.println("thread " + Thread.currentThread().getName() + " before query execution");
            ResultSet rs = ps.executeQuery();
            if (rs.next()) {
                System.out.println("thread " + Thread.currentThread().getName() + "::" + rs.getString(1));
                System.out.println("thread " + Thread.currentThread().getName() + "::" + "Got it!");
            }
        } catch (Exception e) {
            e.printStackTrace();
            throw new Error("Problem", e);
        } finally {
            try {
                if (conn != null) {
                    conn.close();
                }
            } catch (SQLException ex) {
                System.out.println(ex.getMessage());
            }
        }
    }
}

----

public class MapReduceMain {
    public static void main(String[] args) {
        for (int i = 0; i < Integer.parseInt(args[0]); i++) {
            new MapReduceThreadClass(i + 1 + "", args[1]);
        }
    }
}
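The per-thread Connection pattern in the code above is correct (JDBC connections must not be shared across threads), but the hand-rolled Thread subclass makes it hard to wait for all workers. A sketch of the same load test on an ExecutorService; the class name `ConcurrentRunner` is mine, and the Hive connection work from `MapReduceThreadClass.run()` would go inside the Runnable:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentRunner {
    // Submits `task` to a pool of `threads` workers and waits for all of them.
    static void runConcurrently(int threads, Runnable task) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(task);
        }
        pool.shutdown();                         // stop accepting new tasks
        pool.awaitTermination(5, TimeUnit.MINUTES); // let submitted tasks drain
    }

    public static void main(String[] args) throws InterruptedException {
        int threads = Integer.parseInt(args[0]);
        String query = args[1];
        // Placeholder task: in the real test each task opens its own Hive
        // Connection and runs `query`, as in MapReduceThreadClass.run().
        runConcurrently(threads, () -> System.out.println(
                Thread.currentThread().getName() + " would run: " + query));
    }
}
```

Waiting on the pool lets the test distinguish "some threads failed" from "the JVM exited before slow threads finished", which matters when debugging concurrent HiveServer2 sessions.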
Labels:
- Apache Hadoop
- Apache Hive
- MapReduce