Member since 07-31-2018
11 Posts
0 Kudos Received
1 Solution
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 13494 | 01-31-2020 10:51 AM
01-31-2020
10:51 AM
Hi, yes, it is resolved. Thank you.
10-24-2018
11:34 AM
Hi Team, I am getting the following error while testing the DB connection in the Ambari Server console while creating an HDP cluster on a single node.
stderr:
2018-10-24 13:51:34,463 - Error downloading DBConnectionVerification.jar from Ambari Server resources. Check network access to Ambari Server.
HTTP Error 503: Service Unavailable
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 414, in execute_db_connection_check
download_file(check_db_connection_url, check_db_connection_path)
File "/usr/lib/ambari-agent/lib/ambari_commons/inet_utils.py", line 68, in download_file
force_download_file(link, destination, chunk_size, progress_func = progress_func)
File "/usr/lib/ambari-agent/lib/ambari_commons/inet_utils.py", line 171, in force_download_file
response = urllib2.urlopen(request)
File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib64/python2.7/urllib2.py", line 437, in open
response = meth(req, response)
File "/usr/lib64/python2.7/urllib2.py", line 550, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib64/python2.7/urllib2.py", line 475, in error
return self._call_chain(*args)
File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/usr/lib64/python2.7/urllib2.py", line 558, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 503: Service Unavailable
2018-10-24 13:51:34,466 - Check db_connection_check was unsuccessful. Exit code: 1. Message: Error downloading DBConnectionVerification.jar from Ambari Server resources. Check network access to Ambari Server.
HTTP Error 503: Service Unavailable
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 546, in <module>
CheckHost().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 351, in execute
method(env)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 207, in actionexecute
raise Fail(error_message)
resource_management.core.exceptions.Fail: Check db_connection_check was unsuccessful. Exit code: 1. Message: Error downloading DBConnectionVerification.jar from Ambari Server resources. Check network access to Ambari Server.
HTTP Error 503: Service Unavailable
stdout:
2018-10-24 13:51:34,437 - Host checks started.
2018-10-24 13:51:34,438 - Check execute list: db_connection_check
2018-10-24 13:51:34,438 - DB connection check started.
2018-10-24 13:51:34,463 - Error downloading DBConnectionVerification.jar from Ambari Server resources. Check network access to Ambari Server.
HTTP Error 503: Service Unavailable
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 414, in execute_db_connection_check
download_file(check_db_connection_url, check_db_connection_path)
File "/usr/lib/ambari-agent/lib/ambari_commons/inet_utils.py", line 68, in download_file
force_download_file(link, destination, chunk_size, progress_func = progress_func)
File "/usr/lib/ambari-agent/lib/ambari_commons/inet_utils.py", line 171, in force_download_file
response = urllib2.urlopen(request)
File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib64/python2.7/urllib2.py", line 437, in open
response = meth(req, response)
File "/usr/lib64/python2.7/urllib2.py", line 550, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib64/python2.7/urllib2.py", line 475, in error
return self._call_chain(*args)
File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/usr/lib64/python2.7/urllib2.py", line 558, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 503: Service Unavailable
2018-10-24 13:51:34,466 - Host checks completed.
2018-10-24 13:51:34,466 - Check db_connection_check was unsuccessful. Exit code: 1. Message: Error downloading DBConnectionVerification.jar from Ambari Server resources. Check network access to Ambari Server.
HTTP Error 503: Service Unavailable
Command failed after 1 tries
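The 503 indicates the Ambari Server process was reachable but not serving its resources directory at the time of the check. A minimal way to probe the endpoint the agent downloads from, sketched here under the assumption of the default port 8080 (replace <ambari-server-host> with the actual server hostname):

# Check whether the Ambari Server resources endpoint responds
curl -I http://<ambari-server-host>:8080/resources/DBConnectionVerification.jar

# If this also returns 503, inspect and restart the server process
ambari-server status
ambari-server restart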
Labels:
- Apache Ambari
08-20-2018
07:00 AM
Thanks. It has been removed as of Ambari 2.7; refer to https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/bk_ambari-upgrade/content/bhvr_changes_upgrade_hdp3_amb27.html
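For reference, a quick way to see which views an Ambari instance still exposes is its views REST endpoint (a sketch; hostname and the default admin credentials are placeholders):

# List all view definitions known to this Ambari Server
curl -s -u admin:admin http://<ambari-server-host>:8080/api/v1/views

On Ambari 2.7 and later the Pig view no longer appears in this list.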
08-20-2018
06:01 AM
Hi Team, the Pig view is not listed in the list of available views in HDP 3.0. Please help.
Labels:
- Apache Pig
08-02-2018
12:26 AM
Hi, I found the following in the TServer log:
stderr: /var/lib/ambari-agent/data/errors-862.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/ACCUMULO/package/scripts/accumulo_master.py", line 24, in <module>
AccumuloScript('master').execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/ACCUMULO/package/scripts/accumulo_script.py", line 60, in start
accumulo_service( self.component, action = 'start')
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/ACCUMULO/package/scripts/accumulo_service.py", line 46, in accumulo_service
user=params.accumulo_user
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
returns=self.resource.returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ACCUMULO_CONF_DIR=/usr/hdp/current/accumulo-master/conf/server /usr/hdp/current/accumulo-client/bin/accumulo org.apache.accumulo.master.state.SetGoalState NORMAL' returned 1.
2018-08-01 19:53:18,132 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on hard system reset or power loss
2018-08-01 19:53:18,136 [server.Accumulo] INFO : Attempting to talk to zookeeper
2018-08-01 19:53:18,275 [server.Accumulo] INFO : ZooKeeper connected and initialized, attempting to talk to HDFS
2018-08-01 19:53:18,402 [server.Accumulo] INFO : Connected to HDFS
2018-08-01 19:53:18,456 [zookeeper.ZooUtil] ERROR: multiple potential instances in hdfs://dpysydirbm01.sl.bluecloud.ibm.com:8020/apps/accumulo/data/instance_id
2018-08-01 19:53:18,457 [start.Main] ERROR: Thread 'org.apache.accumulo.master.state.SetGoalState' died.
java.lang.RuntimeException: Accumulo found multiple possible instance ids in hdfs://dpysydirbm01.sl.bluecloud.ibm.com:8020/apps/accumulo/data/instance_id
at org.apache.accumulo.core.zookeeper.ZooUtil.getInstanceIDFromHdfs(ZooUtil.java:69)
at org.apache.accumulo.core.zookeeper.ZooUtil.getInstanceIDFromHdfs(ZooUtil.java:51)
at org.apache.accumulo.server.client.HdfsZooInstance._getInstanceID(HdfsZooInstance.java:137)
at org.apache.accumulo.server.client.HdfsZooInstance.getInstanceID(HdfsZooInstance.java:121)
at org.apache.accumulo.core.zookeeper.ZooUtil.getRoot(ZooUtil.java:40)
at org.apache.accumulo.master.state.SetGoalState.main(SetGoalState.java:47)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.accumulo.start.Main$2.run(Main.java:130)
at java.lang.Thread.run(Thread.java:745)
stdout: /var/lib/ambari-agent/data/output-862.txt
2018-08-01 19:53:11,283 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.0.0-1634 -> 3.0.0.0-1634
2018-08-01 19:53:11,311 - Using hadoop conf dir: /usr/hdp/3.0.0.0-1634/hadoop/conf
2018-08-01 19:53:11,602 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.0.0-1634 -> 3.0.0.0-1634
2018-08-01 19:53:11,609 - Using hadoop conf dir: /usr/hdp/3.0.0.0-1634/hadoop/conf
2018-08-01 19:53:11,612 - Group['kms'] {}
2018-08-01 19:53:11,614 - Group['livy'] {}
2018-08-01 19:53:11,614 - Group['spark'] {}
2018-08-01 19:53:11,614 - Group['ranger'] {}
2018-08-01 19:53:11,615 - Group['hdfs'] {}
2018-08-01 19:53:11,615 - Group['zeppelin'] {}
2018-08-01 19:53:11,615 - Group['hadoop'] {}
2018-08-01 19:53:11,615 - Group['users'] {}
2018-08-01 19:53:11,616 - Group['knox'] {}
2018-08-01 19:53:11,617 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-01 19:53:11,618 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-01 19:53:11,620 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-01 19:53:11,622 - User['superset'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-01 19:53:11,623 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-01 19:53:11,625 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-01 19:53:11,627 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None}
2018-08-01 19:53:11,628 - User['kms'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['kms', 'hadoop'], 'uid': None}
2018-08-01 19:53:11,630 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-01 19:53:11,631 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None}
2018-08-01 19:53:11,633 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-01 19:53:11,635 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-01 19:53:11,636 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-01 19:53:11,638 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2018-08-01 19:53:11,640 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2018-08-01 19:53:11,641 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop'], 'uid': None}
2018-08-01 19:53:11,643 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None}
2018-08-01 19:53:11,644 - User['druid'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-01 19:53:11,646 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2018-08-01 19:53:11,648 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-01 19:53:11,649 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2018-08-01 19:53:11,651 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-01 19:53:11,653 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-01 19:53:11,654 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-01 19:53:11,656 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'knox'], 'uid': None}
2018-08-01 19:53:11,657 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-08-01 19:53:11,661 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-08-01 19:53:11,670 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-08-01 19:53:11,670 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2018-08-01 19:53:11,671 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-08-01 19:53:11,674 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-08-01 19:53:11,675 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2018-08-01 19:53:11,687 - call returned (0, '1013')
2018-08-01 19:53:11,687 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1013'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2018-08-01 19:53:11,695 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1013'] due to not_if
2018-08-01 19:53:11,695 - Group['hdfs'] {}
2018-08-01 19:53:11,696 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2018-08-01 19:53:11,697 - FS Type: HDFS
2018-08-01 19:53:11,697 - Directory['/etc/hadoop'] {'mode': 0755}
2018-08-01 19:53:11,718 - File['/usr/hdp/3.0.0.0-1634/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-08-01 19:53:11,719 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-08-01 19:53:11,741 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2018-08-01 19:53:11,755 - Skipping Execute[('setenforce', '0')] due to only_if
2018-08-01 19:53:11,756 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2018-08-01 19:53:11,759 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2018-08-01 19:53:11,760 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'}
2018-08-01 19:53:11,760 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2018-08-01 19:53:11,765 - File['/usr/hdp/3.0.0.0-1634/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2018-08-01 19:53:11,768 - File['/usr/hdp/3.0.0.0-1634/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2018-08-01 19:53:11,776 - File['/usr/hdp/3.0.0.0-1634/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2018-08-01 19:53:11,789 - File['/usr/hdp/3.0.0.0-1634/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-08-01 19:53:11,790 - File['/usr/hdp/3.0.0.0-1634/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2018-08-01 19:53:11,792 - File['/usr/hdp/3.0.0.0-1634/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2018-08-01 19:53:11,797 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2018-08-01 19:53:11,803 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2018-08-01 19:53:11,809 - Skipping unlimited key JCE policy check and setup since it is not required
2018-08-01 19:53:12,251 - Using hadoop conf dir: /usr/hdp/3.0.0.0-1634/hadoop/conf
2018-08-01 19:53:12,253 - Directory['/usr/hdp/current/accumulo-master/conf/server'] {'owner': 'accumulo', 'group': 'hadoop', 'create_parents': True, 'mode': 0700}
2018-08-01 19:53:12,256 - XmlConfig['accumulo-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/accumulo-master/conf/server', 'mode': 0600, 'configuration_attributes': {}, 'owner': 'accumulo', 'configurations': ...}
2018-08-01 19:53:12,270 - Generating config: /usr/hdp/current/accumulo-master/conf/server/accumulo-site.xml
2018-08-01 19:53:12,270 - File['/usr/hdp/current/accumulo-master/conf/server/accumulo-site.xml'] {'owner': 'accumulo', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600, 'encoding': 'UTF-8'}
2018-08-01 19:53:12,286 - Directory['/var/run/accumulo'] {'owner': 'accumulo', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2018-08-01 19:53:12,287 - Directory['/var/log/accumulo'] {'owner': 'accumulo', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2018-08-01 19:53:12,294 - File['/usr/hdp/current/accumulo-master/conf/server/accumulo-env.sh'] {'content': InlineTemplate(...), 'owner': 'accumulo', 'group': 'hadoop', 'mode': 0644}
2018-08-01 19:53:12,295 - Writing File['/usr/hdp/current/accumulo-master/conf/server/accumulo-env.sh'] because contents don't match
2018-08-01 19:53:12,295 - PropertiesFile['/usr/hdp/current/accumulo-master/conf/server/client.conf'] {'owner': 'accumulo', 'group': 'hadoop', 'properties': {'instance.zookeeper.host': u'dpysydirbd201.sl.bluecloud.ibm.com:2181,dpysydirbd301.sl.bluecloud.ibm.com:2181,dpysydirbd101.sl.bluecloud.ibm.com:2181,dpysydirbm01.sl.bluecloud.ibm.com:2181', 'instance.name': u'hdp-accumulo-instance', 'instance.zookeeper.timeout': u'30s'}}
2018-08-01 19:53:12,300 - Generating properties file: /usr/hdp/current/accumulo-master/conf/server/client.conf
2018-08-01 19:53:12,300 - File['/usr/hdp/current/accumulo-master/conf/server/client.conf'] {'owner': 'accumulo', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-08-01 19:53:12,303 - Writing File['/usr/hdp/current/accumulo-master/conf/server/client.conf'] because contents don't match
2018-08-01 19:53:12,304 - File['/usr/hdp/current/accumulo-master/conf/server/log4j.properties'] {'content': ..., 'owner': 'accumulo', 'group': 'hadoop', 'mode': 0644}
2018-08-01 19:53:12,305 - TemplateConfig['/usr/hdp/current/accumulo-master/conf/server/auditLog.xml'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2018-08-01 19:53:12,308 - File['/usr/hdp/current/accumulo-master/conf/server/auditLog.xml'] {'content': Template('auditLog.xml.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2018-08-01 19:53:12,309 - TemplateConfig['/usr/hdp/current/accumulo-master/conf/server/generic_logger.xml'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2018-08-01 19:53:12,313 - File['/usr/hdp/current/accumulo-master/conf/server/generic_logger.xml'] {'content': Template('generic_logger.xml.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2018-08-01 19:53:12,313 - TemplateConfig['/usr/hdp/current/accumulo-master/conf/server/monitor_logger.xml'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2018-08-01 19:53:12,316 - File['/usr/hdp/current/accumulo-master/conf/server/monitor_logger.xml'] {'content': Template('monitor_logger.xml.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2018-08-01 19:53:12,317 - File['/usr/hdp/current/accumulo-master/conf/server/accumulo-metrics.xml'] {'content': StaticFile('accumulo-metrics.xml'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': 0644}
2018-08-01 19:53:12,319 - TemplateConfig['/usr/hdp/current/accumulo-master/conf/server/tracers'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2018-08-01 19:53:12,321 - File['/usr/hdp/current/accumulo-master/conf/server/tracers'] {'content': Template('tracers.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2018-08-01 19:53:12,322 - TemplateConfig['/usr/hdp/current/accumulo-master/conf/server/gc'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2018-08-01 19:53:12,324 - File['/usr/hdp/current/accumulo-master/conf/server/gc'] {'content': Template('gc.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2018-08-01 19:53:12,324 - TemplateConfig['/usr/hdp/current/accumulo-master/conf/server/monitor'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2018-08-01 19:53:12,326 - File['/usr/hdp/current/accumulo-master/conf/server/monitor'] {'content': Template('monitor.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2018-08-01 19:53:12,327 - TemplateConfig['/usr/hdp/current/accumulo-master/conf/server/slaves'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2018-08-01 19:53:12,329 - File['/usr/hdp/current/accumulo-master/conf/server/slaves'] {'content': Template('slaves.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2018-08-01 19:53:12,330 - TemplateConfig['/usr/hdp/current/accumulo-master/conf/server/masters'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2018-08-01 19:53:12,332 - File['/usr/hdp/current/accumulo-master/conf/server/masters'] {'content': Template('masters.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2018-08-01 19:53:12,333 - TemplateConfig['/usr/hdp/current/accumulo-master/conf/server/hadoop-metrics2-accumulo.properties'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2018-08-01 19:53:12,340 - File['/usr/hdp/current/accumulo-master/conf/server/hadoop-metrics2-accumulo.properties'] {'content': Template('hadoop-metrics2-accumulo.properties.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2018-08-01 19:53:12,341 - HdfsResource['/user/accumulo'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.0.0-1634/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://dpysydirbm01.sl.bluecloud.ibm.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'accumulo', 'hadoop_conf_dir': '/usr/hdp/3.0.0.0-1634/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0700}
2018-08-01 19:53:12,344 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://dpysydirbm01.sl.bluecloud.ibm.com:50070/webhdfs/v1/user/accumulo?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpZXhbxl 2>/tmp/tmpgmtlTD''] {'logoutput': None, 'quiet': False}
2018-08-01 19:53:12,434 - call returned (0, '')
2018-08-01 19:53:12,434 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16392,"group":"hdfs","length":0,"modificationTime":1532938319785,"owner":"accumulo","pathSuffix":"","permission":"700","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2018-08-01 19:53:12,436 - HdfsResource['hdfs://dpysydirbm01.sl.bluecloud.ibm.com:8020/apps/accumulo'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.0.0-1634/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://dpysydirbm01.sl.bluecloud.ibm.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'accumulo', 'hadoop_conf_dir': '/usr/hdp/3.0.0.0-1634/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0700}
2018-08-01 19:53:12,437 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://dpysydirbm01.sl.bluecloud.ibm.com:50070/webhdfs/v1/apps/accumulo?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpYswyLG 2>/tmp/tmpJXgUKT''] {'logoutput': None, 'quiet': False}
2018-08-01 19:53:12,521 - call returned (0, '')
2018-08-01 19:53:12,521 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16394,"group":"hdfs","length":0,"modificationTime":1532938325547,"owner":"accumulo","pathSuffix":"","permission":"700","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2018-08-01 19:53:12,523 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.0.0.0-1634/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://dpysydirbm01.sl.bluecloud.ibm.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/3.0.0.0-1634/hadoop/conf', 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']}
2018-08-01 19:53:12,525 - File['/var/lib/ambari-agent/tmp/pass'] {'content': InlineTemplate(...), 'owner': 'accumulo', 'group': 'hadoop', 'mode': 0600}
2018-08-01 19:53:12,526 - Writing File['/var/lib/ambari-agent/tmp/pass'] because contents don't match
2018-08-01 19:53:12,527 - Execute['cat /var/lib/ambari-agent/tmp/pass | ACCUMULO_CONF_DIR=/usr/hdp/current/accumulo-master/conf/server /usr/hdp/current/accumulo-client/bin/accumulo init --instance-name hdp-accumulo-instance --clear-instance-name >/var/log/accumulo/accumulo-init.out 2>/var/log/accumulo/accumulo-init.err'] {'logoutput': True, 'not_if': "ambari-sudo.sh su accumulo -l -s /bin/bash -c ' /usr/hdp/3.0.0.0-1634/hadoop/bin/hadoop --config /usr/hdp/3.0.0.0-1634/hadoop/conf fs -stat hdfs://dpysydirbm01.sl.bluecloud.ibm.com:8020/apps/accumulo/data'", 'user': 'accumulo'}
2018-08-01 19:53:15,471 - Skipping Execute['cat /var/lib/ambari-agent/tmp/pass | ACCUMULO_CONF_DIR=/usr/hdp/current/accumulo-master/conf/server /usr/hdp/current/accumulo-client/bin/accumulo init --instance-name hdp-accumulo-instance --clear-instance-name >/var/log/accumulo/accumulo-init.out 2>/var/log/accumulo/accumulo-init.err'] due to not_if
2018-08-01 19:53:15,471 - File['/var/lib/ambari-agent/tmp/pass'] {'action': ['delete']}
2018-08-01 19:53:15,472 - Deleting File['/var/lib/ambari-agent/tmp/pass']
2018-08-01 19:53:15,473 - Directory['/home/accumulo'] {'owner': 'accumulo', 'group': 'hadoop', 'recursive_ownership': True}
2018-08-01 19:53:15,475 - Execute['ACCUMULO_CONF_DIR=/usr/hdp/current/accumulo-master/conf/server /usr/hdp/current/accumulo-client/bin/accumulo org.apache.accumulo.master.state.SetGoalState NORMAL'] {'not_if': "ambari-sudo.sh su accumulo -l -s /bin/bash -c 'ls /var/run/accumulo/accumulo-accumulo-master.pid >/dev/null 2>&1 && ps `cat /var/run/accumulo/accumulo-accumulo-master.pid` >/dev/null 2>&1'", 'user': 'accumulo'}
Command failed after 1 tries
" Please help me here.
08-01-2018
10:51 AM
Connection failed: [Errno 111] Connection refused to server:9997
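A quick way to confirm whether anything is listening on that port (9997 is Accumulo's default TServer client port; <tserver-host> is a placeholder for the host in the error):

# Test TCP connectivity to the tserver port
nc -zv <tserver-host> 9997

# If the connection is refused, check whether the tserver process is up
ps aux | grep -i [t]server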
Labels:
- Apache Accumulo
07-31-2018
01:58 PM
0: jdbc:hive2://dpysydirbd201.sl.bluecloud.ib> create database test;
INFO : Compiling command(queryId=hive_20180731025445_cecd377d-4927-4566-8c2b-6305ee07d798): create database test
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=hive_20180731025445_cecd377d-4927-4566-8c2b-6305ee07d798); Time taken: 0.021 seconds
INFO : Executing command(queryId=hive_20180731025445_cecd377d-4927-4566-8c2b-6305ee07d798): create database test
INFO : Starting task [Stage-0:DDL] in serial mode
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.security.AccessControlException: Permission denied: user=hive, access=WRITE, inode="/warehouse/tablespace/external/hive":hdfs:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:261)
at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:512)
at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:305)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1850)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1834)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1784)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAccess(FSNamesystem.java:7767)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.checkAccess(NameNodeRpcServer.java:2217)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.checkAccess(ClientNamenodeProtocolServerSideTranslatorPB.java:1659)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
)
INFO : Completed executing command(queryId=hive_20180731025445_cecd377d-4927-4566-8c2b-6305ee07d798); Time taken: 0.019 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.security.AccessControlException: Permission denied: user=hive, access=WRITE, inode="/warehouse/tablespace/external/hive":hdfs:hdfs:drwxr-xr-x ... ) (state=08S01,code=1)
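The trace shows the hive user lacking WRITE on /warehouse/tablespace/external/hive, which is owned hdfs:hdfs with mode drwxr-xr-x. One common remediation, sketched here (check existing Ranger policies first, since RangerHdfsAuthorizer appears in the call path):

# Hand the external warehouse path to the hive user (run as hdfs)
sudo -u hdfs hdfs dfs -chown -R hive:hadoop /warehouse/tablespace/external/hive

Alternatively, a Ranger HDFS policy granting the hive user write access on that path achieves the same result without changing ownership.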
Labels:
- Hortonworks Data Platform (HDP)