Why does the install of Accumulo not work with HDP Sandbox?

Contributor

I get the following error when trying to install Accumulo on the HDP Sandbox. I need Accumulo in order to run Sqoop.

Does anyone know how to fix this Accumulo installation error? It looks like a repository error.

stderr: 
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/accumulo_client.py", line 65, in <module>
    AccumuloClient().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/accumulo_client.py", line 36, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 395, in install_packages
    Package(name)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 45, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
    shell.checked_call(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install 'accumulo_2_3_*'' returned 1. Error: Cannot find a valid baseurl for repo: base
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=6&arch=x86_... error was
14: PYCURL ERROR 6 - "Couldn't resolve host 'mirrorlist.centos.org'"

stdout:
2015-12-15 15:04:12,132 - Group['hadoop'] {}
2015-12-15 15:04:12,133 - Group['users'] {}
2015-12-15 15:04:12,133 - Group['zeppelin'] {}
2015-12-15 15:04:12,134 - Group['knox'] {}
2015-12-15 15:04:12,134 - Group['ranger'] {}
2015-12-15 15:04:12,134 - Group['spark'] {}
2015-12-15 15:04:12,134 - User['oozie'] {'gid': 'hadoop', 'groups': ['users']}
2015-12-15 15:04:12,135 - User['hive'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-15 15:04:12,135 - User['zeppelin'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-15 15:04:12,136 - User['ambari-qa'] {'gid': 'hadoop', 'groups': ['users']}
2015-12-15 15:04:12,136 - User['flume'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-15 15:04:12,137 - User['hdfs'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-15 15:04:12,137 - User['knox'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-15 15:04:12,138 - User['ranger'] {'gid': 'hadoop', 'groups': ['ranger']}
2015-12-15 15:04:12,138 - User['storm'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-15 15:04:12,139 - User['spark'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-15 15:04:12,140 - User['mapred'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-15 15:04:12,140 - User['accumulo'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-15 15:04:12,140 - Adding user User['accumulo']
2015-12-15 15:04:12,227 - User['hbase'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-15 15:04:12,228 - User['tez'] {'gid': 'hadoop', 'groups': ['users']}
2015-12-15 15:04:12,229 - User['zookeeper'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-15 15:04:12,229 - User['kafka'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-15 15:04:12,230 - User['falcon'] {'gid': 'hadoop', 'groups': ['users']}
2015-12-15 15:04:12,230 - User['sqoop'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-15 15:04:12,231 - User['yarn'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-15 15:04:12,231 - User['hcat'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-15 15:04:12,232 - User['ams'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-15 15:04:12,232 - User['atlas'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-15 15:04:12,233 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-12-15 15:04:12,234 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2015-12-15 15:04:12,238 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2015-12-15 15:04:12,238 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2015-12-15 15:04:12,239 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-12-15 15:04:12,240 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2015-12-15 15:04:12,243 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2015-12-15 15:04:12,244 - Group['hdfs'] {'ignore_failures': False}
2015-12-15 15:04:12,244 - User['hdfs'] {'ignore_failures': False, 'groups': ['hadoop', 'hdfs']}
2015-12-15 15:04:12,245 - Directory['/etc/hadoop'] {'mode': 0755}
2015-12-15 15:04:12,256 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2015-12-15 15:04:12,256 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2015-12-15 15:04:12,267 - Repository['HDP-2.3'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.3.2.0/', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2015-12-15 15:04:12,319 - File['/etc/yum.repos.d/HDP.repo'] {'content': InlineTemplate(...)}
2015-12-15 15:04:12,332 - Repository['HDP-UTILS-1.1.0.20'] {'base_url': 'http://s3.amazonaws.com/dev.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2015-12-15 15:04:12,334 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': InlineTemplate(...)}
2015-12-15 15:04:12,346 - Package['unzip'] {}
2015-12-15 15:04:12,482 - Skipping installation of existing package unzip
2015-12-15 15:04:12,482 - Package['curl'] {}
2015-12-15 15:04:12,553 - Skipping installation of existing package curl
2015-12-15 15:04:12,553 - Package['hdp-select'] {}
2015-12-15 15:04:12,623 - Skipping installation of existing package hdp-select
2015-12-15 15:04:12,755 - Package['accumulo_2_3_*'] {}
2015-12-15 15:04:12,890 - Installing package accumulo_2_3_* ('/usr/bin/yum -d 0 -e 0 -y install 'accumulo_2_3_*'')

I think this might be the problem:

14: PYCURL ERROR 6 - "Couldn't resolve host 'mirrorlist.centos.org'"
1 ACCEPTED SOLUTION

Rising Star

It appears you cannot resolve mirrorlist.centos.org via DNS from your virtual machine.

Does the following command return a result?

nslookup mirrorlist.centos.org

If not, I expect you have configured the VM with a Host-Only adapter, which will not allow the VM to access the internet.
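
A few quick checks from inside the VM can separate a DNS failure from a general connectivity failure (the hostnames below are the ones from this thread; 8.8.8.8 is just a well-known public IP used to bypass DNS):

# Does DNS resolution work at all from the VM?
nslookup mirrorlist.centos.org

# Which nameserver(s) is the VM configured to use?
cat /etc/resolv.conf

# Can the VM reach the internet by raw IP, bypassing DNS?
ping -c 3 8.8.8.8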

4 REPLIES

Contributor

It looks like the machine needed a proxy to connect to the internet; configuring the proxy solved the problem. Now I am in a bad situation: Ambari thinks Accumulo is installed, but it really isn't. I hope installing Accumulo by hand will work out ...

yum -d 0 -e 0 -y install 'accumulo_2_3_*'
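
In case it helps anyone else: yum reads its proxy setting from /etc/yum.conf, so a minimal sketch of the fix looks like this (the proxy host and port below are placeholders, not values from this thread):

# Hypothetical proxy address - replace with your network's real proxy host and port
echo 'proxy=http://proxy.example.com:3128' >> /etc/yum.conf

# Then retry the install that Ambari attempted
yum -d 0 -e 0 -y install 'accumulo_2_3_*'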

Contributor

I just found the "Reinstall" button in Ambari, which seemed to help after I broke the first installation. Thanks for your help!

Contributor

Unfortunately, after the install it can't start the Accumulo Master. It fails with the following error.

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/accumulo_master.py", line 24, in <module>
    AccumuloScript('master').execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/accumulo_script.py", line 79, in start
    self.configure(env) # for security
  File "/var/lib/ambari-agent/cache/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/accumulo_script.py", line 73, in configure
    setup_conf_dir(name=self.component)
  File "/var/lib/ambari-agent/cache/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/accumulo_configuration.py", line 167, in setup_conf_dir
    mode=0700
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 394, in action_create_on_execute
    self.action_delayed("create")
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 391, in action_delayed
    self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 244, in action_delayed
    self._assert_valid()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 228, in _assert_valid
    self.target_status = self._get_file_status(target)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 284, in _get_file_status
    list_status = self.util.run_command(target, 'GETFILESTATUS', method='GET', ignore_status_codes=['404'], assertable_result=False)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 202, in run_command
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X GET 'http://sandbox.hortonworks.com:50070/webhdfs/v1/user/accumulo?op=GETFILESTATUS&user.name=hdfs'' returned status_code=503. 
[The response body was a Squid proxy error page; condensed here to its text content:]

ERROR: The requested URL could not be retrieved

The following error was encountered while trying to retrieve the URL:
http://sandbox.hortonworks.com:50070/webhdfs/v1/user/accumulo?

Unable to determine IP address from host name "sandbox.hortonworks.com"

The DNS server returned:
Name Error: The domain name does not exist.

This means that the cache was not able to resolve the hostname presented in the URL. Check if the address is correct.

Your cache administrator is hotline@kps-consulting.com.

Generated Thu, 17 Dec 2015 16:16:52 GMT by kg-ffm-proxy.kps.kps-consulting.com (squid/3.1.19)
stdout:   /var/lib/ambari-agent/data/output-417.txt

2015-12-17 16:16:52,049 - Group['hadoop'] {}
2015-12-17 16:16:52,050 - Group['users'] {}
2015-12-17 16:16:52,050 - Group['zeppelin'] {}
2015-12-17 16:16:52,050 - Group['knox'] {}
2015-12-17 16:16:52,051 - Group['ranger'] {}
2015-12-17 16:16:52,051 - Group['spark'] {}
2015-12-17 16:16:52,051 - User['oozie'] {'gid': 'hadoop', 'groups': ['users']}
2015-12-17 16:16:52,051 - User['hive'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-17 16:16:52,052 - User['zeppelin'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-17 16:16:52,053 - User['ambari-qa'] {'gid': 'hadoop', 'groups': ['users']}
2015-12-17 16:16:52,053 - User['flume'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-17 16:16:52,054 - User['hdfs'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-17 16:16:52,054 - User['knox'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-17 16:16:52,055 - User['ranger'] {'gid': 'hadoop', 'groups': ['ranger']}
2015-12-17 16:16:52,055 - User['storm'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-17 16:16:52,056 - User['spark'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-17 16:16:52,056 - User['mapred'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-17 16:16:52,057 - User['accumulo'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-17 16:16:52,058 - User['hbase'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-17 16:16:52,058 - User['tez'] {'gid': 'hadoop', 'groups': ['users']}
2015-12-17 16:16:52,059 - User['zookeeper'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-17 16:16:52,059 - User['kafka'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-17 16:16:52,060 - User['falcon'] {'gid': 'hadoop', 'groups': ['users']}
2015-12-17 16:16:52,060 - User['sqoop'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-17 16:16:52,060 - User['yarn'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-17 16:16:52,061 - User['hcat'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-17 16:16:52,061 - User['ams'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-17 16:16:52,062 - User['atlas'] {'gid': 'hadoop', 'groups': ['hadoop']}
2015-12-17 16:16:52,063 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-12-17 16:16:52,064 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2015-12-17 16:16:52,068 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2015-12-17 16:16:52,068 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2015-12-17 16:16:52,069 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-12-17 16:16:52,070 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2015-12-17 16:16:52,074 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2015-12-17 16:16:52,074 - Group['hdfs'] {'ignore_failures': False}
2015-12-17 16:16:52,074 - User['hdfs'] {'ignore_failures': False, 'groups': ['hadoop', 'hdfs']}
2015-12-17 16:16:52,075 - Directory['/etc/hadoop'] {'mode': 0755}
2015-12-17 16:16:52,086 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2015-12-17 16:16:52,087 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2015-12-17 16:16:52,097 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2015-12-17 16:16:52,102 - Skipping Execute[('setenforce', '0')] due to not_if
2015-12-17 16:16:52,102 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2015-12-17 16:16:52,104 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2015-12-17 16:16:52,105 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2015-12-17 16:16:52,109 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2015-12-17 16:16:52,110 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2015-12-17 16:16:52,111 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2015-12-17 16:16:52,118 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2015-12-17 16:16:52,118 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2015-12-17 16:16:52,119 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2015-12-17 16:16:52,123 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2015-12-17 16:16:52,126 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2015-12-17 16:16:52,244 - Directory['/usr/hdp/current/accumulo-master/conf'] {'owner': 'accumulo', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
2015-12-17 16:16:52,245 - Directory['/usr/hdp/current/accumulo-master/conf/server'] {'owner': 'accumulo', 'group': 'hadoop', 'recursive': True, 'mode': 0700}
2015-12-17 16:16:52,246 - XmlConfig['accumulo-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/accumulo-master/conf/server', 'mode': 0600, 'configuration_attributes': {}, 'owner': 'accumulo', 'configurations': ...}
2015-12-17 16:16:52,255 - Generating config: /usr/hdp/current/accumulo-master/conf/server/accumulo-site.xml
2015-12-17 16:16:52,255 - File['/usr/hdp/current/accumulo-master/conf/server/accumulo-site.xml'] {'owner': 'accumulo', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600, 'encoding': 'UTF-8'}
2015-12-17 16:16:52,266 - Directory['/var/run/accumulo'] {'owner': 'accumulo', 'group': 'hadoop', 'recursive': True}
2015-12-17 16:16:52,267 - Directory['/var/log/accumulo'] {'owner': 'accumulo', 'group': 'hadoop', 'recursive': True}
2015-12-17 16:16:52,270 - File['/usr/hdp/current/accumulo-master/conf/server/accumulo-env.sh'] {'content': InlineTemplate(...), 'owner': 'accumulo', 'group': 'hadoop', 'mode': 0644}
2015-12-17 16:16:52,270 - PropertiesFile['/usr/hdp/current/accumulo-master/conf/server/client.conf'] {'owner': 'accumulo', 'group': 'hadoop', 'properties': {'instance.zookeeper.host': 'sandbox.hortonworks.com:2181', 'instance.name': 'hdp-accumulo-instance', 'instance.zookeeper.timeout': '30s'}}
2015-12-17 16:16:52,274 - Generating properties file: /usr/hdp/current/accumulo-master/conf/server/client.conf
2015-12-17 16:16:52,274 - File['/usr/hdp/current/accumulo-master/conf/server/client.conf'] {'owner': 'accumulo', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None}
2015-12-17 16:16:52,276 - Writing File['/usr/hdp/current/accumulo-master/conf/server/client.conf'] because contents don't match
2015-12-17 16:16:52,276 - File['/usr/hdp/current/accumulo-master/conf/server/log4j.properties'] {'content': ..., 'owner': 'accumulo', 'group': 'hadoop', 'mode': 0644}
2015-12-17 16:16:52,277 - TemplateConfig['/usr/hdp/current/accumulo-master/conf/server/auditLog.xml'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2015-12-17 16:16:52,279 - File['/usr/hdp/current/accumulo-master/conf/server/auditLog.xml'] {'content': Template('auditLog.xml.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2015-12-17 16:16:52,279 - TemplateConfig['/usr/hdp/current/accumulo-master/conf/server/generic_logger.xml'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2015-12-17 16:16:52,282 - File['/usr/hdp/current/accumulo-master/conf/server/generic_logger.xml'] {'content': Template('generic_logger.xml.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2015-12-17 16:16:52,283 - TemplateConfig['/usr/hdp/current/accumulo-master/conf/server/monitor_logger.xml'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2015-12-17 16:16:52,284 - File['/usr/hdp/current/accumulo-master/conf/server/monitor_logger.xml'] {'content': Template('monitor_logger.xml.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2015-12-17 16:16:52,285 - File['/usr/hdp/current/accumulo-master/conf/server/accumulo-metrics.xml'] {'content': StaticFile('accumulo-metrics.xml'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': 0644}
2015-12-17 16:16:52,285 - TemplateConfig['/usr/hdp/current/accumulo-master/conf/server/tracers'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2015-12-17 16:16:52,287 - File['/usr/hdp/current/accumulo-master/conf/server/tracers'] {'content': Template('tracers.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2015-12-17 16:16:52,287 - TemplateConfig['/usr/hdp/current/accumulo-master/conf/server/gc'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2015-12-17 16:16:52,289 - File['/usr/hdp/current/accumulo-master/conf/server/gc'] {'content': Template('gc.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2015-12-17 16:16:52,289 - TemplateConfig['/usr/hdp/current/accumulo-master/conf/server/monitor'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2015-12-17 16:16:52,291 - File['/usr/hdp/current/accumulo-master/conf/server/monitor'] {'content': Template('monitor.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2015-12-17 16:16:52,291 - TemplateConfig['/usr/hdp/current/accumulo-master/conf/server/slaves'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2015-12-17 16:16:52,293 - File['/usr/hdp/current/accumulo-master/conf/server/slaves'] {'content': Template('slaves.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2015-12-17 16:16:52,293 - TemplateConfig['/usr/hdp/current/accumulo-master/conf/server/masters'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2015-12-17 16:16:52,295 - File['/usr/hdp/current/accumulo-master/conf/server/masters'] {'content': Template('masters.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2015-12-17 16:16:52,295 - TemplateConfig['/usr/hdp/current/accumulo-master/conf/server/hadoop-metrics2-accumulo.properties'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2015-12-17 16:16:52,299 - File['/usr/hdp/current/accumulo-master/conf/server/hadoop-metrics2-accumulo.properties'] {'content': Template('hadoop-metrics2-accumulo.properties.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2015-12-17 16:16:52,300 - HdfsResource['/user/accumulo'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://sandbox.hortonworks.com:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'accumulo', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'mode': 0700}
2015-12-17 16:16:52,302 - checked_call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://sandbox.hortonworks.com:50070/webhdfs/v1/user/accumulo?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmps_Hczm 2>/tmp/tmpLmfalF''] {'logoutput': None, 'quiet': False}
2015-12-17 16:16:52,325 - checked_call returned (0, '')
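
Looking at the Squid error page above, the WebHDFS call to sandbox.hortonworks.com:50070 is now being routed through the corporate proxy (kg-ffm-proxy.kps.kps-consulting.com), and the proxy cannot resolve the sandbox's internal hostname. Assuming the proxy was configured through environment variables, a sketch of a workaround is to make the hostname resolve locally and exclude it from proxying, so local calls never leave the VM:

# Make sure the sandbox hostname resolves inside the VM
grep -q sandbox.hortonworks.com /etc/hosts || echo '127.0.0.1   sandbox.hortonworks.com' >> /etc/hosts

# Tell curl (and most other tools) not to send local hosts through the proxy;
# this must be visible to the process running the check, e.g. the ambari-agent
export no_proxy="localhost,127.0.0.1,sandbox.hortonworks.com"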