
Atlas service on Sandbox 2.5 will not start


I've started Ambari Infra, HBase, Kafka, Ranger, etc., but Atlas will not start. FWIW, I'm running this on OS X via Docker. Any help or suggestions would be appreciated.

stderr: 
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/ATLAS/0.1.0.2.3/package/scripts/metadata_server.py", line 217, in <module>
    MetadataServer().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 720, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/common-services/ATLAS/0.1.0.2.3/package/scripts/metadata_server.py", line 94, in start
    user=params.hbase_user
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1. atlas_titan
ATLAS_ENTITY_AUDIT_EVENTS
atlas
TABLE
ATLAS_ENTITY_AUDIT_EVENTS
atlas_titan
iemployee
3 row(s) in 0.1830 seconds
nil
TABLE
ATLAS_ENTITY_AUDIT_EVENTS
atlas_titan
iemployee
3 row(s) in 0.0030 seconds
nil
java exception
ERROR Java::JavaNet::ConnectException: Connection refused
 stdout:
2016-11-15 16:49:59,111 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-15 16:49:59,111 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-15 16:49:59,112 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-15 16:49:59,196 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-15 16:49:59,197 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-15 16:49:59,281 - checked_call returned (0, '')
2016-11-15 16:49:59,282 - Ensuring that hadoop has the correct symlink structure
2016-11-15 16:49:59,282 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-15 16:49:59,383 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-15 16:49:59,384 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-15 16:49:59,384 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-15 16:49:59,464 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-15 16:49:59,465 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-15 16:49:59,550 - checked_call returned (0, '')
2016-11-15 16:49:59,551 - Ensuring that hadoop has the correct symlink structure
2016-11-15 16:49:59,551 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-15 16:49:59,552 - Group['livy'] {}
2016-11-15 16:49:59,553 - Group['spark'] {}
2016-11-15 16:49:59,553 - Group['ranger'] {}
2016-11-15 16:49:59,553 - Group['zeppelin'] {}
2016-11-15 16:49:59,553 - Group['hadoop'] {}
2016-11-15 16:49:59,554 - Group['users'] {}
2016-11-15 16:49:59,554 - Group['knox'] {}
2016-11-15 16:49:59,554 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-15 16:49:59,555 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-15 16:49:59,555 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-15 16:49:59,556 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-15 16:49:59,557 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-15 16:49:59,557 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-15 16:49:59,558 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-15 16:49:59,558 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-15 16:49:59,559 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger']}
2016-11-15 16:49:59,560 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-15 16:49:59,560 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-15 16:49:59,561 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-15 16:49:59,562 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-15 16:49:59,562 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-15 16:49:59,572 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-15 16:49:59,573 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-15 16:49:59,573 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-15 16:49:59,574 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-15 16:49:59,574 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-15 16:49:59,575 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-15 16:49:59,576 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-15 16:49:59,577 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-15 16:49:59,577 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-15 16:49:59,578 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-11-15 16:49:59,580 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-11-15 16:49:59,642 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-11-15 16:49:59,643 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2016-11-15 16:49:59,644 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-11-15 16:49:59,645 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-11-15 16:49:59,707 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-11-15 16:49:59,707 - Group['hdfs'] {}
2016-11-15 16:49:59,708 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2016-11-15 16:49:59,708 - FS Type: 
2016-11-15 16:49:59,708 - Directory['/etc/hadoop'] {'mode': 0755}
2016-11-15 16:49:59,721 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-11-15 16:49:59,722 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2016-11-15 16:49:59,734 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-11-15 16:49:59,804 - Skipping Execute[('setenforce', '0')] due to not_if
2016-11-15 16:49:59,804 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2016-11-15 16:49:59,806 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2016-11-15 16:49:59,807 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2016-11-15 16:49:59,813 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-11-15 16:49:59,815 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-11-15 16:49:59,815 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-11-15 16:49:59,825 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-11-15 16:49:59,825 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-11-15 16:49:59,826 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-11-15 16:49:59,831 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-11-15 16:49:59,898 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-11-15 16:50:00,211 - Stack Feature Version Info: stack_version=2.5, version=2.5.0.0-1245, current_cluster_version=2.5.0.0-1245 -> 2.5.0.0-1245
2016-11-15 16:50:00,212 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-15 16:50:00,212 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-15 16:50:00,212 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-15 16:50:00,292 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-15 16:50:00,292 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-15 16:50:00,370 - checked_call returned (0, '')
2016-11-15 16:50:00,371 - Ensuring that hadoop has the correct symlink structure
2016-11-15 16:50:00,371 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-15 16:50:00,377 - Execute['source /etc/atlas/conf/atlas-env.sh; /usr/hdp/current/atlas-server/bin/atlas_stop.py'] {'user': 'atlas'}
2016-11-15 16:50:00,786 - File['/var/run/atlas/atlas.pid'] {'action': ['delete']}
2016-11-15 16:50:00,786 - Pid file /var/run/atlas/atlas.pid is empty or does not exist
2016-11-15 16:50:00,788 - Directory['/etc/atlas/conf'] {'owner': 'atlas', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-11-15 16:50:00,789 - Directory['/var/run/atlas'] {'owner': 'atlas', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-11-15 16:50:00,789 - Directory['/etc/atlas/conf/solr'] {'group': 'hadoop', 'cd_access': 'a', 'create_parents': True, 'mode': 0755, 'owner': 'atlas', 'recursive_ownership': True}
2016-11-15 16:50:00,815 - Directory['/var/log/atlas'] {'owner': 'atlas', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-11-15 16:50:00,816 - Directory['/usr/hdp/current/atlas-server/data'] {'owner': 'atlas', 'group': 'hadoop', 'create_parents': True, 'mode': 0644, 'cd_access': 'a'}
2016-11-15 16:50:00,832 - Changing permission for /usr/hdp/current/atlas-server/data from 755 to 644
2016-11-15 16:50:00,832 - Directory['/usr/hdp/current/atlas-server/server/webapp'] {'owner': 'atlas', 'group': 'hadoop', 'create_parents': True, 'mode': 0644, 'cd_access': 'a'}
2016-11-15 16:50:00,854 - Changing permission for /usr/hdp/current/atlas-server/server/webapp from 755 to 644
2016-11-15 16:50:00,855 - File['/usr/hdp/current/atlas-server/server/webapp/atlas.war'] {'content': StaticFile('/usr/hdp/current/atlas-server/server/webapp/atlas.war')}
2016-11-15 16:50:04,920 - File['/etc/atlas/conf/atlas-log4j.xml'] {'content': InlineTemplate(...), 'owner': 'atlas', 'group': 'hadoop', 'mode': 0644}
2016-11-15 16:50:04,955 - File['/etc/atlas/conf/atlas-env.sh'] {'content': InlineTemplate(...), 'owner': 'atlas', 'group': 'hadoop', 'mode': 0755}
2016-11-15 16:50:04,969 - File['/etc/atlas/conf/solr/solrconfig.xml'] {'content': InlineTemplate(...), 'owner': 'atlas', 'group': 'hadoop', 'mode': 0644}
2016-11-15 16:50:04,971 - PropertiesFile['/etc/atlas/conf/atlas-application.properties'] {'owner': 'atlas', 'group': 'hadoop', 'mode': 0644, 'properties': ...}
2016-11-15 16:50:04,985 - Generating properties file: /etc/atlas/conf/atlas-application.properties
2016-11-15 16:50:04,986 - File['/etc/atlas/conf/atlas-application.properties'] {'owner': 'atlas', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644}
2016-11-15 16:50:05,020 - Writing File['/etc/atlas/conf/atlas-application.properties'] because contents don't match
2016-11-15 16:50:05,021 - Directory['/var/log/ambari-infra-solr-client'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-11-15 16:50:05,024 - Directory['/usr/lib/ambari-infra-solr-client'] {'recursive_ownership': True, 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-11-15 16:50:05,028 - File['/usr/lib/ambari-infra-solr-client/solrCloudCli.sh'] {'content': StaticFile('/usr/lib/ambari-infra-solr-client/solrCloudCli.sh'), 'mode': 0755}
2016-11-15 16:50:05,041 - File['/usr/lib/ambari-infra-solr-client/log4j.properties'] {'content': InlineTemplate(...), 'mode': 0644}
2016-11-15 16:50:05,042 - File['/var/log/ambari-infra-solr-client/solr-client.log'] {'content': '', 'mode': 0664}
2016-11-15 16:50:05,044 - Writing File['/var/log/ambari-infra-solr-client/solr-client.log'] because contents don't match
2016-11-15 16:50:05,045 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181 --znode /infra-solr --check-znode --retry 5 --interval 10'] {}
2016-11-15 16:50:06,204 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --download-config --config-dir /var/lib/ambari-agent/tmp/solr_config_atlas_configs_0.691662675709 --config-set atlas_configs --retry 30 --interval 5'] {'only_if': 'ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --check-config --config-set atlas_configs --retry 30 --interval 5'}
2016-11-15 16:50:07,717 - File['/var/lib/ambari-agent/tmp/solr_config_atlas_configs_0.691662675709/solrconfig.xml'] {'content': InlineTemplate(...), 'only_if': 'test -d /var/lib/ambari-agent/tmp/solr_config_atlas_configs_0.691662675709'}
2016-11-15 16:50:07,779 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --upload-config --config-dir /var/lib/ambari-agent/tmp/solr_config_atlas_configs_0.691662675709 --config-set atlas_configs --retry 30 --interval 5'] {'only_if': 'test -d /var/lib/ambari-agent/tmp/solr_config_atlas_configs_0.691662675709'}
2016-11-15 16:50:08,809 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --upload-config --config-dir /etc/atlas/conf/solr --config-set atlas_configs --retry 30 --interval 5'] {'not_if': 'test -d /var/lib/ambari-agent/tmp/solr_config_atlas_configs_0.691662675709'}
2016-11-15 16:50:08,868 - Skipping Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --upload-config --config-dir /etc/atlas/conf/solr --config-set atlas_configs --retry 30 --interval 5'] due to not_if
2016-11-15 16:50:08,869 - Directory['/var/lib/ambari-agent/tmp/solr_config_atlas_configs_0.691662675709'] {'action': ['delete'], 'create_parents': True}
2016-11-15 16:50:08,869 - Removing directory Directory['/var/lib/ambari-agent/tmp/solr_config_atlas_configs_0.691662675709'] and all its content
2016-11-15 16:50:08,870 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --create-collection --collection vertex_index --config-set atlas_configs --shards 1 --replication 1 --max-shards 1 --retry 5 --interval 10 --no-sharding'] {}
2016-11-15 16:50:09,995 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --create-collection --collection edge_index --config-set atlas_configs --shards 1 --replication 1 --max-shards 1 --retry 5 --interval 10 --no-sharding'] {}
2016-11-15 16:50:11,115 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --create-collection --collection fulltext_index --config-set atlas_configs --shards 1 --replication 1 --max-shards 1 --retry 5 --interval 10 --no-sharding'] {}
2016-11-15 16:50:12,167 - File['/var/lib/ambari-agent/tmp/atlas_hbase_setup.rb'] {'content': Template('atlas_hbase_setup.rb.j2'), 'owner': 'hbase', 'group': 'hadoop'}
2016-11-15 16:50:12,168 - Atlas plugin is enabled, configuring Atlas plugin.
2016-11-15 16:50:12,171 - ATLAS: Setup ranger: command retry not enabled thus skipping if ranger admin is down !
2016-11-15 16:50:12,172 - HdfsResource['/ranger/audit'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://sandbox.hortonworks.com:8020', 'user': 'hdfs', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'recursive_chmod': True, 'owner': 'atlas', 'group': 'hadoop', 'hadoop_conf_dir': '/etc/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/apps/falcon', u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0755}
2016-11-15 16:50:12,190 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://sandbox.hortonworks.com:50070/webhdfs/v1/ranger/audit?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpshonOA 2>/tmp/tmp0_1JBl''] {'logoutput': None, 'quiet': False}
2016-11-15 16:50:12,385 - call returned (0, '')
2016-11-15 16:50:12,387 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://sandbox.hortonworks.com:50070/webhdfs/v1/ranger/audit?op=SETOWNER&user.name=hdfs&owner=atlas&group=hadoop'"'"' 1>/tmp/tmprrqvmW 2>/tmp/tmp1AxPqR''] {'logoutput': None, 'quiet': False}
2016-11-15 16:50:12,486 - call returned (0, '')
2016-11-15 16:50:12,487 - HdfsResource['/ranger/audit/atlas'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://sandbox.hortonworks.com:8020', 'user': 'hdfs', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'recursive_chmod': True, 'owner': 'atlas', 'group': 'hadoop', 'hadoop_conf_dir': '/etc/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/apps/falcon', u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0700}
2016-11-15 16:50:12,488 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://sandbox.hortonworks.com:50070/webhdfs/v1/ranger/audit/atlas?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp141d__ 2>/tmp/tmpyH5eyd''] {'logoutput': None, 'quiet': False}
2016-11-15 16:50:12,581 - call returned (0, '')
2016-11-15 16:50:12,582 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://sandbox.hortonworks.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/etc/hadoop/conf', 'immutable_paths': [u'/apps/hive/warehouse', u'/apps/falcon', u'/mr-history/done', u'/app-logs', u'/tmp']}
2016-11-15 16:50:12,586 - call['ambari-python-wrap /usr/bin/hdp-select status atlas-server'] {'timeout': 20}
2016-11-15 16:50:12,686 - call returned (0, 'atlas-server - 2.5.0.0-1245')
2016-11-15 16:50:12,687 - RangeradminV2: Skip ranger admin if it's down !
2016-11-15 16:50:13,049 - amb_ranger_admin user already exists.
2016-11-15 16:50:13,380 - Atlas Repository Sandbox_atlas exist
2016-11-15 16:50:13,382 - File['/etc/atlas/conf/ranger-security.xml'] {'content': InlineTemplate(...), 'owner': 'atlas', 'group': 'hadoop', 'mode': 0644}
2016-11-15 16:50:13,383 - Writing File['/etc/atlas/conf/ranger-security.xml'] because contents don't match
2016-11-15 16:50:13,383 - Directory['/etc/ranger/Sandbox_atlas'] {'owner': 'atlas', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2016-11-15 16:50:13,385 - Directory['/etc/ranger/Sandbox_atlas/policycache'] {'owner': 'atlas', 'group': 'hadoop', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2016-11-15 16:50:13,386 - File['/etc/ranger/Sandbox_atlas/policycache/atlas_Sandbox_atlas.json'] {'owner': 'atlas', 'group': 'hadoop', 'mode': 0644}
2016-11-15 16:50:13,387 - XmlConfig['ranger-atlas-audit.xml'] {'group': 'hadoop', 'conf_dir': '/etc/atlas/conf', 'mode': 0744, 'configuration_attributes': {}, 'owner': 'atlas', 'configurations': ...}
2016-11-15 16:50:13,394 - Generating config: /etc/atlas/conf/ranger-atlas-audit.xml
2016-11-15 16:50:13,395 - File['/etc/atlas/conf/ranger-atlas-audit.xml'] {'owner': 'atlas', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0744, 'encoding': 'UTF-8'}
2016-11-15 16:50:13,415 - XmlConfig['ranger-atlas-security.xml'] {'group': 'hadoop', 'conf_dir': '/etc/atlas/conf', 'mode': 0744, 'configuration_attributes': {}, 'owner': 'atlas', 'configurations': ...}
2016-11-15 16:50:13,422 - Generating config: /etc/atlas/conf/ranger-atlas-security.xml
2016-11-15 16:50:13,422 - File['/etc/atlas/conf/ranger-atlas-security.xml'] {'owner': 'atlas', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0744, 'encoding': 'UTF-8'}
2016-11-15 16:50:13,427 - XmlConfig['ranger-policymgr-ssl.xml'] {'group': 'hadoop', 'conf_dir': '/etc/atlas/conf', 'mode': 0744, 'configuration_attributes': {}, 'owner': 'atlas', 'configurations': ...}
2016-11-15 16:50:13,434 - Generating config: /etc/atlas/conf/ranger-policymgr-ssl.xml
2016-11-15 16:50:13,434 - File['/etc/atlas/conf/ranger-policymgr-ssl.xml'] {'owner': 'atlas', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0744, 'encoding': 'UTF-8'}
2016-11-15 16:50:13,438 - Execute[('/usr/hdp/2.5.0.0-1245/ranger-atlas-plugin/ranger_credential_helper.py', '-l', '/usr/hdp/2.5.0.0-1245/ranger-atlas-plugin/install/lib/*', '-f', '/etc/ranger/Sandbox_atlas/cred.jceks', '-k', 'sslKeyStore', '-v', [PROTECTED], '-c', '1')] {'logoutput': True, 'environment': {'JAVA_HOME': '/usr/lib/jvm/java'}, 'sudo': True}
Using Java:/usr/lib/jvm/java/bin/java
Alias sslKeyStore created successfully!
2016-11-15 16:50:14,433 - Execute[('/usr/hdp/2.5.0.0-1245/ranger-atlas-plugin/ranger_credential_helper.py', '-l', '/usr/hdp/2.5.0.0-1245/ranger-atlas-plugin/install/lib/*', '-f', '/etc/ranger/Sandbox_atlas/cred.jceks', '-k', 'sslTrustStore', '-v', [PROTECTED], '-c', '1')] {'logoutput': True, 'environment': {'JAVA_HOME': '/usr/lib/jvm/java'}, 'sudo': True}
Using Java:/usr/lib/jvm/java/bin/java
Alias sslTrustStore created successfully!
2016-11-15 16:50:15,228 - File['/etc/ranger/Sandbox_atlas/cred.jceks'] {'owner': 'atlas', 'group': 'hadoop', 'mode': 0640}
2016-11-15 16:50:15,229 - Execute['cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n'] {'tries': 5, 'user': 'hbase', 'try_sleep': 10}
2016-11-15 16:50:42,446 - Retrying after 10 seconds. Reason: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1. atlas_titan
ATLAS_ENTITY_AUDIT_EVENTS
atlas
TABLE
ATLAS_ENTITY_AUDIT_EVENTS
atlas_titan
iemployee
3 row(s) in 0.2930 seconds
nil
TABLE
ATLAS_ENTITY_AUDIT_EVENTS
atlas_titan
iemployee
3 row(s) in 0.0120 seconds
nil
java exception
ERROR Java::JavaNet::ConnectException: Connection refused
2016-11-15 16:51:17,980 - Retrying after 10 seconds. Reason: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1. atlas_titan
ATLAS_ENTITY_AUDIT_EVENTS
atlas
TABLE
ATLAS_ENTITY_AUDIT_EVENTS
atlas_titan
iemployee
3 row(s) in 0.1930 seconds
nil
TABLE
ATLAS_ENTITY_AUDIT_EVENTS
atlas_titan
iemployee
3 row(s) in 0.0050 seconds
nil
java exception
ERROR Java::JavaNet::ConnectException: Connection refused
2016-11-15 16:51:53,207 - Retrying after 10 seconds. Reason: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1. atlas_titan
ATLAS_ENTITY_AUDIT_EVENTS
atlas
TABLE
ATLAS_ENTITY_AUDIT_EVENTS
atlas_titan
iemployee
3 row(s) in 0.1770 seconds
nil
TABLE
ATLAS_ENTITY_AUDIT_EVENTS
atlas_titan
iemployee
3 row(s) in 0.0030 seconds
nil
java exception
ERROR Java::JavaNet::ConnectException: Connection refused
2016-11-15 16:52:28,485 - Retrying after 10 seconds. Reason: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1. atlas_titan
ATLAS_ENTITY_AUDIT_EVENTS
atlas
TABLE
ATLAS_ENTITY_AUDIT_EVENTS
atlas_titan
iemployee
3 row(s) in 0.2120 seconds
nil
TABLE
ATLAS_ENTITY_AUDIT_EVENTS
atlas_titan
iemployee
3 row(s) in 0.0030 seconds
nil
java exception
ERROR Java::JavaNet::ConnectException: Connection refused
2016-11-15 16:53:03,544 - Execute['find /var/log/atlas -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'atlas'}
==> /var/log/atlas/atlas.20161025-075114.out <==
{metadata.broker.list=sandbox.hortonworks.com:6667, request.timeout.ms=30000, client.id=atlas, security.protocol=PLAINTEXT}
{metadata.broker.list=sandbox.hortonworks.com:6667, request.timeout.ms=30000, client.id=atlas, security.protocol=PLAINTEXT}
{metadata.broker.list=sandbox.hortonworks.com:6667, request.timeout.ms=30000, client.id=atlas, security.protocol=PLAINTEXT}
==> /var/log/atlas/atlas.20161025-075114.err <== 

Command failed after 1 tries

3 Replies

Re: Atlas service on Sandbox 2.5 will not start

@Brandon Harris

Can you paste the contents of the file /var/log/atlas/atlas.20161025-075114.err? ($ cat /var/log/atlas/atlas.20161025-075114.err)
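If the .err file turns out to be empty, the .out file and the Atlas application log may still show the failure. A quick way to gather all of them (file names and timestamps below are taken from the log excerpt in the question and will differ per install; application.log is the usual Atlas default but the path is an assumption):

```shell
# Error and stdout files named in the Ambari output above
cat /var/log/atlas/atlas.20161025-075114.err
tail -n 100 /var/log/atlas/atlas.20161025-075114.out

# Main Atlas application log (default location; may differ if atlas-log4j.xml was changed)
tail -n 100 /var/log/atlas/application.log
```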

Re: Atlas service on Sandbox 2.5 will not start


Conveniently, it is empty. :( It does look like this may be HBase-related. It seems as if the region server is dying shortly after I start (or restart) the HBase-related services.

Re: Atlas service on Sandbox 2.5 will not start

@Brandon Harris

Atlas is not able to connect to HBase. You indicated that you started it. What do the HBase logs show?
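A diagnostic sketch for confirming the region server is the problem (service account, log paths, and file name patterns below are typical HDP sandbox defaults and may differ on your system):

```shell
# Check whether the HBase master and region server JVMs are actually running
sudo su - hbase -c "jps" | grep -E 'HMaster|HRegionServer'

# Tail the region server log for the reason it is dying
tail -n 100 /var/log/hbase/hbase-hbase-regionserver-*.log

# Re-run the failing Atlas setup step by hand to reproduce the
# "Connection refused" error outside of Ambari
sudo su - hbase -c "cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n"
```

If HRegionServer is missing from the jps output, the region server log tail should show why it exited; the Atlas setup script cannot complete until it stays up.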
