
Atlas Metadata Server Not Starting?

New Contributor

I am unable to start the Atlas Metadata Server. I have tried all the solutions I found on the community pages, but I still cannot resolve it.

HBase, Atlas Infra, and all the other components in Ambari are running; only the Atlas Metadata Server fails to start.

I tried with atlas.war, and I have the files atlas_start.py and atlas-env.sh in their respective folders.

Please help me resolve this.
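In case it helps, these are the locations I checked (the standard HDP paths, as far as I can tell; the bin/ location for atlas_start.py is my assumption):

ls -l /usr/hdp/current/atlas-server/server/webapp/atlas.war   # web application
ls -l /usr/hdp/current/atlas-server/bin/atlas_start.py        # start script (assumed default location)
ls -l /usr/hdp/current/atlas-server/conf/atlas-env.sh         # environment settings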

12 Replies

Contributor

@M Sainadh

To understand the issue better, could you please post the Atlas Metadata Server logs?
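On an HDP node they are usually in these locations (the operation id in the Ambari agent's file name varies per run):

# stderr/stdout captured by the Ambari agent for the failed start operation:
cat /var/lib/ambari-agent/data/errors-<operation-id>.txt
# recent Atlas server log output:
tail -n 40 /var/log/atlas/*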

New Contributor

stderr: /var/lib/ambari-agent/data/errors-215.txt

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/ATLAS/0.1.0.2.3/package/scripts/metadata_server.py", line 175, in <module>
    MetadataServer().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/ATLAS/0.1.0.2.3/package/scripts/metadata_server.py", line 96, in start
    user=params.hbase_user
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1. atlas_titan
ATLAS_ENTITY_AUDIT_EVENTS
atlas
TABLE
java exception
ERROR Java::OrgApacheHadoopHbaseIpc::RemoteWithExtrasException: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
	at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2732)
	at org.apache.hadoop.hbase.master.MasterRpcServices.getTableNames(MasterRpcServices.java:943)
	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:59924)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)

stdout: /var/lib/ambari-agent/data/output-215.txt

2018-07-02 14:19:53,284 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.4.0-91 -> 2.6.4.0-91
2018-07-02 14:19:53,286 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2018-07-02 14:19:53,386 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.4.0-91 -> 2.6.4.0-91
2018-07-02 14:19:53,387 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2018-07-02 14:19:53,388 - Group['livy'] {}
2018-07-02 14:19:53,389 - Group['spark'] {}
2018-07-02 14:19:53,389 - Group['ranger'] {}
2018-07-02 14:19:53,389 - Group['hdfs'] {}
2018-07-02 14:19:53,389 - Group['zeppelin'] {}
2018-07-02 14:19:53,390 - Group['hadoop'] {}
2018-07-02 14:19:53,390 - Group['users'] {}
2018-07-02 14:19:53,390 - Group['knox'] {}
2018-07-02 14:19:53,390 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-07-02 14:19:53,393 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-07-02 14:19:53,394 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-07-02 14:19:53,395 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-07-02 14:19:53,396 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2018-07-02 14:19:53,397 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-07-02 14:19:53,397 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2018-07-02 14:19:53,398 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger'], 'uid': None}
2018-07-02 14:19:53,399 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2018-07-02 14:19:53,399 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop'], 'uid': None}
2018-07-02 14:19:53,400 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-07-02 14:19:53,401 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-07-02 14:19:53,402 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2018-07-02 14:19:53,403 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-07-02 14:19:53,403 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-07-02 14:19:53,404 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2018-07-02 14:19:53,405 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-07-02 14:19:53,406 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-07-02 14:19:53,407 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-07-02 14:19:53,407 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-07-02 14:19:53,408 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-07-02 14:19:53,409 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-07-02 14:19:53,409 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-07-02 14:19:53,411 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-07-02 14:19:53,433 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-07-02 14:19:53,434 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2018-07-02 14:19:53,434 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-07-02 14:19:53,436 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-07-02 14:19:53,437 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2018-07-02 14:19:53,461 - call returned (0, '1002')
2018-07-02 14:19:53,461 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1002'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2018-07-02 14:19:53,483 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1002'] due to not_if
2018-07-02 14:19:53,484 - Group['hdfs'] {}
2018-07-02 14:19:53,484 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hdfs']}
2018-07-02 14:19:53,485 - FS Type: 
2018-07-02 14:19:53,485 - Directory['/etc/hadoop'] {'mode': 0755}
2018-07-02 14:19:53,497 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-07-02 14:19:53,497 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-07-02 14:19:53,513 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2018-07-02 14:19:53,537 - Skipping Execute[('setenforce', '0')] due to not_if
2018-07-02 14:19:53,538 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2018-07-02 14:19:53,540 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2018-07-02 14:19:53,540 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2018-07-02 14:19:53,543 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2018-07-02 14:19:53,545 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2018-07-02 14:19:53,550 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2018-07-02 14:19:53,559 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-07-02 14:19:53,560 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2018-07-02 14:19:53,561 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2018-07-02 14:19:53,565 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2018-07-02 14:19:53,589 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2018-07-02 14:19:53,899 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.4.0-91 -> 2.6.4.0-91
2018-07-02 14:19:53,900 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2018-07-02 14:19:53,901 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.4.0-91 -> 2.6.4.0-91
2018-07-02 14:19:53,905 - Directory['/usr/hdp/current/atlas-server/conf'] {'owner': 'atlas', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2018-07-02 14:19:53,908 - Directory['/var/run/atlas'] {'owner': 'atlas', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2018-07-02 14:19:53,909 - Directory['/usr/hdp/current/atlas-server/conf/solr'] {'group': 'hadoop', 'cd_access': 'a', 'create_parents': True, 'mode': 0755, 'owner': 'atlas', 'recursive_ownership': True}
2018-07-02 14:19:53,910 - Directory['/var/log/atlas'] {'owner': 'atlas', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2018-07-02 14:19:53,911 - Directory['/usr/hdp/current/atlas-server/data'] {'owner': 'atlas', 'group': 'hadoop', 'create_parents': True, 'mode': 0644, 'cd_access': 'a'}
2018-07-02 14:19:53,911 - Changing permission for /usr/hdp/current/atlas-server/data from 755 to 644
2018-07-02 14:19:53,912 - Directory['/usr/hdp/current/atlas-server/server/webapp'] {'owner': 'atlas', 'group': 'hadoop', 'create_parents': True, 'mode': 0644, 'cd_access': 'a'}
2018-07-02 14:19:53,912 - Changing permission for /usr/hdp/current/atlas-server/server/webapp from 755 to 644
2018-07-02 14:19:53,912 - File['/usr/hdp/current/atlas-server/server/webapp/atlas.war'] {'content': StaticFile('/usr/hdp/current/atlas-server/server/webapp/atlas.war')}
2018-07-02 14:19:57,549 - File['/usr/hdp/current/atlas-server/conf/atlas-log4j.xml'] {'content': InlineTemplate(...), 'owner': 'atlas', 'group': 'hadoop', 'mode': 0644}
2018-07-02 14:19:57,617 - File['/usr/hdp/current/atlas-server/conf/atlas-env.sh'] {'content': InlineTemplate(...), 'owner': 'atlas', 'group': 'hadoop', 'mode': 0755}
2018-07-02 14:19:57,625 - ModifyPropertiesFile['/usr/hdp/current/atlas-server/conf/users-credentials.properties'] {'owner': 'atlas', 'properties': {'admin': 'ROLE_ADMIN::8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918'}}
2018-07-02 14:19:57,662 - Modifying existing properties file: /usr/hdp/current/atlas-server/conf/users-credentials.properties
2018-07-02 14:19:57,664 - File['/usr/hdp/current/atlas-server/conf/users-credentials.properties'] {'owner': 'atlas', 'content': '#username=group::sha256-password\nadmin=ROLE_ADMIN::8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918\nrangertagsync=RANGER_TAG_SYNC::e3f67240f5117d1753c940dae9eea772d36ed5fe9bd9c94a300e40413f1afb9d\nholger_gov=ROLE_ADMIN::4d20573d20756b4b2cd80e41def04b52907710912b038f0f901d4b568e254fc6\n', 'group': None, 'mode': None, 'encoding': 'utf-8'}
2018-07-02 14:19:57,667 - Execute[('chown', 'atlas:hadoop', '/usr/hdp/current/atlas-server/conf/policy-store.txt')] {'sudo': True}
2018-07-02 14:19:57,845 - Execute[('chmod', '644', '/usr/hdp/current/atlas-server/conf/policy-store.txt')] {'sudo': True}
2018-07-02 14:19:57,918 - Execute[('chown', 'atlas:hadoop', '/usr/hdp/current/atlas-server/conf/users-credentials.properties')] {'sudo': True}
2018-07-02 14:19:57,958 - Execute[('chmod', '644', '/usr/hdp/current/atlas-server/conf/users-credentials.properties')] {'sudo': True}
2018-07-02 14:19:58,015 - File['/usr/hdp/current/atlas-server/conf/solr/solrconfig.xml'] {'content': InlineTemplate(...), 'owner': 'atlas', 'group': 'hadoop', 'mode': 0644}
2018-07-02 14:19:58,017 - PropertiesFile['/usr/hdp/current/atlas-server/conf/atlas-application.properties'] {'owner': 'atlas', 'group': 'hadoop', 'mode': 0644, 'properties': ...}
2018-07-02 14:19:58,021 - Generating properties file: /usr/hdp/current/atlas-server/conf/atlas-application.properties
2018-07-02 14:19:58,021 - File['/usr/hdp/current/atlas-server/conf/atlas-application.properties'] {'owner': 'atlas', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644}
2018-07-02 14:19:58,063 - Writing File['/usr/hdp/current/atlas-server/conf/atlas-application.properties'] because contents don't match
2018-07-02 14:19:58,064 - Directory['/var/log/ambari-infra-solr-client'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2018-07-02 14:19:58,065 - Directory['/usr/lib/ambari-infra-solr-client'] {'recursive_ownership': True, 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2018-07-02 14:19:58,066 - File['/usr/lib/ambari-infra-solr-client/solrCloudCli.sh'] {'content': StaticFile('/usr/lib/ambari-infra-solr-client/solrCloudCli.sh'), 'mode': 0755}
2018-07-02 14:19:58,072 - File['/usr/lib/ambari-infra-solr-client/log4j.properties'] {'content': ..., 'mode': 0644}
2018-07-02 14:19:58,073 - File['/var/log/ambari-infra-solr-client/solr-client.log'] {'content': '', 'mode': 0664}
2018-07-02 14:19:58,073 - Writing File['/var/log/ambari-infra-solr-client/solr-client.log'] because contents don't match
2018-07-02 14:19:58,074 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox-hdp.hortonworks.com:2181 --znode /infra-solr --check-znode --retry 5 --interval 10'] {}
2018-07-02 14:19:58,645 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox-hdp.hortonworks.com:2181/infra-solr --download-config --config-dir /var/lib/ambari-agent/tmp/solr_config_atlas_configs_0.317535207481 --config-set atlas_configs --retry 30 --interval 5'] {'only_if': 'ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox-hdp.hortonworks.com:2181/infra-solr --check-config --config-set atlas_configs --retry 30 --interval 5'}
2018-07-02 14:19:59,558 - File['/var/lib/ambari-agent/tmp/solr_config_atlas_configs_0.317535207481/solrconfig.xml'] {'content': InlineTemplate(...), 'only_if': 'test -d /var/lib/ambari-agent/tmp/solr_config_atlas_configs_0.317535207481'}
2018-07-02 14:19:59,611 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox-hdp.hortonworks.com:2181/infra-solr --upload-config --config-dir /var/lib/ambari-agent/tmp/solr_config_atlas_configs_0.317535207481 --config-set atlas_configs --retry 30 --interval 5'] {'only_if': 'test -d /var/lib/ambari-agent/tmp/solr_config_atlas_configs_0.317535207481'}
2018-07-02 14:20:00,108 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox-hdp.hortonworks.com:2181/infra-solr --upload-config --config-dir /usr/hdp/current/atlas-server/conf/solr --config-set atlas_configs --retry 30 --interval 5'] {'not_if': 'test -d /var/lib/ambari-agent/tmp/solr_config_atlas_configs_0.317535207481'}
2018-07-02 14:20:00,135 - Skipping Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox-hdp.hortonworks.com:2181/infra-solr --upload-config --config-dir /usr/hdp/current/atlas-server/conf/solr --config-set atlas_configs --retry 30 --interval 5'] due to not_if
2018-07-02 14:20:00,136 - Directory['/var/lib/ambari-agent/tmp/solr_config_atlas_configs_0.317535207481'] {'action': ['delete'], 'create_parents': True}
2018-07-02 14:20:00,136 - Removing directory Directory['/var/lib/ambari-agent/tmp/solr_config_atlas_configs_0.317535207481'] and all its content
2018-07-02 14:20:00,137 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox-hdp.hortonworks.com:2181/infra-solr --create-collection --collection vertex_index --config-set atlas_configs --shards 1 --replication 1 --max-shards 1 --retry 5 --interval 10 --no-sharding'] {}
2018-07-02 14:20:01,318 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox-hdp.hortonworks.com:2181/infra-solr --create-collection --collection edge_index --config-set atlas_configs --shards 1 --replication 1 --max-shards 1 --retry 5 --interval 10 --no-sharding'] {}
2018-07-02 14:20:02,466 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox-hdp.hortonworks.com:2181/infra-solr --create-collection --collection fulltext_index --config-set atlas_configs --shards 1 --replication 1 --max-shards 1 --retry 5 --interval 10 --no-sharding'] {}
2018-07-02 14:20:03,569 - File['/var/lib/ambari-agent/tmp/atlas_hbase_setup.rb'] {'content': Template('atlas_hbase_setup.rb.j2'), 'owner': 'hbase', 'group': 'hadoop'}
2018-07-02 14:20:03,571 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.4.0-91 -> 2.6.4.0-91
2018-07-02 14:20:03,571 - File['/usr/hdp/current/atlas-server/conf/hdfs-site.xml'] {'action': ['delete']}
2018-07-02 14:20:03,572 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/atlas-server/conf', 'mode': 0644, 'configuration_attributes': {'final': {'fs.defaultFS': 'true'}, 'fs.defaultFS': {'final': 'true'}}, 'owner': 'atlas', 'configurations': ...}
2018-07-02 14:20:03,579 - Generating config: /usr/hdp/current/atlas-server/conf/core-site.xml
2018-07-02 14:20:03,580 - File['/usr/hdp/current/atlas-server/conf/core-site.xml'] {'owner': 'atlas', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2018-07-02 14:20:03,604 - Directory['/usr/hdp/current/atlas-server/'] {'owner': 'atlas', 'group': 'hadoop', 'recursive_ownership': True}
2018-07-02 14:20:03,673 - Atlas plugin is enabled, configuring Atlas plugin.
2018-07-02 14:20:03,674 - ATLAS: Setup ranger: command retry not enabled thus skipping if ranger admin is down !
2018-07-02 14:20:03,674 - HdfsResource['/ranger/audit'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/2.6.4.0-91/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://sandbox-hdp.hortonworks.com:8020', 'user': 'hdfs', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'recursive_chmod': True, 'owner': 'atlas', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/falcon', u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0755}
2018-07-02 14:20:03,679 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://sandbox-hdp.hortonworks.com:50070/webhdfs/v1/ranger/audit?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpmzxvIn 2>/tmp/tmp2ahzwb''] {'logoutput': None, 'quiet': False}
2018-07-02 14:20:03,761 - call returned (0, '')
2018-07-02 14:20:03,762 - HdfsResource['/ranger/audit/atlas'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/2.6.4.0-91/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://sandbox-hdp.hortonworks.com:8020', 'user': 'hdfs', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'recursive_chmod': True, 'owner': 'atlas', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/falcon', u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0700}
2018-07-02 14:20:03,763 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://sandbox-hdp.hortonworks.com:50070/webhdfs/v1/ranger/audit/atlas?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpFVOdxQ 2>/tmp/tmpveUg8V''] {'logoutput': None, 'quiet': False}
2018-07-02 14:20:03,809 - call returned (0, '')
2018-07-02 14:20:03,810 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/2.6.4.0-91/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://sandbox-hdp.hortonworks.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': [u'/apps/falcon', u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp']}
2018-07-02 14:20:03,812 - call['ambari-python-wrap /usr/bin/hdp-select status atlas-server'] {'timeout': 20}
2018-07-02 14:20:03,863 - call returned (0, 'atlas-server - 2.6.4.0-91')
2018-07-02 14:20:03,866 - Skipping Ranger API calls, as policy cache file exists for atlas
2018-07-02 14:20:03,866 - If service name for atlas is not created on Ranger Admin UI, then to re-create it delete policy cache file: /etc/ranger/Sandbox_atlas/policycache/atlas_Sandbox_atlas.json
2018-07-02 14:20:03,868 - File['/usr/hdp/current/atlas-server/conf/ranger-security.xml'] {'content': InlineTemplate(...), 'owner': 'atlas', 'group': 'hadoop', 'mode': 0644}
2018-07-02 14:20:03,868 - Writing File['/usr/hdp/current/atlas-server/conf/ranger-security.xml'] because contents don't match
2018-07-02 14:20:03,869 - Directory['/etc/ranger/Sandbox_atlas'] {'owner': 'atlas', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2018-07-02 14:20:03,869 - Directory['/etc/ranger/Sandbox_atlas/policycache'] {'owner': 'atlas', 'group': 'hadoop', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2018-07-02 14:20:03,870 - File['/etc/ranger/Sandbox_atlas/policycache/atlas_Sandbox_atlas.json'] {'owner': 'atlas', 'group': 'hadoop', 'mode': 0644}
2018-07-02 14:20:03,870 - XmlConfig['ranger-atlas-audit.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/atlas-server/conf', 'mode': 0744, 'configuration_attributes': {}, 'owner': 'atlas', 'configurations': ...}
2018-07-02 14:20:03,878 - Generating config: /usr/hdp/current/atlas-server/conf/ranger-atlas-audit.xml
2018-07-02 14:20:03,878 - File['/usr/hdp/current/atlas-server/conf/ranger-atlas-audit.xml'] {'owner': 'atlas', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0744, 'encoding': 'UTF-8'}
2018-07-02 14:20:03,886 - XmlConfig['ranger-atlas-security.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/atlas-server/conf', 'mode': 0744, 'configuration_attributes': {}, 'owner': 'atlas', 'configurations': ...}
2018-07-02 14:20:03,893 - Generating config: /usr/hdp/current/atlas-server/conf/ranger-atlas-security.xml
2018-07-02 14:20:03,893 - File['/usr/hdp/current/atlas-server/conf/ranger-atlas-security.xml'] {'owner': 'atlas', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0744, 'encoding': 'UTF-8'}
2018-07-02 14:20:03,899 - XmlConfig['ranger-policymgr-ssl.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/atlas-server/conf', 'mode': 0744, 'configuration_attributes': {}, 'owner': 'atlas', 'configurations': ...}
2018-07-02 14:20:03,905 - Generating config: /usr/hdp/current/atlas-server/conf/ranger-policymgr-ssl.xml
2018-07-02 14:20:03,906 - File['/usr/hdp/current/atlas-server/conf/ranger-policymgr-ssl.xml'] {'owner': 'atlas', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0744, 'encoding': 'UTF-8'}
2018-07-02 14:20:03,911 - Execute[('/usr/hdp/2.6.4.0-91/ranger-atlas-plugin/ranger_credential_helper.py', '-l', '/usr/hdp/2.6.4.0-91/ranger-atlas-plugin/install/lib/*', '-f', '/etc/ranger/Sandbox_atlas/cred.jceks', '-k', 'sslKeyStore', '-v', [PROTECTED], '-c', '1')] {'logoutput': True, 'environment': {'JAVA_HOME': '/usr/lib/jvm/java'}, 'sudo': True}
Using Java:/usr/lib/jvm/java/bin/java
Alias sslKeyStore created successfully!
2018-07-02 14:20:04,946 - Execute[('/usr/hdp/2.6.4.0-91/ranger-atlas-plugin/ranger_credential_helper.py', '-l', '/usr/hdp/2.6.4.0-91/ranger-atlas-plugin/install/lib/*', '-f', '/etc/ranger/Sandbox_atlas/cred.jceks', '-k', 'sslTrustStore', '-v', [PROTECTED], '-c', '1')] {'logoutput': True, 'environment': {'JAVA_HOME': '/usr/lib/jvm/java'}, 'sudo': True}
Using Java:/usr/lib/jvm/java/bin/java
Alias sslTrustStore created successfully!
2018-07-02 14:20:05,958 - File['/etc/ranger/Sandbox_atlas/cred.jceks'] {'owner': 'atlas', 'group': 'hadoop', 'mode': 0640}
2018-07-02 14:20:05,959 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.4.0-91 -> 2.6.4.0-91
2018-07-02 14:20:05,959 - Execute['cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n'] {'tries': 5, 'user': 'hbase', 'try_sleep': 10}
2018-07-02 14:20:21,362 - Retrying after 10 seconds. Reason: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1. atlas_titan
ATLAS_ENTITY_AUDIT_EVENTS
atlas
TABLE
java exception
ERROR Java::OrgApacheHadoopHbaseIpc::RemoteWithExtrasException: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
	at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2732)
	at org.apache.hadoop.hbase.master.MasterRpcServices.getTableNames(MasterRpcServices.java:943)
	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:59924)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
2018-07-02 14:20:45,651 - Retrying after 10 seconds. Reason: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1. atlas_titan
ATLAS_ENTITY_AUDIT_EVENTS
atlas
TABLE
java exception
ERROR Java::OrgApacheHadoopHbaseIpc::RemoteWithExtrasException: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
	at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2732)
	at org.apache.hadoop.hbase.master.MasterRpcServices.getTableNames(MasterRpcServices.java:943)
	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:59924)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
2018-07-02 14:21:11,541 - Retrying after 10 seconds. Reason: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1. atlas_titan
ATLAS_ENTITY_AUDIT_EVENTS
atlas
TABLE
java exception
ERROR Java::OrgApacheHadoopHbaseIpc::RemoteWithExtrasException: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
	at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2732)
	at org.apache.hadoop.hbase.master.MasterRpcServices.getTableNames(MasterRpcServices.java:943)
	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:59924)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
2018-07-02 14:21:37,022 - Retrying after 10 seconds. Reason: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1. atlas_titan
ATLAS_ENTITY_AUDIT_EVENTS
atlas
TABLE
java exception
ERROR Java::OrgApacheHadoopHbaseIpc::RemoteWithExtrasException: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
	at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2732)
	at org.apache.hadoop.hbase.master.MasterRpcServices.getTableNames(MasterRpcServices.java:943)
	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:59924)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
2018-07-02 14:22:02,868 - Execute['find /var/log/atlas -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'atlas'}
==> /var/log/atlas/atlas.20180201-102756.err <==
log4j:WARN Continuable parsing error 37 and column 14
log4j:WARN The content of element type "appender" must match "(errorHandler?,param*,rollingPolicy?,triggeringPolicy?,connectionSource?,layout?,filter*,appender-ref*)".
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
==> /var/log/atlas/atlas.20180201-102756.out <==

Command failed after 1 tries
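Every retry fails with the same org.apache.hadoop.hbase.PleaseHoldException: the HBase Master has not finished initializing, so the atlas_hbase_setup.rb script that Ambari pipes into the HBase shell cannot list or create the Atlas tables (atlas_titan, ATLAS_ENTITY_AUDIT_EVENTS). A quick way to check the Master state before retrying the Atlas start (a minimal sketch, run as the hbase user; paths assume the HDP defaults shown in the log above):

# reproduce the failing step by hand:
cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n
# check whether the Master is up yet; this will also fail while
# initialization is still in progress:
echo "status 'simple'" | hbase shell -n
# look at the Master log for what it is blocked on (WAL replay,
# regions in transition, missing RegionServers, etc.):
tail -n 100 /var/log/hbase/hbase-hbase-master-*.log

Once the Master reports healthy, restarting the Atlas Metadata Server from Ambari should get past this step.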
	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:59924)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
2018-07-02 14:21:11,541 - Retrying after 10 seconds. Reason: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1. atlas_titan
ATLAS_ENTITY_AUDIT_EVENTS
atlas
TABLE
java exception
ERROR Java::OrgApacheHadoopHbaseIpc::RemoteWithExtrasException: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
	at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2732)
	at org.apache.hadoop.hbase.master.MasterRpcServices.getTableNames(MasterRpcServices.java:943)
	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:59924)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
2018-07-02 14:21:37,022 - Retrying after 10 seconds. Reason: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1. atlas_titan
ATLAS_ENTITY_AUDIT_EVENTS
atlas
TABLE
java exception
ERROR Java::OrgApacheHadoopHbaseIpc::RemoteWithExtrasException: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
	at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2732)
	at org.apache.hadoop.hbase.master.MasterRpcServices.getTableNames(MasterRpcServices.java:943)
	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:59924)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
2018-07-02 14:22:02,868 - Execute['find /var/log/atlas -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'atlas'}
==> /var/log/atlas/atlas.20180201-102756.err <==
log4j:WARN Continuable parsing error 37 and column 14
log4j:WARN The content of element type "appender" must match "(errorHandler?,param*,rollingPolicy?,triggeringPolicy?,connectionSource?,layout?,filter*,appender-ref*)".
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
==> /var/log/atlas/atlas.20180201-102756.out <==
 

Command failed after 1 tries

Super Guru

Are you running this on your laptop? If so, I had a similar issue; it was all due to resources. Shut down all unnecessary services to free up resources for Atlas. Once I did, everything came up.
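If you prefer the command line, you can stop a non-essential service through the Ambari REST API. A minimal sketch, assuming sandbox defaults (cluster name "Sandbox", admin:admin credentials, and OOZIE as an example service to stop; adjust all three for your setup):

# Put a non-essential service (OOZIE here) into the INSTALLED (stopped) state.
# "Sandbox" and admin:admin are sandbox defaults, not taken from this thread.
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop OOZIE"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://sandbox-hdp.hortonworks.com:8080/api/v1/clusters/Sandbox/services/OOZIE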

New Contributor

No, I am running it on a PC with 32 GB of RAM...

Expert Contributor

Thanks @sunile.manjee, your advice was helpful to me.

Expert Contributor

Can you please post the logs from /var/log/atlas/application.log? If this file is empty, please see the contents of *.err; start-up errors caused by resource constraints will be logged in the .err file.
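For example, a one-liner like this dumps the tail of the newest .err file (assuming the default /var/log/atlas location seen in this thread):

# Show the last 50 lines of the most recently modified Atlas stderr log
ls -t /var/log/atlas/*.err | head -1 | xargs tail -n 50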

New Contributor

Please find below the contents of /var/log/atlas/atlas.20180201-102756.err:

log4j:WARN Continuable parsing error 37 and column 14
log4j:WARN The content of element type "appender" must match "(errorHandler?,param*,rollingPolicy?,triggeringPolicy?,connectionSource?,layout?,filter*,appender-ref*)".
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.

Mentor

@M Sainadh

Can you share the contents of /var/log/atlas/atlas.20180201-102756.err? Atlas needs to access these two HBase tables: atlas_titan and ATLAS_ENTITY_AUDIT_EVENTS, so you need to have HBase up and running.

To check that the tables were created, do the following:

$ cd /usr/hdp/current/zookeeper-client/bin/
$ ./zkCli.sh -server localhost:2181

Once connected to ZooKeeper, look under the HBase znode: /hbase-secure if you have Kerberos enabled, /hbase-unsecure otherwise.

[zk: localhost:2181(CONNECTED) 1] ls /hbase-secure/table
[ATLAS_ENTITY_AUDIT_EVENTS, hbase:meta, atlas_titan, hbase:namespace, hbase:acl]

Above you can see the tables. If the two tables weren't created, please use this guide: How to manually create tables for Atlas in HBase, to create the ATLAS_ENTITY_AUDIT_EVENTS and atlas_titan HBase tables.
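As a quick cross-check you can also ask the HBase shell directly once the Master has finished initializing (a minimal sketch; run it as the hbase user):

# Confirm both Atlas tables exist; each command prints "Table ... does (not) exist"
echo "exists 'atlas_titan'" | hbase shell -n
echo "exists 'ATLAS_ENTITY_AUDIT_EVENTS'" | hbase shell -n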
HTH

How much RAM did you allocate to your sandbox?

New Contributor

It's 8 GB... for the past few days it was working fine with the same amount of RAM...

I am curious: how about Solr, does it need to be up and running before Atlas comes up, or only HBase?

@M Sainadh

You'd need Ambari Infra, HBase, and Kafka up and running to be able to bring the Atlas service up.
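You can verify all three are STARTED before retrying Atlas, for example via the Ambari REST API (a sketch assuming sandbox defaults, cluster "Sandbox" and admin:admin; the service names are the standard HDP 2.6 ones):

# Print the current state of each prerequisite service
for svc in AMBARI_INFRA HBASE KAFKA; do
  curl -s -u admin:admin \
    "http://sandbox-hdp.hortonworks.com:8080/api/v1/clusters/Sandbox/services/$svc?fields=ServiceInfo/state"
done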

ERROR Java::OrgApacheHadoopHbaseIpc::RemoteWithExtrasException: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing

This says that HBase is not in a healthy state. Do check whether all the region servers are up and running. If everything looks good, we'd need to take a look at the HBase Master logs to see what's wrong there.
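A couple of quick checks along those lines (the log path is the usual HDP default, so treat it as an assumption):

# Region server count and basic cluster state, via the HBase shell
echo "status 'simple'" | hbase shell -n

# Tail the HBase Master log for initialization errors (typical HDP log location)
tail -n 100 /var/log/hbase/hbase-hbase-master-*.log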
