Ranger problems


This is my sandbox:

Sandbox information:
Created on: 25_10_2016_08_11_26
Hadoop stack version: Hadoop 2.7.3.2.5.0.0-1245
Ambari Version: 2.4.0.0-1225
Ambari Hash: 59175b7aa1ddb74b85551c632e3ce42fed8f0c85
Ambari build: Release: 1225
Java version: 1.8.0_111
OS Version: CentOS release 6.8 (Final)

I have installed the sandbox and Ranger doesn't work. If I try "Test Connection" in the Ambari Ranger panel, it fails. The error is:

/usr/bin/python: can't open file '/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py': [Errno 2] No such file or directory

The directory /var/lib/ambari-agent/cache/custom_actions/scripts exists in the sandbox file system, but it is empty.

Because of this I cannot restart many services.

How can I solve it?

1 ACCEPTED SOLUTION

Expert Contributor

@pierluigi francischelli

The script should come from this location: /var/lib/ambari-server/resources/custom_actions/scripts/check_host.py

If you see the script in the above location:

Stop the Ambari agent, mv /var/lib/ambari-agent/cache /var/lib/ambari-agent/cache_OLD, and start the Ambari agent again.

This will get the Ambari agent to download the files again from the Ambari server. It is safe to do this.
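A minimal sketch of that sequence on the sandbox (assuming the standard ambari-agent service command; the paths are the ones quoted above):

# stop the agent so nothing is using the cache
ambari-agent stop
# move the stale cache out of the way
mv /var/lib/ambari-agent/cache /var/lib/ambari-agent/cache_OLD
# on start, the agent repopulates the cache from the Ambari server
ambari-agent start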


6 REPLIES


I cannot delete /var/lib/ambari-agent/cache.

mv /var/lib/ambari-agent/cache /var/lib/ambari-agent/cache_OLD copies every file into cache_OLD, but it doesn't remove the cache directory.

If I try

rm -rf cache

it doesn't work; it gives me an 'Invalid argument' error.
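If the directory itself cannot be removed or renamed (for example because it is a mount point inside the sandbox container; this is an assumption, not something confirmed in this thread), clearing only its contents should still force the agent to re-download the scripts. A hedged sketch:

ambari-agent stop
# keep the cache directory itself, delete only what is inside it
rm -rf /var/lib/ambari-agent/cache/*
ambari-agent start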

Super Collaborator

@pierluigi francischelli, can you please post the logs when you try to restart Ranger?


@Mushtaq Rizvi

If I try to start Ranger Admin, this is the stderr:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py", line 208, in <module>
    RangerAdmin().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py", line 100, in start
    setup_ranger_audit_solr()
  File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/setup_ranger_xml.py", line 590, in setup_ranger_audit_solr
    jaas_file = params.solr_jaas_file)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/solr_cloud_util.py", line 116, in create_collection
    Execute(create_collection_cmd)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --create-collection --collection ranger_audits --config-set ranger_audits --shards 1 --replication 1 --max-shards 1 --retry 5 --interval 10 --no-sharding' returned 1. Using default ZkCredentialsProvider
Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
Client environment:host.name=sandbox.hortonworks.com
Client environment:java.version=1.8.0_111
Client environment:java.vendor=Oracle Corporation
Client environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-0.b15.el6_8.x86_64/jre
Client environment:java.class.path=/usr/lib/ambari-infra-solr-client:/usr/lib/ambari-infra-solr-client/libs/slf4j-log4j12-1.7.2.jar:/usr/lib/ambari-infra-solr-client/libs/objenesis-2.2.jar:/usr/lib/ambari-infra-solr-client/libs/slf4j-api-1.7.2.jar:/usr/lib/ambari-infra-solr-client/libs/httpmime-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/commons-cli-1.3.1.jar:/usr/lib/ambari-infra-solr-client/libs/woodstox-core-asl-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/jcl-over-slf4j-1.7.7.jar:/usr/lib/ambari-infra-solr-client/libs/jackson-core-asl-1.9.9.jar:/usr/lib/ambari-infra-solr-client/libs/httpcore-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/httpclient-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/commons-codec-1.8.jar:/usr/lib/ambari-infra-solr-client/libs/solr-solrj-5.5.2.jar:/usr/lib/ambari-infra-solr-client/libs/zookeeper-3.4.6.jar:/usr/lib/ambari-infra-solr-client/libs/jackson-mapper-asl-1.9.13.jar:/usr/lib/ambari-infra-solr-client/libs/commons-lang-2.5.jar:/usr/lib/ambari-infra-solr-client/libs/easymock-3.4.jar:/usr/lib/ambari-infra-solr-client/libs/junit-4.10.jar:/usr/lib/ambari-infra-solr-client/libs/commons-io-2.1.jar:/usr/lib/ambari-infra-solr-client/libs/ambari-logsearch-solr-client-2.4.0.0.1225.jar:/usr/lib/ambari-infra-solr-client/libs/stax2-api-3.1.4.jar:/usr/lib/ambari-infra-solr-client/libs/hamcrest-core-1.1.jar:/usr/lib/ambari-infra-solr-client/libs/log4j-1.2.17.jar:/usr/lib/ambari-infra-solr-client/libs/noggit-0.6.jar
Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Client environment:java.io.tmpdir=/tmp
Client environment:java.compiler=<NA>
Client environment:os.name=Linux
Client environment:os.arch=amd64
Client environment:os.version=3.10.0-327.el7.x86_64
Client environment:user.name=root
Client environment:user.home=/root
Client environment:user.dir=/var/lib/ambari-agent
Initiating client connection, connectString=sandbox.hortonworks.com:2181/infra-solr sessionTimeout=15000 watcher=org.apache.solr.common.cloud.SolrZkClient$3@1963006a
Waiting for client to connect to ZooKeeper
Opening socket connection to server sandbox.hortonworks.com/172.17.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
Socket connection established to sandbox.hortonworks.com/172.17.0.2:2181, initiating session
Session establishment complete on server sandbox.hortonworks.com/172.17.0.2:2181, sessionid = 0x1589122ac490007, negotiated timeout = 15000
Watcher org.apache.solr.common.cloud.ConnectionManager@107ed120 name:ZooKeeperConnection Watcher:sandbox.hortonworks.com:2181/infra-solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
Client is connected to ZooKeeper
Using default ZkACLProvider
Using default ZkCredentialsProvider
Initiating client connection, connectString=sandbox.hortonworks.com:2181/infra-solr sessionTimeout=10000 watcher=org.apache.solr.common.cloud.SolrZkClient$3@5474c6c
Waiting for client to connect to ZooKeeper
Opening socket connection to server sandbox.hortonworks.com/172.17.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
Socket connection established to sandbox.hortonworks.com/172.17.0.2:2181, initiating session
Session establishment complete on server sandbox.hortonworks.com/172.17.0.2:2181, sessionid = 0x1589122ac490008, negotiated timeout = 10000
Watcher org.apache.solr.common.cloud.ConnectionManager@7453d348 name:ZooKeeperConnection Watcher:sandbox.hortonworks.com:2181/infra-solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
Client is connected to ZooKeeper
Using default ZkACLProvider
Updating cluster state from ZooKeeper... 
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
	at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
	at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Command failed, tries again (tries: 1)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
	at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
	at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Command failed, tries again (tries: 2)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
	at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
	at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Command failed, tries again (tries: 3)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
	at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
	at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Command failed, tries again (tries: 4)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
	at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
	at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
	at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
	at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Command failed, tries again (tries: 5)
usage:
./solrCloudCli.sh --create-collection -z host1:2181,host2:2181/ambari-solr -c collection -cs conf_set
./solrCloudCli.sh --upload-config -z host1:2181,host2:2181/ambari-solr -d /tmp/myconfig_dir -cs config_set
./solrCloudCli.sh --download-config -z host1:2181,host2:2181/ambari-solr -cs config_set -d /tmp/myonfig_dir
./solrCloudCli.sh --check-config -z host1:2181,host2:2181/ambari-solr -cs config_set
./solrCloudCli.sh --create-shard -z host1:2181,host2:2181/ambari-solr -c collection -sn myshard
./solrCloudCli.sh --create-znode -z host1:2181,host2:2181 -zn /ambari-solr
./solrCloudCli.sh --check-znode -z host1:2181,host2:2181 -zn /ambari-solr
./solrCloudCli.sh --cluster-prop -z host1:2181,host2:2181/ambari-solr -cpn urlScheme -cpn http
./solrCloudCli.sh --create-sasl-users -z host1:2181,host2:2181 -zn /ambari-solr -csu logsearch,atlas,ranger
./solrCloudCli.sh --setup-kerberos -z host1:2181,host2:2181 --secure -zn /ambari-solr-secure -cfz /ambari-solr-unsecure -jf /etc/path/my_jaas.conf
./solrCloudCli.sh --setup-kerberos-plugin -z host1:2181,host2:2181 -zn /ambari-solr
 -c,--collection <collection name>                                          Collection name
 -cc,--create-collection                                                    Create collection in Solr (command)
 -cfz,--copy-from-znode </ambari-solr-secure>                               Copy-from-znode
 -chc,--check-config                                                        Check configuration exists in Zookeeper (command)
 -chz,--check-znode                                                         Check znode exists in Zookeeper (command)
 -cp,--cluster-prop                                                         Set cluster property (command)
 -cpn,--property-name <cluster prop name>                                   Cluster property name
 -cpv,--property-value <cluster prop value>                                 Cluster property value
 -cs,--config-set <config_set>                                              Configuration set
 -csh,--create-shard                                                        Create shard in Solr (command)
 -csu,--create-sasl-users                                                   Create sasl users
 -cz,--create-znode                                                         Create Znode (command)
 -d,--config-dir <config_dir>                                               Configuration directory
 -dc,--download-config                                                      Download configuration set from Zookeeper (command)
 -h,--help                                                                  Print commands
 -i,--interval <interval>                                                   Interval for retry logic in sec [default:5]
 -jf,--jaas-file <jaas_file>                                                Location of the jaas-file to communicate with kerberized Solr
 -ksl,--key-store-location <key store location>                             Location of the key store used to communicate with Solr using SSL
 -ksp,--key-store-password <key store password>                             Key store password used to communicate with Solr using SSL
 -kst,--key-store-type <key store type>                                     Type of the key store used to communicate with Solr using SSL
 -m,--max-shards <max number of shards>                                     Max number of shards per node (default: replication * shards)
 -ns,--no-sharding                                                          Sharding not used when creating collection
 -r,--replication <replication factor>                                      Replication factor
 -rf,--router-field <router_field>                                          Router field for collection [default:_router_field_]
 -rn,--router-name <router_name>                                            Router name for collection [default:implicit]
 -rt,--retry <number of retries>                                            Number of retries for access Solr [default:10]
 -s,--shards <shard number>                                                 Number of shards
 -sec,--secure                                                              Flag for enable/disable kerberos (with --setup-kerberos or --setup-kerberos-plugin)
 -sk,--setup-kerberos                                                       Setup kerberos (command)
 -skp,--setup-kerberos-plugin                                               Setup kerberos plugin in security.json (command)
 -sn,--shard-name <my_new_shard>                                            Name of the shard for create-shard command
 -su,--sasl-users <atlas,ranger,logsearch-solr>                             Sasl users (comma separated list)
 -tsl,--trust-store-location <trust store location>                         Location of the trust store used to communicate with Solr using SSL
 -tsp,--trust-store-password <trust store password>                         Trust store password used to communicate with Solr using SSL
 -tst,--trust-store-type <trust store type>                                 Type of the trust store used to communicate with Solr using SSL
 -uc,--upload-config                                                        Upload configuration set to Zookeeper (command)
 -z,--zookeeper-connect-string <host:port,host:port[/ambari-solr]>          Zookeeper quorum [and Znode (optional)]
 -zn,--znode </ambari-solr>                                                 Zookeeper ZNode
Maximum retries exceeded: 5
Maximum retries exceeded: 5
Return code: 1

And this is the stdout:

2016-11-23 13:54:28,266 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-23 13:54:28,266 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-23 13:54:28,267 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-23 13:54:28,301 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-23 13:54:28,301 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-23 13:54:28,325 - checked_call returned (0, '')
2016-11-23 13:54:28,326 - Ensuring that hadoop has the correct symlink structure
2016-11-23 13:54:28,326 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-23 13:54:28,446 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-23 13:54:28,446 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-23 13:54:28,447 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-23 13:54:28,472 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-23 13:54:28,472 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-23 13:54:28,497 - checked_call returned (0, '')
2016-11-23 13:54:28,497 - Ensuring that hadoop has the correct symlink structure
2016-11-23 13:54:28,497 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-23 13:54:28,499 - Group['livy'] {}
2016-11-23 13:54:28,516 - Group['spark'] {}
2016-11-23 13:54:28,516 - Group['ranger'] {}
2016-11-23 13:54:28,517 - Group['zeppelin'] {}
2016-11-23 13:54:28,517 - Group['hadoop'] {}
2016-11-23 13:54:28,517 - Group['users'] {}
2016-11-23 13:54:28,517 - Group['knox'] {}
2016-11-23 13:54:28,517 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 13:54:28,518 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 13:54:28,519 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 13:54:28,519 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 13:54:28,520 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-23 13:54:28,520 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 13:54:28,521 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 13:54:28,522 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-23 13:54:28,522 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger']}
2016-11-23 13:54:28,523 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-23 13:54:28,523 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 13:54:28,524 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 13:54:28,524 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 13:54:28,525 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-23 13:54:28,525 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 13:54:28,526 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 13:54:28,526 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 13:54:28,527 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 13:54:28,527 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 13:54:28,528 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 13:54:28,528 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 13:54:28,529 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 13:54:28,530 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-23 13:54:28,530 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-11-23 13:54:28,539 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-11-23 13:54:28,548 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-11-23 13:54:28,549 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2016-11-23 13:54:28,550 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-11-23 13:54:28,551 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-11-23 13:54:28,559 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-11-23 13:54:28,560 - Group['hdfs'] {}
2016-11-23 13:54:28,560 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2016-11-23 13:54:28,561 - FS Type: 
2016-11-23 13:54:28,561 - Directory['/etc/hadoop'] {'mode': 0755}
2016-11-23 13:54:28,575 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-11-23 13:54:28,576 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2016-11-23 13:54:28,588 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-11-23 13:54:28,623 - Skipping Execute[('setenforce', '0')] due to not_if
2016-11-23 13:54:28,623 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2016-11-23 13:54:28,625 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2016-11-23 13:54:28,625 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2016-11-23 13:54:28,631 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-11-23 13:54:28,633 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-11-23 13:54:28,639 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-11-23 13:54:28,651 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-11-23 13:54:28,651 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-11-23 13:54:28,653 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-11-23 13:54:28,658 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-11-23 13:54:28,666 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-11-23 13:54:28,840 - Stack Feature Version Info: stack_version=2.5, version=2.5.0.0-1245, current_cluster_version=2.5.0.0-1245 -> 2.5.0.0-1245
2016-11-23 13:54:28,845 - Directory['/usr/hdp/current/ranger-admin/conf'] {'owner': 'ranger', 'group': 'ranger', 'create_parents': True}
2016-11-23 13:54:28,847 - File['/var/lib/ambari-agent/tmp/mysql-connector-java.jar'] {'content': DownloadSource('http://sandbox.hortonworks.com:8080/resources//mysql-connector-java.jar'), 'mode': 0644}
2016-11-23 13:54:28,847 - Not downloading the file from http://sandbox.hortonworks.com:8080/resources//mysql-connector-java.jar, because /var/lib/ambari-agent/tmp/mysql-connector-java.jar already exists
2016-11-23 13:54:28,854 - Execute[('cp', '--remove-destination', '/var/lib/ambari-agent/tmp/mysql-connector-java.jar', '/usr/hdp/current/ranger-admin/ews/lib')] {'path': ['/bin', '/usr/bin/'], 'sudo': True}
2016-11-23 13:54:28,880 - File['/usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar'] {'mode': 0644}
2016-11-23 13:54:28,880 - ModifyPropertiesFile['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'properties': ...}
2016-11-23 13:54:28,894 - Modifying existing properties file: /usr/hdp/current/ranger-admin/install.properties
2016-11-23 13:54:28,906 - File['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'}
2016-11-23 13:54:28,906 - Writing File['/usr/hdp/current/ranger-admin/install.properties'] because contents don't match
2016-11-23 13:54:28,907 - ModifyPropertiesFile['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'properties': {'SQL_CONNECTOR_JAR': '/usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar'}}
2016-11-23 13:54:28,908 - Modifying existing properties file: /usr/hdp/current/ranger-admin/install.properties
2016-11-23 13:54:28,909 - File['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'}
2016-11-23 13:54:28,909 - Writing File['/usr/hdp/current/ranger-admin/install.properties'] because contents don't match
2016-11-23 13:54:28,909 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://sandbox.hortonworks.com:8080/resources/DBConnectionVerification.jar'), 'mode': 0644}
2016-11-23 13:54:28,910 - Not downloading the file from http://sandbox.hortonworks.com:8080/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists
2016-11-23 13:54:28,923 - Execute['/usr/lib/jvm/java/bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-admin/ews/lib/* org.apache.ambari.server.DBConnectionVerification 'jdbc:mysql://localhost:3306/ranger' rangeradmin [PROTECTED] com.mysql.jdbc.Driver'] {'environment': {}, 'path': ['/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'], 'tries': 5, 'try_sleep': 10}
2016-11-23 13:54:29,486 - Execute[('ln', '-sf', '/usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/classes/conf', '/usr/hdp/current/ranger-admin/conf')] {'not_if': 'ls /usr/hdp/current/ranger-admin/conf', 'sudo': True, 'only_if': 'ls /usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/classes/conf'}
2016-11-23 13:54:29,495 - Skipping Execute[('ln', '-sf', '/usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/classes/conf', '/usr/hdp/current/ranger-admin/conf')] due to not_if
2016-11-23 13:54:29,495 - Directory['/usr/hdp/current/ranger-admin/'] {'owner': 'ranger', 'group': 'ranger', 'recursive_ownership': True}
2016-11-23 13:54:32,805 - Directory['/var/run/ranger'] {'owner': 'ranger', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-11-23 13:54:32,806 - Directory['/var/log/ranger/admin'] {'owner': 'ranger', 'group': 'ranger', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-11-23 13:54:32,807 - File['/usr/hdp/current/ranger-admin/conf/ranger-admin-env-logdir.sh'] {'owner': 'ranger', 'content': 'export RANGER_ADMIN_LOG_DIR=/var/log/ranger/admin', 'group': 'ranger', 'mode': 0755}
2016-11-23 13:54:32,819 - File['/usr/hdp/current/ranger-admin/conf/ranger-admin-default-site.xml'] {'owner': 'ranger', 'group': 'ranger'}
2016-11-23 13:54:32,819 - File['/usr/hdp/current/ranger-admin/conf/security-applicationContext.xml'] {'owner': 'ranger', 'group': 'ranger'}
2016-11-23 13:54:32,820 - Execute[('ln', '-sf', '/usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh', '/usr/bin/ranger-admin')] {'not_if': 'ls /usr/bin/ranger-admin', 'sudo': True, 'only_if': 'ls /usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh'}
2016-11-23 13:54:32,828 - Skipping Execute[('ln', '-sf', '/usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh', '/usr/bin/ranger-admin')] due to not_if
2016-11-23 13:54:32,829 - XmlConfig['ranger-admin-site.xml'] {'group': 'ranger', 'conf_dir': '/usr/hdp/current/ranger-admin/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'ranger', 'configurations': ...}
2016-11-23 13:54:32,864 - Generating config: /usr/hdp/current/ranger-admin/conf/ranger-admin-site.xml
2016-11-23 13:54:32,864 - File['/usr/hdp/current/ranger-admin/conf/ranger-admin-site.xml'] {'owner': 'ranger', 'content': InlineTemplate(...), 'group': 'ranger', 'mode': 0644, 'encoding': 'UTF-8'}
2016-11-23 13:54:32,907 - Directory['/usr/hdp/current/ranger-admin/conf/ranger_jaas'] {'owner': 'ranger', 'group': 'ranger', 'mode': 0700}
2016-11-23 13:54:32,907 - File['/usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/log4j.properties'] {'content': ..., 'owner': 'ranger', 'group': 'ranger', 'mode': 0644}
2016-11-23 13:54:32,908 - Execute[('/usr/lib/jvm/java/bin/java', '-cp', '/usr/hdp/current/ranger-admin/cred/lib/*', 'org.apache.ranger.credentialapi.buildks', 'create', 'rangeradmin', '-value', [PROTECTED], '-provider', 'jceks://file/etc/ranger/admin/rangeradmin.jceks')] {'logoutput': True, 'environment': {'JAVA_HOME': '/usr/lib/jvm/java'}, 'sudo': True}
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Nov 23, 2016 1:54:33 PM org.apache.hadoop.util.NativeCodeLoader <clinit>
WARNING: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Alias already exist!! will try to delete first.
FOUND value of [interactive] field in the Class [org.apache.hadoop.security.alias.CredentialShell] = [true]
Deleting credential: rangeradmin from CredentialProvider: jceks://file/etc/ranger/admin/rangeradmin.jceks
rangeradmin has been successfully deleted.
org.apache.hadoop.security.alias.JavaKeyStoreProvider has been updated.
rangeradmin has been successfully created.
org.apache.hadoop.security.alias.JavaKeyStoreProvider has been updated.
2016-11-23 13:54:34,412 - File['/etc/ranger/admin/rangeradmin.jceks'] {'owner': 'ranger', 'group': 'ranger', 'mode': 0640}
2016-11-23 13:54:34,414 - XmlConfig['core-site.xml'] {'group': 'ranger', 'conf_dir': '/usr/hdp/current/ranger-admin/conf', 'mode': 0644, 'configuration_attributes': {'final': {'fs.defaultFS': 'true'}, 'fs.defaultFS': {'final': 'true'}}, 'owner': 'ranger', 'configurations': ...}
2016-11-23 13:54:34,433 - Generating config: /usr/hdp/current/ranger-admin/conf/core-site.xml
2016-11-23 13:54:34,433 - File['/usr/hdp/current/ranger-admin/conf/core-site.xml'] {'owner': 'ranger', 'content': InlineTemplate(...), 'group': 'ranger', 'mode': 0644, 'encoding': 'UTF-8'}
2016-11-23 13:54:34,461 - Directory['/var/log/ambari-infra-solr-client'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-11-23 13:54:34,462 - Directory['/usr/lib/ambari-infra-solr-client'] {'recursive_ownership': True, 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-11-23 13:54:34,462 - File['/usr/lib/ambari-infra-solr-client/solrCloudCli.sh'] {'content': StaticFile('/usr/lib/ambari-infra-solr-client/solrCloudCli.sh'), 'mode': 0755}
2016-11-23 13:54:34,611 - File['/usr/lib/ambari-infra-solr-client/log4j.properties'] {'content': InlineTemplate(...), 'mode': 0644}
2016-11-23 13:54:34,612 - File['/var/log/ambari-infra-solr-client/solr-client.log'] {'content': '', 'mode': 0664}
2016-11-23 13:54:34,612 - Writing File['/var/log/ambari-infra-solr-client/solr-client.log'] because it doesn't exist
2016-11-23 13:54:34,613 - Changing permission for /var/log/ambari-infra-solr-client/solr-client.log from 644 to 664
2016-11-23 13:54:34,613 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181 --znode /infra-solr --check-znode --retry 5 --interval 10'] {}
2016-11-23 13:54:35,670 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --download-config --config-dir /var/lib/ambari-agent/tmp/solr_config_ranger_audits_0.366613500424 --config-set ranger_audits --retry 30 --interval 5'] {'only_if': 'ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --check-config --config-set ranger_audits --retry 30 --interval 5'}
2016-11-23 13:54:37,084 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --upload-config --config-dir /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf --config-set ranger_audits --retry 30 --interval 5'] {'not_if': 'test -d /var/lib/ambari-agent/tmp/solr_config_ranger_audits_0.366613500424'}
2016-11-23 13:54:37,100 - Skipping Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --upload-config --config-dir /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf --config-set ranger_audits --retry 30 --interval 5'] due to not_if
2016-11-23 13:54:37,101 - Directory['/var/lib/ambari-agent/tmp/solr_config_ranger_audits_0.366613500424'] {'action': ['delete'], 'create_parents': True}
2016-11-23 13:54:37,101 - Removing directory Directory['/var/lib/ambari-agent/tmp/solr_config_ranger_audits_0.366613500424'] and all its content
2016-11-23 13:54:37,103 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --create-collection --collection ranger_audits --config-set ranger_audits --shards 1 --replication 1 --max-shards 1 --retry 5 --interval 10 --no-sharding'] {}

Command failed after 1 tries

Check connectivity:

I have understood that at the beginning (right after the VM installation) every .py script under

/var/lib/ambari-server/resources/custom_actions/scripts is OK.

Afterwards, when I try to check connectivity from the Ranger config panel in Ambari, it gives me

"failed /var/lib/ambari-server/resources/custom_actions/scripts/check_host.py not found"

and if I check,

/var/lib/ambari-server/resources/custom_actions/scripts/ is empty!
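As a hedged way to confirm whether that script is still shipped by the Ambari server package (this assumes an RPM-based install, as on this CentOS sandbox; the package contents are not shown in this thread):

# list the files the ambari-server package provides and look for check_host.py
rpm -ql ambari-server | grep check_host.py
# report any files under custom_actions that are missing or changed on disk
rpm -V ambari-server | grep custom_actions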

Thanks

Super Collaborator

@pierluigi francischelli, this log states that there are no live Solr servers available. Can you please start Ambari Infra first and then start Ranger? Let us know if that works.
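A quick way to check that an Infra Solr instance is actually registered as live before retrying (the zkCli.sh path and the Solr port 8886 are typical HDP defaults and are assumptions here, not values taken from this thread):

# list live Solr nodes registered under the /infra-solr znode
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server sandbox.hortonworks.com:2181 ls /infra-solr/live_nodes
# or ask Solr directly for its cluster status
curl "http://sandbox.hortonworks.com:8886/solr/admin/collections?action=CLUSTERSTATUS&wt=json"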


@Mushtaq Rizvi

Everything is OK.

Thanks a lot.