
Solr for Ranger-Admin does not start after changing the JDK from 7 to 8 on a Sandbox managed by Docker

Contributor

Hello all,

I am relatively new to Hadoop and am trying to build a streaming pipeline involving Kafka => Spark Streaming => HBase.

I coded it in Scala (2.10.6), and some parts are in Java 8.

For that code to run, I needed to upgrade the Sandbox's Ambari from Java 7 to Java 8. I used the documented procedure:

[root@sandbox ~]# ambari-server setup
Using python  /usr/bin/python
Setup ambari-server
Checking SELinux...
SELinux status is 'disabled'
Customize user account for ambari-server daemon [y/n] (n)? n
Adjusting ambari-server permissions and ownership...
Checking firewall status...
WARNING: iptables is running. Confirm the necessary Ambari ports are accessible. Refer to the Ambari documentation for more details on ports.
OK to continue [y/n] (y)? y
Checking JDK...
Do you want to change Oracle JDK [y/n] (n)? y
[1] Oracle JDK 1.8 + Java Cryptography Extension (JCE) Policy Files 8
[2] Oracle JDK 1.7 + Java Cryptography Extension (JCE) Policy Files 7
[3] Custom JDK
==============================================================================
Enter choice (1): 3
WARNING: JDK must be installed on all hosts and JAVA_HOME must be valid on all hosts.
WARNING: JCE Policy files are required for configuring Kerberos security. If you plan to use Kerberos,please make sure JCE Unlimited Strength Jurisdiction Policy Files are valid on all hosts.
Path to JAVA_HOME: /opt/jdk1.8.0_102
Validating JDK on Ambari Server...done.
Completing setup...
Configuring database...
Enter advanced database configuration [y/n] (n)? n
input not recognized, please try again: 
Enter advanced database configuration [y/n] (n)? n
Configuring database...
Default properties detected. Using built-in database.
Configuring ambari database...
Checking PostgreSQL...
Configuring local database...
Connecting to local database...done.
Configuring PostgreSQL...
Backup for pg_hba found, reconfiguration not required
Extracting system views...
............
Adjusting ambari-server permissions and ownership...
Ambari Server 'setup' completed successfully.
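
As a sanity check (assuming the default Ambari install layout), the chosen JDK should now be recorded in /etc/ambari-server/conf/ambari.properties. Something like this confirms that the server will hand the new JAVA_HOME to the services:

# should print a java.home entry pointing at /opt/jdk1.8.0_102
grep java.home /etc/ambari-server/conf/ambari.properties
# and the JDK itself should report version 1.8.0_102
/opt/jdk1.8.0_102/bin/java -version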

The documentation advises shutting down all services afterwards and relaunching them one after another. So, from the Ambari interface, I stopped all and then started all the services.

Now comes the problem: Ranger-Admin does not want to start. The reason, if I understand the log correctly, is a failure to reach the SolrCloud server it uses for audit logging. I could not figure out why Solr is suddenly unable to serve this request.
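
For what it is worth, here is a quick way to check whether the audit SolrCloud is actually reachable (just a sketch; it assumes the default Ambari Infra Solr port 8886 and the HDP client paths, and uses the /infra-solr znode that appears in the log below):

# list the collections known to Infra Solr (ranger_audits should eventually appear here)
curl "http://sandbox.hortonworks.com:8886/solr/admin/collections?action=LIST&wt=json"
# check that at least one Solr node has registered itself in ZooKeeper
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server sandbox.hortonworks.com:2181 ls /infra-solr/live_nodes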

I should add that stopping the Docker container abruptly does not solve the problem: the container restarts (with HDFS in Safe mode) without taking the change into account. I could not find a precise procedure in the Hortonworks documentation for stopping the Docker container, and a simple sudo docker stop sandbox or sudo docker kill sandbox seems to wait forever.
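
Side note: when HDFS comes back up in Safe mode after such an abrupt restart, it should leave it on its own once the NameNode has received enough block reports; if it stays stuck, my understanding is that it can be forced out manually, for example:

# run inside the sandbox container, as the hdfs user
su - hdfs -c "hdfs dfsadmin -safemode get"
su - hdfs -c "hdfs dfsadmin -safemode leave"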

Does anybody have an idea how to solve this problem?

Below is the stack trace I get from the Ambari start-operation interface:

stderr: 
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py", line 208, in <module>
    RangerAdmin().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py", line 100, in start
    setup_ranger_audit_solr()
  File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/setup_ranger_xml.py", line 590, in setup_ranger_audit_solr
    jaas_file = params.solr_jaas_file)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/solr_cloud_util.py", line 116, in create_collection
    Execute(create_collection_cmd)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh JAVA_HOME=/opt/jdk1.8.0_102/ /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --create-collection --collection ranger_audits --config-set ranger_audits --shards 1 --replication 1 --max-shards 1 --retry 5 --interval 10 --no-sharding' returned 1. Using default ZkCredentialsProvider
Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
Client environment:host.name=sandbox.hortonworks.com
Client environment:java.version=1.8.0_102
Client environment:java.vendor=Oracle Corporation
Client environment:java.home=/opt/jdk1.8.0_102/jre
Client environment:java.class.path=/usr/lib/ambari-infra-solr-client:/usr/lib/ambari-infra-solr-client/libs/commons-cli-1.3.1.jar:/usr/lib/ambari-infra-solr-client/libs/solr-solrj-5.5.2.jar:/usr/lib/ambari-infra-solr-client/libs/commons-io-2.1.jar:/usr/lib/ambari-infra-solr-client/libs/slf4j-log4j12-1.7.2.jar:/usr/lib/ambari-infra-solr-client/libs/stax2-api-3.1.4.jar:/usr/lib/ambari-infra-solr-client/libs/ambari-logsearch-solr-client-2.4.0.0.1225.jar:/usr/lib/ambari-infra-solr-client/libs/noggit-0.6.jar:/usr/lib/ambari-infra-solr-client/libs/junit-4.10.jar:/usr/lib/ambari-infra-solr-client/libs/commons-codec-1.8.jar:/usr/lib/ambari-infra-solr-client/libs/zookeeper-3.4.6.jar:/usr/lib/ambari-infra-solr-client/libs/httpmime-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/log4j-1.2.17.jar:/usr/lib/ambari-infra-solr-client/libs/slf4j-api-1.7.2.jar:/usr/lib/ambari-infra-solr-client/libs/jcl-over-slf4j-1.7.7.jar:/usr/lib/ambari-infra-solr-client/libs/woodstox-core-asl-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/easymock-3.4.jar:/usr/lib/ambari-infra-solr-client/libs/httpclient-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/httpcore-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/hamcrest-core-1.1.jar:/usr/lib/ambari-infra-solr-client/libs/jackson-core-asl-1.9.9.jar:/usr/lib/ambari-infra-solr-client/libs/jackson-mapper-asl-1.9.13.jar:/usr/lib/ambari-infra-solr-client/libs/commons-lang-2.5.jar:/usr/lib/ambari-infra-solr-client/libs/objenesis-2.2.jar
Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Client environment:java.io.tmpdir=/tmp
Client environment:java.compiler=<NA>
Client environment:os.name=Linux
Client environment:os.arch=amd64
Client environment:os.version=4.4.0-38-generic
Client environment:user.name=root
Client environment:user.home=/root
Client environment:user.dir=/var/lib/ambari-agent
Initiating client connection, connectString=sandbox.hortonworks.com:2181/infra-solr sessionTimeout=15000 watcher=org.apache.solr.common.cloud.SolrZkClient$3@6996db8
Waiting for client to connect to ZooKeeper
Opening socket connection to server sandbox.hortonworks.com/172.17.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
Socket connection established to sandbox.hortonworks.com/172.17.0.2:2181, initiating session
Session establishment complete on server sandbox.hortonworks.com/172.17.0.2:2181, sessionid = 0x157676b6dcf0004, negotiated timeout = 15000
Watcher org.apache.solr.common.cloud.ConnectionManager@3800b686 name:ZooKeeperConnection Watcher:sandbox.hortonworks.com:2181/infra-solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
Client is connected to ZooKeeper
Using default ZkACLProvider
Using default ZkCredentialsProvider
Initiating client connection, connectString=sandbox.hortonworks.com:2181/infra-solr sessionTimeout=10000 watcher=org.apache.solr.common.cloud.SolrZkClient$3@71f2a7d5
Opening socket connection to server sandbox.hortonworks.com/172.17.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
Socket connection established to sandbox.hortonworks.com/172.17.0.2:2181, initiating session
Waiting for client to connect to ZooKeeper
Session establishment complete on server sandbox.hortonworks.com/172.17.0.2:2181, sessionid = 0x157676b6dcf0005, negotiated timeout = 10000
Watcher org.apache.solr.common.cloud.ConnectionManager@481a438a name:ZooKeeperConnection Watcher:sandbox.hortonworks.com:2181/infra-solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
Client is connected to ZooKeeper
Using default ZkACLProvider
Updating cluster state from ZooKeeper... 
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
    at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
    at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Command failed, tries again (tries: 1)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
    at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
    at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Command failed, tries again (tries: 2)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
    at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
    at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Command failed, tries again (tries: 3)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
    at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
    at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Command failed, tries again (tries: 4)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
    at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
    at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
    at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
    at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Command failed, tries again (tries: 5)
usage:
./solrCloudCli.sh --create-collection -z host1:2181,host2:2181/ambari-solr -c collection -cs conf_set
./solrCloudCli.sh --upload-config -z host1:2181,host2:2181/ambari-solr -d /tmp/myconfig_dir -cs config_set
./solrCloudCli.sh --download-config -z host1:2181,host2:2181/ambari-solr -cs config_set -d /tmp/myonfig_dir
./solrCloudCli.sh --check-config -z host1:2181,host2:2181/ambari-solr -cs config_set
./solrCloudCli.sh --create-shard -z host1:2181,host2:2181/ambari-solr -c collection -sn myshard
./solrCloudCli.sh --create-znode -z host1:2181,host2:2181 -zn /ambari-solr
./solrCloudCli.sh --check-znode -z host1:2181,host2:2181 -zn /ambari-solr
./solrCloudCli.sh --cluster-prop -z host1:2181,host2:2181/ambari-solr -cpn urlScheme -cpn http
./solrCloudCli.sh --create-sasl-users -z host1:2181,host2:2181 -zn /ambari-solr -csu logsearch,atlas,ranger
./solrCloudCli.sh --setup-kerberos -z host1:2181,host2:2181 --secure -zn /ambari-solr-secure -cfz /ambari-solr-unsecure -jf /etc/path/my_jaas.conf
./solrCloudCli.sh --setup-kerberos-plugin -z host1:2181,host2:2181 -zn /ambari-solr
 -c,--collection <collection name>                                          Collection name
 -cc,--create-collection                                                    Create collection in Solr (command)
 -cfz,--copy-from-znode </ambari-solr-secure>                               Copy-from-znode
 -chc,--check-config                                                        Check configuration exists in Zookeeper (command)
 -chz,--check-znode                                                         Check znode exists in Zookeeper (command)
 -cp,--cluster-prop                                                         Set cluster property (command)
 -cpn,--property-name <cluster prop name>                                   Cluster property name
 -cpv,--property-value <cluster prop value>                                 Cluster property value
 -cs,--config-set <config_set>                                              Configuration set
 -csh,--create-shard                                                        Create shard in Solr (command)
 -csu,--create-sasl-users                                                   Create sasl users
 -cz,--create-znode                                                         Create Znode (command)
 -d,--config-dir <config_dir>                                               Configuration directory
 -dc,--download-config                                                      Download configuration set from Zookeeper (command)
 -h,--help                                                                  Print commands
 -i,--interval <interval>                                                   Interval for retry logic in sec [default:5]
 -jf,--jaas-file <jaas_file>                                                Location of the jaas-file to communicate with kerberized Solr
 -ksl,--key-store-location <key store location>                             Location of the key store used to communicate with Solr using SSL
 -ksp,--key-store-password <key store password>                             Key store password used to communicate with Solr using SSL
 -kst,--key-store-type <key store type>                                     Type of the key store used to communicate with Solr using SSL
 -m,--max-shards <max number of shards>                                     Max number of shards per node (default: replication * shards)
 -ns,--no-sharding                                                          Sharding not used when creating collection
 -r,--replication <replication factor>                                      Replication factor
 -rf,--router-field <router_field>                                          Router field for collection [default:_router_field_]
 -rn,--router-name <router_name>                                            Router name for collection [default:implicit]
 -rt,--retry <number of retries>                                            Number of retries for access Solr [default:10]
 -s,--shards <shard number>                                                 Number of shards
 -sec,--secure                                                              Flag for enable/disable kerberos (with --setup-kerberos or --setup-kerberos-plugin)
 -sk,--setup-kerberos                                                       Setup kerberos (command)
 -skp,--setup-kerberos-plugin                                               Setup kerberos plugin in security.json (command)
 -sn,--shard-name <my_new_shard>                                            Name of the shard for create-shard command
 -su,--sasl-users <atlas,ranger,logsearch-solr>                             Sasl users (comma separated list)
 -tsl,--trust-store-location <trust store location>                         Location of the trust store used to communicate with Solr using SSL
 -tsp,--trust-store-password <trust store password>                         Trust store password used to communicate with Solr using SSL
 -tst,--trust-store-type <trust store type>                                 Type of the trust store used to communicate with Solr using SSL
 -uc,--upload-config                                                        Upload configuration set to Zookeeper (command)
 -z,--zookeeper-connect-string <host:port,host:port[/ambari-solr]>          Zookeeper quorum [and Znode (optional)]
 -zn,--znode </ambari-solr>                                                 Zookeeper ZNode
Maximum retries exceeded: 5
Maximum retries exceeded: 5
Return code: 1
 stdout:
2016-09-26 16:54:38,163 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-09-26 16:54:38,163 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-09-26 16:54:38,164 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-09-26 16:54:38,184 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-09-26 16:54:38,184 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-09-26 16:54:38,206 - checked_call returned (0, '')
2016-09-26 16:54:38,206 - Ensuring that hadoop has the correct symlink structure
2016-09-26 16:54:38,206 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-09-26 16:54:38,299 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-09-26 16:54:38,299 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-09-26 16:54:38,300 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-09-26 16:54:38,322 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-09-26 16:54:38,323 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-09-26 16:54:38,344 - checked_call returned (0, '')
2016-09-26 16:54:38,345 - Ensuring that hadoop has the correct symlink structure
2016-09-26 16:54:38,345 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-09-26 16:54:38,346 - Group['livy'] {}
2016-09-26 16:54:38,347 - Group['spark'] {}
2016-09-26 16:54:38,347 - Group['ranger'] {}
2016-09-26 16:54:38,347 - Group['zeppelin'] {}
2016-09-26 16:54:38,347 - Group['hadoop'] {}
2016-09-26 16:54:38,347 - Group['users'] {}
2016-09-26 16:54:38,348 - Group['knox'] {}
2016-09-26 16:54:38,348 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,349 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,349 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,350 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,350 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-09-26 16:54:38,351 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,351 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,352 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-09-26 16:54:38,352 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger']}
2016-09-26 16:54:38,353 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-09-26 16:54:38,353 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,354 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,354 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,355 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-09-26 16:54:38,355 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,356 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,356 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,357 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,357 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,358 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,358 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,359 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,359 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,360 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-09-26 16:54:38,361 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-09-26 16:54:38,369 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-09-26 16:54:38,369 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2016-09-26 16:54:38,370 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-09-26 16:54:38,371 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-09-26 16:54:38,378 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-09-26 16:54:38,379 - Group['hdfs'] {}
2016-09-26 16:54:38,379 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2016-09-26 16:54:38,379 - FS Type: 
2016-09-26 16:54:38,380 - Directory['/etc/hadoop'] {'mode': 0755}
2016-09-26 16:54:38,393 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-09-26 16:54:38,393 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2016-09-26 16:54:38,405 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-09-26 16:54:38,414 - Skipping Execute[('setenforce', '0')] due to not_if
2016-09-26 16:54:38,414 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2016-09-26 16:54:38,416 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2016-09-26 16:54:38,416 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2016-09-26 16:54:38,421 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-09-26 16:54:38,423 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-09-26 16:54:38,423 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-09-26 16:54:38,433 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-09-26 16:54:38,434 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-09-26 16:54:38,434 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-09-26 16:54:38,438 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-09-26 16:54:38,445 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-09-26 16:54:38,586 - Stack Feature Version Info: stack_version=2.5, version=2.5.0.0-1245, current_cluster_version=2.5.0.0-1245 -> 2.5.0.0-1245
2016-09-26 16:54:38,590 - Directory['/usr/hdp/current/ranger-admin/conf'] {'owner': 'ranger', 'group': 'ranger', 'create_parents': True}
2016-09-26 16:54:38,592 - File['/var/lib/ambari-agent/tmp/mysql-connector-java.jar'] {'content': DownloadSource('http://sandbox.hortonworks.com:8080/resources//mysql-connector-java.jar'), 'mode': 0644}
2016-09-26 16:54:38,592 - Not downloading the file from http://sandbox.hortonworks.com:8080/resources//mysql-connector-java.jar, because /var/lib/ambari-agent/tmp/mysql-connector-java.jar already exists
2016-09-26 16:54:38,607 - Execute[('cp', '--remove-destination', '/var/lib/ambari-agent/tmp/mysql-connector-java.jar', '/usr/hdp/current/ranger-admin/ews/lib')] {'path': ['/bin', '/usr/bin/'], 'sudo': True}
2016-09-26 16:54:38,649 - File['/usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar'] {'mode': 0644}
2016-09-26 16:54:38,652 - ModifyPropertiesFile['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'properties': ...}
2016-09-26 16:54:38,671 - Modifying existing properties file: /usr/hdp/current/ranger-admin/install.properties
2016-09-26 16:54:38,702 - File['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'}
2016-09-26 16:54:38,703 - Writing File['/usr/hdp/current/ranger-admin/install.properties'] because contents don't match
2016-09-26 16:54:38,704 - ModifyPropertiesFile['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'properties': {'SQL_CONNECTOR_JAR': '/usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar'}}
2016-09-26 16:54:38,704 - Modifying existing properties file: /usr/hdp/current/ranger-admin/install.properties
2016-09-26 16:54:38,706 - File['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'}
2016-09-26 16:54:38,706 - Writing File['/usr/hdp/current/ranger-admin/install.properties'] because contents don't match
2016-09-26 16:54:38,706 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://sandbox.hortonworks.com:8080/resources/DBConnectionVerification.jar'), 'mode': 0644}
2016-09-26 16:54:38,707 - Not downloading the file from http://sandbox.hortonworks.com:8080/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists
2016-09-26 16:54:38,707 - Execute['/opt/jdk1.8.0_102//bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-admin/ews/lib/* org.apache.ambari.server.DBConnectionVerification 'jdbc:mysql://localhost:3306/ranger' rangeradmin [PROTECTED] com.mysql.jdbc.Driver'] {'environment': {}, 'path': ['/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'], 'tries': 5, 'try_sleep': 10}
2016-09-26 16:54:39,030 - Execute[('ln', '-sf', '/usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/classes/conf', '/usr/hdp/current/ranger-admin/conf')] {'not_if': 'ls /usr/hdp/current/ranger-admin/conf', 'sudo': True, 'only_if': 'ls /usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/classes/conf'}
2016-09-26 16:54:39,037 - Skipping Execute[('ln', '-sf', '/usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/classes/conf', '/usr/hdp/current/ranger-admin/conf')] due to not_if
2016-09-26 16:54:39,037 - Directory['/usr/hdp/current/ranger-admin/'] {'owner': 'ranger', 'group': 'ranger', 'recursive_ownership': True}
2016-09-26 16:54:39,171 - Directory['/var/run/ranger'] {'owner': 'ranger', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-09-26 16:54:39,173 - Directory['/var/log/ranger/admin'] {'owner': 'ranger', 'group': 'ranger', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-09-26 16:54:39,175 - File['/usr/hdp/current/ranger-admin/conf/ranger-admin-env-logdir.sh'] {'owner': 'ranger', 'content': 'export RANGER_ADMIN_LOG_DIR=/var/log/ranger/admin', 'group': 'ranger', 'mode': 0755}
2016-09-26 16:54:39,175 - File['/usr/hdp/current/ranger-admin/conf/ranger-admin-default-site.xml'] {'owner': 'ranger', 'group': 'ranger'}
2016-09-26 16:54:39,176 - File['/usr/hdp/current/ranger-admin/conf/security-applicationContext.xml'] {'owner': 'ranger', 'group': 'ranger'}
2016-09-26 16:54:39,177 - Execute[('ln', '-sf', '/usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh', '/usr/bin/ranger-admin')] {'not_if': 'ls /usr/bin/ranger-admin', 'sudo': True, 'only_if': 'ls /usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh'}
2016-09-26 16:54:39,187 - Skipping Execute[('ln', '-sf', '/usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh', '/usr/bin/ranger-admin')] due to not_if
2016-09-26 16:54:39,188 - XmlConfig['ranger-admin-site.xml'] {'group': 'ranger', 'conf_dir': '/usr/hdp/current/ranger-admin/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'ranger', 'configurations': ...}
2016-09-26 16:54:39,196 - Generating config: /usr/hdp/current/ranger-admin/conf/ranger-admin-site.xml
2016-09-26 16:54:39,196 - File['/usr/hdp/current/ranger-admin/conf/ranger-admin-site.xml'] {'owner': 'ranger', 'content': InlineTemplate(...), 'group': 'ranger', 'mode': 0644, 'encoding': 'UTF-8'}
2016-09-26 16:54:39,259 - Directory['/usr/hdp/current/ranger-admin/conf/ranger_jaas'] {'owner': 'ranger', 'group': 'ranger', 'mode': 0700}
2016-09-26 16:54:39,259 - File['/usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/log4j.properties'] {'content': ..., 'owner': 'ranger', 'group': 'ranger', 'mode': 0644}
2016-09-26 16:54:39,260 - Execute[('/opt/jdk1.8.0_102//bin/java', '-cp', '/usr/hdp/current/ranger-admin/cred/lib/*', 'org.apache.ranger.credentialapi.buildks', 'create', 'rangeradmin', '-value', [PROTECTED], '-provider', 'jceks://file/etc/ranger/admin/rangeradmin.jceks')] {'logoutput': True, 'environment': {'JAVA_HOME': '/opt/jdk1.8.0_102/'}, 'sudo': True}
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Sep 26, 2016 4:54:40 PM org.apache.hadoop.util.NativeCodeLoader <clinit>
WARNING: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Alias already exist!! will try to delete first.
FOUND value of [interactive] field in the Class [org.apache.hadoop.security.alias.CredentialShell] = [true]
Deleting credential: rangeradmin from CredentialProvider: jceks://file/etc/ranger/admin/rangeradmin.jceks
rangeradmin has been successfully deleted.
org.apache.hadoop.security.alias.JavaKeyStoreProvider has been updated.
rangeradmin has been successfully created.
org.apache.hadoop.security.alias.JavaKeyStoreProvider has been updated.
2016-09-26 16:54:40,535 - File['/etc/ranger/admin/rangeradmin.jceks'] {'owner': 'ranger', 'group': 'ranger', 'mode': 0640}
2016-09-26 16:54:40,535 - XmlConfig['core-site.xml'] {'group': 'ranger', 'conf_dir': '/usr/hdp/current/ranger-admin/conf', 'mode': 0644, 'configuration_attributes': {'final': {'fs.defaultFS': 'true'}, 'fs.defaultFS': {'final': 'true'}}, 'owner': 'ranger', 'configurations': ...}
2016-09-26 16:54:40,543 - Generating config: /usr/hdp/current/ranger-admin/conf/core-site.xml
2016-09-26 16:54:40,543 - File['/usr/hdp/current/ranger-admin/conf/core-site.xml'] {'owner': 'ranger', 'content': InlineTemplate(...), 'group': 'ranger', 'mode': 0644, 'encoding': 'UTF-8'}
2016-09-26 16:54:40,563 - Directory['/var/log/ambari-infra-solr-client'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-09-26 16:54:40,629 - Directory['/usr/lib/ambari-infra-solr-client'] {'recursive_ownership': True, 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-09-26 16:54:40,631 - File['/usr/lib/ambari-infra-solr-client/solrCloudCli.sh'] {'content': StaticFile('/usr/lib/ambari-infra-solr-client/solrCloudCli.sh'), 'mode': 0755}
2016-09-26 16:54:40,691 - File['/usr/lib/ambari-infra-solr-client/log4j.properties'] {'content': InlineTemplate(...), 'mode': 0644}
2016-09-26 16:54:40,714 - File['/var/log/ambari-infra-solr-client/solr-client.log'] {'content': '', 'mode': 0664}
2016-09-26 16:54:40,763 - Writing File['/var/log/ambari-infra-solr-client/solr-client.log'] because contents don't match
2016-09-26 16:54:40,766 - Execute['ambari-sudo.sh JAVA_HOME=/opt/jdk1.8.0_102/ /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181 --znode /infra-solr --check-znode --retry 5 --interval 10'] {}
2016-09-26 16:54:42,118 - Execute['ambari-sudo.sh JAVA_HOME=/opt/jdk1.8.0_102/ /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --download-config --config-dir /var/lib/ambari-agent/tmp/solr_config_ranger_audits_0.573793698667 --config-set ranger_audits --retry 30 --interval 5'] {'only_if': 'ambari-sudo.sh JAVA_HOME=/opt/jdk1.8.0_102/ /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --check-config --config-set ranger_audits --retry 30 --interval 5'}
2016-09-26 16:54:43,492 - Execute['ambari-sudo.sh JAVA_HOME=/opt/jdk1.8.0_102/ /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --upload-config --config-dir /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf --config-set ranger_audits --retry 30 --interval 5'] {'not_if': 'test -d /var/lib/ambari-agent/tmp/solr_config_ranger_audits_0.573793698667'}
2016-09-26 16:54:43,521 - Skipping Execute['ambari-sudo.sh JAVA_HOME=/opt/jdk1.8.0_102/ /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --upload-config --config-dir /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf --config-set ranger_audits --retry 30 --interval 5'] due to not_if
2016-09-26 16:54:43,522 - Directory['/var/lib/ambari-agent/tmp/solr_config_ranger_audits_0.573793698667'] {'action': ['delete'], 'create_parents': True}
2016-09-26 16:54:43,523 - Removing directory Directory['/var/lib/ambari-agent/tmp/solr_config_ranger_audits_0.573793698667'] and all its content
2016-09-26 16:54:43,526 - Execute['ambari-sudo.sh JAVA_HOME=/opt/jdk1.8.0_102/ /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --create-collection --collection ranger_audits --config-set ranger_audits --shards 1 --replication 1 --max-shards 1 --retry 5 --interval 10 --no-sharding'] {}

Command failed after 1 tries

Many thanks to anyone who can provide help in this matter.

1 ACCEPTED SOLUTION

Contributor

Problem solved by restarting Ambari Infra before Ranger-Admin.

@Ayub Pathan

It is exactly the same command that did not work for me (see my preceding post), except that you obtain the container name in a different way.

$ sudo docker images

may give you the image/container name, its ID, and other information, for instance. For a precise definition of each command, see for instance the documentation of the stop command:

https://docs.docker.com/engine/reference/commandline/stop/#stop

My problem is that docker stop/kill takes an extremely long time or never finishes.

It is a different problem from the start-up one, so I will open another post (maybe on a Docker forum) so as not to confuse users.

Thanks for the help so far.

View solution in original post

5 REPLIES

Expert Contributor

@samuel sayag Is the Ambari Infra service installed/started?

@samuel sayag

I think that while you stopped and started all services (or did docker stop/start), something went wrong in the order in which the services came up. Your understanding is correct: the Solr instance is not up (possibly due to an ordering issue) while Ranger-Admin is being brought up.

To fix this issue, can you try bringing up the Ambari Infra component first (it hosts the Solr instance as well), and once it is up completely (no alerts, everything green), try to bring up the Ranger service (Ranger-Admin). Let me know if this solves the issue.
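
If you prefer the command line over the Ambari UI, something like the following should do the same thing. This is only a sketch: it assumes the sandbox cluster is named "Sandbox" and uses the default admin/admin credentials and Ambari port 8080.

# stop Ambari Infra, then start it again
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop Ambari Infra"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://sandbox.hortonworks.com:8080/api/v1/clusters/Sandbox/services/AMBARI_INFRA
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start Ambari Infra"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://sandbox.hortonworks.com:8080/api/v1/clusters/Sandbox/services/AMBARI_INFRA
# once Ambari Infra is green (no alerts), start Ranger the same way
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start Ranger"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://sandbox.hortonworks.com:8080/api/v1/clusters/Sandbox/services/RANGER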

Contributor

Hello, thanks for answering.

@Ayub Pathan

Indeed, that solved the problem of starting Ranger-Admin.

The problem is now reduced to how to stop the Docker container correctly so as not to confuse it. I recall that:

$ sudo docker stop sandbox

$ sudo docker kill sandbox

have no effect on the container (at least not after half an hour).

Many thanks so far; I can finally test my development.


@samuel sayag Glad that you were able to solve the Ranger start problem.

Regarding container stop/kill, try the commands below; this should help.

docker stop $(docker ps -a -q --filter="name=<containerName>")
docker rm $(docker ps -a -q --filter="name=<containerName>")
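
For the HDP sandbox, the container is simply named "sandbox" (the same name you used in your docker stop/kill attempts), so, filling that in:

docker stop $(docker ps -a -q --filter="name=sandbox")
# rm removes the container entirely, so only run it if you really want to delete the sandbox
docker rm $(docker ps -a -q --filter="name=sandbox")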

Refer to this if you need more help.

Contributor

Problem solved by restarting Ambari Infra before Ranger-Admin.

@Ayub Pathan

It is exactly the same command that did not work for me (see my preceding post), except that you obtain the container name in a different way.

$ sudo docker images

may give you the image/container name, its ID, and other information, for instance. For a precise definition of each command, see for instance the documentation of the stop command:

https://docs.docker.com/engine/reference/commandline/stop/#stop

My problem is that docker stop/kill takes an extremely long time or never finishes.

It is a different problem from the start-up one, so I will open another post (maybe on a Docker forum) so as not to confuse users.

Thanks for the help so far.