Member since: 09-08-2016
Posts: 21
Kudos Received: 2
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 691 | 09-27-2016 07:18 AM
02-09-2017
05:52 AM
I tried both methods, and only the first one, with %spark.dep or %dep, works; not the latter one you pointed out, although that one is of course much more interesting for me. For the latter method I declared a repository with ID "hortonwork" and URL "http://repo.hortonworks.com/content/groups/public/", then edited the Spark interpreter and added this line to the dependencies: com.hortonworks:shc-core:1.0.1-1.6-s_2.10 (I checked manually that the jar is indeed there.)
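For reference, here is roughly what my working %dep paragraph looks like (a sketch; the repository name and coordinates are the ones above):

%dep
z.reset()
z.addRepo("hortonwork").url("http://repo.hortonworks.com/content/groups/public/")
z.load("com.hortonworks:shc-core:1.0.1-1.6-s_2.10")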
02-08-2017
01:34 PM
It works! Thanks. Do you have any idea about the cause, however?
02-08-2017
12:27 PM
Hello, I am working on a cluster with HDP-2.5 installed. I am trying to load a dependency in Zeppelin the traditional way (I also tried to register the repo in Zeppelin and add the dependency in the Spark interpreter settings, but that does not work any better than this method) and to execute some code at the interpreter level. The execution of the %dep paragraph that includes the dependency and the execution of the code fail systematically, and I am left with a non-working Zeppelin. Note: I reinstalled it, but that did not change anything. Note 2: the Zeppelin logs are attached.
02-07-2017
12:01 PM
Thanks for your advice; it seems this is the problem. As a test I ran the shc connector example here with --master yarn-cluster and with --master yarn-client, and that was indeed the problem: the quorum is found in the first case and not found in the second. So Spark does not have the file on its path when working as a client.
02-07-2017
11:56 AM
No, that is absolutely not the problem. It does not guarantee in any manner that the Spark job will take it into account; see my answer to @anatva for a proper answer to this. Furthermore, my post indicates that the --files option is used with the correct files passed.
02-05-2017
01:46 PM
I am going to do it. Until now I had a production problem that kept me away from this issue. It is not closed, and I will try your advice. Thanks.
02-02-2017
06:01 AM
I don't see any script element in these files. What do you mean?
02-01-2017
11:22 AM
My ZooKeeper is running green in Ambari, and I am able to run hbase shell from the node where I launch spark-shell, no problem. Here they are: hbase-site.xml, hive-site.xml
02-01-2017
11:16 AM
I thought that's what I did 😉 What is the purpose of adding the files to spark-shell with the --files option, if not to add them to the Spark classpath?
You said: "can you add the hbase-site.xml, hive-site.xml to SPARK_CLASSPATH and retry ?"
How do you do this? Note: please see the next post for hive-site.xml and hbase-site.xml.
Many thanks for your answer.
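Edit, for future readers: one way to put those conf directories on the driver classpath in client mode seems to be --driver-class-path (a sketch based on the HDP paths above, not a confirmed fix from this thread):

spark-shell --master yarn --deploy-mode client \
  --driver-class-path "/usr/hdp/current/hbase-client/conf:/usr/hdp/current/hive-client/conf" \
  --repositories "http://repo.hortonworks.com/content/groups/public/" \
  --packages "com.hortonworks:shc-core:1.0.1-1.6-s_2.10"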
01-31-2017
12:08 PM
1 Kudo
spark-shell.txt Hello, I am trying to execute some basic code using the shc connector, a connector apparently provided by Hortonworks (in their GitHub, at least) that conveniently allows inserting and querying data in HBase. The code, reworked from the project's example, builds a DataFrame of fake data and tries to insert it via the connector. This is the HBase configuration: screenshot-from-2017-01-31-13-47-21.png The code runs under a spark-shell launched with the following command line:

spark-shell --master yarn \
--deploy-mode client \
--name "hive2hbase" \
--repositories "http://repo.hortonworks.com/content/groups/public/" \
--packages "com.hortonworks:shc-core:1.0.1-1.6-s_2.10" \
--files "/usr/hdp/current/hbase-client/conf/hbase-site.xml,/usr/hdp/current/hive-client/conf/hive-site.xml" \
--jars /usr/hdp/current/phoenix-client/phoenix-server.jar \
--driver-memory 1G \
--executor-memory 1500m \
--num-executors 8

The spark-shell log tells me that it correctly loads the hbase-site.xml and hive-site.xml files, and I also checked that the ZooKeeper quorum is correct in the HBase configuration. However, the ZooKeeper objects fail to connect because they try the quorum localhost:2181 instead of the addresses of one of the three ZooKeeper nodes. As a consequence it also fails to give me the HBase connection that is needed. Note: I already tried to delete the HBase-related configuration (/hbase-unsecure) from the ZooKeeper command line and restart ZooKeeper so as to let it rebuild it, but that also fails. Thanks for any help that may be provided
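For context, the write path being executed is essentially the one from the shc project's example (a sketch; the catalog below follows the project's README, and df is the DataFrame of fake data):

import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog

// Catalog mapping DataFrame columns to an HBase table (shape taken from the shc README)
val catalog = s"""{
  "table":{"namespace":"default", "name":"table1"},
  "rowkey":"key",
  "columns":{
    "col0":{"cf":"rowkey", "col":"key", "type":"string"},
    "col1":{"cf":"cf1", "col":"col1", "type":"string"}
  }
}"""

// Write the DataFrame through the connector, creating the table with 5 regions
df.write
  .options(Map(HBaseTableCatalog.tableCatalog -> catalog, HBaseTableCatalog.newTable -> "5"))
  .format("org.apache.spark.sql.execution.datasources.hbase")
  .save()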
12-29-2016
07:47 AM
It solves the problem for a cluster with 1 HBase Master and 16 Region Servers. @Sumesh, however, hbase.regionserver.executor.openregion.threads = 20 didn't solve the problem; I read the associated Ambari issue, and it advises using 200, NOT 20. Thanks for the help.
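For future readers, the equivalent hbase-site.xml entry would be the following (a sketch; on an Ambari-managed cluster this would be set as a custom hbase-site property):

<property>
  <name>hbase.regionserver.executor.openregion.threads</name>
  <value>200</value>
</property>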
12-28-2016
12:30 PM
No specific activity; I am simply restarting the HBase services from Ambari. I am sorry, I don't know what NN health is. I don't think the timeout is caused by a slow start but by:

FATAL [cluster1-node4:16000.activeMasterManager] master.HMaster: Failed to become active master
java.io.IOException: Timedout 300000ms waiting for namespace table to be assigned
at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1061)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:840)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:213)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1863)
at java.lang.Thread.run(Thread.java:745)

Your advice: increasing to 2400000 ms => 40 minutes??? I am not really ready to let it try for 40 minutes before it reports again that it is unable to get the "namespace to be assigned".
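If I read the stack correctly, the timeout in question should be hbase.master.namespace.init.timeout (an assumption on my part, not something confirmed in this thread); the suggested 40-minute value would then look like this in hbase-site.xml:

<property>
  <name>hbase.master.namespace.init.timeout</name>
  <value>2400000</value>
</property>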
12-28-2016
11:48 AM
1 Kudo
Hello, I have a cluster with HDP 2.5 installed. The HBase Master starts and shuts down after 5 min (300000 ms), with a stack that seems different from the other stacks encountered in other messages on this forum; that is why I decided to post this. Here are the errors that fill the master logs:

2016-12-28 12:21:57,531 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=854.48 KB, freeSize=811.72 MB, max=812.55 MB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=29, evicted=0, evictedPerRun=0.0
2016-12-28 12:22:05,269 FATAL [cluster1-node4:16000.activeMasterManager] master.HMaster: Failed to become active master
java.io.IOException: Timedout 300000ms waiting for namespace table to be assigned
at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1061)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:840)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:213)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1863)
at java.lang.Thread.run(Thread.java:745)
2016-12-28 12:22:05,271 FATAL [cluster1-node4:16000.activeMasterManager] master.HMaster: Master server abort: loaded coprocessors are: [org.apache.hadoop.hbase.backup.master.BackupController]
2016-12-28 12:22:05,271 FATAL [cluster1-node4:16000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
java.io.IOException: Timedout 300000ms waiting for namespace table to be assigned
at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1061)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:840)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:213)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1863)
at java.lang.Thread.run(Thread.java:745)
2016-12-28 12:22:05,271 INFO [cluster1-node4:16000.activeMasterManager] regionserver.HRegionServer: STOPPED: Unhandled exception. Starting shutdown.

The HBase Master log is here: hbase-master.txt Thanks for any help that may be provided
10-06-2016
05:54 AM
Hi Sagar, this is the log you asked for: log.tar.gz The problem seems to be that HDFS remains in safe mode when restarting. I could not manage to stop the container the normal way, i.e.

$ sudo docker stop/kill sandbox

so I do:

$ sudo systemctl stop docker
$ sudo systemctl start docker

This is maybe the reason. But how am I supposed to stop the container?
10-05-2016
04:20 PM
Hi, thanks for your help. I went to the HDFS logs and deleted them in order to have logs from the start rather than many megabytes of them. But when I stopped Docker and restarted the container, the problem had disappeared... I am closing this for the moment, as for now I cannot reproduce it.
10-05-2016
07:45 AM
I have exactly the same problem currently when trying to run the most basic example of the shc hbase connector. I use HDP 2.5. See the issue I opened: https://github.com/hortonworks-spark/shc/issues/46
And here is the stack:

16/10/05 07:21:42 INFO ClientCnxn: Opening socket connection to server sandbox.hortonworks.com/10.0.2.15:2181. Will not attempt to authenticate using SASL (unknown error)
16/10/05 07:21:42 INFO ClientCnxn: Socket connection established to sandbox.hortonworks.com/10.0.2.15:2181, initiating session
16/10/05 07:21:42 INFO ClientCnxn: Session establishment complete on server sandbox.hortonworks.com/10.0.2.15:2181, sessionid = 0x157937befc00005, negotiated timeout = 40000
Exception in thread "main" org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hbase.zookeeper.MetaTableLocator
at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:229)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:202)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:821)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isTableAvailable(ConnectionManager.java:985)
at org.apache.hadoop.hbase.client.HBaseAdmin.isTableAvailable(HBaseAdmin.java:1399)
at org.apache.spark.sql.execution.datasources.hbase.HBaseRelation.createTable(HBaseRelation.scala:87)
at org.apache.spark.sql.execution.datasources.hbase.DefaultSource.createRelation(HBaseRelation.scala:58)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:222)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
at org.apache.spark.sql.execution.datasources.hbase.examples.HBaseSource$.main(HBaseSource.scala:90)
at org.apache.spark.sql.execution.datasources.hbase.examples.HBaseSource.main(HBaseSource.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hbase.zookeeper.MetaTableLocator
at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:596)
at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:580)
at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:559)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1185)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1152)
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:300)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:151)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:59)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
... 24 more
10-02-2016
03:06 PM
Hello all, I am using the HDP-2.5 Sandbox, started with Docker. I keep receiving a strange stack trace when starting it, and I am unable to tell whether it matters for the moment...

sam@sam-dell:~$ sudo ./dev/sandbox-docker/sandbox-start.sh
[sudo] password for sam:
Waiting for docker daemon to start up:
a903e2dc7993 sandbox "/usr/sbin/sshd -D" 8 days ago Exited (1) 44 hours ago 0.0.0.0:1000->1000/tcp, 0.0.0.0:1100->1100/tcp, 0.0.0.0:1220->1220/tcp, 0.0.0.0:1988->1988/tcp, 0.0.0.0:2100->2100/tcp, 0.0.0.0:4040->4040/tcp, 0.0.0.0:4200->4200/tcp, 0.0.0.0:5007->5007/tcp, 0.0.0.0:5011->5011/tcp, 0.0.0.0:6001->6001/tcp, 0.0.0.0:6003->6003/tcp, 0.0.0.0:6008->6008/tcp, 0.0.0.0:6080->6080/tcp, 0.0.0.0:6188->6188/tcp, 0.0.0.0:8000->8000/tcp, 0.0.0.0:8005->8005/tcp, 0.0.0.0:8020->8020/tcp, 0.0.0.0:8040->8040/tcp, 0.0.0.0:8042->8042/tcp, 0.0.0.0:8050->8050/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:8082->8082/tcp, 0.0.0.0:8086->8086/tcp, 0.0.0.0:8088->8088/tcp, 0.0.0.0:8090-8091->8090-8091/tcp, 0.0.0.0:8188->8188/tcp, 0.0.0.0:8443->8443/tcp, 0.0.0.0:8744->8744/tcp, 0.0.0.0:8765->8765/tcp, 0.0.0.0:8886->8886/tcp, 0.0.0.0:8888-8889->8888-8889/tcp, 0.0.0.0:8983->8983/tcp, 0.0.0.0:8993->8993/tcp, 0.0.0.0:9000->9000/tcp, 0.0.0.0:9090->9090/tcp, 0.0.0.0:9995-9996->9995-9996/tcp, 0.0.0.0:10000-10001->10000-10001/tcp, 0.0.0.0:10500->10500/tcp, 0.0.0.0:11000->11000/tcp, 0.0.0.0:15000->15000/tcp, 0.0.0.0:16010->16010/tcp, 0.0.0.0:16030->16030/tcp, 0.0.0.0:18080->18080/tcp, 0.0.0.0:19888->19888/tcp, 0.0.0.0:21000->21000/tcp, 0.0.0.0:42111->42111/tcp, 0.0.0.0:50070->50070/tcp, 0.0.0.0:50075->50075/tcp, 0.0.0.0:50095->50095/tcp, 0.0.0.0:50111->50111/tcp, 0.0.0.0:60000->60000/tcp, 0.0.0.0:60080->60080/tcp, 0.0.0.0:2222->22/tcp sandbox
sandbox
Starting Flume [ OK ]
Starting Postgre SQL [ OK ]
Starting mysql [ OK ]
Starting Ranger-admin [ OK ]
Starting data node [ OK ]
Starting name node [ OK ]
Starting Ranger-usersync [ OK ]
Starting Zookeeper nodes [ OK ]
16/10/02 06:20:05 WARN ipc.Client: Failed to connect to server: sandbox.hortonworks.com/172.17.0.2:8020: try once and fail.
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:650)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:745)
at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1618)
at org.apache.hadoop.ipc.Client.call(Client.java:1449)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy10.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:711)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy11.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2657)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1340)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1324)
at org.apache.hadoop.hdfs.tools.DFSAdmin.setSafeMode(DFSAdmin.java:611)
at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:1916)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2107)
16/10/02 06:20:05 WARN retry.RetryInvocationHandler: Exception while invoking ClientNamenodeProtocolTranslatorPB.setSafeMode over null. Not retrying because try once and fail.
java.net.ConnectException: Call From sandbox.hortonworks.com/172.17.0.2 to sandbox.hortonworks.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1556)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy10.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:711)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy11.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2657)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1340)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1324)
at org.apache.hadoop.hdfs.tools.DFSAdmin.setSafeMode(DFSAdmin.java:611)
at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:1916)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2107)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:650)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:745)
at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1618)
at org.apache.hadoop.ipc.Client.call(Client.java:1449)
... 20 more
safemode: Call From sandbox.hortonworks.com/172.17.0.2 to sandbox.hortonworks.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
make: [datanode] Error 255 (ignored)
Starting NFS portmap [ OK ]
Starting Hdfs nfs [ OK ]
Starting Hive server [ OK ]
Starting Hiveserver2 [ OK ]
Starting Oozie [ OK ]
Starting Yarn history server [ OK ]
Starting Node manager [ OK ]
Starting Webhcat server [ OK ]
Starting Spark [ OK ]
Starting Resource manager [ OK ]
16/10/02 06:20:44 WARN retry.RetryInvocationHandler: Exception while invoking ClientNamenodeProtocolTranslatorPB.setSafeMode over null. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RetriableException): NameNode still not started
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.checkNNStartup(NameNodeRpcServer.java:2057)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setSafeMode(NameNodeRpcServer.java:1172)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setSafeMode(ClientNamenodeProtocolServerSideTranslatorPB.java:747)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy10.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:711)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy11.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2657)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1340)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1324)
at org.apache.hadoop.hdfs.tools.DFSAdmin.setSafeMode(DFSAdmin.java:611)
at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:1916)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2107)
safemode: NameNode still not started
Starting Zeppelin [ OK ]
Starting Ambari server [ OK ]
Starting Ambari agent [WARNINGS]
tput: No value for $TERM and no -T specified
tput: No value for $TERM and no -T specified
Starting Mapred history server [ OK ]

It seems related to a NameNode that has not started, but at the beginning of the startup sequence the NameNode is reported as started OK. I really don't understand this. Thanks for giving me any help
09-27-2016
07:18 AM
Problem solved by: restarting Ambari Infra before Ranger-Admin.
@Ayub Pathan It is exactly the same command that does not work for me (see my preceding post), except that you obtain the container name another way. For instance, $ sudo docker ps -a gives you the container name plus its ID and other information. For a precise definition of all the commands, see for instance (the stop command): https://docs.docker.com/engine/reference/commandline/stop/#stop My problem is that docker stop/kill takes an extremely long time or never finishes. That is a different problem from the startup one, so I will open another post, perhaps on a Docker forum, so as not to confuse users. Thanks for helping me until now.
09-27-2016
06:04 AM
Hello, thanks for answering, @Ayub Pathan. Indeed, I solved the problem of starting Ranger-Admin. The problem has been reduced to how to stop Docker correctly so as not to confuse it. I recall that:

$ sudo docker stop sandbox
$ sudo docker kill sandbox

have no effect on the container (at least not after half an hour). Many thanks so far; I can finally test my development.
09-26-2016
05:24 PM
Hello all, I am a relatively new Hadoop user, trying to model a stream involving Kafka => Spark Streaming => HBase. I coded it in Scala (2.10.6), and some parts are in Java 8. For that code to run I needed to upgrade the Sandbox Ambari from Java 7 to Java 8. I used the documented procedure:

[root@sandbox ~]# ambari-server setup
Using python /usr/bin/python
Setup ambari-server
Checking SELinux...
SELinux status is 'disabled'
Customize user account for ambari-server daemon [y/n] (n)? n
Adjusting ambari-server permissions and ownership...
Checking firewall status...
WARNING: iptables is running. Confirm the necessary Ambari ports are accessible. Refer to the Ambari documentation for more details on ports.
OK to continue [y/n] (y)? y
Checking JDK...
Do you want to change Oracle JDK [y/n] (n)? y
[1] Oracle JDK 1.8 + Java Cryptography Extension (JCE) Policy Files 8
[2] Oracle JDK 1.7 + Java Cryptography Extension (JCE) Policy Files 7
[3] Custom JDK
==============================================================================
Enter choice (1): 3
WARNING: JDK must be installed on all hosts and JAVA_HOME must be valid on all hosts.
WARNING: JCE Policy files are required for configuring Kerberos security. If you plan to use Kerberos,please make sure JCE Unlimited Strength Jurisdiction Policy Files are valid on all hosts.
Path to JAVA_HOME: /opt/jdk1.8.0_102
Validating JDK on Ambari Server...done.
Completing setup...
Configuring database...
Enter advanced database configuration [y/n] (n)? n
input not recognized, please try again:
Enter advanced database configuration [y/n] (n)? n
Configuring database...
Default properties detected. Using built-in database.
Configuring ambari database...
Checking PostgreSQL...
Configuring local database...
Connecting to local database...done.
Configuring PostgreSQL...
Backup for pg_hba found, reconfiguration not required
Extracting system views...
............
Adjusting ambari-server permissions and ownership...
Ambari Server 'setup' completed successfully.

The documentation advises shutting down every service after that and relaunching them one after another, so from the Ambari interface I stopped all and started all the services. Now comes the problem: Ranger-Admin does not want to start. The reason, if I understand the log, is a failure to start the SolrCloud server it uses for logging purposes (from what I found on the internet...). I could not figure out why Solr is suddenly incapable of starting this server. I add that stopping the Docker container abruptly (I could not find a precise procedure in the Hortonworks documentation for stopping the container, and a simple sudo docker stop sandbox or sudo docker kill sandbox seems to require an infinite waiting time...) does not solve the problem, as the container restarts (with HDFS in safe mode) without taking the change into account. Does anybody have an idea how to solve this problem? Below is the stack I get from the Ambari start-operation interface:

stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py", line 208, in <module>
RangerAdmin().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py", line 100, in start
setup_ranger_audit_solr()
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/setup_ranger_xml.py", line 590, in setup_ranger_audit_solr
jaas_file = params.solr_jaas_file)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/solr_cloud_util.py", line 116, in create_collection
Execute(create_collection_cmd)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh JAVA_HOME=/opt/jdk1.8.0_102/ /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --create-collection --collection ranger_audits --config-set ranger_audits --shards 1 --replication 1 --max-shards 1 --retry 5 --interval 10 --no-sharding' returned 1. Using default ZkCredentialsProvider
Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
Client environment:host.name=sandbox.hortonworks.com
Client environment:java.version=1.8.0_102
Client environment:java.vendor=Oracle Corporation
Client environment:java.home=/opt/jdk1.8.0_102/jre
Client environment:java.class.path=/usr/lib/ambari-infra-solr-client:/usr/lib/ambari-infra-solr-client/libs/commons-cli-1.3.1.jar:/usr/lib/ambari-infra-solr-client/libs/solr-solrj-5.5.2.jar:/usr/lib/ambari-infra-solr-client/libs/commons-io-2.1.jar:/usr/lib/ambari-infra-solr-client/libs/slf4j-log4j12-1.7.2.jar:/usr/lib/ambari-infra-solr-client/libs/stax2-api-3.1.4.jar:/usr/lib/ambari-infra-solr-client/libs/ambari-logsearch-solr-client-2.4.0.0.1225.jar:/usr/lib/ambari-infra-solr-client/libs/noggit-0.6.jar:/usr/lib/ambari-infra-solr-client/libs/junit-4.10.jar:/usr/lib/ambari-infra-solr-client/libs/commons-codec-1.8.jar:/usr/lib/ambari-infra-solr-client/libs/zookeeper-3.4.6.jar:/usr/lib/ambari-infra-solr-client/libs/httpmime-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/log4j-1.2.17.jar:/usr/lib/ambari-infra-solr-client/libs/slf4j-api-1.7.2.jar:/usr/lib/ambari-infra-solr-client/libs/jcl-over-slf4j-1.7.7.jar:/usr/lib/ambari-infra-solr-client/libs/woodstox-core-asl-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/easymock-3.4.jar:/usr/lib/ambari-infra-solr-client/libs/httpclient-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/httpcore-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/hamcrest-core-1.1.jar:/usr/lib/ambari-infra-solr-client/libs/jackson-core-asl-1.9.9.jar:/usr/lib/ambari-infra-solr-client/libs/jackson-mapper-asl-1.9.13.jar:/usr/lib/ambari-infra-solr-client/libs/commons-lang-2.5.jar:/usr/lib/ambari-infra-solr-client/libs/objenesis-2.2.jar
Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Client environment:java.io.tmpdir=/tmp
Client environment:java.compiler=<NA>
Client environment:os.name=Linux
Client environment:os.arch=amd64
Client environment:os.version=4.4.0-38-generic
Client environment:user.name=root
Client environment:user.home=/root
Client environment:user.dir=/var/lib/ambari-agent
Initiating client connection, connectString=sandbox.hortonworks.com:2181/infra-solr sessionTimeout=15000 watcher=org.apache.solr.common.cloud.SolrZkClient$3@6996db8
Waiting for client to connect to ZooKeeper
Opening socket connection to server sandbox.hortonworks.com/172.17.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
Socket connection established to sandbox.hortonworks.com/172.17.0.2:2181, initiating session
Session establishment complete on server sandbox.hortonworks.com/172.17.0.2:2181, sessionid = 0x157676b6dcf0004, negotiated timeout = 15000
Watcher org.apache.solr.common.cloud.ConnectionManager@3800b686 name:ZooKeeperConnection Watcher:sandbox.hortonworks.com:2181/infra-solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
Client is connected to ZooKeeper
Using default ZkACLProvider
Using default ZkCredentialsProvider
Initiating client connection, connectString=sandbox.hortonworks.com:2181/infra-solr sessionTimeout=10000 watcher=org.apache.solr.common.cloud.SolrZkClient$3@71f2a7d5
Opening socket connection to server sandbox.hortonworks.com/172.17.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
Socket connection established to sandbox.hortonworks.com/172.17.0.2:2181, initiating session
Waiting for client to connect to ZooKeeper
Session establishment complete on server sandbox.hortonworks.com/172.17.0.2:2181, sessionid = 0x157676b6dcf0005, negotiated timeout = 10000
Watcher org.apache.solr.common.cloud.ConnectionManager@481a438a name:ZooKeeperConnection Watcher:sandbox.hortonworks.com:2181/infra-solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
Client is connected to ZooKeeper
Using default ZkACLProvider
Updating cluster state from ZooKeeper...
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Command failed, tries again (tries: 1)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Command failed, tries again (tries: 2)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Command failed, tries again (tries: 3)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Command failed, tries again (tries: 4)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
No live SolrServers available to handle this request
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Command failed, tries again (tries: 5)
usage:
./solrCloudCli.sh --create-collection -z host1:2181,host2:2181/ambari-solr -c collection -cs conf_set
./solrCloudCli.sh --upload-config -z host1:2181,host2:2181/ambari-solr -d /tmp/myconfig_dir -cs config_set
./solrCloudCli.sh --download-config -z host1:2181,host2:2181/ambari-solr -cs config_set -d /tmp/myonfig_dir
./solrCloudCli.sh --check-config -z host1:2181,host2:2181/ambari-solr -cs config_set
./solrCloudCli.sh --create-shard -z host1:2181,host2:2181/ambari-solr -c collection -sn myshard
./solrCloudCli.sh --create-znode -z host1:2181,host2:2181 -zn /ambari-solr
./solrCloudCli.sh --check-znode -z host1:2181,host2:2181 -zn /ambari-solr
./solrCloudCli.sh --cluster-prop -z host1:2181,host2:2181/ambari-solr -cpn urlScheme -cpn http
./solrCloudCli.sh --create-sasl-users -z host1:2181,host2:2181 -zn /ambari-solr -csu logsearch,atlas,ranger
./solrCloudCli.sh --setup-kerberos -z host1:2181,host2:2181 --secure -zn /ambari-solr-secure -cfz /ambari-solr-unsecure -jf /etc/path/my_jaas.conf
./solrCloudCli.sh --setup-kerberos-plugin -z host1:2181,host2:2181 -zn /ambari-solr
-c,--collection <collection name> Collection name
-cc,--create-collection Create collection in Solr (command)
-cfz,--copy-from-znode </ambari-solr-secure> Copy-from-znode
-chc,--check-config Check configuration exists in Zookeeper (command)
-chz,--check-znode Check znode exists in Zookeeper (command)
-cp,--cluster-prop Set cluster property (command)
-cpn,--property-name <cluster prop name> Cluster property name
-cpv,--property-value <cluster prop value> Cluster property value
-cs,--config-set <config_set> Configuration set
-csh,--create-shard Create shard in Solr (command)
-csu,--create-sasl-users Create sasl users
-cz,--create-znode Create Znode (command)
-d,--config-dir <config_dir> Configuration directory
-dc,--download-config Download configuration set from Zookeeper (command)
-h,--help Print commands
-i,--interval <interval> Interval for retry logic in sec [default:5]
-jf,--jaas-file <jaas_file> Location of the jaas-file to communicate with kerberized Solr
-ksl,--key-store-location <key store location> Location of the key store used to communicate with Solr using SSL
-ksp,--key-store-password <key store password> Key store password used to communicate with Solr using SSL
-kst,--key-store-type <key store type> Type of the key store used to communicate with Solr using SSL
-m,--max-shards <max number of shards> Max number of shards per node (default: replication * shards)
-ns,--no-sharding Sharding not used when creating collection
-r,--replication <replication factor> Replication factor
-rf,--router-field <router_field> Router field for collection [default:_router_field_]
-rn,--router-name <router_name> Router name for collection [default:implicit]
-rt,--retry <number of retries> Number of retries for access Solr [default:10]
-s,--shards <shard number> Number of shards
-sec,--secure Flag for enable/disable kerberos (with --setup-kerberos or --setup-kerberos-plugin)
-sk,--setup-kerberos Setup kerberos (command)
-skp,--setup-kerberos-plugin Setup kerberos plugin in security.json (command)
-sn,--shard-name <my_new_shard> Name of the shard for create-shard command
-su,--sasl-users <atlas,ranger,logsearch-solr> Sasl users (comma separated list)
-tsl,--trust-store-location <trust store location> Location of the trust store used to communicate with Solr using SSL
-tsp,--trust-store-password <trust store password> Trust store password used to communicate with Solr using SSL
-tst,--trust-store-type <trust store type> Type of the trust store used to communicate with Solr using SSL
-uc,--upload-config Upload configuration set to Zookeeper (command)
-z,--zookeeper-connect-string <host:port,host:port[/ambari-solr]> Zookeeper quorum [and Znode (optional)]
-zn,--znode </ambari-solr> Zookeeper ZNode
Maximum retries exceeded: 5
Maximum retries exceeded: 5
Return code: 1
stdout:
2016-09-26 16:54:38,163 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-09-26 16:54:38,163 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-09-26 16:54:38,164 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-09-26 16:54:38,184 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-09-26 16:54:38,184 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-09-26 16:54:38,206 - checked_call returned (0, '')
2016-09-26 16:54:38,206 - Ensuring that hadoop has the correct symlink structure
2016-09-26 16:54:38,206 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-09-26 16:54:38,299 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-09-26 16:54:38,299 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-09-26 16:54:38,300 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-09-26 16:54:38,322 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-09-26 16:54:38,323 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-09-26 16:54:38,344 - checked_call returned (0, '')
2016-09-26 16:54:38,345 - Ensuring that hadoop has the correct symlink structure
2016-09-26 16:54:38,345 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-09-26 16:54:38,346 - Group['livy'] {}
2016-09-26 16:54:38,347 - Group['spark'] {}
2016-09-26 16:54:38,347 - Group['ranger'] {}
2016-09-26 16:54:38,347 - Group['zeppelin'] {}
2016-09-26 16:54:38,347 - Group['hadoop'] {}
2016-09-26 16:54:38,347 - Group['users'] {}
2016-09-26 16:54:38,348 - Group['knox'] {}
2016-09-26 16:54:38,348 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,349 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,349 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,350 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,350 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-09-26 16:54:38,351 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,351 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,352 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-09-26 16:54:38,352 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger']}
2016-09-26 16:54:38,353 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-09-26 16:54:38,353 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,354 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,354 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,355 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-09-26 16:54:38,355 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,356 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,356 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,357 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,357 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,358 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,358 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,359 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,359 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-09-26 16:54:38,360 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-09-26 16:54:38,361 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-09-26 16:54:38,369 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-09-26 16:54:38,369 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2016-09-26 16:54:38,370 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-09-26 16:54:38,371 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-09-26 16:54:38,378 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-09-26 16:54:38,379 - Group['hdfs'] {}
2016-09-26 16:54:38,379 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2016-09-26 16:54:38,379 - FS Type:
2016-09-26 16:54:38,380 - Directory['/etc/hadoop'] {'mode': 0755}
2016-09-26 16:54:38,393 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-09-26 16:54:38,393 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2016-09-26 16:54:38,405 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-09-26 16:54:38,414 - Skipping Execute[('setenforce', '0')] due to not_if
2016-09-26 16:54:38,414 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2016-09-26 16:54:38,416 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2016-09-26 16:54:38,416 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2016-09-26 16:54:38,421 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-09-26 16:54:38,423 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-09-26 16:54:38,423 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-09-26 16:54:38,433 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-09-26 16:54:38,434 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-09-26 16:54:38,434 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-09-26 16:54:38,438 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-09-26 16:54:38,445 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-09-26 16:54:38,586 - Stack Feature Version Info: stack_version=2.5, version=2.5.0.0-1245, current_cluster_version=2.5.0.0-1245 -> 2.5.0.0-1245
2016-09-26 16:54:38,590 - Directory['/usr/hdp/current/ranger-admin/conf'] {'owner': 'ranger', 'group': 'ranger', 'create_parents': True}
2016-09-26 16:54:38,592 - File['/var/lib/ambari-agent/tmp/mysql-connector-java.jar'] {'content': DownloadSource('http://sandbox.hortonworks.com:8080/resources//mysql-connector-java.jar'), 'mode': 0644}
2016-09-26 16:54:38,592 - Not downloading the file from http://sandbox.hortonworks.com:8080/resources//mysql-connector-java.jar, because /var/lib/ambari-agent/tmp/mysql-connector-java.jar already exists
2016-09-26 16:54:38,607 - Execute[('cp', '--remove-destination', '/var/lib/ambari-agent/tmp/mysql-connector-java.jar', '/usr/hdp/current/ranger-admin/ews/lib')] {'path': ['/bin', '/usr/bin/'], 'sudo': True}
2016-09-26 16:54:38,649 - File['/usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar'] {'mode': 0644}
2016-09-26 16:54:38,652 - ModifyPropertiesFile['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'properties': ...}
2016-09-26 16:54:38,671 - Modifying existing properties file: /usr/hdp/current/ranger-admin/install.properties
2016-09-26 16:54:38,702 - File['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'}
2016-09-26 16:54:38,703 - Writing File['/usr/hdp/current/ranger-admin/install.properties'] because contents don't match
2016-09-26 16:54:38,704 - ModifyPropertiesFile['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'properties': {'SQL_CONNECTOR_JAR': '/usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar'}}
2016-09-26 16:54:38,704 - Modifying existing properties file: /usr/hdp/current/ranger-admin/install.properties
2016-09-26 16:54:38,706 - File['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'}
2016-09-26 16:54:38,706 - Writing File['/usr/hdp/current/ranger-admin/install.properties'] because contents don't match
2016-09-26 16:54:38,706 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://sandbox.hortonworks.com:8080/resources/DBConnectionVerification.jar'), 'mode': 0644}
2016-09-26 16:54:38,707 - Not downloading the file from http://sandbox.hortonworks.com:8080/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists
2016-09-26 16:54:38,707 - Execute['/opt/jdk1.8.0_102//bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-admin/ews/lib/* org.apache.ambari.server.DBConnectionVerification 'jdbc:mysql://localhost:3306/ranger' rangeradmin [PROTECTED] com.mysql.jdbc.Driver'] {'environment': {}, 'path': ['/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'], 'tries': 5, 'try_sleep': 10}
2016-09-26 16:54:39,030 - Execute[('ln', '-sf', '/usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/classes/conf', '/usr/hdp/current/ranger-admin/conf')] {'not_if': 'ls /usr/hdp/current/ranger-admin/conf', 'sudo': True, 'only_if': 'ls /usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/classes/conf'}
2016-09-26 16:54:39,037 - Skipping Execute[('ln', '-sf', '/usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/classes/conf', '/usr/hdp/current/ranger-admin/conf')] due to not_if
2016-09-26 16:54:39,037 - Directory['/usr/hdp/current/ranger-admin/'] {'owner': 'ranger', 'group': 'ranger', 'recursive_ownership': True}
2016-09-26 16:54:39,171 - Directory['/var/run/ranger'] {'owner': 'ranger', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-09-26 16:54:39,173 - Directory['/var/log/ranger/admin'] {'owner': 'ranger', 'group': 'ranger', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-09-26 16:54:39,175 - File['/usr/hdp/current/ranger-admin/conf/ranger-admin-env-logdir.sh'] {'owner': 'ranger', 'content': 'export RANGER_ADMIN_LOG_DIR=/var/log/ranger/admin', 'group': 'ranger', 'mode': 0755}
2016-09-26 16:54:39,175 - File['/usr/hdp/current/ranger-admin/conf/ranger-admin-default-site.xml'] {'owner': 'ranger', 'group': 'ranger'}
2016-09-26 16:54:39,176 - File['/usr/hdp/current/ranger-admin/conf/security-applicationContext.xml'] {'owner': 'ranger', 'group': 'ranger'}
2016-09-26 16:54:39,177 - Execute[('ln', '-sf', '/usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh', '/usr/bin/ranger-admin')] {'not_if': 'ls /usr/bin/ranger-admin', 'sudo': True, 'only_if': 'ls /usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh'}
2016-09-26 16:54:39,187 - Skipping Execute[('ln', '-sf', '/usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh', '/usr/bin/ranger-admin')] due to not_if
2016-09-26 16:54:39,188 - XmlConfig['ranger-admin-site.xml'] {'group': 'ranger', 'conf_dir': '/usr/hdp/current/ranger-admin/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'ranger', 'configurations': ...}
2016-09-26 16:54:39,196 - Generating config: /usr/hdp/current/ranger-admin/conf/ranger-admin-site.xml
2016-09-26 16:54:39,196 - File['/usr/hdp/current/ranger-admin/conf/ranger-admin-site.xml'] {'owner': 'ranger', 'content': InlineTemplate(...), 'group': 'ranger', 'mode': 0644, 'encoding': 'UTF-8'}
2016-09-26 16:54:39,259 - Directory['/usr/hdp/current/ranger-admin/conf/ranger_jaas'] {'owner': 'ranger', 'group': 'ranger', 'mode': 0700}
2016-09-26 16:54:39,259 - File['/usr/hdp/current/ranger-admin/ews/webapp/WEB-INF/log4j.properties'] {'content': ..., 'owner': 'ranger', 'group': 'ranger', 'mode': 0644}
2016-09-26 16:54:39,260 - Execute[('/opt/jdk1.8.0_102//bin/java', '-cp', '/usr/hdp/current/ranger-admin/cred/lib/*', 'org.apache.ranger.credentialapi.buildks', 'create', 'rangeradmin', '-value', [PROTECTED], '-provider', 'jceks://file/etc/ranger/admin/rangeradmin.jceks')] {'logoutput': True, 'environment': {'JAVA_HOME': '/opt/jdk1.8.0_102/'}, 'sudo': True}
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Sep 26, 2016 4:54:40 PM org.apache.hadoop.util.NativeCodeLoader <clinit>
WARNING: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Alias already exist!! will try to delete first.
FOUND value of [interactive] field in the Class [org.apache.hadoop.security.alias.CredentialShell] = [true]
Deleting credential: rangeradmin from CredentialProvider: jceks://file/etc/ranger/admin/rangeradmin.jceks
rangeradmin has been successfully deleted.
org.apache.hadoop.security.alias.JavaKeyStoreProvider has been updated.
rangeradmin has been successfully created.
org.apache.hadoop.security.alias.JavaKeyStoreProvider has been updated.
2016-09-26 16:54:40,535 - File['/etc/ranger/admin/rangeradmin.jceks'] {'owner': 'ranger', 'group': 'ranger', 'mode': 0640}
2016-09-26 16:54:40,535 - XmlConfig['core-site.xml'] {'group': 'ranger', 'conf_dir': '/usr/hdp/current/ranger-admin/conf', 'mode': 0644, 'configuration_attributes': {'final': {'fs.defaultFS': 'true'}, 'fs.defaultFS': {'final': 'true'}}, 'owner': 'ranger', 'configurations': ...}
2016-09-26 16:54:40,543 - Generating config: /usr/hdp/current/ranger-admin/conf/core-site.xml
2016-09-26 16:54:40,543 - File['/usr/hdp/current/ranger-admin/conf/core-site.xml'] {'owner': 'ranger', 'content': InlineTemplate(...), 'group': 'ranger', 'mode': 0644, 'encoding': 'UTF-8'}
2016-09-26 16:54:40,563 - Directory['/var/log/ambari-infra-solr-client'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-09-26 16:54:40,629 - Directory['/usr/lib/ambari-infra-solr-client'] {'recursive_ownership': True, 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-09-26 16:54:40,631 - File['/usr/lib/ambari-infra-solr-client/solrCloudCli.sh'] {'content': StaticFile('/usr/lib/ambari-infra-solr-client/solrCloudCli.sh'), 'mode': 0755}
2016-09-26 16:54:40,691 - File['/usr/lib/ambari-infra-solr-client/log4j.properties'] {'content': InlineTemplate(...), 'mode': 0644}
2016-09-26 16:54:40,714 - File['/var/log/ambari-infra-solr-client/solr-client.log'] {'content': '', 'mode': 0664}
2016-09-26 16:54:40,763 - Writing File['/var/log/ambari-infra-solr-client/solr-client.log'] because contents don't match
2016-09-26 16:54:40,766 - Execute['ambari-sudo.sh JAVA_HOME=/opt/jdk1.8.0_102/ /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181 --znode /infra-solr --check-znode --retry 5 --interval 10'] {}
2016-09-26 16:54:42,118 - Execute['ambari-sudo.sh JAVA_HOME=/opt/jdk1.8.0_102/ /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --download-config --config-dir /var/lib/ambari-agent/tmp/solr_config_ranger_audits_0.573793698667 --config-set ranger_audits --retry 30 --interval 5'] {'only_if': 'ambari-sudo.sh JAVA_HOME=/opt/jdk1.8.0_102/ /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --check-config --config-set ranger_audits --retry 30 --interval 5'}
2016-09-26 16:54:43,492 - Execute['ambari-sudo.sh JAVA_HOME=/opt/jdk1.8.0_102/ /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --upload-config --config-dir /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf --config-set ranger_audits --retry 30 --interval 5'] {'not_if': 'test -d /var/lib/ambari-agent/tmp/solr_config_ranger_audits_0.573793698667'}
2016-09-26 16:54:43,521 - Skipping Execute['ambari-sudo.sh JAVA_HOME=/opt/jdk1.8.0_102/ /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --upload-config --config-dir /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf --config-set ranger_audits --retry 30 --interval 5'] due to not_if
2016-09-26 16:54:43,522 - Directory['/var/lib/ambari-agent/tmp/solr_config_ranger_audits_0.573793698667'] {'action': ['delete'], 'create_parents': True}
2016-09-26 16:54:43,523 - Removing directory Directory['/var/lib/ambari-agent/tmp/solr_config_ranger_audits_0.573793698667'] and all its content
2016-09-26 16:54:43,526 - Execute['ambari-sudo.sh JAVA_HOME=/opt/jdk1.8.0_102/ /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --create-collection --collection ranger_audits --config-set ranger_audits --shards 1 --replication 1 --max-shards 1 --retry 5 --interval 10 --no-sharding'] {}
Command failed after 1 tries
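In case it helps to reproduce the issue, the failing step can be re-run by hand. This is a minimal sketch assuming the paths, hostnames, and collection settings taken from the log above (the zkCli.sh location is the usual HDP client path and is an assumption on my part; adjust JAVA_HOME and the Zookeeper quorum for your environment):

# 1) Check that the /infra-solr znode is reachable and Solr has live nodes
#    (zkCli.sh path assumed to be the standard HDP zookeeper-client location):
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server sandbox.hortonworks.com:2181 \
  ls /infra-solr/live_nodes

# 2) Re-run the create-collection step exactly as Ambari invoked it,
#    but with a higher retry count than the default 5:
ambari-sudo.sh JAVA_HOME=/opt/jdk1.8.0_102/ /usr/lib/ambari-infra-solr-client/solrCloudCli.sh \
  --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr \
  --create-collection --collection ranger_audits --config-set ranger_audits \
  --shards 1 --replication 1 --max-shards 1 --retry 30 --interval 10

If step 1 shows no live nodes, the retries in step 2 will keep failing no matter how high the count, which would point at the Infra Solr instance itself rather than the Ranger setup.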
Many thanks to anyone who can provide any help with this matter.