Member since 01-09-2019 · 7 Posts · 1 Kudo Received · 0 Solutions
03-12-2019 10:38 AM
About this answer: this isn't an answer; the need for HA and the cluster name is exactly why we're asking. Is there no other way?
02-26-2019 10:18 AM
I do have the HBase client installed. I've traced this to an issue with the move process: it turns out the Kerberos principals were still pointing to the old host. Funnily enough, I just clicked Set Recommended, restarted all the services flagged as requiring a restart, and everything came back online.
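For anyone hitting the same thing: the principals inside a service keytab can be listed to confirm which host they are bound to after a move. A minimal diagnostic sketch (the keytab path and principal are examples; substitute your service's):

```shell
# List the principals stored in a service keytab and eyeball the
# host component of each entry (example path; adjust per service).
klist -kt /etc/security/keytabs/hbase.service.keytab

# End-to-end check: authenticate using the keytab for this host's
# principal; a stale principal for the old host will fail here.
kinit -kt /etc/security/keytabs/hbase.service.keytab \
      "hbase/$(hostname -f)"
```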
02-12-2019 01:07 PM
Logs from Infra Solr now constantly show errors like the one below:

2019-02-12T10:20:33,548 [zkCallback-7-thread-4] WARN [c:hadoop_logs s:shard5 r:core_node19 x:hadoop_logs_shard5_replica_n16] org.apache.solr.update.PeerSync (PeerSync.java:489) - PeerSync: core=hadoop_logs_shard5_replica_n16 url=http://ip-10-241-10-96:8886/solr exception talking to http://ip-10-241-10-72:8886/solr/hadoop_logs_shard5_replica_n18/, failed
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://ip-10-241-10-72:8886/solr/hadoop_logs_shard5_replica_n18: Expected mime type application/octet-stream but got text/html. <html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 403 GSSException: Failure unspecified at GSS-API level (Mechanism level: Invalid argument (400) - Cannot find key of appropriate type to decrypt AP REP - RC4 with HMAC)</title>
</head>
<body><h2>HTTP ERROR 403</h2>
<p>Problem accessing /solr/hadoop_logs_shard5_replica_n18/get. Reason:
<pre> GSSException: Failure unspecified at GSS-API level (Mechanism level: Invalid argument (400) - Cannot find key of appropriate type to decrypt AP REP - RC4 with HMAC)</pre></p>
</body>
</html>
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607) ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:55:14]
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:55:14]
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:55:14]
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:55:14]
at org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:172) ~[solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:55:13]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_112]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) ~[metrics-core-3.2.6.jar:3.2.6]
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:55:14]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
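The "Cannot find key of appropriate type to decrypt AP REP - RC4 with HMAC" part of the 403 usually indicates an encryption-type mismatch between the ticket presented and the keys in the service keytab. A hedged way to inspect this (the keytab path is an example; point it at the SPNEGO keytab your Solr nodes actually use):

```shell
# -e shows each key's encryption type next to its principal, so a
# missing or stale enctype (e.g. no matching arcfour-hmac key) is visible.
klist -kte /etc/security/keytabs/spnego.service.keytab

# Compare against the enctypes clients are configured to request.
grep -E 'permitted_enctypes|default_tgs_enctypes|default_tkt_enctypes' /etc/krb5.conf
```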
02-11-2019 02:31 PM
Hello, I'm having the same issue. I'm running 3.0.1, kerberized, integrated with AD (everything seems to work just fine). logsearch-portal has no errors, but logfeeder is logging this constantly:

2019-02-11 07:36:40,972 [OutputSolr,hadoop_logs,worker=0] WARN apache.solr.client.solrj.impl.CloudSolrClient (CloudSolrClient.java:992) - Re-trying request to collection(s) [hadoop_logs] after stale state error from server.
2019-02-11 07:36:40,973 [OutputSolr,hadoop_logs,worker=0] ERROR apache.solr.client.solrj.impl.CloudSolrClient (CloudSolrClient.java:921) - Request to collection [hadoop_logs] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry? 1
The cluster is otherwise fully operational.
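"Could not find a healthy node" generally means the replica state ZooKeeper advertises no longer matches reality. One way to see what the client sees is the Collections API CLUSTERSTATUS action (host and port taken from the logs above as an example; `--negotiate -u :` does SPNEGO auth with the current Kerberos ticket, which a kerberized Infra Solr will require):

```shell
# Ask Infra Solr for its view of the hadoop_logs collection: which
# replicas exist, on which nodes, and whether they are active or down.
curl --negotiate -u : \
  "http://ip-10-241-10-96:8886/solr/admin/collections?action=CLUSTERSTATUS&collection=hadoop_logs&wt=json"
```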
02-07-2019 08:51 PM
I have the same issue, and I do have the HBase client installed.

2019-02-07 16:57:28,593 INFO [ReadOnlyZKClient-ip-10-241-10-72.eu-west-1.compute.internal:2181,ip-10-241-10-7.eu-west-1.compute.internal:2181,ip-10-241-10-96.eu-west-1.compute.internal:2181@0x7770f470] zookeeper.ZooKeeper: Initiating client connection, connectString=ip-10-241-10-72.eu-west-1.compute.internal:2181,ip-10-241-10-7.eu-west-1.compute.internal:2181,ip-10-241-10-96.eu-west-1.compute.internal:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$13/122109374@59d29f3c
2019-02-07 16:57:28,619 INFO [ReadOnlyZKClient-ip-10-241-10-72.eu-west-1.compute.internal:2181,ip-10-241-10-7.eu-west-1.compute.internal:2181,ip-10-241-10-96.eu-west-1.compute.internal:2181@0x7770f470-SendThread(ip-10-241-10-72.eu-west-1.compute.internal:2181)] zookeeper.ClientCnxn: Opening socket connection to server ip-10-241-10-72.eu-west-1.compute.internal/10.241.10.72:2181. Will not attempt to authenticate using SASL (unknown error)
2019-02-07 16:57:28,628 INFO [ReadOnlyZKClient-ip-10-241-10-72.eu-west-1.compute.internal:2181,ip-10-241-10-7.eu-west-1.compute.internal:2181,ip-10-241-10-96.eu-west-1.compute.internal:2181@0x7770f470-SendThread(ip-10-241-10-72.eu-west-1.compute.internal:2181)] zookeeper.ClientCnxn: Socket connection established, initiating session, client: /10.241.10.72:34382, server: ip-10-241-10-72.eu-west-1.compute.internal/10.241.10.72:2181
2019-02-07 16:57:28,642 INFO [ReadOnlyZKClient-ip-10-241-10-72.eu-west-1.compute.internal:2181,ip-10-241-10-7.eu-west-1.compute.internal:2181,ip-10-241-10-96.eu-west-1.compute.internal:2181@0x7770f470-SendThread(ip-10-241-10-72.eu-west-1.compute.internal:2181)] zookeeper.ClientCnxn: Session establishment complete on server ip-10-241-10-72.eu-west-1.compute.internal/10.241.10.72:2181, sessionid = 0x268c8d1bb3600c1, negotiated timeout = 60000
2019-02-07 16:57:28,662 WARN [main] client.ConnectionImplementation: Retrieve cluster id failed
java.util.concurrent.ExecutionException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-secure/hbaseid
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at org.apache.hadoop.hbase.client.ConnectionImplementation.retrieveClusterId(ConnectionImplementation.java:527)
at org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(ConnectionImplementation.java:287)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:219)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:114)
at org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator.createAllTables(TimelineSchemaCreator.java:301)
at org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator.createAllSchemas(TimelineSchemaCreator.java:277)
at org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator.main(TimelineSchemaCreator.java:146)
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-secure/hbaseid
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$ZKTask$1.exec(ReadOnlyZKClient.java:168)
at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:323)
at java.lang.Thread.run(Thread.java:745)
2019-02-07 16:57:32,982 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=6, retries=36, started=4124 ms ago, cancelled=false, msg=org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-secure/meta-region-server, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at null
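The NoNode error means the /atsv2-hbase-secure znode was never created, or the client is looking under the wrong parent path. What actually exists in ZooKeeper can be checked with the stock CLI; a sketch (the zkCli.sh path shown is typical for an HDP install but varies by distribution, and the -server host is taken from the logs above):

```shell
# Connect to one of the quorum members and list the root plus the
# ATSv2 parent znode to see whether hbaseid was ever written.
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server ip-10-241-10-72:2181 <<'EOF'
ls /
ls /atsv2-hbase-secure
EOF
```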
01-09-2019 01:36 PM · 1 Kudo
Any updates on this? libhdfs.so.0.0.0 is missing on 3.0.1.0-187 as well.
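In case it helps anyone else: a quick check for whether any libhdfs build ships under the stack directory, plus the commonly suggested workaround of symlinking the versioned name. The paths below are examples for a 3.0.1.0-187 install; verify what `find` reports before creating any link:

```shell
# Search the whole stack directory for any libhdfs artifact.
find /usr/hdp/3.0.1.0-187 -name 'libhdfs*' 2>/dev/null

# If an unversioned libhdfs.so exists but a consumer expects the
# .so.0.0.0 name, a symlink is a common workaround (example paths;
# confirm the real location first before uncommenting).
# ln -s /usr/hdp/3.0.1.0-187/usr/lib/libhdfs.so \
#       /usr/hdp/3.0.1.0-187/usr/lib/libhdfs.so.0.0.0
```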