Member since: 05-21-2021
Posts: 33
Kudos Received: 1
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 776 | 06-23-2022 01:06 AM
 | 1731 | 04-22-2022 02:24 AM
 | 8612 | 03-29-2022 01:20 AM
11-01-2021
08:21 AM
Hello @nthomas Sorry for the delayed reply. I set the LDAP user search filter and the LDAP user search base in Cloudera Manager > Settings. Setting these values blocked the users from seeing the cluster information and the settings, but it did not completely block them from logging in. The main intention is to block the users from logging in. Do you know how I can block the users completely?
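For anyone comparing notes, a search filter of the shape below is roughly what that setting expects. All DNs, group names, and attribute names here are hypothetical; adjust them to your own directory:

```
# Hypothetical AD-style value for "LDAP User Search Filter":
# matches only members of one group, with {0} standing in for the login name.
(&(sAMAccountName={0})(memberOf=CN=cm-users,OU=Groups,DC=example,DC=com))
```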
10-21-2021
03:33 AM
Hello @smdas Thank you for your detailed reply. We looked into the ZooKeeper logs and couldn't find any issue there. After [2], the RangerAudits Shard1 Replica1 kept showing the same error a couple of times, and then the Solr server stopped. We investigated this further and found that there was a long GC pause during that time, due to which the application (the Solr server) lost its connection to ZooKeeper and started throwing the error. We have increased zkClientTimeout to 30 seconds and restarted the Solr service, and we can see that a leader is now elected for the collection. Version: CDP 7.1.6 Thanks
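For reference, the timeout we raised corresponds to the zkClientTimeout setting in Solr's solr.xml. In CDP the value is normally managed through Cloudera Manager rather than edited by hand, so treat this fragment as illustrative of the underlying Solr setting only:

```xml
<!-- solr.xml fragment: raise the ZooKeeper client session timeout.
     30000 ms = the 30-second zkClientTimeout mentioned above. -->
<solr>
  <solrcloud>
    <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
  </solrcloud>
</solr>
```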
10-20-2021
04:18 AM
Dear team, We are facing the below issue on one of the Solr nodes.

2021-10-17 04:05:57.006 ERROR (qtp1916575798-2477) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException:
Cannot talk to ZooKeeper - Updates are disabled.
at org.apache.solr.update.processor.DistributedZkUpdateProcessor.zkCheck(DistributedZkUpdateProcessor.java:1245)
at org.apache.solr.update.processor.DistributedZkUpdateProcessor.setupRequest(DistributedZkUpdateProcessor.java:582)
at org.apache.solr.update.processor.DistributedZkUpdateProcessor.processAdd(DistributedZkUpdateProcessor.java:239)
at org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
at org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:477)
at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
at org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
at org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)

However, after some time, the Solr server is able to reconnect.

2021-10-17 04:05:57.028 WARN (Thread-2414) [ ] o.a.z.Login TGT renewal thread has been interrupted and will exit.
2021-10-17 04:05:57.043 INFO (zkConnectionManagerCallback-11-thread-1-EventThread) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
2021-10-17 04:05:57.043 INFO (zkConnectionManagerCallback-11-thread-1-EventThread) [ ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
2021-10-17 04:05:57.043 INFO (zkConnectionManagerCallback-11-thread-1-EventThread) [ ] o.a.s.c.ZkController ZooKeeper session re-connected ... refreshing core states after session expiration.
2021-10-17 04:05:57.047 WARN (qtp1916575798-2461) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.h.s.a.u.KerberosName auth_to_local rule mechanism not set.Using default of hadoop
2021-10-17 04:05:57.072 INFO (zkConnectionManagerCallback-11-thread-1-EventThread) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (2)
2021-10-17 04:05:57.085 INFO (zkConnectionManagerCallback-11-thread-1-EventThread) [ ] o.a.s.c.Overseer Overseer (id=72334547140450792-192.168.0.17:8985_solr-n_0000000153) closing
2021-10-17 04:05:57.085 INFO (zkConnectionManagerCallback-11-thread-1-EventThread) [ ] o.a.s.c.Overseer Overseer (id=72334547140450792-192.168.0.17:8985_solr-n_0000000153) closing
2021-10-17 04:05:57.085 INFO (zkConnectionManagerCallback-11-thread-1-EventThread) [ ] o.a.s.c.Overseer Overseer (id=72334547140450792-192.168.0.17:8985_solr-n_0000000153) closing
2021-10-17 04:05:57.087 INFO (zkConnectionManagerCallback-11-thread-1-EventThread) [ ] o.a.s.c.Overseer Overseer (id=72334547140450792-192.168.0.17:8985_solr-n_0000000153) closing
2021-10-17 04:05:57.089 INFO (zkConnectionManagerCallback-11-thread-1-EventThread) [ ] o.a.s.c.ZkController Publish node=192.168.0.17:8985_solr as DOWN
2021-10-17 04:05:57.093 INFO (zkConnectionManagerCallback-11-thread-1-EventThread) [ ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/192.168.0.17:8985_solr
2021-10-17 04:05:57.097 INFO (zkCallback-10-thread-28) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/ranger_audits/state.json] for collection [ranger_audits] has occurred - updating... (live nodes size: [2])
2021-10-17 04:05:57.098 INFO (coreZkRegister-1-thread-5) [ ] o.a.s.c.ZkController Registering core ranger_audits_shard1_replica_n1 afterExpiration? true
2021-10-17 04:05:57.099 INFO (coreZkRegister-1-thread-6) [ ] o.a.s.s.ZkIndexSchemaReader Creating ZooKeeper watch for the managed schema at /configs/ranger_audits/managed-schema
2021-10-17 04:05:57.099 INFO (zkConnectionManagerCallback-11-thread-1-EventThread) [ ] o.a.s.c.c.DefaultConnectionStrategy Reconnected to ZooKeeper
2021-10-17 04:05:57.099 INFO (zkConnectionManagerCallback-11-thread-1-EventThread) [ ] o.a.s.c.c.ConnectionManager zkClient Connected:true
2021-10-17 04:05:57.102 INFO (zkCallback-10-thread-24) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
2021-10-17 04:05:57.102 INFO (Thread-2418) [ ] o.a.s.c.SolrCore config update listener called for core ranger_audits_shard1_replica_n1
2021-10-17 04:05:57.103 INFO (coreZkRegister-1-thread-6) [ ] o.a.s.s.ZkIndexSchemaReader Current schema version 0 is already the latest
2021-10-17 04:05:57.109 INFO (coreZkRegister-1-thread-5) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContextBase make sure parent is created /collections/ranger_audits/leaders/shard1
2021-10-17 04:05:57.114 INFO (coreZkRegister-1-thread-5) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
2021-10-17 04:05:57.114 INFO (coreZkRegister-1-thread-5) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
2021-10-17 04:05:57.114 INFO (coreZkRegister-1-thread-5) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.c.SyncStrategy Sync replicas to https://192.168.0.17:8985/solr/ranger_audits_shard1_replica_n1/
2021-10-17 04:05:57.114 INFO (coreZkRegister-1-thread-5) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
2021-10-17 04:05:57.114 INFO (coreZkRegister-1-thread-5) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.c.SyncStrategy https://192.168.0.17:8985/solr/ranger_audits_shard1_replica_n1/ has no replicas
2021-10-17 04:05:57.114 INFO (coreZkRegister-1-thread-5) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContextBase Creating leader registration node /collections/ranger_audits/leaders/shard1/leader after winning as /collections/ranger_audits/leader_elect/shard1/election/216449719079380911-core_node2-n_0000000061

But these keep on repeating, and after around 10 minutes we see the below error and the Solr server finally gives up.

2021-10-17 04:14:25.112 ERROR (qtp1916575798-2487) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.u.p.DistributedZkUpdateProcessor ClusterState says we are the leader, but locally we don't think so
2021-10-17 04:14:25.112 ERROR (qtp1916575798-2325) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.u.p.DistributedZkUpdateProcessor ClusterState says we are the leader, but locally we don't think so
2021-10-17 04:14:25.114 INFO (qtp1916575798-2487) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [ranger_audits_shard1_replica_n1] webapp=/solr path=/update params={wt=javabin&version=2}{} 0 36703
2021-10-17 04:14:25.114 WARN (qtp1916575798-2492) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.h.s.a.u.KerberosName auth_to_local rule mechanism not set.Using default of hadoop
2021-10-17 04:14:25.116 INFO (qtp1916575798-2325) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [ranger_audits_shard1_replica_n1] webapp=/solr path=/update params={wt=javabin&version=2}{} 0 36707
2021-10-17 04:14:37.503 WARN (Thread-2474) [ ] o.a.z.Login TGT renewal thread has been interrupted and will exit.
2021-10-17 04:14:37.504 ERROR (qtp1916575798-2325) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: ClusterState says we are the leader (https://192.168.0.17:8985/solr/ranger_audits_shard1_replica_n1), but locally we don't think so. Request came from null
at org.apache.solr.update.processor.DistributedZkUpdateProcessor.doDefensiveChecks(DistributedZkUpdateProcessor.java:1017)
at org.apache.solr.update.processor.DistributedZkUpdateProcessor.setupRequest(DistributedZkUpdateProcessor.java:655)
at org.apache.solr.update.processor.DistributedZkUpdateProcessor.setupRequest(DistributedZkUpdateProcessor.java:593)
at org.apache.solr.update.processor.DistributedZkUpdateProcessor.setupRequest(DistributedZkUpdateProcessor.java:585)

Please help to resolve this issue. Thanks
10-11-2021
09:27 AM
Hello Team, I have a requirement to apply specific filters for user login on Cloudera Manager. I came across a configuration setting that allows including an external authentication script, but I am not clear on what the script should look like: https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/security-kerberos-authentication/topics/cm-security-external-authentication.html. Does anybody have an idea? Thanks
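In case it helps anyone landing here, below is a minimal sketch of what such a script could look like. The exact contract (how the username and password are passed, and what the exit codes mean) is defined in the Cloudera documentation linked above; this sketch simply assumes the username arrives as the first argument and that exit status 0 means the login is allowed. The allowlist is purely hypothetical:

```shell
#!/bin/sh
# Hypothetical external-authentication filter: allow only listed users.
# Assumptions (verify against the Cloudera docs linked above): the
# username arrives as the first argument, and exit status 0 means "allow".
ALLOWED_USERS="alice bob"   # hypothetical allowlist

# check_user USER -> status 0 if USER is on the allowlist, 1 otherwise
check_user() {
  for u in $ALLOWED_USERS; do
    [ "$u" = "$1" ] && return 0
  done
  return 1
}

# In the real script the final line would be: check_user "$1"
```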
09-16-2021
08:17 AM
@asish Sorry for the delayed reply. We only saw this error in the log; however, I looked into the logs again later today and couldn't find the "under construction" message anymore. Is there any workaround so that this doesn't happen in the future?
09-15-2021
05:04 AM
@balajip I set the config in the wrong service. After setting the config on the Hive service, it worked. Thank you.
09-15-2021
01:17 AM
Thank you @balajip. I tried the solution and updated the config on Hive on Tez, but I am still getting the issue. The full stack trace is given below.

[HiveServer2-Handler-Pool: Thread-114015]: Error fetching results:
org.apache.hive.service.cli.HiveSQLException: java.io.IOException: java.lang.RuntimeException: java.util.concurrent.RejectedExecutionException: Task org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture@4fcc96f rejected from java.util.concurrent.ThreadPoolExecutor@66bee5ac[Shutting down, pool size = 162, active threads = 0, queued tasks = 0, completed tasks = 5297]
at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:476) ~[hive-service-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:328) ~[hive-service-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:946) ~[hive-service-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:567) ~[hive-service-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:798) [hive-service-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1837) [hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1822) [hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) [hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:654) [hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) [hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_242]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_242]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_242]
Caused by: java.io.IOException: java.lang.RuntimeException: java.util.concurrent.RejectedExecutionException: Task org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture@4fcc96f rejected from java.util.concurrent.ThreadPoolExecutor@66bee5ac[Shutting down, pool size = 162, active threads = 0, queued tasks = 0, completed tasks = 5297]
at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:638) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:545) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:150) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:901) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.getResults(ReExecDriver.java:243) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:471) ~[hive-service-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
... 13 more
Caused by: java.lang.RuntimeException: java.util.concurrent.RejectedExecutionException: Task org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture@4fcc96f rejected from java.util.concurrent.ThreadPoolExecutor@66bee5ac[Shutting down, pool size = 162, active threads = 0, queued tasks = 0, completed tasks = 5297]
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:200) ~[hbase-client-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:267) ~[hbase-client-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:435) ~[hbase-client-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:310) ~[hbase-client-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:595) ~[hbase-client-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:211) ~[hbase-mapreduce-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:133) ~[hbase-mapreduce-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$1.nextKeyValue(TableInputFormatBase.java:219) ~[hbase-mapreduce-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat$1.next(HiveHBaseTableInputFormat.java:140) ~[hive-hbase-handler-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat$1.next(HiveHBaseTableInputFormat.java:101) ~[hive-hbase-handler-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:605) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:545) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:150) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:901) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.getResults(ReExecDriver.java:243) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:471) ~[hive-service-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
... 13 more
Caused by: java.util.concurrent.RejectedExecutionException: Task org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture@4fcc96f rejected from java.util.concurrent.ThreadPoolExecutor@66bee5ac[Shutting down, pool size = 162, active threads = 0, queued tasks = 0, completed tasks = 5297]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) ~[?:1.8.0_242]
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) ~[?:1.8.0_242]
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) ~[?:1.8.0_242]
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService.submit(ResultBoundedCompletionService.java:171) ~[hbase-client-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.addCallsForCurrentReplica(ScannerCallableWithReplicas.java:329) ~[hbase-client-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:191) ~[hbase-client-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58) ~[hbase-client-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192) ~[hbase-client-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:267) ~[hbase-client-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:435) ~[hbase-client-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:310) ~[hbase-client-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:595) ~[hbase-client-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:211) ~[hbase-mapreduce-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:133) ~[hbase-mapreduce-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$1.nextKeyValue(TableInputFormatBase.java:219) ~[hbase-mapreduce-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat$1.next(HiveHBaseTableInputFormat.java:140) ~[hive-hbase-handler-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat$1.next(HiveHBaseTableInputFormat.java:101) ~[hive-hbase-handler-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:605) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:545) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:150) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:901) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.getResults(ReExecDriver.java:243) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:471) ~[hive-service-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
... 13 more
09-14-2021
04:18 AM
On CDP we are using both Hive (for the Hive Metastores) and Hive on Tez (for the HiveServers). We are getting the below error while trying to run a query based on a condition. I can't share the table information and the exact query, but it looks something like the below.

CREATE EXTERNAL TABLE IF NOT EXISTS XXX (
  `1` string,
  `6` varchar(30),
  `7` varchar(5),
  `8` varchar(10)
) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES('hbase.columns.mapping'=':key,
  xx:1,
  xx:5,
  xx:6')
TBLPROPERTIES (
  'hbase.table.name'='YYYYY'
);

The query looks as follows:

select * from XXX where `8` = '1990-10-10';

And we see the below error from the HiveServer:

[a3ed3b7b-d225-43af-9ac0-76917911a742 HiveServer2-Handler-Pool: Thread-128-EventThread]: Error while calling watcher
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@1d7573cd rejected from java.util.concurrent.ThreadPoolExecutor@194ae4bb[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 2]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) ~[?:1.8.0_242]
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) ~[?:1.8.0_242]
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) ~[?:1.8.0_242]
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) ~[?:1.8.0_242]
at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) ~[?:1.8.0_242]
at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:541) ~[hbase-zookeeper-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:40) ~[hbase-zookeeper-2.2.3.7.1.6.0-297.jar:2.2.3.7.1.6.0-297]
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) [zookeeper-3.5.5.7.1.6.0-297.jar:3.5.5.7.1.6.0-297]
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) [zookeeper-3.5.5.7.1.6.0-297.jar:3.5.5.7.1.6.0-297]

We added the below config on the HiveServer (based on this: https://community.cloudera.com/t5/Support-Questions/HIVE-concurrency-request-erreur-when-run-several-same/td-p/319166), but we are still getting the issue.

<property>
  <name>hive.server2.parallel.ops.in.session</name>
  <value>true</value>
</property>
09-07-2021
02:02 AM
Dear team, We are getting the below error in the CDP 7.1.6 HiveServer logs. Can you please share the cause of this issue and any possible solution?

2021-09-03 13:12:25,571 ERROR org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-16886]: FAILED: Execution Error, return code 40000 from org.apache.hadoop.hive.ql.exec.MoveTask. java.io.IOException: Fail to get checksum, since file /warehouse/tablespace/managed/hive/xxxxx/xxxxx/xxxxx/xxxxx/delta_0000003_0000003_0000/xxxxx.xxxxx is under construction.
2021-09-03 13:12:25,571 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-16886]: Completed executing command(queryId=hive_20210903131225_70117bf2-c60f-4564-83e9-8a60be421f63); Time taken: 0.12 seconds
2021-09-03 13:12:25,572 INFO org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-16886]: OK
2021-09-03 13:12:25,572 INFO org.apache.hadoop.hive.ql.lockmgr.DbTxnManager: [HiveServer2-Background-Pool: Thread-16886]: Stopped heartbeat for query: hive_20210903131225_70117bf2-c60f-4564-83e9-8a60be421f63
2021-09-03 13:12:25,578 ERROR org.apache.hive.service.cli.operation.Operation: [HiveServer2-Background-Pool: Thread-16886]: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: Execution Error, return code 40000 from org.apache.hadoop.hive.ql.exec.MoveTask. java.io.IOException: Fail to get checksum, since file /warehouse/tablespace/managed/hive/xxxxx/xxxxx/xxxxx/xxxxx/delta_0000003_0000003_0000/xxxxx.xxxxx is under construction.
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:362) ~[hive-service-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:241) ~[hive-service-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87) ~[hive-service-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:322) [hive-service-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at java.security.AccessController.doPrivileged(Native Method) [?:1.8.0_242]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_242]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898) [hadoop-common-3.1.1.7.1.6.0-297.jar:?]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:340) [hive-service-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_242]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_242]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_242]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_242]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_242]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_242]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_242]
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Fail to get checksum, since file /warehouse/tablespace/managed/hive/xxxxx/xxxxx/xxxxx/xxxxx/delta_0000003_0000003_0000/xxxxx.xxxxx is under construction.
at org.apache.hadoop.hive.ql.metadata.Hive.addWriteNotificationLog(Hive.java:3509) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:2245) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.exec.MoveTask.handleStaticParts(MoveTask.java:515) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:432) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:357) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:330) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:246) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.Executor.execute(Executor.java:109) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:742) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:497) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:491) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:166) ~[hive-exec-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:225) ~[hive-service-3.1.3000.7.1.6.0-297.jar:3.1.3000.7.1.6.0-297]
... 13 more
05-26-2021
04:21 AM
I edited my solution above a bit. We found that the issue was related to some kind of routing from the Oozie WF to the YARN logs. What we wanted was to view the logs from the Oozie WF manager. When we access the logs from the YARN RM UI it works, but we couldn't view the logs directly from the Oozie WF manager. We already have the correct configurations present in the MapReduce service.