2019-10-22T13:06:53,617 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:06:56,641 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:06:59,665 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:02,686 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:05,706 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:08,727 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:11,749 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:14,769 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:17,790 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:20,815 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:23,837 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:26,858 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:29,881 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:32,903 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:33,332 WARN [HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:initialize(5424)) - HiveConf of name hive.stats.fetch.partition.stats does not exist
2019-10-22T13:07:33,332 WARN [HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:initialize(5424)) - HiveConf of name hive.heapsize does not exist
2019-10-22T13:07:33,349 INFO [HiveServer2-Handler-Pool: Thread-133]: thrift.ThriftCLIService (:()) - Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V10
2019-10-22T13:07:33,503 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Created HDFS directory: /tmp/hive/hive/b14c105c-76d8-4564-bb29-e45f1bec1614
2019-10-22T13:07:33,534 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Created local directory: /tmp/hive/b14c105c-76d8-4564-bb29-e45f1bec1614
2019-10-22T13:07:33,594 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Created HDFS directory: /tmp/hive/hive/b14c105c-76d8-4564-bb29-e45f1bec1614/_tmp_space.db
2019-10-22T13:07:33,596 INFO [HiveServer2-Handler-Pool: Thread-133]: metastore.HiveMetaStoreClient (:()) - Trying to connect to metastore with URI thrift://hdpserver11.puretec.purestorage.com:9083
2019-10-22T13:07:33,596 INFO [HiveServer2-Handler-Pool: Thread-133]: metastore.HiveMetaStoreClient (:()) - Opened a connection to metastore, current connections: 8
2019-10-22T13:07:33,597 INFO [HiveServer2-Handler-Pool: Thread-133]: metastore.HiveMetaStoreClient (:()) - Connected to metastore.
2019-10-22T13:07:33,597 INFO [HiveServer2-Handler-Pool: Thread-133]: metastore.RetryingMetaStoreClient (:()) - RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=hive (auth:SIMPLE) retries=24 delay=5 lifetime=0
2019-10-22T13:07:33,608 INFO [HiveServer2-Handler-Pool: Thread-133]: session.HiveSessionImpl (:()) - Operation log session directory is created: /tmp/hive/operation_logs/b14c105c-76d8-4564-bb29-e45f1bec1614
2019-10-22T13:07:33,608 INFO [HiveServer2-Handler-Pool: Thread-133]: service.CompositeService (:()) - Session opened, SessionHandle [b14c105c-76d8-4564-bb29-e45f1bec1614], current sessions:2
2019-10-22T13:07:33,687 INFO [HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: b14c105c-76d8-4564-bb29-e45f1bec1614
2019-10-22T13:07:33,687 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Updating thread name to b14c105c-76d8-4564-bb29-e45f1bec1614 HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:07:33,687 INFO [b14c105c-76d8-4564-bb29-e45f1bec1614 HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: b14c105c-76d8-4564-bb29-e45f1bec1614
2019-10-22T13:07:33,688 INFO [b14c105c-76d8-4564-bb29-e45f1bec1614 HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Resetting thread name to HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:07:33,707 INFO [HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: b14c105c-76d8-4564-bb29-e45f1bec1614
2019-10-22T13:07:33,707 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Updating thread name to b14c105c-76d8-4564-bb29-e45f1bec1614 HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:07:33,707 INFO [b14c105c-76d8-4564-bb29-e45f1bec1614 HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: b14c105c-76d8-4564-bb29-e45f1bec1614
2019-10-22T13:07:33,707 INFO [b14c105c-76d8-4564-bb29-e45f1bec1614 HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Resetting thread name to HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:07:33,713 INFO [HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: b14c105c-76d8-4564-bb29-e45f1bec1614
2019-10-22T13:07:33,713 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Updating thread name to b14c105c-76d8-4564-bb29-e45f1bec1614 HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:07:33,713 INFO [b14c105c-76d8-4564-bb29-e45f1bec1614 HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: b14c105c-76d8-4564-bb29-e45f1bec1614
2019-10-22T13:07:33,713 INFO [b14c105c-76d8-4564-bb29-e45f1bec1614 HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Resetting thread name to HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:07:33,743 INFO [HiveServer2-Handler-Pool: Thread-133]: service.CompositeService (:()) - Session closed, SessionHandle [b14c105c-76d8-4564-bb29-e45f1bec1614], current sessions:1
2019-10-22T13:07:33,743 INFO [HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: b14c105c-76d8-4564-bb29-e45f1bec1614
2019-10-22T13:07:33,743 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Updating thread name to b14c105c-76d8-4564-bb29-e45f1bec1614 HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:07:33,744 INFO [b14c105c-76d8-4564-bb29-e45f1bec1614 HiveServer2-Handler-Pool: Thread-133]: session.HiveSessionImpl (:()) - Operation log session directory is deleted: /tmp/hive/operation_logs/b14c105c-76d8-4564-bb29-e45f1bec1614
2019-10-22T13:07:33,744 INFO [b14c105c-76d8-4564-bb29-e45f1bec1614 HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: b14c105c-76d8-4564-bb29-e45f1bec1614
2019-10-22T13:07:33,744 INFO [b14c105c-76d8-4564-bb29-e45f1bec1614 HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Resetting thread name to HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:07:33,777 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Deleted directory: /tmp/hive/hive/b14c105c-76d8-4564-bb29-e45f1bec1614 on fs with scheme hdfs
2019-10-22T13:07:33,778 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Deleted directory: /tmp/hive/b14c105c-76d8-4564-bb29-e45f1bec1614 on fs with scheme file
2019-10-22T13:07:33,778 INFO [HiveServer2-Handler-Pool: Thread-133]: metastore.HiveMetaStoreClient (:()) - Closed a connection to metastore, current connections: 7
2019-10-22T13:07:35,925 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:38,946 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:41,968 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:44,992 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:48,016 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:49,503 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - Audit Status Log: name=hiveServer2.async.multi_dest.batch.solr, interval=01:00.010 minutes, events=1, failedCount=1, totalEvents=49, totalFailedCount=49
2019-10-22T13:07:49,511 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: impl.CloudSolrClient (:()) - Request to collection [ranger_audits] failed due to (500) org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index, retry=0 commError=false errorCode=500
2019-10-22T13:07:49,511 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: impl.CloudSolrClient (:()) - request was not communication error it seems
2019-10-22T13:07:49,511 WARN [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - failed to log audit event: {"repoType":3,"repo":"purecluster_hive","reqUser":"hive","evtTime":"2019-10-09 13:03:39.511","access":"USE","resType":"@null","action":"_any","result":1,"agent":"hiveServer2","policy":13,"enforcer":"ranger-acl","sess":"b34a2404-329d-434c-9474-f0246ef3920a","cliType":"HIVESERVER2","cliIP":"10.21.236.151","reqData":"show databases","agentHost":"hdpserver11.puretec.purestorage.com","logType":"RangerAudit","id":"7cac16eb-c10d-4f21-80e2-c3e9147bb620-0","seq_num":1,"event_count":1,"event_dur_ms":0,"tags":[],"additional_info":"{\"remote-ip-address\":10.21.236.151, \"forwarded-ip-addresses\":[]","cluster_name":"purecluster","policy_version":1}
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index
    at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1019) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) ~[?:?]
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) ~[?:?]
    at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106) ~[?:?]
    at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71) ~[?:?]
    at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:85) ~[?:?]
    at org.apache.ranger.audit.utils.SolrAppUtil$1.run(SolrAppUtil.java:35) ~[?:?]
    at org.apache.ranger.audit.utils.SolrAppUtil$1.run(SolrAppUtil.java:32) ~[?:?]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
    at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
    at org.apache.ranger.audit.provider.MiscUtil.executePrivilegedAction(MiscUtil.java:516) ~[?:?]
    at org.apache.ranger.audit.utils.SolrAppUtil.addDocsToSolr(SolrAppUtil.java:32) ~[?:?]
    at org.apache.ranger.audit.destination.SolrAuditDestination.log(SolrAuditDestination.java:232) ~[?:?]
    at org.apache.ranger.audit.provider.BaseAuditHandler.logJSON(BaseAuditHandler.java:172) ~[?:?]
    at org.apache.ranger.audit.queue.AuditFileSpool.sendEvent(AuditFileSpool.java:879) ~[?:?]
    at org.apache.ranger.audit.queue.AuditFileSpool.runLogAudit(AuditFileSpool.java:827) ~[?:?]
    at org.apache.ranger.audit.queue.AuditFileSpool.run(AuditFileSpool.java:757) ~[?:?]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index
    at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) ~[?:?]
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) ~[?:?]
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) ~[?:?]
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:484) ~[?:?]
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.lambda$directUpdate$0(CloudSolrClient.java:528) ~[?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]
    at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_112]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_112]
    ... 1 more
2019-10-22T13:07:49,511 WARN [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - Log failure count: 1 in past 01:00.011 minutes; 50 during process lifetime
2019-10-22T13:07:49,511 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: queue.AuditFileSpool (:()) - Error sending logs to consumer. provider=hiveServer2.async.multi_dest.batch, consumer=hiveServer2.async.multi_dest.batch.solr
2019-10-22T13:07:49,512 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: queue.AuditFileSpool (:()) - Destination is down. sleeping for 30000 milli seconds. indexQueue=2, queueName=hiveServer2.async.multi_dest.batch, consumer=hiveServer2.async.multi_dest.batch.solr
2019-10-22T13:07:51,040 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:54,063 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:07:57,086 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:00,106 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:03,130 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:06,153 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:09,175 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:12,198 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:15,223 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:18,249 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:21,272 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:24,293 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:27,315 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:30,337 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:33,002 INFO [Heartbeater-0]: lockmgr.DbTxnManager (:()) - Sending heartbeat for txnid:5296 and lockid:0 queryId=null txnid:0
2019-10-22T13:08:33,359 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:36,381 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:39,403 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:42,427 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:45,449 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:48,472 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:49,513 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - Audit Status Log: name=hiveServer2.async.multi_dest.batch.solr, interval=01:00.010 minutes, events=1, failedCount=1, totalEvents=50, totalFailedCount=50
2019-10-22T13:08:49,519 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: impl.CloudSolrClient (:()) - Request to collection [ranger_audits] failed due to (500) org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index, retry=0 commError=false errorCode=500
2019-10-22T13:08:49,519 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: impl.CloudSolrClient (:()) - request was not communication error it seems
2019-10-22T13:08:49,519 WARN [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - failed to log audit event: {"repoType":3,"repo":"purecluster_hive","reqUser":"hive","evtTime":"2019-10-09 13:03:39.511","access":"USE","resType":"@null","action":"_any","result":1,"agent":"hiveServer2","policy":13,"enforcer":"ranger-acl","sess":"b34a2404-329d-434c-9474-f0246ef3920a","cliType":"HIVESERVER2","cliIP":"10.21.236.151","reqData":"show databases","agentHost":"hdpserver11.puretec.purestorage.com","logType":"RangerAudit","id":"7cac16eb-c10d-4f21-80e2-c3e9147bb620-0","seq_num":1,"event_count":1,"event_dur_ms":0,"tags":[],"additional_info":"{\"remote-ip-address\":10.21.236.151, \"forwarded-ip-addresses\":[]","cluster_name":"purecluster","policy_version":1}
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index
    at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1019) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) ~[?:?]
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) ~[?:?]
    at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106) ~[?:?]
    at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71) ~[?:?]
    at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:85) ~[?:?]
    at org.apache.ranger.audit.utils.SolrAppUtil$1.run(SolrAppUtil.java:35) ~[?:?]
    at org.apache.ranger.audit.utils.SolrAppUtil$1.run(SolrAppUtil.java:32) ~[?:?]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
    at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
    at org.apache.ranger.audit.provider.MiscUtil.executePrivilegedAction(MiscUtil.java:516) ~[?:?]
    at org.apache.ranger.audit.utils.SolrAppUtil.addDocsToSolr(SolrAppUtil.java:32) ~[?:?]
    at org.apache.ranger.audit.destination.SolrAuditDestination.log(SolrAuditDestination.java:232) ~[?:?]
    at org.apache.ranger.audit.provider.BaseAuditHandler.logJSON(BaseAuditHandler.java:172) ~[?:?]
    at org.apache.ranger.audit.queue.AuditFileSpool.sendEvent(AuditFileSpool.java:879) ~[?:?]
    at org.apache.ranger.audit.queue.AuditFileSpool.runLogAudit(AuditFileSpool.java:827) ~[?:?]
    at org.apache.ranger.audit.queue.AuditFileSpool.run(AuditFileSpool.java:757) ~[?:?]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index
    at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) ~[?:?]
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) ~[?:?]
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) ~[?:?]
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:484) ~[?:?]
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.lambda$directUpdate$0(CloudSolrClient.java:528) ~[?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]
    at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_112]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_112]
    ... 1 more
2019-10-22T13:08:49,519 WARN [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - Log failure count: 1 in past 01:00.008 minutes; 51 during process lifetime
2019-10-22T13:08:49,519 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: queue.AuditFileSpool (:()) - Error sending logs to consumer. provider=hiveServer2.async.multi_dest.batch, consumer=hiveServer2.async.multi_dest.batch.solr
2019-10-22T13:08:49,520 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: queue.AuditFileSpool (:()) - Destination is down. sleeping for 30000 milli seconds. indexQueue=2, queueName=hiveServer2.async.multi_dest.batch, consumer=hiveServer2.async.multi_dest.batch.solr
2019-10-22T13:08:51,493 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:54,515 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:08:57,536 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:09:00,556 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:09:03,573 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:09:06,596 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:09:09,617 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:09:12,634 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:09:15,652 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:09:18,672 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:09:21,696 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:09:24,719 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:09:27,739 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:09:30,760 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:09:33,781 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:09:36,800 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:09:39,821 INFO 
[HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:09:42,840 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:09:45,859 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:09:48,879 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:09:49,521 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - Audit Status Log: name=hiveServer2.async.multi_dest.batch.solr, interval=01:00.008 minutes, events=1, failedCount=1, totalEvents=51, totalFailedCount=51 2019-10-22T13:09:49,530 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: impl.CloudSolrClient (:()) - Request to collection [ranger_audits] failed due to (500) org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index, retry=0 commError=false errorCode=500 2019-10-22T13:09:49,531 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: impl.CloudSolrClient (:()) - request was not communication error it seems 2019-10-22T13:09:49,531 WARN [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - failed to log audit event: 
{"repoType":3,"repo":"purecluster_hive","reqUser":"hive","evtTime":"2019-10-09 13:03:39.511","access":"USE","resType":"@null","action":"_any","result":1,"agent":"hiveServer2","policy":13,"enforcer":"ranger-acl","sess":"b34a2404-329d-434c-9474-f0246ef3920a","cliType":"HIVESERVER2","cliIP":"10.21.236.151","reqData":"show databases","agentHost":"hdpserver11.puretec.purestorage.com","logType":"RangerAudit","id":"7cac16eb-c10d-4f21-80e2-c3e9147bb620-0","seq_num":1,"event_count":1,"event_dur_ms":0,"tags":[],"additional_info":"{\"remote-ip-address\":10.21.236.151, \"forwarded-ip-addresses\":[]","cluster_name":"purecluster","policy_version":1}
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index
    at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1019) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) ~[?:?]
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) ~[?:?]
    at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106) ~[?:?]
    at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71) ~[?:?]
    at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:85) ~[?:?]
    at org.apache.ranger.audit.utils.SolrAppUtil$1.run(SolrAppUtil.java:35) ~[?:?]
    at org.apache.ranger.audit.utils.SolrAppUtil$1.run(SolrAppUtil.java:32) ~[?:?]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
    at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
    at org.apache.ranger.audit.provider.MiscUtil.executePrivilegedAction(MiscUtil.java:516) ~[?:?]
    at org.apache.ranger.audit.utils.SolrAppUtil.addDocsToSolr(SolrAppUtil.java:32) ~[?:?]
    at org.apache.ranger.audit.destination.SolrAuditDestination.log(SolrAuditDestination.java:232) ~[?:?]
    at org.apache.ranger.audit.provider.BaseAuditHandler.logJSON(BaseAuditHandler.java:172) ~[?:?]
    at org.apache.ranger.audit.queue.AuditFileSpool.sendEvent(AuditFileSpool.java:879) ~[?:?]
    at org.apache.ranger.audit.queue.AuditFileSpool.runLogAudit(AuditFileSpool.java:827) ~[?:?]
    at org.apache.ranger.audit.queue.AuditFileSpool.run(AuditFileSpool.java:757) ~[?:?]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index
    at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) ~[?:?]
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) ~[?:?]
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) ~[?:?]
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:484) ~[?:?]
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.lambda$directUpdate$0(CloudSolrClient.java:528) ~[?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]
    at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_112]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_112]
    ... 1 more
2019-10-22T13:09:49,531 WARN [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - Log failure count: 1 in past 01:00.012 minutes; 52 during process lifetime
2019-10-22T13:09:49,531 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: queue.AuditFileSpool (:()) - Error sending logs to consumer. provider=hiveServer2.async.multi_dest.batch, consumer=hiveServer2.async.multi_dest.batch.solr
2019-10-22T13:09:49,532 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: queue.AuditFileSpool (:()) - Destination is down. sleeping for 30000 milli seconds.
indexQueue=2, queueName=hiveServer2.async.multi_dest.batch, consumer=hiveServer2.async.multi_dest.batch.solr
2019-10-22T13:09:51,898 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:09:54,917 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:09:57,938 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:10:00,959 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:10:03,980 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:10:07,000 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:10:10,020 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:10:13,040 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:10:16,061 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:10:19,081 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:10:22,105 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:10:25,128 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:10:28,148 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:10:31,172 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:10:33,290 WARN [HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:initialize(5424)) - HiveConf of name hive.stats.fetch.partition.stats does not exist
2019-10-22T13:10:33,290 WARN [HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:initialize(5424)) - HiveConf of name hive.heapsize does not exist
2019-10-22T13:10:33,306 INFO [HiveServer2-Handler-Pool: Thread-133]: thrift.ThriftCLIService (:()) - Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V10
2019-10-22T13:10:33,364 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Created HDFS directory: /tmp/hive/hive/8a53be90-c1b3-4902-a46c-135fb5c31fce
2019-10-22T13:10:33,387 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Created local directory: /tmp/hive/8a53be90-c1b3-4902-a46c-135fb5c31fce
2019-10-22T13:10:33,436 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Created HDFS directory: /tmp/hive/hive/8a53be90-c1b3-4902-a46c-135fb5c31fce/_tmp_space.db
2019-10-22T13:10:33,437 INFO [HiveServer2-Handler-Pool: Thread-133]: metastore.HiveMetaStoreClient (:()) - Trying to
connect to metastore with URI thrift://hdpserver11.puretec.purestorage.com:9083
2019-10-22T13:10:33,438 INFO [HiveServer2-Handler-Pool: Thread-133]: metastore.HiveMetaStoreClient (:()) - Opened a connection to metastore, current connections: 8
2019-10-22T13:10:33,439 INFO [HiveServer2-Handler-Pool: Thread-133]: metastore.HiveMetaStoreClient (:()) - Connected to metastore.
2019-10-22T13:10:33,439 INFO [HiveServer2-Handler-Pool: Thread-133]: metastore.RetryingMetaStoreClient (:()) - RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=hive (auth:SIMPLE) retries=24 delay=5 lifetime=0
2019-10-22T13:10:33,452 INFO [HiveServer2-Handler-Pool: Thread-133]: session.HiveSessionImpl (:()) - Operation log session directory is created: /tmp/hive/operation_logs/8a53be90-c1b3-4902-a46c-135fb5c31fce
2019-10-22T13:10:33,452 INFO [HiveServer2-Handler-Pool: Thread-133]: service.CompositeService (:()) - Session opened, SessionHandle [8a53be90-c1b3-4902-a46c-135fb5c31fce], current sessions:2
2019-10-22T13:10:33,571 INFO [HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: 8a53be90-c1b3-4902-a46c-135fb5c31fce
2019-10-22T13:10:33,571 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Updating thread name to 8a53be90-c1b3-4902-a46c-135fb5c31fce HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:10:33,571 INFO [8a53be90-c1b3-4902-a46c-135fb5c31fce HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: 8a53be90-c1b3-4902-a46c-135fb5c31fce
2019-10-22T13:10:33,571 INFO [8a53be90-c1b3-4902-a46c-135fb5c31fce HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Resetting thread name to HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:10:33,596 INFO [HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: 8a53be90-c1b3-4902-a46c-135fb5c31fce
2019-10-22T13:10:33,596 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Updating thread name to 8a53be90-c1b3-4902-a46c-135fb5c31fce HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:10:33,596 INFO [8a53be90-c1b3-4902-a46c-135fb5c31fce HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: 8a53be90-c1b3-4902-a46c-135fb5c31fce
2019-10-22T13:10:33,596 INFO [8a53be90-c1b3-4902-a46c-135fb5c31fce HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Resetting thread name to HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:10:33,602 INFO [HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: 8a53be90-c1b3-4902-a46c-135fb5c31fce
2019-10-22T13:10:33,602 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Updating thread name to 8a53be90-c1b3-4902-a46c-135fb5c31fce HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:10:33,603 INFO [8a53be90-c1b3-4902-a46c-135fb5c31fce HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: 8a53be90-c1b3-4902-a46c-135fb5c31fce
2019-10-22T13:10:33,603 INFO [8a53be90-c1b3-4902-a46c-135fb5c31fce HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Resetting thread name to HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:10:33,639 INFO [HiveServer2-Handler-Pool: Thread-133]: service.CompositeService (:()) - Session closed, SessionHandle [8a53be90-c1b3-4902-a46c-135fb5c31fce], current sessions:1
2019-10-22T13:10:33,639 INFO [HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: 8a53be90-c1b3-4902-a46c-135fb5c31fce
2019-10-22T13:10:33,639 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Updating thread name to 8a53be90-c1b3-4902-a46c-135fb5c31fce HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:10:33,640 INFO [8a53be90-c1b3-4902-a46c-135fb5c31fce HiveServer2-Handler-Pool: Thread-133]: session.HiveSessionImpl (:()) - Operation log session directory is deleted: /tmp/hive/operation_logs/8a53be90-c1b3-4902-a46c-135fb5c31fce
2019-10-22T13:10:33,640 INFO [8a53be90-c1b3-4902-a46c-135fb5c31fce HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: 8a53be90-c1b3-4902-a46c-135fb5c31fce
2019-10-22T13:10:33,640 INFO [8a53be90-c1b3-4902-a46c-135fb5c31fce HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Resetting thread name to HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:10:33,642 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Deleted directory: /tmp/hive/hive/8a53be90-c1b3-4902-a46c-135fb5c31fce on fs with scheme hdfs
2019-10-22T13:10:33,642 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Deleted directory: /tmp/hive/8a53be90-c1b3-4902-a46c-135fb5c31fce on fs with scheme file
2019-10-22T13:10:33,642 INFO [HiveServer2-Handler-Pool: Thread-133]: metastore.HiveMetaStoreClient (:()) - Closed a connection to metastore, current connections: 7
2019-10-22T13:10:34,193 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:10:34,701 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+0,-2)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:10:35,522 ERROR [HiveServer2-Background-Pool: Thread-886]: SessionState (:()) - Status: Failed
2019-10-22T13:10:35,523 ERROR [HiveServer2-Background-Pool: Thread-886]: SessionState (:()) - Vertex failed, vertexName=Map 1,
vertexId=vertex_1571760131080_0019_1_00, diagnostics=[Task failed, taskId=task_1571760131080_0019_1_00_000380, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:714)
    at org.apache.hadoop.hdfs.DFSOutputStream.start(DFSOutputStream.java:777)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:313)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1211)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1190)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1128)
    at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:531)
    at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:528)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:542)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:469)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098)
    at org.apache.orc.impl.PhysicalFsWriter.<init>(PhysicalFsWriter.java:95)
    at org.apache.orc.impl.WriterImpl.<init>(WriterImpl.java:177)
    at org.apache.hadoop.hive.ql.io.orc.WriterImpl.<init>(WriterImpl.java:94)
    at org.apache.hadoop.hive.ql.io.orc.OrcFile.createWriter(OrcFile.java:378)
    at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.initWriter(OrcRecordUpdater.java:611)
    at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.addSimpleEvent(OrcRecordUpdater.java:424)
    at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.addSplitUpdateEvent(OrcRecordUpdater.java:433)
    at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.insert(OrcRecordUpdater.java:485)
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:999)
    at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
    at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
    at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
    at org.apache.hadoop.hive.ql.exec.FilterOperator.process(FilterOperator.java:126)
    at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
    at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125)
    at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:153)
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:555)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:92)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:76)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
    at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
    at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
, errorMessage=Cannot recover from this error:java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:714)
    at org.apache.hadoop.hdfs.DFSOutputStream.start(DFSOutputStream.java:777)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:313)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1211)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1190)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1128)
    at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:531)
    at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:528)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:542)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:469)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098)
    at org.apache.orc.impl.PhysicalFsWriter.<init>(PhysicalFsWriter.java:95)
    at org.apache.orc.impl.WriterImpl.<init>(WriterImpl.java:177)
    at org.apache.hadoop.hive.ql.io.orc.WriterImpl.<init>(WriterImpl.java:94)
    at org.apache.hadoop.hive.ql.io.orc.OrcFile.createWriter(OrcFile.java:378)
    at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.initWriter(OrcRecordUpdater.java:611)
    at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.addSimpleEvent(OrcRecordUpdater.java:424)
    at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.addSplitUpdateEvent(OrcRecordUpdater.java:433)
    at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.insert(OrcRecordUpdater.java:485)
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:999)
    at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
    at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
    at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
    at org.apache.hadoop.hive.ql.exec.FilterOperator.process(FilterOperator.java:126)
    at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
    at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125)
    at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:153)
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:555)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:92)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:76)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
    at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
    at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
]], Task failed, taskId=task_1571760131080_0019_1_00_000044, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:101)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:76)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
    at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
    at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:576)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:92)
    ... 19 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.OutOfMemoryError: unable to create new native thread
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:1045)
    at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
    at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
    at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
    at org.apache.hadoop.hive.ql.exec.FilterOperator.process(FilterOperator.java:126)
    at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
    at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125)
    at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:153)
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:555)
    ... 20 more
Caused by: java.io.IOException: java.lang.OutOfMemoryError: unable to create new native thread
    at org.apache.hadoop.hdfs.ExceptionLastSeen.set(ExceptionLastSeen.java:45)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:829)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:714)
    at org.apache.hadoop.hdfs.DataStreamer.initDataStreaming(DataStreamer.java:633)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:717)
, errorMessage=Cannot recover from this error:java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:101)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:76)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
    at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
    at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:576)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:92)
    ... 19 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.OutOfMemoryError: unable to create new native thread
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:1045)
    at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
    at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
    at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
    at org.apache.hadoop.hive.ql.exec.FilterOperator.process(FilterOperator.java:126)
    at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
    at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125)
    at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:153)
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:555)
    ... 20 more
Caused by: java.io.IOException: java.lang.OutOfMemoryError: unable to create new native thread
    at org.apache.hadoop.hdfs.ExceptionLastSeen.set(ExceptionLastSeen.java:45)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:829)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:714)
    at org.apache.hadoop.hdfs.DataStreamer.initDataStreaming(DataStreamer.java:633)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:717)
]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:2 killedTasks:753, Vertex vertex_1571760131080_0019_1_00 [Map 1] killed/failed due to:OWN_TASK_FAILURE]
2019-10-22T13:10:35,523 ERROR [HiveServer2-Background-Pool: Thread-886]: SessionState (:()) - Vertex killed, vertexName=Reducer 2, vertexId=vertex_1571760131080_0019_1_01, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:1009, Vertex vertex_1571760131080_0019_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]
2019-10-22T13:10:35,523 ERROR [HiveServer2-Background-Pool: Thread-886]: SessionState (:()) - Vertex killed, vertexName=Reducer 4, vertexId=vertex_1571760131080_0019_1_03, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:162, Vertex vertex_1571760131080_0019_1_03 [Reducer 4] killed/failed due to:OTHER_VERTEX_FAILURE]
2019-10-22T13:10:35,523 ERROR [HiveServer2-Background-Pool: Thread-886]: SessionState (:()) - Vertex killed, vertexName=Reducer 3, vertexId=vertex_1571760131080_0019_1_02, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:162, Vertex vertex_1571760131080_0019_1_02 [Reducer 3] killed/failed due
to:OTHER_VERTEX_FAILURE] 2019-10-22T13:10:35,523 ERROR [HiveServer2-Background-Pool: Thread-886]: SessionState (:()) - DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:3 2019-10-22T13:10:35,531 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezTask (:()) - org.apache.tez.common.counters.DAGCounter: 2019-10-22T13:10:35,531 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezTask (:()) - NUM_FAILED_TASKS: 3 2019-10-22T13:10:35,531 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezTask (:()) - NUM_KILLED_TASKS: 752 2019-10-22T13:10:35,531 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezTask (:()) - TOTAL_LAUNCHED_TASKS: 380 2019-10-22T13:10:35,531 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezTask (:()) - AM_CPU_MILLISECONDS: 255250 2019-10-22T13:10:35,531 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezTask (:()) - AM_GC_TIME_MILLIS: 343 2019-10-22T13:10:35,531 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezTask (:()) - org.apache.hadoop.hive.ql.exec.tez.HiveInputCounters: 2019-10-22T13:10:35,531 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezTask (:()) - GROUPED_INPUT_SPLITS_Map_1: 755 2019-10-22T13:10:35,531 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezTask (:()) - INPUT_DIRECTORIES_Map_1: 1 2019-10-22T13:10:35,531 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezTask (:()) - INPUT_FILES_Map_1: 503 2019-10-22T13:10:35,531 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezTask (:()) - RAW_INPUT_SPLITS_Map_1: 1509 2019-10-22T13:10:42,785 INFO [HiveServer2-Background-Pool: Thread-886]: reexec.ReOptimizePlugin (:()) - ReOptimization: retryPossible: true 2019-10-22T13:10:42,786 INFO [HiveServer2-Background-Pool: Thread-886]: hooks.HiveProtoLoggingHook (:()) - Received post-hook notification for: hive_20191022130427_0ce7c561-6bac-4787-8690-478a65447af8 2019-10-22T13:10:42,786 ERROR [HiveServer2-Background-Pool: Thread-886]: ql.Driver (:()) - FAILED: Execution Error, return 
code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1571760131080_0019_1_00, diagnostics=[Task failed, taskId=task_1571760131080_0019_1_00_000380, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : java.lang.OutOfMemoryError: unable to create new native thread at java.lang.Thread.start0(Native Method) at java.lang.Thread.start(Thread.java:714) at org.apache.hadoop.hdfs.DFSOutputStream.start(DFSOutputStream.java:777) at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:313) at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1211) at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1190) at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1128) at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:531) at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:528) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:542) at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:469) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098) at org.apache.orc.impl.PhysicalFsWriter.&lt;init&gt;(PhysicalFsWriter.java:95) at org.apache.orc.impl.WriterImpl.&lt;init&gt;(WriterImpl.java:177) at org.apache.hadoop.hive.ql.io.orc.WriterImpl.&lt;init&gt;(WriterImpl.java:94) at org.apache.hadoop.hive.ql.io.orc.OrcFile.createWriter(OrcFile.java:378) at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.initWriter(OrcRecordUpdater.java:611) at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.addSimpleEvent(OrcRecordUpdater.java:424) at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.addSplitUpdateEvent(OrcRecordUpdater.java:433) at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.insert(OrcRecordUpdater.java:485) at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:999) at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927) at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95) at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927) at org.apache.hadoop.hive.ql.exec.FilterOperator.process(FilterOperator.java:126) at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940) at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125) at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:153) at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:555) at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:92) at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:76) at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) at 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125) at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69) at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) , errorMessage=Cannot recover from this error:java.lang.OutOfMemoryError: unable to create new native thread at java.lang.Thread.start0(Native Method) at java.lang.Thread.start(Thread.java:714) at org.apache.hadoop.hdfs.DFSOutputStream.start(DFSOutputStream.java:777) at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:313) at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1211) at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1190) at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1128) at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:531) at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:528) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:542) at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:469) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098) at org.apache.orc.impl.PhysicalFsWriter.&lt;init&gt;(PhysicalFsWriter.java:95) at 
org.apache.orc.impl.WriterImpl.&lt;init&gt;(WriterImpl.java:177) at org.apache.hadoop.hive.ql.io.orc.WriterImpl.&lt;init&gt;(WriterImpl.java:94) at org.apache.hadoop.hive.ql.io.orc.OrcFile.createWriter(OrcFile.java:378) at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.initWriter(OrcRecordUpdater.java:611) at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.addSimpleEvent(OrcRecordUpdater.java:424) at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.addSplitUpdateEvent(OrcRecordUpdater.java:433) at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.insert(OrcRecordUpdater.java:485) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:999) at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927) at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95) at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927) at org.apache.hadoop.hive.ql.exec.FilterOperator.process(FilterOperator.java:126) at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940) at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125) at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:153) at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:555) at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:92) at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:76) at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426) at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125) at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69) at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) ]], Task failed, taskId=task_1571760131080_0019_1_00_000044, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:101) at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:76) at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426) at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125) at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69) at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:576) at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:92) ... 
19 more Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.OutOfMemoryError: unable to create new native thread at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:1045) at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927) at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95) at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927) at org.apache.hadoop.hive.ql.exec.FilterOperator.process(FilterOperator.java:126) at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940) at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125) at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:153) at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:555) ... 
20 more Caused by: java.io.IOException: java.lang.OutOfMemoryError: unable to create new native thread at org.apache.hadoop.hdfs.ExceptionLastSeen.set(ExceptionLastSeen.java:45) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:829) Caused by: java.lang.OutOfMemoryError: unable to create new native thread at java.lang.Thread.start0(Native Method) at java.lang.Thread.start(Thread.java:714) at org.apache.hadoop.hdfs.DataStreamer.initDataStreaming(DataStreamer.java:633) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:717) , errorMessage=Cannot recover from this error:java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:101) at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:76) at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at 
com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125) at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69) at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:576) at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:92) ... 19 more Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.OutOfMemoryError: unable to create new native thread at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:1045) at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927) at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95) at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927) at org.apache.hadoop.hive.ql.exec.FilterOperator.process(FilterOperator.java:126) at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940) at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125) at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:153) at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:555) ... 20 more Caused by: java.io.IOException: java.lang.OutOfMemoryError: unable to create new native thread at org.apache.hadoop.hdfs.ExceptionLastSeen.set(ExceptionLastSeen.java:45) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:829) Caused by: java.lang.OutOfMemoryError: unable to create new native thread at java.lang.Thread.start0(Native Method) at java.lang.Thread.start(Thread.java:714) at org.apache.hadoop.hdfs.DataStreamer.initDataStreaming(DataStreamer.java:633) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:717) ]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:2 killedTasks:753, Vertex vertex_1571760131080_0019_1_00 [Map 1] killed/failed due to:OWN_TASK_FAILURE]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1571760131080_0019_1_01, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:1009, Vertex vertex_1571760131080_0019_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]Vertex killed, vertexName=Reducer 4, vertexId=vertex_1571760131080_0019_1_03, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:162, Vertex vertex_1571760131080_0019_1_03 [Reducer 4] killed/failed due to:OTHER_VERTEX_FAILURE]Vertex killed, vertexName=Reducer 3, vertexId=vertex_1571760131080_0019_1_02, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:162, Vertex vertex_1571760131080_0019_1_02 [Reducer 3] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. 
failedVertices:1 killedVertices:3 2019-10-22T13:10:42,786 INFO [HiveServer2-Background-Pool: Thread-886]: ql.Driver (:()) - Completed executing command(queryId=hive_20191022130427_0ce7c561-6bac-4787-8690-478a65447af8); Time taken: 375.097 seconds 2019-10-22T13:10:42,787 INFO [HiveServer2-Background-Pool: Thread-886]: lockmgr.DbTxnManager (:()) - Stopped heartbeat for query: hive_20191022130427_0ce7c561-6bac-4787-8690-478a65447af8 2019-10-22T13:10:42,814 INFO [HiveServer2-Background-Pool: Thread-886]: reexec.ReExecDriver (:()) - Preparing to re-execute query 2019-10-22T13:10:42,818 INFO [HiveServer2-Background-Pool: Thread-886]: ql.Driver (:()) - Compiling command(queryId=hive_20191022130427_0ce7c561-6bac-4787-8690-478a65447af8): from tpcds_text_503.store_sales ss insert overwrite table store_sales partition (ss_sold_date_sk) select ss.ss_sold_time_sk, ss.ss_item_sk, ss.ss_customer_sk, ss.ss_cdemo_sk, ss.ss_hdemo_sk, ss.ss_addr_sk, ss.ss_store_sk, ss.ss_promo_sk, ss.ss_ticket_number, ss.ss_quantity, ss.ss_wholesale_cost, ss.ss_list_price, ss.ss_sales_price, ss.ss_ext_discount_amt, ss.ss_ext_sales_price, ss.ss_ext_wholesale_cost, ss.ss_ext_list_price, ss.ss_ext_tax, ss.ss_coupon_amt, ss.ss_net_paid, ss.ss_net_paid_inc_tax, ss.ss_net_profit, ss.ss_sold_date_sk where ss.ss_sold_date_sk is not null insert overwrite table store_sales partition (ss_sold_date_sk) select ss.ss_sold_time_sk, ss.ss_item_sk, ss.ss_customer_sk, ss.ss_cdemo_sk, ss.ss_hdemo_sk, ss.ss_addr_sk, ss.ss_store_sk, ss.ss_promo_sk, ss.ss_ticket_number, ss.ss_quantity, ss.ss_wholesale_cost, ss.ss_list_price, ss.ss_sales_price, ss.ss_ext_discount_amt, ss.ss_ext_sales_price, ss.ss_ext_wholesale_cost, ss.ss_ext_list_price, ss.ss_ext_tax, ss.ss_coupon_amt, ss.ss_net_paid, ss.ss_net_paid_inc_tax, ss.ss_net_profit, ss.ss_sold_date_sk where ss.ss_sold_date_sk is null sort by ss.ss_sold_date_sk 2019-10-22T13:10:42,827 INFO [HiveServer2-Background-Pool: Thread-886]: lockmgr.DbTxnManager (:()) - Opened txnid:5297 
2019-10-22T13:10:42,829 INFO [HiveServer2-Background-Pool: Thread-886]: parse.CalcitePlanner (:()) - Starting Semantic Analysis 2019-10-22T13:10:42,829 INFO [HiveServer2-Background-Pool: Thread-886]: parse.CalcitePlanner (:()) - Completed phase 1 of Semantic Analysis 2019-10-22T13:10:42,829 INFO [HiveServer2-Background-Pool: Thread-886]: parse.CalcitePlanner (:()) - Get metadata for source tables 2019-10-22T13:10:42,846 INFO [HiveServer2-Background-Pool: Thread-886]: parse.CalcitePlanner (:()) - Get metadata for subqueries 2019-10-22T13:10:42,846 INFO [HiveServer2-Background-Pool: Thread-886]: parse.CalcitePlanner (:()) - Get metadata for destination tables 2019-10-22T13:10:42,855 WARN [HiveServer2-Background-Pool: Thread-886]: parse.BaseSemanticAnalyzer (:()) - Dynamic partitioning is used; only validating 0 columns 2019-10-22T13:10:42,863 WARN [HiveServer2-Background-Pool: Thread-886]: parse.BaseSemanticAnalyzer (:()) - Dynamic partitioning is used; only validating 0 columns 2019-10-22T13:10:42,863 INFO [HiveServer2-Background-Pool: Thread-886]: parse.CalcitePlanner (:()) - Completed getting MetaData in Semantic Analysis 2019-10-22T13:10:42,863 INFO [HiveServer2-Background-Pool: Thread-886]: parse.BaseSemanticAnalyzer (:()) - Not invoking CBO because the statement has sort by 2019-10-22T13:10:42,867 INFO [HiveServer2-Background-Pool: Thread-886]: common.FileUtils (FileUtils.java:mkdir(580)) - Creating directory if it doesn't exist: hdfs://hdpserver1.puretec.purestorage.com:8020/warehouse/tablespace/managed/hive/tpcds_bin_partitioned_orc_503.db/store_sales/.hive-staging_hive_2019-10-22_13-10-42_819_6209882792678573603-19 2019-10-22T13:10:42,879 INFO [HiveServer2-Background-Pool: Thread-886]: parse.CalcitePlanner (:()) - Generate an operator pipeline to autogather column stats for table store_sales in query from tpcds_text_503.store_sales ss insert overwrite table store_sales partition (ss_sold_date_sk) select ss.ss_sold_time_sk, ss.ss_item_sk, ss.ss_customer_sk, 
ss.ss_cdemo_sk, ss.ss_hdemo_sk, ss.ss_addr_sk, ss.ss_store_sk, ss.ss_promo_sk, ss.ss_ticket_number, ss.ss_quantity, ss.ss_wholesale_cost, ss.ss_list_price, ss.ss_sales_price, ss.ss_ext_discount_amt, ss.ss_ext_sales_price, ss.ss_ext_wholesale_cost, ss.ss_ext_list_price, ss.ss_ext_tax, ss.ss_coupon_amt, ss.ss_net_paid, ss.ss_net_paid_inc_tax, ss.ss_net_profit, ss.ss_sold_date_sk where ss.ss_sold_date_sk is not null insert overwrite table store_sales partition (ss_sold_date_sk) select ss.ss_sold_time_sk, ss.ss_item_sk, ss.ss_customer_sk, ss.ss_cdemo_sk, ss.ss_hdemo_sk, ss.ss_addr_sk, ss.ss_store_sk, ss.ss_promo_sk, ss.ss_ticket_number, ss.ss_quantity, ss.ss_wholesale_cost, ss.ss_list_price, ss.ss_sales_price, ss.ss_ext_discount_amt, ss.ss_ext_sales_price, ss.ss_ext_wholesale_cost, ss.ss_ext_list_price, ss.ss_ext_tax, ss.ss_coupon_amt, ss.ss_net_paid, ss.ss_net_paid_inc_tax, ss.ss_net_profit, ss.ss_sold_date_sk where ss.ss_sold_date_sk is null sort by ss.ss_sold_date_sk 2019-10-22T13:10:42,892 INFO [HiveServer2-Background-Pool: Thread-886]: parse.CalcitePlanner (:()) - Get metadata for source tables 2019-10-22T13:10:42,907 INFO [HiveServer2-Background-Pool: Thread-886]: parse.CalcitePlanner (:()) - Get metadata for subqueries 2019-10-22T13:10:42,907 INFO [HiveServer2-Background-Pool: Thread-886]: parse.CalcitePlanner (:()) - Get metadata for destination tables 2019-10-22T13:10:42,910 INFO [HiveServer2-Background-Pool: Thread-886]: ql.Context (:()) - New scratch dir is hdfs://hdpserver1.puretec.purestorage.com:8020/tmp/hive/hive/0012063a-1b33-4f8b-a0dd-36a26dbe2870/hive_2019-10-22_13-10-42_879_7883816098866373796-19 2019-10-22T13:10:42,915 INFO [HiveServer2-Background-Pool: Thread-886]: common.FileUtils (FileUtils.java:mkdir(580)) - Creating directory if it doesn't exist: 
hdfs://hdpserver1.puretec.purestorage.com:8020/tmp/hive/hive/0012063a-1b33-4f8b-a0dd-36a26dbe2870/hive_2019-10-22_13-10-42_879_7883816098866373796-19/-mr-10000/.hive-staging_hive_2019-10-22_13-10-42_879_7883816098866373796-19 2019-10-22T13:10:42,926 INFO [HiveServer2-Background-Pool: Thread-886]: parse.CalcitePlanner (:()) - Generate an operator pipeline to autogather column stats for table store_sales in query from tpcds_text_503.store_sales ss insert overwrite table store_sales partition (ss_sold_date_sk) select ss.ss_sold_time_sk, ss.ss_item_sk, ss.ss_customer_sk, ss.ss_cdemo_sk, ss.ss_hdemo_sk, ss.ss_addr_sk, ss.ss_store_sk, ss.ss_promo_sk, ss.ss_ticket_number, ss.ss_quantity, ss.ss_wholesale_cost, ss.ss_list_price, ss.ss_sales_price, ss.ss_ext_discount_amt, ss.ss_ext_sales_price, ss.ss_ext_wholesale_cost, ss.ss_ext_list_price, ss.ss_ext_tax, ss.ss_coupon_amt, ss.ss_net_paid, ss.ss_net_paid_inc_tax, ss.ss_net_profit, ss.ss_sold_date_sk where ss.ss_sold_date_sk is not null insert overwrite table store_sales partition (ss_sold_date_sk) select ss.ss_sold_time_sk, ss.ss_item_sk, ss.ss_customer_sk, ss.ss_cdemo_sk, ss.ss_hdemo_sk, ss.ss_addr_sk, ss.ss_store_sk, ss.ss_promo_sk, ss.ss_ticket_number, ss.ss_quantity, ss.ss_wholesale_cost, ss.ss_list_price, ss.ss_sales_price, ss.ss_ext_discount_amt, ss.ss_ext_sales_price, ss.ss_ext_wholesale_cost, ss.ss_ext_list_price, ss.ss_ext_tax, ss.ss_coupon_amt, ss.ss_net_paid, ss.ss_net_paid_inc_tax, ss.ss_net_profit, ss.ss_sold_date_sk where ss.ss_sold_date_sk is null sort by ss.ss_sold_date_sk 2019-10-22T13:10:42,940 INFO [HiveServer2-Background-Pool: Thread-886]: parse.CalcitePlanner (:()) - Get metadata for source tables 2019-10-22T13:10:42,954 INFO [HiveServer2-Background-Pool: Thread-886]: parse.CalcitePlanner (:()) - Get metadata for subqueries 2019-10-22T13:10:42,954 INFO [HiveServer2-Background-Pool: Thread-886]: parse.CalcitePlanner (:()) - Get metadata for destination tables 2019-10-22T13:10:42,957 INFO 
[HiveServer2-Background-Pool: Thread-886]: ql.Context (:()) - New scratch dir is hdfs://hdpserver1.puretec.purestorage.com:8020/tmp/hive/hive/0012063a-1b33-4f8b-a0dd-36a26dbe2870/hive_2019-10-22_13-10-42_926_6336321254095289376-19
2019-10-22T13:10:42,961 INFO [HiveServer2-Background-Pool: Thread-886]: common.FileUtils (FileUtils.java:mkdir(580)) - Creating directory if it doesn't exist: hdfs://hdpserver1.puretec.purestorage.com:8020/tmp/hive/hive/0012063a-1b33-4f8b-a0dd-36a26dbe2870/hive_2019-10-22_13-10-42_926_6336321254095289376-19/-mr-10000/.hive-staging_hive_2019-10-22_13-10-42_926_6336321254095289376-19
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for FS(3)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for FS(10)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for SEL(9)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for GBY(8)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for RS(7)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for GBY(6)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for SEL(5)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for SEL(2)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for FIL(1)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for FS(15)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for FS(22)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for SEL(21)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for GBY(20)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for RS(19)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for GBY(18)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for SEL(17)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for SEL(14)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for RS(13)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for SEL(12)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for FIL(11)
2019-10-22T13:10:42,975 INFO [HiveServer2-Background-Pool: Thread-886]: ppd.OpProcFactory (:()) - Processing for TS(0)
2019-10-22T13:10:42,980 INFO [HiveServer2-Background-Pool: Thread-886]: optimizer.ColumnPrunerProcFactory (:()) - RS 7 oldColExprMap: {VALUE._col20=Column[_col21], VALUE._col10=Column[_col11], VALUE._col21=Column[_col22], VALUE._col11=Column[_col12], VALUE._col12=Column[_col13], KEY._col0=Column[_col0], VALUE._col2=Column[_col3], VALUE._col3=Column[_col4], VALUE._col4=Column[_col5], VALUE._col5=Column[_col6], VALUE._col0=Column[_col1], VALUE._col1=Column[_col2], VALUE._col13=Column[_col14], VALUE._col14=Column[_col15], VALUE._col15=Column[_col16], VALUE._col16=Column[_col17], VALUE._col6=Column[_col7], VALUE._col17=Column[_col18], VALUE._col7=Column[_col8], VALUE._col18=Column[_col19], VALUE._col8=Column[_col9], VALUE._col19=Column[_col20], VALUE._col9=Column[_col10]}
2019-10-22T13:10:42,980 INFO [HiveServer2-Background-Pool: Thread-886]: optimizer.ColumnPrunerProcFactory (:()) - RS 7 newColExprMap: {VALUE._col20=Column[_col21], VALUE._col10=Column[_col11], VALUE._col21=Column[_col22], VALUE._col11=Column[_col12], VALUE._col12=Column[_col13], KEY._col0=Column[_col0], VALUE._col2=Column[_col3], VALUE._col3=Column[_col4], VALUE._col4=Column[_col5], VALUE._col5=Column[_col6], VALUE._col0=Column[_col1], VALUE._col1=Column[_col2], VALUE._col13=Column[_col14], VALUE._col14=Column[_col15], VALUE._col15=Column[_col16], VALUE._col16=Column[_col17], VALUE._col6=Column[_col7], VALUE._col17=Column[_col18], VALUE._col7=Column[_col8], VALUE._col18=Column[_col19], VALUE._col8=Column[_col9], VALUE._col19=Column[_col20], VALUE._col9=Column[_col10]}
2019-10-22T13:10:42,980 INFO [HiveServer2-Background-Pool: Thread-886]: optimizer.ColumnPrunerProcFactory (:()) - RS 19 oldColExprMap: {VALUE._col20=Column[_col21], VALUE._col10=Column[_col11], VALUE._col21=Column[_col22], VALUE._col11=Column[_col12], VALUE._col12=Column[_col13], KEY._col0=Const bigint null, VALUE._col2=Column[_col3], VALUE._col3=Column[_col4], VALUE._col4=Column[_col5], VALUE._col5=Column[_col6], VALUE._col0=Column[_col1], VALUE._col1=Column[_col2], VALUE._col13=Column[_col14], VALUE._col14=Column[_col15], VALUE._col15=Column[_col16], VALUE._col16=Column[_col17], VALUE._col6=Column[_col7], VALUE._col17=Column[_col18], VALUE._col7=Column[_col8], VALUE._col18=Column[_col19], VALUE._col8=Column[_col9], VALUE._col19=Column[_col20], VALUE._col9=Column[_col10]}
2019-10-22T13:10:42,980 INFO [HiveServer2-Background-Pool: Thread-886]: optimizer.ColumnPrunerProcFactory (:()) - RS 19 newColExprMap: {VALUE._col20=Column[_col21], VALUE._col10=Column[_col11], VALUE._col21=Column[_col22], VALUE._col11=Column[_col12], VALUE._col12=Column[_col13], KEY._col0=Const bigint null, VALUE._col2=Column[_col3], VALUE._col3=Column[_col4], VALUE._col4=Column[_col5], VALUE._col5=Column[_col6], VALUE._col0=Column[_col1], VALUE._col1=Column[_col2], VALUE._col13=Column[_col14], VALUE._col14=Column[_col15], VALUE._col15=Column[_col16], VALUE._col16=Column[_col17], VALUE._col6=Column[_col7], VALUE._col17=Column[_col18], VALUE._col7=Column[_col8], VALUE._col18=Column[_col19], VALUE._col8=Column[_col9], VALUE._col19=Column[_col20], VALUE._col9=Column[_col10]}
2019-10-22T13:10:42,981 INFO [HiveServer2-Background-Pool: Thread-886]: optimizer.ColumnPrunerProcFactory (:()) - RS 13 oldColExprMap: {VALUE._col20=Column[_col20], VALUE._col10=Column[_col10], VALUE._col21=Column[_col21], VALUE._col11=Column[_col11], VALUE._col12=Column[_col12], KEY.reducesinkkey0=Const bigint null, VALUE._col2=Column[_col2], VALUE._col3=Column[_col3], VALUE._col4=Column[_col4], VALUE._col5=Column[_col5], VALUE._col0=Column[_col0], VALUE._col1=Column[_col1], VALUE._col13=Column[_col13], VALUE._col14=Column[_col14], VALUE._col15=Column[_col15], VALUE._col16=Column[_col16], VALUE._col6=Column[_col6], VALUE._col17=Column[_col17], VALUE._col7=Column[_col7], VALUE._col18=Column[_col18], VALUE._col8=Column[_col8], VALUE._col19=Column[_col19], VALUE._col9=Column[_col9]}
2019-10-22T13:10:42,981 INFO [HiveServer2-Background-Pool: Thread-886]: optimizer.ColumnPrunerProcFactory (:()) - RS 13 newColExprMap: {VALUE._col20=Column[_col20], VALUE._col10=Column[_col10], VALUE._col21=Column[_col21], VALUE._col11=Column[_col11], VALUE._col12=Column[_col12], KEY.reducesinkkey0=Const bigint null, VALUE._col2=Column[_col2], VALUE._col3=Column[_col3], VALUE._col4=Column[_col4], VALUE._col5=Column[_col5], VALUE._col0=Column[_col0], VALUE._col1=Column[_col1], VALUE._col13=Column[_col13], VALUE._col14=Column[_col14], VALUE._col15=Column[_col15], VALUE._col16=Column[_col16], VALUE._col6=Column[_col6], VALUE._col17=Column[_col17], VALUE._col7=Column[_col7], VALUE._col18=Column[_col18], VALUE._col8=Column[_col8], VALUE._col19=Column[_col19], VALUE._col9=Column[_col9]}
2019-10-22T13:10:42,981 INFO [HiveServer2-Background-Pool: Thread-886]: optimizer.ColumnPrunerProcFactory (:()) - RS 13 oldColExprMap: {VALUE._col20=Column[_col20], VALUE._col10=Column[_col10], VALUE._col21=Column[_col21], VALUE._col11=Column[_col11], VALUE._col12=Column[_col12], KEY.reducesinkkey0=Const bigint null, VALUE._col2=Column[_col2], VALUE._col3=Column[_col3], VALUE._col4=Column[_col4], VALUE._col5=Column[_col5], VALUE._col0=Column[_col0], VALUE._col1=Column[_col1], VALUE._col13=Column[_col13], VALUE._col14=Column[_col14], VALUE._col15=Column[_col15], VALUE._col16=Column[_col16], VALUE._col6=Column[_col6], VALUE._col17=Column[_col17], VALUE._col7=Column[_col7], VALUE._col18=Column[_col18], VALUE._col8=Column[_col8], VALUE._col19=Column[_col19], VALUE._col9=Column[_col9]}
2019-10-22T13:10:42,981 INFO [HiveServer2-Background-Pool: Thread-886]: optimizer.ColumnPrunerProcFactory (:()) - RS 13 newColExprMap: {VALUE._col20=Column[_col20], VALUE._col10=Column[_col10], VALUE._col21=Column[_col21], VALUE._col11=Column[_col11], VALUE._col12=Column[_col12], KEY.reducesinkkey0=Const bigint null, VALUE._col2=Column[_col2], VALUE._col3=Column[_col3], VALUE._col4=Column[_col4], VALUE._col5=Column[_col5], VALUE._col0=Column[_col0], VALUE._col1=Column[_col1], VALUE._col13=Column[_col13], VALUE._col14=Column[_col14], VALUE._col15=Column[_col15], VALUE._col16=Column[_col16], VALUE._col6=Column[_col6], VALUE._col17=Column[_col17], VALUE._col7=Column[_col7], VALUE._col18=Column[_col18], VALUE._col8=Column[_col8], VALUE._col19=Column[_col19], VALUE._col9=Column[_col9]}
2019-10-22T13:10:42,982 INFO [HiveServer2-Background-Pool: Thread-886]: correlation.AbstractCorrelationProcCtx (:()) - Overriding hive.optimize.reducededuplication.min.reducer to 1 due to a write to transactional table(s) tpcds_bin_partitioned_orc_503.store_sales,tpcds_bin_partitioned_orc_503.store_sales
2019-10-22T13:10:42,998 INFO [HiveServer2-Background-Pool: Thread-886]: optimizer.SetReducerParallelism (:()) - Set parallelism for reduce sink RS[13] to: 81
2019-10-22T13:10:42,998 INFO [HiveServer2-Background-Pool: Thread-886]: optimizer.SetReducerParallelism (:()) - Set parallelism for reduce sink RS[7] to: 1009
2019-10-22T13:10:42,998 INFO [HiveServer2-Background-Pool: Thread-886]: optimizer.SetReducerParallelism (:()) - Set parallelism for reduce sink RS[19] to: 81
2019-10-22T13:10:42,998 INFO [HiveServer2-Background-Pool: Thread-886]: correlation.AbstractCorrelationProcCtx (:()) - Overriding hive.optimize.reducededuplication.min.reducer to 1 due to a write to transactional table(s) tpcds_bin_partitioned_orc_503.store_sales,tpcds_bin_partitioned_orc_503.store_sales
2019-10-22T13:10:42,999 INFO [HiveServer2-Background-Pool: Thread-886]: parse.TezCompiler (:()) - Cycle free: true
2019-10-22T13:10:43,018 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Examining input format to see if vectorization is enabled.
2019-10-22T13:10:43,018 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Vectorization is enabled for input format(s) [org.apache.hadoop.mapred.TextInputFormat]
2019-10-22T13:10:43,018 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Validating and vectorizing MapWork... (vectorizedVertexNum 0)
2019-10-22T13:10:43,020 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Vectorizer vectorizeOperator groupby typeName bigint
2019-10-22T13:10:43,020 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Vectorizer vectorizeOperator reduce sink class VectorReduceSinkObjectHashOperator
2019-10-22T13:10:43,020 INFO [HiveServer2-Background-Pool: Thread-886]: reducesink.VectorReduceSinkObjectHashOperator (:()) - VectorReduceSinkObjectHashOperator constructor vectorReduceSinkInfo org.apache.hadoop.hive.ql.plan.VectorReduceSinkInfo@23b397bf
2019-10-22T13:10:43,020 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Map vectorization enabled: true
2019-10-22T13:10:43,020 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Map vectorized: false
2019-10-22T13:10:43,020 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Map notVectorizedReason: Aggregation Function expression for GROUPBY operator: UDF compute_stats not supported
2019-10-22T13:10:43,020 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Map vectorizedVertexNum: 0
2019-10-22T13:10:43,020 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Map enabledConditionsMet: [hive.vectorized.use.vector.serde.deserialize IS true]
2019-10-22T13:10:43,020 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Map inputFileFormatClassNameSet: [org.apache.hadoop.mapred.TextInputFormat]
2019-10-22T13:10:43,021 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Validating and vectorizing ReduceWork... (vectorizedVertexNum 1)
2019-10-22T13:10:43,021 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Reduce vectorization enabled: true
2019-10-22T13:10:43,021 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Reduce vectorized: false
2019-10-22T13:10:43,021 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Reduce notVectorizedReason: Aggregation Function expression for GROUPBY operator: UDF compute_stats not supported
2019-10-22T13:10:43,021 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Reduce vectorizedVertexNum: 1
2019-10-22T13:10:43,021 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Reducer hive.vectorized.execution.reduce.enabled: true
2019-10-22T13:10:43,021 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Reducer engine: tez
2019-10-22T13:10:43,021 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Validating and vectorizing ReduceWork... (vectorizedVertexNum 2)
2019-10-22T13:10:43,022 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Reduce vectorization enabled: true
2019-10-22T13:10:43,022 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Reduce vectorized: false
2019-10-22T13:10:43,022 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Reduce notVectorizedReason: Aggregation Function expression for GROUPBY operator: UDF compute_stats not supported
2019-10-22T13:10:43,022 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Reduce vectorizedVertexNum: 2
2019-10-22T13:10:43,022 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Reducer hive.vectorized.execution.reduce.enabled: true
2019-10-22T13:10:43,022 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Reducer engine: tez
2019-10-22T13:10:43,022 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Validating and vectorizing ReduceWork... (vectorizedVertexNum 3)
2019-10-22T13:10:43,022 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Reduce vectorization enabled: true
2019-10-22T13:10:43,022 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Reduce vectorized: false
2019-10-22T13:10:43,022 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Reduce notVectorizedReason: Aggregation Function expression for GROUPBY operator: UDF compute_stats not supported
2019-10-22T13:10:43,022 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Reduce vectorizedVertexNum: 3
2019-10-22T13:10:43,022 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Reducer hive.vectorized.execution.reduce.enabled: true
2019-10-22T13:10:43,022 INFO [HiveServer2-Background-Pool: Thread-886]: physical.Vectorizer (:()) - Reducer engine: tez
2019-10-22T13:10:43,023 INFO [HiveServer2-Background-Pool: Thread-886]: parse.CalcitePlanner (:()) - Completed plan generation
2019-10-22T13:10:43,023 INFO [HiveServer2-Background-Pool: Thread-886]: ql.Driver (:()) - Semantic Analysis Completed (retrial = false)
2019-10-22T13:10:43,023 INFO [HiveServer2-Background-Pool: Thread-886]: ql.Driver (:()) - Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:ss.ss_sold_time_sk, type:bigint, comment:null), FieldSchema(name:ss.ss_item_sk, type:bigint, comment:null), FieldSchema(name:ss.ss_customer_sk, type:bigint, comment:null), FieldSchema(name:ss.ss_cdemo_sk, type:bigint, comment:null), FieldSchema(name:ss.ss_hdemo_sk, type:bigint, comment:null), FieldSchema(name:ss.ss_addr_sk, type:bigint, comment:null), FieldSchema(name:ss.ss_store_sk, type:bigint, comment:null), FieldSchema(name:ss.ss_promo_sk, type:bigint, comment:null), FieldSchema(name:ss.ss_ticket_number, type:bigint, comment:null), FieldSchema(name:ss.ss_quantity, type:int, comment:null), FieldSchema(name:ss.ss_wholesale_cost, type:decimal(7,2), comment:null), FieldSchema(name:ss.ss_list_price, type:decimal(7,2), comment:null), FieldSchema(name:ss.ss_sales_price, type:decimal(7,2), comment:null), FieldSchema(name:ss.ss_ext_discount_amt, type:decimal(7,2), comment:null), FieldSchema(name:ss.ss_ext_sales_price, type:decimal(7,2), comment:null), FieldSchema(name:ss.ss_ext_wholesale_cost, type:decimal(7,2), comment:null), FieldSchema(name:ss.ss_ext_list_price, type:decimal(7,2), comment:null), FieldSchema(name:ss.ss_ext_tax, type:decimal(7,2), comment:null), FieldSchema(name:ss.ss_coupon_amt, type:decimal(7,2), comment:null), FieldSchema(name:ss.ss_net_paid, type:decimal(7,2), comment:null), FieldSchema(name:ss.ss_net_paid_inc_tax, type:decimal(7,2), comment:null), FieldSchema(name:ss.ss_net_profit, type:decimal(7,2), comment:null), FieldSchema(name:ss.ss_sold_date_sk, type:bigint, comment:null)], properties:null)
2019-10-22T13:10:43,025 INFO [HiveServer2-Background-Pool: Thread-886]: ql.Driver (:()) - Completed compiling command(queryId=hive_20191022130427_0ce7c561-6bac-4787-8690-478a65447af8); Time taken: 0.209 seconds
2019-10-22T13:10:43,026 INFO [HiveServer2-Background-Pool: Thread-886]: reexec.ReOptimizePlugin (:()) - planDidChange: true
2019-10-22T13:10:43,026 INFO [HiveServer2-Background-Pool: Thread-886]: reexec.ReExecDriver (:()) - Execution #2 of query
2019-10-22T13:10:43,026 INFO [HiveServer2-Background-Pool: Thread-886]: lockmgr.DbTxnManager (:()) - Setting lock request transaction to txnid:5297 for queryId=hive_20191022130427_0ce7c561-6bac-4787-8690-478a65447af8
2019-10-22T13:10:43,026 INFO [HiveServer2-Background-Pool: Thread-886]: lockmgr.DbLockManager (:()) - Requesting: queryId=hive_20191022130427_0ce7c561-6bac-4787-8690-478a65447af8 LockRequest(component:[LockComponent(type:EXCLUSIVE, level:TABLE, dbname:tpcds_bin_partitioned_orc_503, tablename:store_sales, operationType:UPDATE, isTransactional:true, isDynamicPartitionWrite:true)], txnid:5297, user:hive, hostname:hdpserver11.puretec.purestorage.com, agentInfo:hive_20191022130427_0ce7c561-6bac-4787-8690-478a65447af8)
2019-10-22T13:10:43,032 INFO [HiveServer2-Background-Pool: Thread-886]: lockmgr.DbLockManager (:()) - Response to queryId=hive_20191022130427_0ce7c561-6bac-4787-8690-478a65447af8 LockResponse(lockid:2630, state:ACQUIRED)
2019-10-22T13:10:43,035 INFO [HiveServer2-Background-Pool: Thread-886]: ql.Driver (:()) - Executing command(queryId=hive_20191022130427_0ce7c561-6bac-4787-8690-478a65447af8): from tpcds_text_503.store_sales ss insert overwrite table store_sales partition (ss_sold_date_sk) select ss.ss_sold_time_sk, ss.ss_item_sk, ss.ss_customer_sk, ss.ss_cdemo_sk, ss.ss_hdemo_sk, ss.ss_addr_sk, ss.ss_store_sk, ss.ss_promo_sk, ss.ss_ticket_number, ss.ss_quantity, ss.ss_wholesale_cost, ss.ss_list_price, ss.ss_sales_price, ss.ss_ext_discount_amt, ss.ss_ext_sales_price, ss.ss_ext_wholesale_cost, ss.ss_ext_list_price, ss.ss_ext_tax, ss.ss_coupon_amt, ss.ss_net_paid, ss.ss_net_paid_inc_tax, ss.ss_net_profit, ss.ss_sold_date_sk where ss.ss_sold_date_sk is not null insert overwrite table store_sales partition (ss_sold_date_sk) select ss.ss_sold_time_sk, ss.ss_item_sk, ss.ss_customer_sk, ss.ss_cdemo_sk, ss.ss_hdemo_sk, ss.ss_addr_sk, ss.ss_store_sk, ss.ss_promo_sk, ss.ss_ticket_number, ss.ss_quantity, ss.ss_wholesale_cost, ss.ss_list_price, ss.ss_sales_price, ss.ss_ext_discount_amt, ss.ss_ext_sales_price, ss.ss_ext_wholesale_cost, ss.ss_ext_list_price, ss.ss_ext_tax, ss.ss_coupon_amt, ss.ss_net_paid, ss.ss_net_paid_inc_tax, ss.ss_net_profit, ss.ss_sold_date_sk where ss.ss_sold_date_sk is null sort by ss.ss_sold_date_sk
2019-10-22T13:10:43,035 INFO [HiveServer2-Background-Pool: Thread-886]: hooks.HiveProtoLoggingHook (:()) - Received pre-hook notification for: hive_20191022130427_0ce7c561-6bac-4787-8690-478a65447af8
2019-10-22T13:10:43,042 INFO [HiveServer2-Background-Pool: Thread-886]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: 0012063a-1b33-4f8b-a0dd-36a26dbe2870
2019-10-22T13:10:43,046 INFO [HiveServer2-Background-Pool: Thread-886]: ql.Driver (:()) - Query ID = hive_20191022130427_0ce7c561-6bac-4787-8690-478a65447af8
2019-10-22T13:10:43,046 INFO [HiveServer2-Background-Pool: Thread-886]: ql.Driver (:()) - Total jobs = 1
2019-10-22T13:10:43,046 INFO [HiveServer2-Background-Pool: Thread-886]: ql.Driver (:()) - Launching Job 1 out of 1
2019-10-22T13:10:43,046 INFO [HiveServer2-Background-Pool: Thread-886]: ql.Driver (:()) - Starting task [Stage-2:MAPRED] in serial mode
2019-10-22T13:10:43,063 INFO [HiveServer2-Background-Pool: Thread-886]: ql.Context (:()) - New scratch dir is hdfs://hdpserver1.puretec.purestorage.com:8020/tmp/hive/hive/0012063a-1b33-4f8b-a0dd-36a26dbe2870/hive_2019-10-22_13-10-42_819_6209882792678573603-19
2019-10-22T13:10:43,065 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezSessionPoolManager (:()) - The current user: hive, session user: hive
2019-10-22T13:10:43,065 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezSessionPoolManager (:()) - Current queue name is default incoming queue name is null
2019-10-22T13:10:43,065 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezSessionPoolManager (:()) - Closing tez session if not default: sessionId=7c8c8a8d-7800-40dc-84a4-ce9e74d43a70, queueName=default, user=hive, doAs=false, isOpen=true, isDefault=false
2019-10-22T13:10:43,065 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezSessionState (:()) - Closing Tez Session
2019-10-22T13:10:43,065 INFO [HiveServer2-Background-Pool: Thread-886]: client.TezClient (:()) - Shutting down Tez Session, sessionName=HIVE-7c8c8a8d-7800-40dc-84a4-ce9e74d43a70, applicationId=application_1571760131080_0019
2019-10-22T13:10:43,078 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezSessionState (:()) - Attemting to clean up resources for 7c8c8a8d-7800-40dc-84a4-ce9e74d43a70: hdfs://hdpserver1.puretec.purestorage.com:8020/tmp/hive/hive/_tez_session_dir/7c8c8a8d-7800-40dc-84a4-ce9e74d43a70-resources; 0 additional files, 1 localized resources
2019-10-22T13:10:43,079 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezSessionPoolManager (:()) - QueueName: null nonDefaultUser: false defaultQueuePool: null hasInitialSessions: false
2019-10-22T13:10:43,079 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezSessionPoolManager (:()) - Created new tez session for queue: null with session id: 14ddd537-5f58-4c12-a38c-32da60267c34
2019-10-22T13:10:43,079 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezTask (:()) - Subscribed to counters: [] for queryId: hive_20191022130427_0ce7c561-6bac-4787-8690-478a65447af8
2019-10-22T13:10:43,079 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezTask (:()) - Tez session hasn't been created yet. Opening session
2019-10-22T13:10:43,080 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezSessionState (:()) - User of session id 14ddd537-5f58-4c12-a38c-32da60267c34 is hive
2019-10-22T13:10:43,085 INFO [HiveServer2-Background-Pool: Thread-886]: tez.DagUtils (:()) - Localizing resource because it does not exist: file:/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar to dest: hdfs://hdpserver1.puretec.purestorage.com:8020/tmp/hive/hive/_tez_session_dir/14ddd537-5f58-4c12-a38c-32da60267c34-resources/hive-hcatalog-core.jar
2019-10-22T13:10:45,834 INFO [org.apache.ranger.audit.queue.AuditBatchQueue1]: provider.BaseAuditHandler (:()) - Audit Status Log: name=hiveServer2.async.multi_dest.batch.hdfs, interval=06:18.011 minutes, events=5, succcessCount=5, totalEvents=99, totalSuccessCount=99
2019-10-22T13:10:45,834 INFO [org.apache.ranger.audit.queue.AuditBatchQueue1]: destination.HDFSAuditDestination (:()) - Flushing HDFS audit. Event Size:2
2019-10-22T13:10:45,834 INFO [org.apache.ranger.audit.queue.AuditBatchQueue1]: destination.HDFSAuditDestination (:()) - Flush called. name=hiveServer2.async.multi_dest.batch.hdfs
2019-10-22T13:10:45,841 INFO [org.apache.ranger.audit.queue.AuditBatchQueue1]: destination.HDFSAuditDestination (:()) - Flush HDFS audit logs completed.....
2019-10-22T13:10:49,533 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - Audit Status Log: name=hiveServer2.async.multi_dest.batch.solr, interval=01:00.011 minutes, events=1, failedCount=1, totalEvents=52, totalFailedCount=52
2019-10-22T13:10:49,539 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: impl.CloudSolrClient (:()) - Request to collection [ranger_audits] failed due to (500) org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index, retry=0 commError=false errorCode=500
2019-10-22T13:10:49,539 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: impl.CloudSolrClient (:()) - request was not communication error it seems
2019-10-22T13:10:49,539 WARN [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - failed to log audit event: {"repoType":3,"repo":"purecluster_hive","reqUser":"hive","evtTime":"2019-10-09 13:03:39.511","access":"USE","resType":"@null","action":"_any","result":1,"agent":"hiveServer2","policy":13,"enforcer":"ranger-acl","sess":"b34a2404-329d-434c-9474-f0246ef3920a","cliType":"HIVESERVER2","cliIP":"10.21.236.151","reqData":"show databases","agentHost":"hdpserver11.puretec.purestorage.com","logType":"RangerAudit","id":"7cac16eb-c10d-4f21-80e2-c3e9147bb620-0","seq_num":1,"event_count":1,"event_dur_ms":0,"tags":[],"additional_info":"{\"remote-ip-address\":10.21.236.151, \"forwarded-ip-addresses\":[]","cluster_name":"purecluster","policy_version":1}
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index
	at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) ~[?:?]
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1019) ~[?:?]
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) ~[?:?]
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) ~[?:?]
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) ~[?:?]
	at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106) ~[?:?]
	at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71) ~[?:?]
	at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:85) ~[?:?]
	at org.apache.ranger.audit.utils.SolrAppUtil$1.run(SolrAppUtil.java:35) ~[?:?]
	at org.apache.ranger.audit.utils.SolrAppUtil$1.run(SolrAppUtil.java:32) ~[?:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
	at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
	at org.apache.ranger.audit.provider.MiscUtil.executePrivilegedAction(MiscUtil.java:516) ~[?:?]
	at org.apache.ranger.audit.utils.SolrAppUtil.addDocsToSolr(SolrAppUtil.java:32) ~[?:?]
	at org.apache.ranger.audit.destination.SolrAuditDestination.log(SolrAuditDestination.java:232) ~[?:?]
	at org.apache.ranger.audit.provider.BaseAuditHandler.logJSON(BaseAuditHandler.java:172) ~[?:?]
	at org.apache.ranger.audit.queue.AuditFileSpool.sendEvent(AuditFileSpool.java:879) ~[?:?]
	at org.apache.ranger.audit.queue.AuditFileSpool.runLogAudit(AuditFileSpool.java:827) ~[?:?]
	at org.apache.ranger.audit.queue.AuditFileSpool.run(AuditFileSpool.java:757) ~[?:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index
	at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) ~[?:?]
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) ~[?:?]
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) ~[?:?]
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:484) ~[?:?]
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414) ~[?:?]
	at org.apache.solr.client.solrj.impl.CloudSolrClient.lambda$directUpdate$0(CloudSolrClient.java:528) ~[?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]
	at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_112]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_112]
	... 1 more
2019-10-22T13:10:49,539 WARN [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - Log failure count: 1 in past 01:00.008 minutes; 53 during process lifetime
2019-10-22T13:10:49,539 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: queue.AuditFileSpool (:()) - Error sending logs to consumer. provider=hiveServer2.async.multi_dest.batch, consumer=hiveServer2.async.multi_dest.batch.solr
2019-10-22T13:10:49,540 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: queue.AuditFileSpool (:()) - Destination is down. sleeping for 30000 milli seconds. indexQueue=2, queueName=hiveServer2.async.multi_dest.batch, consumer=hiveServer2.async.multi_dest.batch.solr
2019-10-22T13:10:57,835 INFO [org.apache.ranger.audit.queue.AuditBatchQueue1]: provider.BaseAuditHandler (:()) - Audit Status Log: name=hiveServer2.async.multi_dest.batch, finalDestination=hiveServer2.async.multi_dest.batch.hdfs, interval=01:00.003 minutes, events=4, succcessCount=2, totalEvents=434, totalSuccessCount=101
2019-10-22T13:11:12,172 INFO [HiveServer2-Background-Pool: Thread-886]: tez.DagUtils (:()) - Resource modification time: 1571764271833 for hdfs://hdpserver1.puretec.purestorage.com:8020/tmp/hive/hive/_tez_session_dir/14ddd537-5f58-4c12-a38c-32da60267c34-resources/hive-hcatalog-core.jar
2019-10-22T13:11:12,172 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezSessionState (:()) - Created new resources: null
2019-10-22T13:11:12,173 INFO [HiveServer2-Background-Pool: Thread-886]: tez.DagUtils (:()) - Jar dir is null / directory doesn't exist. Choosing HIVE_INSTALL_DIR - /user/hive/.hiveJars
2019-10-22T13:11:12,175 INFO [HiveServer2-Background-Pool: Thread-886]: tez.DagUtils (:()) - Resource modification time: 1568044224622 for hdfs://hdpserver1.puretec.purestorage.com:8020/user/hive/.hiveJars/hive-exec-3.1.0.3.1.4.0-315-effe339ef81326ce093bc0a1516e86d9d189e11126de97212254c005859b949e.jar
2019-10-22T13:11:12,192 INFO [HiveServer2-Background-Pool: Thread-886]: client.TezClient (:()) - Tez Client Version: [ component=tez-api, version=0.9.1.3.1.4.0-315, revision=bf7ec283a02d61c53072aeed00b22db3fe39565f, SCM-URL=scm:git:https://git-wip-us.apache.org/repos/asf/tez.git, buildTime=2019-08-23T05:02:43Z ]
2019-10-22T13:11:12,192 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezSessionState (:()) - Opening new Tez Session (id: 14ddd537-5f58-4c12-a38c-32da60267c34, scratch dir: hdfs://hdpserver1.puretec.purestorage.com:8020/tmp/hive/hive/_tez_session_dir/14ddd537-5f58-4c12-a38c-32da60267c34)
2019-10-22T13:11:12,204 INFO [HiveServer2-Background-Pool: Thread-886]: client.RMProxy (:()) - Connecting to ResourceManager at hdpserver1.puretec.purestorage.com/10.21.236.151:8050
2019-10-22T13:11:12,204 INFO [HiveServer2-Background-Pool: Thread-886]: client.AHSProxy (:()) - Connecting to Application History server at hdpserver10.puretec.purestorage.com/10.21.236.160:10200
2019-10-22T13:11:12,204 INFO [HiveServer2-Background-Pool: Thread-886]: client.TezClient (:()) - Session mode. Starting session.
2019-10-22T13:11:12,206 INFO [HiveServer2-Background-Pool: Thread-886]: client.TezClientUtils (:()) - Using tez.lib.uris value from configuration: /hdp/apps/3.1.4.0-315/tez/tez.tar.gz
2019-10-22T13:11:12,206 INFO [HiveServer2-Background-Pool: Thread-886]: client.TezClientUtils (:()) - Using tez.lib.uris.classpath value from configuration: null
2019-10-22T13:11:12,215 INFO [HiveServer2-Background-Pool: Thread-886]: client.TezClient (:()) - Tez system stage directory hdfs://hdpserver1.puretec.purestorage.com:8020/tmp/hive/hive/_tez_session_dir/14ddd537-5f58-4c12-a38c-32da60267c34/.tez/application_1571760131080_0020 doesn't exist and is created
2019-10-22T13:11:12,454 INFO [HiveServer2-Background-Pool: Thread-886]: impl.YarnClientImpl (:()) - Submitted application application_1571760131080_0020
2019-10-22T13:11:12,456 INFO [HiveServer2-Background-Pool: Thread-886]: client.TezClient (:()) - The url to track the Tez Session: http://hdpserver1.puretec.purestorage.com:8088/proxy/application_1571760131080_0020/
2019-10-22T13:11:15,530 INFO [HiveServer2-Background-Pool: Thread-886]: tez.TezTask (:()) - Dag name: from tpcds_text_503.sto...ss.ss_sold_date_sk (Stage-2)
2019-10-22T13:11:15,534 INFO [HiveServer2-Background-Pool: Thread-886]: exec.SerializationUtilities (:()) - Serializing ReduceWork using kryo
2019-10-22T13:11:15,538 INFO [HiveServer2-Background-Pool: Thread-886]: exec.Utilities (:()) - Serialized plan (via RPC) - name: Reducer 4 size: 4.97KB
2019-10-22T13:11:15,545 INFO [HiveServer2-Background-Pool: Thread-886]: fs.FSStatsPublisher (:()) - created : hdfs://hdpserver1.puretec.purestorage.com:8020/tmp/hive/hive/0012063a-1b33-4f8b-a0dd-36a26dbe2870/hive_2019-10-22_13-10-42_926_6336321254095289376-19/-mr-10000/.hive-staging_hive_2019-10-22_13-10-42_926_6336321254095289376-19/-ext-10002
2019-10-22T13:11:15,551 INFO [HiveServer2-Background-Pool: Thread-886]: exec.SerializationUtilities (:()) - Serializing ReduceWork using kryo
2019-10-22T13:11:15,553 INFO [HiveServer2-Background-Pool: Thread-886]: exec.Utilities (:()) - Serialized plan (via RPC) - name: Reducer 3 size: 7.08KB
2019-10-22T13:11:15,559 INFO [HiveServer2-Background-Pool: Thread-886]: fs.FSStatsPublisher (:()) - created : hdfs://hdpserver1.puretec.purestorage.com:8020/warehouse/tablespace/managed/hive/tpcds_bin_partitioned_orc_503.db/store_sales/.hive-staging_hive_2019-10-22_13-10-42_819_6209882792678573603-19/-ext-10003
2019-10-22T13:11:15,565 INFO [HiveServer2-Background-Pool: Thread-886]: exec.SerializationUtilities (:()) - Serializing ReduceWork using kryo
2019-10-22T13:11:15,566 INFO [HiveServer2-Background-Pool: Thread-886]: exec.Utilities (:()) - Serialized plan (via RPC) - name: Reducer 2 size: 4.96KB
2019-10-22T13:11:15,572 INFO [HiveServer2-Background-Pool: Thread-886]: fs.FSStatsPublisher (:()) - created : hdfs://hdpserver1.puretec.purestorage.com:8020/tmp/hive/hive/0012063a-1b33-4f8b-a0dd-36a26dbe2870/hive_2019-10-22_13-10-42_879_7883816098866373796-19/-mr-10000/.hive-staging_hive_2019-10-22_13-10-42_879_7883816098866373796-19/-ext-10002
2019-10-22T13:11:15,577 INFO [HiveServer2-Background-Pool: Thread-886]: ql.Context (:()) - New scratch dir is hdfs://hdpserver1.puretec.purestorage.com:8020/tmp/hive/hive/0012063a-1b33-4f8b-a0dd-36a26dbe2870/hive_2019-10-22_13-10-42_819_6209882792678573603-19
2019-10-22T13:11:15,579 INFO [HiveServer2-Background-Pool: Thread-886]: tez.DagUtils (:()) - Vertex has custom input? false
2019-10-22T13:11:15,579 INFO [HiveServer2-Background-Pool: Thread-886]: exec.SerializationUtilities (:()) - Serializing MapWork using kryo
2019-10-22T13:11:15,582 INFO [HiveServer2-Background-Pool: Thread-886]: exec.Utilities (:()) - Serialized plan (via RPC) - name: Map 1 size: 10.05KB
2019-10-22T13:11:15,592 INFO [HiveServer2-Background-Pool: Thread-886]: fs.FSStatsPublisher (:()) - created : hdfs://hdpserver1.puretec.purestorage.com:8020/warehouse/tablespace/managed/hive/tpcds_bin_partitioned_orc_503.db/store_sales/.hive-staging_hive_2019-10-22_13-10-42_819_6209882792678573603-19/-ext-10001
2019-10-22T13:11:15,598 INFO [HiveServer2-Background-Pool: Thread-886]: client.TezClient (:()) - Submitting dag to TezSession, sessionName=HIVE-14ddd537-5f58-4c12-a38c-32da60267c34, applicationId=application_1571760131080_0020, dagName=from tpcds_text_503.sto...ss.ss_sold_date_sk (Stage-2), callerContext={ context=HIVE, callerType=HIVE_QUERY_ID, callerId=hive_20191022130427_0ce7c561-6bac-4787-8690-478a65447af8 }
2019-10-22T13:11:15,897 INFO [HiveServer2-Background-Pool: Thread-886]: client.TezClient (:()) - Submitted dag to TezSession, sessionName=HIVE-14ddd537-5f58-4c12-a38c-32da60267c34, applicationId=application_1571760131080_0020, dagId=dag_1571760131080_0020_1, dagName=from tpcds_text_503.sto...ss.ss_sold_date_sk (Stage-2)
2019-10-22T13:11:16,839 INFO [HiveServer2-Background-Pool: Thread-886]: SessionState (:()) - Status: Running (Executing on YARN cluster with App id application_1571760131080_0020)
2019-10-22T13:11:16,854 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: -/- Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:11:17,363 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:11:20,525 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:11:21,072 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+39)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:11:21,632 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+137)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:11:22,157 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+272)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:11:22,678 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+334)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:11:23,205 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+365)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:11:23,731 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+378)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:11:24,268 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+379)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:11:27,424 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+379)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:11:30,666 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+379)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:11:33,973 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 
1: 0(+379)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:11:34,512 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:11:37,681 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:11:40,706 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:11:43,733 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:11:46,755 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:11:49,541 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - Audit Status Log: name=hiveServer2.async.multi_dest.batch.solr, interval=01:00.009 minutes, events=1, failedCount=1, totalEvents=53, totalFailedCount=53 2019-10-22T13:11:49,550 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: impl.CloudSolrClient (:()) - Request to collection [ranger_audits] failed due to (500) org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index, retry=0 commError=false errorCode=500 2019-10-22T13:11:49,550 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: impl.CloudSolrClient (:()) - 
request was not communication error it seems 2019-10-22T13:11:49,550 WARN [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - failed to log audit event: {"repoType":3,"repo":"purecluster_hive","reqUser":"hive","evtTime":"2019-10-09 13:03:39.511","access":"USE","resType":"@null","action":"_any","result":1,"agent":"hiveServer2","policy":13,"enforcer":"ranger-acl","sess":"b34a2404-329d-434c-9474-f0246ef3920a","cliType":"HIVESERVER2","cliIP":"10.21.236.151","reqData":"show databases","agentHost":"hdpserver11.puretec.purestorage.com","logType":"RangerAudit","id":"7cac16eb-c10d-4f21-80e2-c3e9147bb620-0","seq_num":1,"event_count":1,"event_dur_ms":0,"tags":[],"additional_info":"{\"remote-ip-address\":10.21.236.151, \"forwarded-ip-addresses\":[]","cluster_name":"purecluster","policy_version":1} org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) ~[?:?] at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1019) ~[?:?] at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) ~[?:?] at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) ~[?:?] at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) ~[?:?] at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106) ~[?:?] at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71) ~[?:?] at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:85) ~[?:?] at org.apache.ranger.audit.utils.SolrAppUtil$1.run(SolrAppUtil.java:35) ~[?:?] at org.apache.ranger.audit.utils.SolrAppUtil$1.run(SolrAppUtil.java:32) ~[?:?] 
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
    at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
    at org.apache.ranger.audit.provider.MiscUtil.executePrivilegedAction(MiscUtil.java:516) ~[?:?]
    at org.apache.ranger.audit.utils.SolrAppUtil.addDocsToSolr(SolrAppUtil.java:32) ~[?:?]
    at org.apache.ranger.audit.destination.SolrAuditDestination.log(SolrAuditDestination.java:232) ~[?:?]
    at org.apache.ranger.audit.provider.BaseAuditHandler.logJSON(BaseAuditHandler.java:172) ~[?:?]
    at org.apache.ranger.audit.queue.AuditFileSpool.sendEvent(AuditFileSpool.java:879) ~[?:?]
    at org.apache.ranger.audit.queue.AuditFileSpool.runLogAudit(AuditFileSpool.java:827) ~[?:?]
    at org.apache.ranger.audit.queue.AuditFileSpool.run(AuditFileSpool.java:757) ~[?:?]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index
    at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) ~[?:?]
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) ~[?:?]
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) ~[?:?]
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:484) ~[?:?]
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.lambda$directUpdate$0(CloudSolrClient.java:528) ~[?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]
    at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_112]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_112]
    ... 1 more
2019-10-22T13:11:49,550 WARN [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - Log failure count: 1 in past 01:00.011 minutes; 54 during process lifetime
2019-10-22T13:11:49,550 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: queue.AuditFileSpool (:()) - Error sending logs to consumer. provider=hiveServer2.async.multi_dest.batch, consumer=hiveServer2.async.multi_dest.batch.solr
2019-10-22T13:11:49,552 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: queue.AuditFileSpool (:()) - Destination is down. sleeping for 30000 milli seconds. indexQueue=2, queueName=hiveServer2.async.multi_dest.batch, consumer=hiveServer2.async.multi_dest.batch.solr
2019-10-22T13:11:49,781 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:11:52,807 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:11:55,829 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:11:56,939 INFO [Heartbeater-1]: lockmgr.DbTxnManager (:()) - Sending heartbeat for txnid:5297 and lockid:0 queryId=null txnid:0
2019-10-22T13:11:56,940 INFO [Heartbeater-1]: metastore.HiveMetaStoreClient (:()) - Trying to connect to metastore with URI thrift://hdpserver11.puretec.purestorage.com:9083
2019-10-22T13:11:56,941 INFO [Heartbeater-1]: metastore.HiveMetaStoreClient (:()) - Opened a connection to metastore, current connections: 8
2019-10-22T13:11:56,943 INFO [Heartbeater-1]: metastore.HiveMetaStoreClient (:()) - Connected to metastore.
2019-10-22T13:11:56,943 INFO [Heartbeater-1]: metastore.RetryingMetaStoreClient (:()) - RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=hive (auth:SIMPLE) retries=24 delay=5 lifetime=0
2019-10-22T13:11:58,853 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:01,878 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:04,902 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:07,927 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:10,950 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:13,972 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:16,995 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:20,017 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:23,041 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:26,063 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:29,088 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:32,112 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:35,136 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:38,168 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:41,212 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:44,235 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:47,259 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:49,552 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - Audit Status Log: name=hiveServer2.async.multi_dest.batch.solr, interval=01:00.011 minutes, events=1, failedCount=1, totalEvents=54, totalFailedCount=54
2019-10-22T13:12:49,559 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: impl.CloudSolrClient (:()) - Request to collection [ranger_audits] failed due to (500) org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index, retry=0 commError=false errorCode=500
2019-10-22T13:12:49,559 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: impl.CloudSolrClient (:()) - request was not communication error it seems
2019-10-22T13:12:49,559 WARN [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - failed to log audit event: {"repoType":3,"repo":"purecluster_hive","reqUser":"hive","evtTime":"2019-10-09 13:03:39.511","access":"USE","resType":"@null","action":"_any","result":1,"agent":"hiveServer2","policy":13,"enforcer":"ranger-acl","sess":"b34a2404-329d-434c-9474-f0246ef3920a","cliType":"HIVESERVER2","cliIP":"10.21.236.151","reqData":"show databases","agentHost":"hdpserver11.puretec.purestorage.com","logType":"RangerAudit","id":"7cac16eb-c10d-4f21-80e2-c3e9147bb620-0","seq_num":1,"event_count":1,"event_dur_ms":0,"tags":[],"additional_info":"{\"remote-ip-address\":10.21.236.151, \"forwarded-ip-addresses\":[]","cluster_name":"purecluster","policy_version":1}
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index
    at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1019) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) ~[?:?]
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) ~[?:?]
    at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106) ~[?:?]
    at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71) ~[?:?]
    at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:85) ~[?:?]
    at org.apache.ranger.audit.utils.SolrAppUtil$1.run(SolrAppUtil.java:35) ~[?:?]
    at org.apache.ranger.audit.utils.SolrAppUtil$1.run(SolrAppUtil.java:32) ~[?:?]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
    at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
    at org.apache.ranger.audit.provider.MiscUtil.executePrivilegedAction(MiscUtil.java:516) ~[?:?]
    at org.apache.ranger.audit.utils.SolrAppUtil.addDocsToSolr(SolrAppUtil.java:32) ~[?:?]
    at org.apache.ranger.audit.destination.SolrAuditDestination.log(SolrAuditDestination.java:232) ~[?:?]
    at org.apache.ranger.audit.provider.BaseAuditHandler.logJSON(BaseAuditHandler.java:172) ~[?:?]
    at org.apache.ranger.audit.queue.AuditFileSpool.sendEvent(AuditFileSpool.java:879) ~[?:?]
    at org.apache.ranger.audit.queue.AuditFileSpool.runLogAudit(AuditFileSpool.java:827) ~[?:?]
    at org.apache.ranger.audit.queue.AuditFileSpool.run(AuditFileSpool.java:757) ~[?:?]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index
    at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) ~[?:?]
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) ~[?:?]
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) ~[?:?]
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:484) ~[?:?]
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.lambda$directUpdate$0(CloudSolrClient.java:528) ~[?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]
    at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_112]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_112]
    ... 1 more
2019-10-22T13:12:49,560 WARN [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - Log failure count: 1 in past 01:00.009 minutes; 55 during process lifetime
2019-10-22T13:12:49,560 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: queue.AuditFileSpool (:()) - Error sending logs to consumer. provider=hiveServer2.async.multi_dest.batch, consumer=hiveServer2.async.multi_dest.batch.solr
2019-10-22T13:12:49,560 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: queue.AuditFileSpool (:()) - Destination is down. sleeping for 30000 milli seconds. indexQueue=2, queueName=hiveServer2.async.multi_dest.batch, consumer=hiveServer2.async.multi_dest.batch.solr
2019-10-22T13:12:50,280 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:53,302 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:56,323 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:12:59,346 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:13:02,368 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:13:05,390 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:13:08,412 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:13:11,435 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:13:14,457 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:13:17,479 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:13:20,503 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:13:23,525 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:13:26,551 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:13:29,574 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:13:32,599 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:13:33,411 WARN [HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:initialize(5424)) - HiveConf of name hive.stats.fetch.partition.stats does not exist
2019-10-22T13:13:33,411 WARN [HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:initialize(5424)) - HiveConf of name hive.heapsize does not exist
2019-10-22T13:13:33,427 INFO [HiveServer2-Handler-Pool: Thread-133]: thrift.ThriftCLIService (:()) - Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V10
2019-10-22T13:13:33,515 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Created HDFS directory: /tmp/hive/hive/6c5c586b-3b78-4b71-a692-e058a59c47fd
2019-10-22T13:13:33,543 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Created local directory: /tmp/hive/6c5c586b-3b78-4b71-a692-e058a59c47fd
2019-10-22T13:13:33,574 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Created HDFS directory: /tmp/hive/hive/6c5c586b-3b78-4b71-a692-e058a59c47fd/_tmp_space.db
2019-10-22T13:13:33,575 INFO [HiveServer2-Handler-Pool: Thread-133]: metastore.HiveMetaStoreClient (:()) - Trying to connect to metastore with URI thrift://hdpserver11.puretec.purestorage.com:9083
2019-10-22T13:13:33,576 INFO [HiveServer2-Handler-Pool: Thread-133]: metastore.HiveMetaStoreClient (:()) - Opened a connection to metastore, current connections: 9
2019-10-22T13:13:33,577 INFO [HiveServer2-Handler-Pool: Thread-133]: metastore.HiveMetaStoreClient (:()) - Connected to metastore.
2019-10-22T13:13:33,577 INFO [HiveServer2-Handler-Pool: Thread-133]: metastore.RetryingMetaStoreClient (:()) - RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=hive (auth:SIMPLE) retries=24 delay=5 lifetime=0
2019-10-22T13:13:33,588 INFO [HiveServer2-Handler-Pool: Thread-133]: session.HiveSessionImpl (:()) - Operation log session directory is created: /tmp/hive/operation_logs/6c5c586b-3b78-4b71-a692-e058a59c47fd
2019-10-22T13:13:33,588 INFO [HiveServer2-Handler-Pool: Thread-133]: service.CompositeService (:()) - Session opened, SessionHandle [6c5c586b-3b78-4b71-a692-e058a59c47fd], current sessions:2
2019-10-22T13:13:33,664 INFO [HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: 6c5c586b-3b78-4b71-a692-e058a59c47fd
2019-10-22T13:13:33,664 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Updating thread name to 6c5c586b-3b78-4b71-a692-e058a59c47fd HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:13:33,664 INFO [6c5c586b-3b78-4b71-a692-e058a59c47fd HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: 6c5c586b-3b78-4b71-a692-e058a59c47fd
2019-10-22T13:13:33,665 INFO [6c5c586b-3b78-4b71-a692-e058a59c47fd HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Resetting thread name to HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:13:33,686 INFO [HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: 6c5c586b-3b78-4b71-a692-e058a59c47fd
2019-10-22T13:13:33,686 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Updating thread name to 6c5c586b-3b78-4b71-a692-e058a59c47fd HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:13:33,686 INFO [6c5c586b-3b78-4b71-a692-e058a59c47fd HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: 6c5c586b-3b78-4b71-a692-e058a59c47fd
2019-10-22T13:13:33,686 INFO [6c5c586b-3b78-4b71-a692-e058a59c47fd HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Resetting thread name to HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:13:33,691 INFO [HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: 6c5c586b-3b78-4b71-a692-e058a59c47fd
2019-10-22T13:13:33,692 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Updating thread name to 6c5c586b-3b78-4b71-a692-e058a59c47fd HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:13:33,692 INFO [6c5c586b-3b78-4b71-a692-e058a59c47fd HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: 6c5c586b-3b78-4b71-a692-e058a59c47fd
2019-10-22T13:13:33,692 INFO [6c5c586b-3b78-4b71-a692-e058a59c47fd HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Resetting thread name to HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:13:33,720 INFO [HiveServer2-Handler-Pool: Thread-133]: service.CompositeService (:()) - Session closed, SessionHandle [6c5c586b-3b78-4b71-a692-e058a59c47fd], current sessions:1
2019-10-22T13:13:33,720 INFO [HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: 6c5c586b-3b78-4b71-a692-e058a59c47fd
2019-10-22T13:13:33,720 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Updating thread name to 6c5c586b-3b78-4b71-a692-e058a59c47fd HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:13:33,721 INFO [6c5c586b-3b78-4b71-a692-e058a59c47fd HiveServer2-Handler-Pool: Thread-133]: session.HiveSessionImpl (:()) - Operation log session directory is deleted: /tmp/hive/operation_logs/6c5c586b-3b78-4b71-a692-e058a59c47fd
2019-10-22T13:13:33,721 INFO [6c5c586b-3b78-4b71-a692-e058a59c47fd HiveServer2-Handler-Pool: Thread-133]: conf.HiveConf (HiveConf.java:getLogIdVar(5244)) - Using the default value passed in for log id: 6c5c586b-3b78-4b71-a692-e058a59c47fd
2019-10-22T13:13:33,721 INFO [6c5c586b-3b78-4b71-a692-e058a59c47fd HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Resetting thread name to HiveServer2-Handler-Pool: Thread-133
2019-10-22T13:13:33,723 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Deleted directory: /tmp/hive/hive/6c5c586b-3b78-4b71-a692-e058a59c47fd on fs with scheme hdfs
2019-10-22T13:13:33,723 INFO [HiveServer2-Handler-Pool: Thread-133]: session.SessionState (:()) - Deleted directory: /tmp/hive/6c5c586b-3b78-4b71-a692-e058a59c47fd on fs with scheme file
2019-10-22T13:13:33,723 INFO [HiveServer2-Handler-Pool: Thread-133]: metastore.HiveMetaStoreClient (:()) - Closed a connection to metastore, current connections: 8
2019-10-22T13:13:35,633 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:13:38,658 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:13:41,681 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:13:44,705 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:13:47,730 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:13:49,561 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - Audit Status Log: name=hiveServer2.async.multi_dest.batch.solr, interval=01:00.009 minutes, events=1, failedCount=1, totalEvents=55, totalFailedCount=55
2019-10-22T13:13:49,568 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: impl.CloudSolrClient (:()) - Request to collection [ranger_audits] failed due to (500) org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index, retry=0 commError=false errorCode=500
2019-10-22T13:13:49,568 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: impl.CloudSolrClient (:()) - request was not communication error it seems
2019-10-22T13:13:49,568 WARN [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - failed to log audit event: {"repoType":3,"repo":"purecluster_hive","reqUser":"hive","evtTime":"2019-10-09 13:03:39.511","access":"USE","resType":"@null","action":"_any","result":1,"agent":"hiveServer2","policy":13,"enforcer":"ranger-acl","sess":"b34a2404-329d-434c-9474-f0246ef3920a","cliType":"HIVESERVER2","cliIP":"10.21.236.151","reqData":"show databases","agentHost":"hdpserver11.puretec.purestorage.com","logType":"RangerAudit","id":"7cac16eb-c10d-4f21-80e2-c3e9147bb620-0","seq_num":1,"event_count":1,"event_dur_ms":0,"tags":[],"additional_info":"{\"remote-ip-address\":10.21.236.151, \"forwarded-ip-addresses\":[]","cluster_name":"purecluster","policy_version":1}
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index
    at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1019) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) ~[?:?]
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) ~[?:?]
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) ~[?:?]
    at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106) ~[?:?]
    at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71) ~[?:?]
    at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:85) ~[?:?]
    at org.apache.ranger.audit.utils.SolrAppUtil$1.run(SolrAppUtil.java:35) ~[?:?]
    at org.apache.ranger.audit.utils.SolrAppUtil$1.run(SolrAppUtil.java:32) ~[?:?]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
    at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
at org.apache.ranger.audit.provider.MiscUtil.executePrivilegedAction(MiscUtil.java:516) ~[?:?] at org.apache.ranger.audit.utils.SolrAppUtil.addDocsToSolr(SolrAppUtil.java:32) ~[?:?] at org.apache.ranger.audit.destination.SolrAuditDestination.log(SolrAuditDestination.java:232) ~[?:?] at org.apache.ranger.audit.provider.BaseAuditHandler.logJSON(BaseAuditHandler.java:172) ~[?:?] at org.apache.ranger.audit.queue.AuditFileSpool.sendEvent(AuditFileSpool.java:879) ~[?:?] at org.apache.ranger.audit.queue.AuditFileSpool.runLogAudit(AuditFileSpool.java:827) ~[?:?] at org.apache.ranger.audit.queue.AuditFileSpool.run(AuditFileSpool.java:757) ~[?:?] at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112] Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) ~[?:?] at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) ~[?:?] at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) ~[?:?] at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:484) ~[?:?] at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414) ~[?:?] at org.apache.solr.client.solrj.impl.CloudSolrClient.lambda$directUpdate$0(CloudSolrClient.java:528) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112] at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) ~[?:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_112] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_112] ... 
1 more 2019-10-22T13:13:49,568 WARN [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - Log failure count: 1 in past 01:00.009 minutes; 56 during process lifetime 2019-10-22T13:13:49,568 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: queue.AuditFileSpool (:()) - Error sending logs to consumer. provider=hiveServer2.async.multi_dest.batch, consumer=hiveServer2.async.multi_dest.batch.solr 2019-10-22T13:13:49,569 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: queue.AuditFileSpool (:()) - Destination is down. sleeping for 30000 milli seconds. indexQueue=2, queueName=hiveServer2.async.multi_dest.batch, consumer=hiveServer2.async.multi_dest.batch.solr 2019-10-22T13:13:50,755 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:13:53,780 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:13:56,803 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:13:59,828 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:14:02,852 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 2019-10-22T13:14:05,875 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162 
2019-10-22T13:14:08,900 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:14:11,924 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:14:14,949 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:14:17,972 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:14:20,996 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:14:24,020 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:14:26,939 INFO [Heartbeater-1]: lockmgr.DbTxnManager (:()) - Sending heartbeat for txnid:5297 and lockid:0 queryId=null txnid:0
2019-10-22T13:14:27,043 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:14:30,066 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:14:33,085 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:14:36,102 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:14:39,125 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:14:42,148 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:14:45,169 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:14:48,191 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:14:49,569 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - Audit Status Log: name=hiveServer2.async.multi_dest.batch.solr, interval=01:00.008 minutes, events=1, failedCount=1, totalEvents=56, totalFailedCount=56
2019-10-22T13:14:49,577 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: impl.CloudSolrClient (:()) - Request to collection [ranger_audits] failed due to (500) org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index, retry=0 commError=false errorCode=500
2019-10-22T13:14:49,577 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: impl.CloudSolrClient (:()) - request was not communication error it seems
2019-10-22T13:14:49,578 WARN [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - failed to log audit event: {"repoType":3,"repo":"purecluster_hive","reqUser":"hive","evtTime":"2019-10-09 13:03:39.511","access":"USE","resType":"@null","action":"_any","result":1,"agent":"hiveServer2","policy":13,"enforcer":"ranger-acl","sess":"b34a2404-329d-434c-9474-f0246ef3920a","cliType":"HIVESERVER2","cliIP":"10.21.236.151","reqData":"show databases","agentHost":"hdpserver11.puretec.purestorage.com","logType":"RangerAudit","id":"7cac16eb-c10d-4f21-80e2-c3e9147bb620-0","seq_num":1,"event_count":1,"event_dur_ms":0,"tags":[],"additional_info":"{\"remote-ip-address\":10.21.236.151, \"forwarded-ip-addresses\":[]","cluster_name":"purecluster","policy_version":1}
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index
	at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) ~[?:?]
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1019) ~[?:?]
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) ~[?:?]
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) ~[?:?]
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) ~[?:?]
	at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106) ~[?:?]
	at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71) ~[?:?]
	at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:85) ~[?:?]
	at org.apache.ranger.audit.utils.SolrAppUtil$1.run(SolrAppUtil.java:35) ~[?:?]
	at org.apache.ranger.audit.utils.SolrAppUtil$1.run(SolrAppUtil.java:32) ~[?:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
	at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
	at org.apache.ranger.audit.provider.MiscUtil.executePrivilegedAction(MiscUtil.java:516) ~[?:?]
	at org.apache.ranger.audit.utils.SolrAppUtil.addDocsToSolr(SolrAppUtil.java:32) ~[?:?]
	at org.apache.ranger.audit.destination.SolrAuditDestination.log(SolrAuditDestination.java:232) ~[?:?]
	at org.apache.ranger.audit.provider.BaseAuditHandler.logJSON(BaseAuditHandler.java:172) ~[?:?]
	at org.apache.ranger.audit.queue.AuditFileSpool.sendEvent(AuditFileSpool.java:879) ~[?:?]
	at org.apache.ranger.audit.queue.AuditFileSpool.runLogAudit(AuditFileSpool.java:827) ~[?:?]
	at org.apache.ranger.audit.queue.AuditFileSpool.run(AuditFileSpool.java:757) ~[?:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hdpserver1.puretec.purestorage.com:8886/solr/ranger_audits_shard1_replica_n1: Server error writing document id 7cac16eb-c10d-4f21-80e2-c3e9147bb620-0 to the index
	at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) ~[?:?]
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) ~[?:?]
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) ~[?:?]
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:484) ~[?:?]
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414) ~[?:?]
	at org.apache.solr.client.solrj.impl.CloudSolrClient.lambda$directUpdate$0(CloudSolrClient.java:528) ~[?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]
	at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_112]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_112]
	... 1 more
2019-10-22T13:14:49,578 WARN [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: provider.BaseAuditHandler (:()) - Log failure count: 1 in past 01:00.010 minutes; 57 during process lifetime
2019-10-22T13:14:49,578 ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: queue.AuditFileSpool (:()) - Error sending logs to consumer. provider=hiveServer2.async.multi_dest.batch, consumer=hiveServer2.async.multi_dest.batch.solr
2019-10-22T13:14:49,579 INFO [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.solr_destWriter]: queue.AuditFileSpool (:()) - Destination is down. sleeping for 30000 milli seconds. indexQueue=2, queueName=hiveServer2.async.multi_dest.batch, consumer=hiveServer2.async.multi_dest.batch.solr
2019-10-22T13:14:51,219 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162
2019-10-22T13:14:54,244 INFO [HiveServer2-Background-Pool: Thread-886]: monitoring.RenderStrategy$LogToFileFunction (:()) - Map 1: 0(+380)/755 Reducer 2: 0/1009 Reducer 3: 0/162 Reducer 4: 0/162