Member since: 03-27-2015
Posts: 25
Kudos Received: 1
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 25088 | 04-17-2016 12:15 PM
 | 5003 | 04-08-2016 07:05 AM
05-17-2016
02:42 AM
When you say it is not working, what issue does it exhibit? For Hive on Spark you only need to set the execution engine in Hive from MapReduce to Spark. You do, however, need to consider the Spark executor memory settings in the Spark service, and these must correlate with the YARN container memory settings. Generally I set the following two YARN container settings to the same value, greater than the Spark executor memory plus overhead:

yarn.nodemanager.resource.memory-mb
yarn.scheduler.maximum-allocation-mb

Also check the YARN logs for an error similar to the following:

15/09/17 11:15:09 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (2211 MB per container)
Exception in thread "main" java.lang.IllegalArgumentException: Required executor memory (2048+384 MB) is above the max threshold (2211 MB) of this cluster!

Regards,
Shailesh
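A minimal sketch of the engine switch itself, at session level (the table name is a placeholder):

set hive.execution.engine=spark;    -- in beeline, against HiveServer2; set back to 'mr' to revert
select count(*) from sample_table;  -- any query, just to exercise the Spark engine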
05-12-2016
01:51 AM
The YARN logs contained errors complaining about memory deficiencies when I selected the Spark engine for Hive, and I noticed that the default Spark executor memory size plus overhead was larger than the YARN container memory settings. Increasing the YARN container memory configuration cured the problem; alternatively, you could lower the Spark executor requirements.
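A sketch of that alternative, shrinking the executors instead (the values are illustrative only; these spark.* properties can be set at session level or in the Hive on Spark configuration):

set spark.executor.memory=1g;               -- per-executor heap
set spark.yarn.executor.memoryOverhead=384; -- off-heap overhead, in MB

The sum of the two must fit under yarn.scheduler.maximum-allocation-mb.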
04-17-2016
12:15 PM
1 Kudo
The YARN container memory was smaller than the Spark executor requirement. I set the YARN container memory and maximum allocation to be greater than the Spark executor memory plus overhead. Check 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
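As a worked example using the numbers from the error quoted elsewhere in this thread (the 3072 MB figure is illustrative; size it for your own cluster):

spark.executor.memory                = 2048 MB   (executor heap)
spark.yarn.executor.memoryOverhead   =  384 MB   (off-heap overhead)
required YARN container size         = 2048 + 384 = 2432 MB

yarn.scheduler.maximum-allocation-mb = 3072      (must be >= 2432)
yarn.nodemanager.resource.memory-mb  = 3072      (kept equal to the maximum, as above)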
04-16-2016
09:55 AM
I have enabled Spark as the default execution engine for Hive on CDH 5.7, but I get the following when I execute a query against Hive from my edge node. Is there anything I need to enable on my client edge node? I can run the spark-shell and have exported SPARK_HOME, and I have also copied the client config to the edge node. Is there anything else I need to enable or configure?

ERROR : Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark client.
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:64)
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:114)
at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:125)
at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:97)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1774)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1531)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1311)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1120)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1113)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:178)
at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:72)
at org.apache.hive.service.cli.operation.SQLOperation$2$1.run(SQLOperation.java:232)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hive.service.cli.operation.SQLOperation$2.run(SQLOperation.java:245)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Cancel client '478049ac-228c-4abb-8ef3-93157822a0a1'. Error: Child process exited before connecting back
at com.google.common.base.Throwables.propagate(Throwables.java:156)
at org.apache.hive.spark.client.SparkClientImpl.<init>(SparkClientImpl.java:111)
at org.apache.hive.spark.client.SparkClientFactory.createClient(SparkClientFactory.java:80)
at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.createRemoteClient(RemoteHiveSparkClient.java:98)
at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:94)
at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:63)
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:62)
... 22 more
Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Cancel client '478049ac-228c-4abb-8ef3-93157822a0a1'. Error: Child process exited before connecting back
at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:37)
at org.apache.hive.spark.client.SparkClientImpl.<init>(SparkClientImpl.java:101)
... 27 more
Caused by: java.lang.RuntimeException: Cancel client '478049ac-228c-4abb-8ef3-93157822a0a1'. Error: Child process exited before connecting back
at org.apache.hive.spark.client.rpc.RpcServer.cancelClient(RpcServer.java:179)
at org.apache.hive.spark.client.SparkClientImpl$3.run(SparkClientImpl.java:450)
... 1 more
04-08-2016
07:05 AM
I have fixed this by deleting the original principals from the MIT KDC and then running Generate Missing Credentials from the Administration -> Security -> Kerberos Credentials page. Add new Solr instances as required, and this will create a new principal and keytab as needed. When the instances were created as part of the original cluster build, the principals were invalid; they may have been left over from a previous build or created in an invalid state. Deleting them from the KDC fixed it.
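A minimal sketch of the KDC-side cleanup (the principal, host, and realm below are placeholders for your own):

# on the MIT KDC host: remove the stale principal, then confirm it is gone
kadmin.local -q "delete_principal -force solr/datanode1.example.com@EXAMPLE.COM"
kadmin.local -q "list_principals solr/*"
# then, in Cloudera Manager:
# Administration -> Security -> Kerberos Credentials -> Generate Missing Credentials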
04-07-2016
01:40 PM
I have a Solr instance installed on a master/name node, which runs fine. I subsequently installed another Solr instance on a slave/data node. It starts fine but then fails minutes later with:

Solr Server API Liveness: The Cloudera Manager Agent is not able to communicate with this Solr Server over the HTTP API.
Web Server Status: The Cloudera Manager Agent is not able to communicate with this role's web server.

I found the following in the cloudera-scm-agent.log file:

HTTPError: HTTP Error 401: Unauthorized
[07/Apr/2016 21:59:20 +0000] 1515 Monitor-SolrServerMonitor urllib2_kerberos CRITICAL GSSAPI Error: Unspecified GSS failure. Minor code may provide more information/Server krbtgt/DOMAIN.COM@HADOOPSS.DOMAIN.COM not found in Kerberos database
[07/Apr/2016 21:59:20 +0000] 1515 Monitor-SolrServerMonitor url_util ERROR Autentication error on attempt 2. Retrying after sleeping 1.000000 seconds.
03-13-2016
03:10 PM
Excellent, that solved the problem, along with some unknown fields in the schema.xml, for which I have now used sanitizeUnknownSolrFields in the morphline. Thanks for your help.
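For reference, a sketch of how that command slots into a morphline's commands list (assuming a SOLR_LOCATOR variable is defined at the top of the file):

commands : [
  ...
  # drop record fields that are not declared in the Solr schema.xml,
  # rather than failing the indexing job on them
  { sanitizeUnknownSolrFields { solrLocator : ${SOLR_LOCATOR} } }
  { loadSolr { solrLocator : ${SOLR_LOCATOR} } }
]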
03-13-2016
01:59 PM
Looking at the reducer logs, I see the following error, which refers to collection1, a core I don't have in any of my SolrCores. Does this imply ZooKeeper config corruption? I also tried, but failed, to re-initialise the Solr config in ZooKeeper using the CM Solr Initialize action, solrctl init --force, and zookeeper-client rmr /solr, but none of these successfully supported the initialisation process for Solr. I also removed the usercache on all the NodeManager/DataNode nodes, but this didn't cure the problem.

6588 [main] INFO org.apache.solr.hadoop.HeartBeater - Heart beat reporting class is org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
6591 [main] INFO org.apache.solr.hadoop.SolrRecordWriter - Using this unpacked directory as solr home: /yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip
6592 [main] INFO org.apache.solr.hadoop.SolrRecordWriter - Creating embedded Solr server with solrHomeDir: /yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip, fs: DFS[DFSClient[clientName=DFSClient_attempt_1457801342867_0014_r_000000_0_1776230373_1, ugi=shailesh (auth:SIMPLE)]], outputShardDir: hdfs://cloudman.sunnydale3.com:8020/user/shailesh/outdir/reducers/_temporary/1/_temporary/attempt_1457801342867_0014_r_000000_0/part-r-00000
6590 [Thread-22] INFO org.apache.solr.hadoop.HeartBeater - HeartBeat thread running
6609 [Thread-22] INFO org.apache.solr.hadoop.HeartBeater - Issuing heart beat for 1 threads
6625 [main] INFO org.apache.solr.core.SolrResourceLoader - new SolrResourceLoader for directory: '/yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip/'
6896 [main] INFO org.apache.solr.hadoop.SolrRecordWriter - Constructed instance information solr.home /yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip (/yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip), instance dir /yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip/, conf dir /yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip/conf/, writing index to solr.data.dir hdfs://cloudman.sunnydale3.com:8020/user/shailesh/outdir/reducers/_temporary/1/_temporary/attempt_1457801342867_0014_r_000000_0/part-r-00000/data, with permdir hdfs://cloudman.sunnydale3.com:8020/user/shailesh/outdir/reducers/_temporary/1/_temporary/attempt_1457801342867_0014_r_000000_0/part-r-00000
6908 [main] INFO org.apache.solr.core.ConfigSolr - Loading container configuration from /yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip/solr.xml
6912 [main] INFO org.apache.solr.core.ConfigSolr - /yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip/solr.xml does not exist, using default configuration
7184 [main] INFO org.apache.solr.core.CoreContainer - New CoreContainer 1025566832
7184 [main] INFO org.apache.solr.core.CoreContainer - Loading cores into CoreContainer [instanceDir=/yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip/]
7198 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting socketTimeout to: 0
7198 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting urlScheme to: null
7198 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting connTimeout to: 0
7198 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting maxConnectionsPerHost to: 20
7198 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting corePoolSize to: 0
7198 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting maximumPoolSize to: 2147483647
7198 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting maxThreadIdleTime to: 5
7198 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting sizeOfQueue to: -1
7198 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting fairnessPolicy to: false
7198 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting useRetries to: false
7362 [main] INFO org.apache.solr.logging.LogWatcher - SLF4J impl is org.slf4j.impl.Log4jLoggerFactory
7363 [main] INFO org.apache.solr.logging.LogWatcher - Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
7364 [main] INFO org.apache.solr.core.CoreContainer - Host Name:
7424 [coreLoadExecutor-5-thread-1] INFO org.apache.solr.core.SolrResourceLoader - new SolrResourceLoader for directory: '/yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip/collection1/'
7461 [coreLoadExecutor-5-thread-1] ERROR org.apache.solr.core.CoreContainer - Error creating core [collection1]: Could not load conf for core collection1: Error loading solr config from /yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip/collection1/conf/solrconfig.xml
org.apache.solr.common.SolrException: Could not load conf for core collection1: Error loading solr config from /yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip/collection1/conf/solrconfig.xml
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:68)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:496)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:262)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:256)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Error loading solr config from /yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip/collection1/conf/solrconfig.xml
at org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:154)
at org.apache.solr.core.ConfigSetService.createSolrConfig(ConfigSetService.java:82)
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:62)
... 7 more
Caused by: org.apache.solr.core.SolrResourceNotFoundException: Can't find resource 'solrconfig.xml' in classpath or '/yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip/collection1/conf'
at org.apache.solr.core.SolrResourceLoader.openResource(SolrResourceLoader.java:362)
at org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:308)
at org.apache.solr.core.Config.<init>(Config.java:117)
at org.apache.solr.core.Config.<init>(Config.java:87)
at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:167)
at org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:145)
... 9 more
7466 [main] INFO org.apache.solr.core.SolrResourceLoader - new SolrResourceLoader for directory: '/yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip/'
7511 [main] INFO org.apache.solr.update.SolrIndexConfig - IndexWriter infoStream solr logging is enabled
7517 [main] INFO org.apache.solr.core.SolrConfig - Using Lucene MatchVersion: 4.10.3
7619 [main] INFO org.apache.solr.core.Config - Loaded SolrConfig: solrconfig.xml
7634 [main] INFO org.apache.solr.schema.IndexSchema - Reading Solr Schema from /yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip/conf/schema.xml
7647 [main] INFO org.apache.solr.schema.IndexSchema - [core1] Schema name=example
7865 [main] INFO org.apache.solr.schema.IndexSchema - unique key field: id
8063 [main] INFO org.apache.solr.core.ConfigSetProperties - Did not find ConfigSet properties, assuming default properties: Can't find resource 'configsetprops.json' in classpath or '/yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip/conf'
8064 [main] INFO org.apache.solr.core.CoreContainer - Creating SolrCore 'core1' using configuration from instancedir /yarn/nm/usercache/shailesh/appcache/application_1457801342867_0014/container_1457801342867_0014_01_000020/39534f41-0b65-4b38-9d02-52c443e85da4.solr.zip/
8120 [main] INFO org.apache.solr.core.HdfsDirectoryFactory - Solr Kerberos Authentication disabled
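For completeness, the re-initialisation sequence attempted above, laid out as a sketch (it wipes the Solr configuration held in ZooKeeper, so stop the Solr service first; the usercache path is taken from the logs above):

# remove the Solr znode tree, then recreate the base layout
zookeeper-client rmr /solr
solrctl init --force
# clear the stale YARN usercache on every NodeManager host
rm -rf /yarn/nm/usercache/*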
03-12-2016
09:41 AM
I used the tutorial to set up the MapReduceIndexerTool (MRIT) job, using the example readJsonTweets.conf provided with the examples on CDH 5.6:

morphlines : [
  {
    id : morphline1
    importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
    commands : [
      {
        readJsonTestTweets {
          isLengthDelimited : false
        }
      }
      { logDebug { format : "output record: {}", args : ["@{}"] } }
    ]
  }
]

The only thing missing is the SOLR_LOCATOR, which I specify on the MapReduceIndexerTool command line. Does that make a difference?
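For comparison, the stock example morphlines define the locator at the top of the .conf file rather than only on the command line; a sketch, with placeholder collection name and ZooKeeper address:

SOLR_LOCATOR : {
  collection : tweets_collection                # placeholder collection name
  zkHost : "cloudman.sunnydale3.com:2181/solr"  # placeholder ZooKeeper quorum and chroot
}

# commands can then reference it, e.g. { loadSolr { solrLocator : ${SOLR_LOCATOR} } }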
03-11-2016
04:36 PM
I have docsWritten = 0 in the reducer log below, but I don't know why. Btw, I am trying this with a single tweet at the moment. This is what I see in the mapper task:

Log Type: stdout
Log Upload Time: Fri Mar 11 23:43:11 +0000 2016
Log Length: 1843
3247 [main] INFO org.apache.solr.hadoop.SolrRecordWriter - Using this unpacked directory as solr home: /yarn/nm/usercache/shailesh/appcache/application_1457642318324_0008/container_1457642318324_0008_01_000002/70457759-3833-4f68-b75c-4456e6f9f0db.solr.zip
3249 [main] INFO org.apache.solr.hadoop.HeartBeater - Heart beat reporting class is org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context
3249 [Thread-11] INFO org.apache.solr.hadoop.HeartBeater - HeartBeat thread running
3251 [Thread-11] INFO org.apache.solr.hadoop.HeartBeater - heartbeat skipped count 0
3295 [main] INFO org.apache.solr.core.SolrResourceLoader - new SolrResourceLoader for directory: '/yarn/nm/usercache/shailesh/appcache/application_1457642318324_0008/container_1457642318324_0008_01_000002/70457759-3833-4f68-b75c-4456e6f9f0db.solr.zip/'
3830 [main] INFO org.apache.solr.update.SolrIndexConfig - IndexWriter infoStream solr logging is enabled
3837 [main] INFO org.apache.solr.core.SolrConfig - Using Lucene MatchVersion: 4.10.3
3969 [main] INFO org.apache.solr.core.Config - Loaded SolrConfig: solrconfig.xml
3981 [main] INFO org.apache.solr.schema.IndexSchema - Reading Solr Schema from /yarn/nm/usercache/shailesh/appcache/application_1457642318324_0008/container_1457642318324_0008_01_000002/70457759-3833-4f68-b75c-4456e6f9f0db.solr.zip/conf/schema.xml
3995 [main] INFO org.apache.solr.schema.IndexSchema - [null] Schema name=example
4219 [main] INFO org.apache.solr.schema.IndexSchema - unique key field: id
4443 [main] INFO org.kitesdk.morphline.api.MorphlineContext - Importing commands
6557 [main] INFO org.kitesdk.morphline.api.MorphlineContext - Done importing commands
6793 [main] INFO org.apache.solr.hadoop.morphline.MorphlineMapRunner - Processing file hdfs://cloudman.sunnydale3.com:8020/user/shailesh/indir/tiny.data

And on the reducer, where it does say docsWritten is 0:

5895 [main] INFO org.apache.solr.update.LoggingInfoStream - [IW][main]: flush at getReader
5895 [main] INFO org.apache.solr.update.LoggingInfoStream - [DW][main]: startFullFlush
5895 [main] INFO org.apache.solr.update.LoggingInfoStream - [IW][main]: apply all deletes during flush
5895 [main] INFO org.apache.solr.update.LoggingInfoStream - [BD][main]: prune sis=segments_1: minGen=9223372036854775807 packetCount=0
5897 [main] INFO org.apache.solr.update.LoggingInfoStream - [IW][main]: return reader version=1 reader=StandardDirectoryReader(segments_1:1:nrt)
5898 [main] INFO org.apache.solr.update.LoggingInfoStream - [DW][main]: main finishFullFlush success=true
5898 [main] INFO org.apache.solr.update.LoggingInfoStream - [IW][main]: getReader took 3 msec
5914 [main] WARN org.apache.solr.rest.ManagedResourceStorage - Cannot write to config directory /yarn/nm/usercache/shailesh/appcache/application_1457642318324_0008/container_1457642318324_0008_01_000003/70457759-3833-4f68-b75c-4456e6f9f0db.solr.zip/conf; switching to use InMemory storage instead.
5915 [main] INFO org.apache.solr.rest.RestManager - Initializing RestManager with initArgs: {}
5932 [main] INFO org.apache.solr.rest.ManagedResourceStorage - Reading _rest_managed.json using InMemoryStorage
5932 [main] WARN org.apache.solr.rest.ManagedResource - No stored data found for /rest/managed
5937 [main] INFO org.apache.solr.rest.ManagedResourceStorage - Saved JSON object to path _rest_managed.json using InMemoryStorage
5937 [main] INFO org.apache.solr.rest.RestManager - Initializing 0 registered ManagedResources
5958 [main] INFO org.apache.solr.core.CoreContainer - registering core: core1
5977 [main] INFO org.apache.solr.hadoop.HeartBeater - Heart beat reporting class is org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context
5977 [main] INFO org.apache.solr.hadoop.SolrRecordWriter - docsWritten: 0
5978 [Thread-32] INFO org.apache.solr.hadoop.HeartBeater - HeartBeat thread running
5979 [Thread-32] INFO org.apache.solr.hadoop.HeartBeater - heartbeat skipped count 0
5999 [main] INFO org.apache.solr.update.UpdateHandler - start commit{,optimize=false,openSearcher=true,waitSearcher=false,expungeDeletes=false,softCommit=false,prepareCommit=false}
6000 [main] INFO org.apache.solr.update.UpdateHandler - No uncommitted changes. Skipping IW.commit.
6003 [main] INFO org.apache.solr.update.UpdateHandler - end_commit_flush
6011 [main] INFO org.apache.solr.hadoop.BatchWriter - Optimizing Solr: forcing merge down to 1 segments
6011 [main] INFO org.apache.solr.update.UpdateHandler - start commit{,optimize=true,openSearcher=true,waitSearcher=false,expungeDeletes=false,softCommit=false,prepareCommit=false}
6012 [main] INFO org.apache.solr.update.LoggingInfoStream - [IW][main]: forceMerge: index now
6012 [main] INFO org.apache.solr.update.LoggingInfoStream - [IW][main]: now flush at forceMerge
6012 [main] INFO org.apache.solr.update.LoggingInfoStream - [IW][main]: start flush: applyAllDeletes=true
6012 [main] INFO org.apache.solr.update.LoggingInfoStream - [IW][main]: index before flush
6012 [main] INFO org.apache.solr.update.LoggingInfoStream - [DW][main]: startFullFlush
6012 [main] INFO org.apache.solr.update.LoggingInfoStream - [DW][main]: main finishFullFlush success=true
6012 [main] INFO org.apache.solr.update.LoggingInfoStream - [IW][main]: apply all deletes during flush
6012 [main] INFO org.apache.solr.update.LoggingInfoStream - [BD][main]: prune sis=segments_1: minGen=9223372036854775807 packetCount=0
6012 [main] INFO org.apache.solr.update.LoggingInfoStream - [TMP][main]: findForcedMerges maxSegmentCount=1 infos= segmentsToMerge={}
6012 [main] INFO org.apache.solr.update.LoggingInfoStream - [CMS][main]: now merge
6012 [main] INFO org.apache.solr.update.LoggingInfoStream - [CMS][main]: index:
6012 [main] INFO org.apache.solr.update.LoggingInfoStream - [CMS][main]: no more merges pending; now return
6012 [main] INFO org.apache.solr.update.UpdateHandler - No uncommitted changes. Skipping IW.commit.
6017 [main] INFO org.apache.solr.update.UpdateHandler - end_commit_flush