Member since: 03-17-2017
Posts: 32
Kudos Received: 1
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2389 | 04-18-2017 06:16 AM |
| | 36729 | 04-10-2017 08:29 AM |
| | 32748 | 04-03-2017 07:53 AM |
04-07-2017 08:33 AM
We got past the file_length error; however, I now get the following error:

```
1205 [main] INFO org.apache.solr.hadoop.MapReduceIndexerTool - Indexing 22 files using 22 real mappers into 6 reducers
Container [pid=4420,containerID=container_1491337377676_0040_01_000027] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 1.6 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1491337377676_0040_01_000027 : .....
.....
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
```

Any suggestions on how to get around this error? Are there some config changes that I need to make? Thanks a lot for the help.
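For reference, a hedged sketch of the kind of memory overrides that could be added to the same indexing command. The property names are standard MapReduce 2 settings; the specific values (2 GB containers, roughly 80% of that as JVM heap) are illustrative assumptions, not tested recommendations:

```bash
# Illustrative only: raise the YARN container size for mappers and reducers
# above the 1 GB limit reported in the error, and give the JVM a matching heap.
hadoop --config /etc/hadoop/conf.cloudera.yarn jar \
  /opt/cloudera/parcels/CDH/lib/solr/contrib/mr/search-mr-*-job.jar \
  org.apache.solr.hadoop.MapReduceIndexerTool \
  -D 'mapreduce.map.memory.mb=2048' \
  -D 'mapreduce.reduce.memory.mb=2048' \
  -D 'mapreduce.map.java.opts=-Xmx1638m' \
  -D 'mapreduce.reduce.java.opts=-Xmx1638m' \
  --morphline-file ~/search/readCSV.conf \
  ... # remaining arguments as in the original command
```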
04-07-2017 07:13 AM
1) The collection name is party_name. Regarding ">>>> - indexer configuration used (indexer_def.xml ?)": I don't have such a file.

2) schema.xml (I removed the comments):

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<schema name="example" version="1.5">
  <field name="_version_" type="long" indexed="true" stored="true"/>
  <field name="_root_" type="string" indexed="true" stored="false"/>
  <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />

  <dynamicField name="*_i" type="int" indexed="true" stored="true"/>
  <dynamicField name="*_is" type="int" indexed="true" stored="true" multiValued="true"/>
  <dynamicField name="*_s" type="string" indexed="true" stored="true" />
  <dynamicField name="*_ss" type="string" indexed="true" stored="true" multiValued="true"/>
  <dynamicField name="*_l" type="long" indexed="true" stored="true"/>
  <dynamicField name="*_ls" type="long" indexed="true" stored="true" multiValued="true"/>
  <dynamicField name="*_t" type="text_general" indexed="true" stored="true"/>
  <dynamicField name="*_txt" type="text_general" indexed="true" stored="true" multiValued="true"/>
  <dynamicField name="*_en" type="text_en" indexed="true" stored="true" multiValued="true"/>
  <dynamicField name="*_b" type="boolean" indexed="true" stored="true"/>
  <dynamicField name="*_bs" type="boolean" indexed="true" stored="true" multiValued="true"/>
  <dynamicField name="*_f" type="float" indexed="true" stored="true"/>
  <dynamicField name="*_fs" type="float" indexed="true" stored="true" multiValued="true"/>
  <dynamicField name="*_d" type="double" indexed="true" stored="true"/>
  <dynamicField name="*_ds" type="double" indexed="true" stored="true" multiValued="true"/>
  <dynamicField name="*_coordinate" type="tdouble" indexed="true" stored="false" />
  <dynamicField name="*_dt" type="date" indexed="true" stored="true"/>
  <dynamicField name="*_dts" type="date" indexed="true" stored="true" multiValued="true"/>
  <dynamicField name="*_p" type="location" indexed="true" stored="true"/>

  <!-- some trie-coded dynamic fields for faster range queries -->
  <dynamicField name="*_ti" type="tint" indexed="true" stored="true"/>
  <dynamicField name="*_tl" type="tlong" indexed="true" stored="true"/>
  <dynamicField name="*_tf" type="tfloat" indexed="true" stored="true"/>
  <dynamicField name="*_td" type="tdouble" indexed="true" stored="true"/>
  <dynamicField name="*_tdt" type="tdate" indexed="true" stored="true"/>
  <dynamicField name="*_c" type="currency" indexed="true" stored="true"/>
  <dynamicField name="ignored_*" type="ignored" multiValued="true"/>
  <dynamicField name="attr_*" type="text_general" indexed="true" stored="true" multiValued="true"/>
  <dynamicField name="random_*" type="random" />

  <uniqueKey>id</uniqueKey>

  <field name="county" type="text_general" indexed="false" stored="true"/>
  <field name="year" type="int" indexed="false" stored="true"/>
  <field name="court_type" type="text_general" indexed="false" stored="true"/>
  <field name="seq_num" type="int" indexed="false" stored="true"/>
  <field name="party_role" type="text_general" indexed="false" stored="true"/>
  <field name="party_num" type="int" indexed="false" stored="true"/>
  <field name="party_status" type="text_general" indexed="false" stored="true"/>
  <field name="biz_name" type="text_general" indexed="true" stored="true"/>
  <field name="prefix" type="text_general" indexed="false" stored="true"/>
  <field name="last_name" type="text_general" indexed="true" stored="true"/>
  <field name="first_name" type="text_general" indexed="true" stored="true"/>
  <field name="middle_name" type="text_general" indexed="true" stored="true"/>
  <field name="suffix" type="text_general" indexed="false" stored="true"/>
  <field name="in_regards_to" type="string" indexed="false" stored="true"/>
  <field name="case_status" type="string" indexed="false" stored="true"/>
  <field name="row_of_origin" type="string" indexed="false" stored="true"/>

  <fieldType name="string" class="solr.StrField" sortMissingLast="true" />
  <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/>
  <fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0"/>
  <fieldType name="float" class="solr.TrieFloatField" precisionStep="0" positionIncrementGap="0"/>
  <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
  <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0" positionIncrementGap="0"/>
  <fieldType name="tint" class="solr.TrieIntField" precisionStep="8" positionIncrementGap="0"/>
  <fieldType name="tfloat" class="solr.TrieFloatField" precisionStep="8" positionIncrementGap="0"/>
  <fieldType name="tlong" class="solr.TrieLongField" precisionStep="8" positionIncrementGap="0"/>
  <fieldType name="tdouble" class="solr.TrieDoubleField" precisionStep="8" positionIncrementGap="0"/>
  <fieldType name="date" class="solr.TrieDateField" precisionStep="0" positionIncrementGap="0"/>
  <!-- A Trie based date field for faster date range queries and date faceting. -->
  <fieldType name="tdate" class="solr.TrieDateField" precisionStep="6" positionIncrementGap="0"/>
  <!-- Binary data type. The data should be sent/retrieved in as Base64 encoded Strings -->
  <fieldtype name="binary" class="solr.BinaryField"/>
  <fieldType name="random" class="solr.RandomSortField" indexed="true" />

  <!-- A text field that only splits on whitespace for exact matching of words -->
  <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    </analyzer>
  </fieldType>

  <!-- A text type for English text where stopwords and synonyms are managed using the REST API -->
  <fieldType name="managed_en" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.ManagedStopFilterFactory" managed="english" />
      <filter class="solr.ManagedSynonymFilterFactory" managed="english" />
    </analyzer>
  </fieldType>

  <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
      <!-- in this example, we will only use synonyms at query time
      <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
      -->
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
      <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

  <fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt" />
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.EnglishPossessiveFilterFactory"/>
      <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
      <filter class="solr.PorterStemFilterFactory"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt" />
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.EnglishPossessiveFilterFactory"/>
      <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
      <filter class="solr.PorterStemFilterFactory"/>
    </analyzer>
  </fieldType>

  <fieldType name="text_en_splitting" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
    <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <!-- in this example, we will only use synonyms at query time
      <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
      -->
      <!-- Case insensitive stop word removal. -->
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt" />
      <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
      <filter class="solr.PorterStemFilterFactory"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt" />
      <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
      <filter class="solr.PorterStemFilterFactory"/>
    </analyzer>
  </fieldType>

  <!-- Less flexible matching, but less false matches. Probably not ideal for product names,
       but may be good for SKUs. Can insert dashes in the wrong place and still match. -->
  <fieldType name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
    <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
      <filter class="solr.WordDelimiterFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
      <filter class="solr.EnglishMinimalStemFilterFactory"/>
      <!-- this filter can remove any duplicate tokens that appear at the same position -
           sometimes possible with WordDelimiterFilter in conjuncton with stemming. -->
      <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
    </analyzer>
  </fieldType>

  <!-- Just like text_general except it reverses the characters of each token,
       to enable more efficient leading wildcard queries. -->
  <fieldType name="text_general_rev" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.ReversedWildcardFilterFactory" withOriginal="true" maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

  <fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true">
    <analyzer>
      <!-- KeywordTokenizer does no actual tokenizing, so the entire input string is preserved as a single token -->
      <tokenizer class="solr.KeywordTokenizerFactory"/>
      <!-- The LowerCase TokenFilter does what you expect, which can be when you want your sorting to be case insensitive -->
      <filter class="solr.LowerCaseFilterFactory" />
      <!-- The TrimFilter removes any leading or trailing whitespace -->
      <filter class="solr.TrimFilterFactory" />
      <filter class="solr.PatternReplaceFilterFactory" pattern="([^a-z])" replacement="" replace="all" />
    </analyzer>
  </fieldType>

  <fieldtype name="phonetic" stored="false" indexed="true" class="solr.TextField" >
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.DoubleMetaphoneFilterFactory" inject="false"/>
    </analyzer>
  </fieldtype>

  <fieldtype name="payloads" stored="false" indexed="true" class="solr.TextField" >
    <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="float"/>
    </analyzer>
  </fieldtype>

  <!-- lowercases the entire field value, keeping it as a single token. -->
  <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.KeywordTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory" />
    </analyzer>
  </fieldType>

  <fieldType name="descendent_path" class="solr.TextField">
    <analyzer type="index">
      <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/" />
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.KeywordTokenizerFactory" />
    </analyzer>
  </fieldType>

  <!-- Example of using PathHierarchyTokenizerFactory at query time, so queries for paths
       match documents at that path, or in ancestor paths -->
  <fieldType name="ancestor_path" class="solr.TextField">
    <analyzer type="index">
      <tokenizer class="solr.KeywordTokenizerFactory" />
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/" />
    </analyzer>
  </fieldType>

  <!-- since fields of this type are by default not stored or indexed, any data added to them will be ignored outright. -->
  <fieldtype name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" />

  <fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d"/>
  <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. -->
  <fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>
  <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType" geo="true" distErrPct="0.025" maxDistErr="0.000009" units="degrees" />
  <!-- Spatial rectangle (bounding box) field. It supports most spatial predicates, and has
       special relevancy modes: score=overlapRatio|area|area2D (local-param to the query).
       DocValues is required for relevancy. -->
  <fieldType name="bbox" class="solr.BBoxField" geo="true" units="degrees" numberType="_bbox_coord" />
  <fieldType name="_bbox_coord" class="solr.TrieDoubleField" precisionStep="8" docValues="true" stored="false"/>
  <fieldType name="currency" class="solr.CurrencyField" precisionStep="8" defaultCurrency="USD" currencyConfig="currency.xml" />
</schema>
```

Morphline file:

```
SOLR_LOCATOR : {
  # Name of solr collection
  collection : party_name
  # ZooKeeper ensemble
  zkHost : "dwh-mst-dev02.stor.nccourts.org:2181/solr"
  # The maximum number of documents to send to Solr per network batch (throughput knob)
  # batchSize : 100
}

sanitizeUnknownSolrFields {
  solrLocator : ${SOLR_LOCATOR}
}

morphlines : [
  {
    id : morphline1
    importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
    commands : [
      {
        readCSV {
          separator : ","
          columns : [id,county,year,court_type,seq_num,party_role,party_num,party_status,biz_name,prefix,last_name,first_name,middle_name,suffix,in_regards_to,case_status,row_of_origin]
          ignoreFirstLine : true
          trim : true
          charset : UTF-8
        }
      }
      { logDebug { format : "output record: {}", args : ["@{}"] } }
      # load the record into a Solr server or MapReduce Reducer.
      { loadSolr { solrLocator : ${SOLR_LOCATOR} } }
    ]
  }
]
```

Small csv:

```
id,county,year,court_type,seq_num,party_role,party_num,party_status,biz_name,prefix,last_name,first_name,middle_name,suffix,in_regards_to,case_status,row_of_origin
1989-04-11 05:24:16.910647,750,1989,CVM,653,P,1,DISPOSED,BALFOUR GULF,null,null,null,null,null,null,null,T48
2002-02-08 11:42:52.758392,910,2001,CR,119164,P,1,DISPOSED,NC STATE OF,null,null,null,null,null,null,null,T48
1991-09-20 04:21:23.013509,420,1991,CVM,1324,D,1,DISPOSED,null,null,COXUM,BILLY,RAY,null,null,null,T48
2000-03-30 12:19:33.110602,790,2000,CVD,851,P,1,DISPOSED,ROWAN CO DEPT OF SOCIAL SERVICES OBO,null,null,null,null,null,null,null,T48
2016-02-05 16:41:24.154447,100,2016,E,241,E,1,DISPOSED,null,null,MCCULLAGH,SARAH,MARGARET,null,null,null,T48
1993-03-25 23:06:40.379315,520,1993,CVM,321,P,1,DISPOSED,null,null,MARKS,JEFF,null,null,null,null,T48
1997-03-12 08:54:26.444068,250,1989,CRS,7429,P,1,DISPOSED,NC STATE OF,null,null,null,null,null,null,null,T48
2008-12-05 16:01:47.230841,870,2008,CR,164,D,1,DISPOSED,null,null,BURNETTE,NATHAN,BROOKS,null,null,null,T48
1999-06-15 08:37:54.195413,280,1999,CR,2343,P,1,DISPOSED,NC STATE OF,null,null,null,null,null,null,null,T48
1995-10-18 13:23:14.241761,630,1995,CVM,5599,P,1,DISPOSED,TIM'S AUTO SALES,null,null,null,null,null,null,null,T48
1980-10-27 07:48:24.250937,030,1980,CVD,216,P,1,DISPOSED,null,null,HORNE,JENNY,G,null,null,null,T48
1999-09-13 16:57:51.275323,220,1999,M,248,D,1,DISPOSED,null,null,JACKSON,WENDELL,ANTHONY,null,null,null,T48
```

The command used:

```bash
hadoop --config /etc/hadoop/conf.cloudera.yarn jar \
  /opt/cloudera/parcels/CDH/lib/solr/contrib/mr/search-mr-*-job.jar \
  org.apache.solr.hadoop.MapReduceIndexerTool \
  -D 'mapred.child.java.opts=-Xmx500m' \
  --log4j ~/search/log4j.properties \
  --morphline-file ~/search/readCSV.conf \
  --output-dir hdfs://dwh-mst-dev02.stor.nccourts.org:8020/hdfs/data-lake/civil/solr/party-name \
  --verbose \
  --go-live \
  --zk-host dwh-mst-dev02.stor.nccourts.org:2181/solr \
  --collection party_name \
  hdfs://dwh-mst-dev02.stor.nccourts.org:8020/hdfs/data-lake/test/party_search
```

Thanks for the help.
04-07-2017 06:23 AM
I double-checked my schema; it does not contain 'file_length'. I also added the statement below to my morphline script:

```
sanitizeUnknownSolrFields {
  solrLocator : ${SOLR_LOCATOR}
}
```

I still receive the same error. It is very frustrating; what I am doing should be pretty common and has surely been solved by many before, so I am not sure what is happening. I was able to use the same schema successfully with Solr 6.2 on my Windows machine. Help is appreciated.
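For comparison, a minimal sketch of where sanitizeUnknownSolrFields is conventionally placed: inside the morphline's commands list, before loadSolr, rather than at the top level of the file. The collection and zkHost values are carried over from the config posted in this thread; the readCSV body is elided:

```
SOLR_LOCATOR : {
  collection : party_name
  zkHost : "dwh-mst-dev02.stor.nccourts.org:2181/solr"
}

morphlines : [
  {
    id : morphline1
    importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
    commands : [
      { readCSV { ... } }  # separator/columns/etc. as in the original config
      # drop any record fields that are not declared in schema.xml
      { sanitizeUnknownSolrFields { solrLocator : ${SOLR_LOCATOR} } }
      { loadSolr { solrLocator : ${SOLR_LOCATOR} } }
    ]
  }
]
```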
04-06-2017 11:57 AM
I am running on CDH 5.10. I am trying to index some CSV files that reside on HDFS. I am using the MapReduceIndexerTool and following the tutorial "MapReduce Batch Indexing with Cloudera Search". As advised by an earlier post, I created a small subset of my data (only 500 records), and, also as advised, I used the --dry-run option to make sure all is OK before the actual indexing takes place. My dry run completes successfully:

```
5065 [main] INFO org.apache.solr.hadoop.MapReduceIndexerTool - Done. Indexing 1 files in dryrun mode took 0.21276237 secs
5065 [main] INFO org.apache.solr.hadoop.MapReduceIndexerTool - Success. Done. Program took 5.0973063 secs. Goodbye.
```

Now when I switch to --go-live, I get the following exception (the identical stack trace appears three times in the job output; it is shown once here):

```
1217 [main] INFO org.apache.solr.hadoop.MapReduceIndexerTool - Indexing 1 files using 1 real mappers into 1 reducers
Error: java.io.IOException: Batch Write Failure
    at org.apache.solr.hadoop.BatchWriter.throwIf(BatchWriter.java:239)
    at org.apache.solr.hadoop.BatchWriter.queueBatch(BatchWriter.java:181)
    at org.apache.solr.hadoop.SolrRecordWriter.close(SolrRecordWriter.java:275)
    at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:550)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:629)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.solr.common.SolrException: ERROR: [doc=1980-10-27 07:48:24.250937] unknown field 'file_length'
    at org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:185)
    at org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:78)
    at org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:238)
    at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:164)
    at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:940)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1095)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:701)
    at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:247)
    at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:174)
    at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:99)
    at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:2135)
    at org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:150)
    at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
    at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
    at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)
    at org.apache.solr.hadoop.BatchWriter.runUpdate(BatchWriter.java:135)
    at org.apache.solr.hadoop.BatchWriter$Batch.run(BatchWriter.java:90)
    at org.apache.solr.hadoop.BatchWriter.queueBatch(BatchWriter.java:180)
    ... 9 more
46523 [main] ERROR org.apache.solr.hadoop.MapReduceIndexerTool - Job failed! jobName: org.apache.solr.hadoop.MapReduceIndexerTool/MorphlineMapper, jobId: job_1491337377676_0015
```

Help is appreciated.
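One way to double-check what the live collection actually contains is the Schema REST API; a hedged sketch follows (the host and port are assumptions, and any Solr node serving the collection should work):

```bash
# List the fields the running party_name collection really has,
# and check whether file_length is among them.
curl "http://dwh-mst-dev02.stor.nccourts.org:8983/solr/party_name/schema/fields" | grep -i file_length
```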
Labels: Cloudera Search
04-06-2017 11:18 AM
I am trying to index some data using SolrJ. I have a very simple SolrJ program:

```java
public static void main(final String[] args) throws SolrServerException, IOException {
    final String zkHostString = "dwh-mst-dev01.stor.nccourts.org:2181/solr,dwh-mst-dev02.stor.nccourts.org:2181/solr,dwh-mst-dev03.stor.nccourts.org:2181/solr";
    final CloudSolrServer solr = new CloudSolrServer(zkHostString);

    final UpdateRequest request = new UpdateRequest();
    request.setAction(UpdateRequest.ACTION.COMMIT, true, true);
    request.setParam("collection", "party_name");

    final SolrInputDocument doc = new SolrInputDocument();
    final List<String> records = SolrJPopulater.loadSampleData();
    for (final String record : records) {
        final String[] fields = record.split(",");
        doc.addField("id", fields[0]);
        doc.addField("county", fields[1]);
        doc.addField("year", Integer.parseInt(fields[2]));
        doc.addField("court_type", fields[3]);
        doc.addField("seq_num", Integer.parseInt(fields[4]));
        doc.addField("party_role", fields[5]);
        doc.addField("party_num", Integer.parseInt(fields[6]));
        doc.addField("party_status", fields[7]);
        doc.addField("biz_name", fields[8]);
        doc.addField("prefix", fields[9]);
        doc.addField("last_name", fields[10]);
        doc.addField("first_name", fields[11]);
        doc.addField("middle_name", fields[12]);
        doc.addField("suffix", fields[13]);
        doc.addField("in_regards_to", fields[14]);
        doc.addField("case_status", fields[15]);
        doc.addField("row_of_origin", fields[16]);
        final UpdateResponse response = solr.add(doc);
        System.out.println("status code=" + response.getStatus());
    }
    solr.commit();
}
```

The error is happening at this line in CloudSolrServer.connect():

```java
if (zk != null) zk.close();
-->> throw new ZooKeeperException(SolrException.ErrorCode.SERVER_ERROR, "", e);
```

Below is the full trace. Thanks.

```
2017-04-06 13:33:19,057 INFO main.logEnv - Client environment:user.dir=C:\cygwin\home\iapima\git\aoc-data-lake-hadoop
2017-04-06 13:33:19,059 INFO main.<init> - Initiating client connection, connectString=dwh-mst-dev01.stor.nccourts.org:2181/solr,dwh-mst-dev02.stor.nccourts.org:2181/solr,dwh-mst-dev03.stor.nccourts.org:2181/solr sessionTimeout=10000 watcher=org.apache.solr.common.cloud.ConnectionManager@18ef96
2017-04-06 13:33:19,067 DEBUG main.<clinit> - zookeeper.disableAutoWatchReset is false
2017-04-06 13:33:19,178 INFO main.waitForConnected - Waiting for client to connect to ZooKeeper
2017-04-06 13:33:19,183 INFO main-SendThread(dwh-mst-dev01.stor.nccourts.org:2181).logStartConnect - Opening socket connection to server dwh-mst-dev01.stor.nccourts.org/10.91.62.104:2181. Will not attempt to authenticate using SASL (unknown error)
2017-04-06 13:33:19,185 INFO main-SendThread(dwh-mst-dev01.stor.nccourts.org:2181).primeConnection - Socket connection established to dwh-mst-dev01.stor.nccourts.org/10.91.62.104:2181, initiating session
2017-04-06 13:33:19,187 DEBUG main-SendThread(dwh-mst-dev01.stor.nccourts.org:2181).primeConnection - Session establishment request sent on dwh-mst-dev01.stor.nccourts.org/10.91.62.104:2181
2017-04-06 13:33:19,203 INFO main-SendThread(dwh-mst-dev01.stor.nccourts.org:2181).onConnected - Session establishment complete on server dwh-mst-dev01.stor.nccourts.org/10.91.62.104:2181, sessionid = 0x15ad7740271bb85, negotiated timeout = 10000
2017-04-06 13:33:19,205 INFO main-EventThread.process - Watcher org.apache.solr.common.cloud.ConnectionManager@18ef96 name:ZooKeeperConnection Watcher:dwh-mst-dev01.stor.nccourts.org:2181/solr,dwh-mst-dev02.stor.nccourts.org:2181/solr,dwh-mst-dev03.stor.nccourts.org:2181/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
2017-04-06 13:33:19,206 INFO main.waitForConnected - Client is connected to ZooKeeper
2017-04-06 13:33:19,206 INFO main.createZkACLProvider - Using default ZkACLProvider
2017-04-06 13:33:19,220 DEBUG main-SendThread(dwh-mst-dev01.stor.nccourts.org:2181).readResponse - Reading reply sessionid:0x15ad7740271bb85, packet:: clientPath:null serverPath:null finished:false header:: 1,3 replyHeader:: 1,55835606662,-101 request:: '/solr%2Cdwh-mst-dev02.stor.nccourts.org:2181/solr%2Cdwh-mst-dev03.stor.nccourts.org:2181/solr/clusterstate.json,F response::
2017-04-06 13:33:19,223 INFO main.makePath - makePath: /clusterstate.json
2017-04-06 13:33:19,226 DEBUG main-SendThread(dwh-mst-dev01.stor.nccourts.org:2181).readResponse - Reading reply sessionid:0x15ad7740271bb85, packet:: clientPath:null serverPath:null finished:false header:: 2,3 replyHeader:: 2,55835606662,-101 request:: '/solr%2Cdwh-mst-dev02.stor.nccourts.org:2181/solr%2Cdwh-mst-dev03.stor.nccourts.org:2181/solr/clusterstate.json,F response::
2017-04-06 13:33:19,349 DEBUG main-SendThread(dwh-mst-dev01.stor.nccourts.org:2181).readResponse - Reading reply sessionid:0x15ad7740271bb85, packet:: clientPath:null serverPath:null finished:false header:: 3,1 replyHeader:: 3,55835606663,-101 request:: '/solr%2Cdwh-mst-dev02.stor.nccourts.org:2181/solr%2Cdwh-mst-dev03.stor.nccourts.org:2181/solr/clusterstate.json,,v{s{31,s{'world,'anyone}}},0 response::
2017-04-06 13:33:19,353 DEBUG main.close - Closing session: 0x15ad7740271bb85
2017-04-06 13:33:19,354 DEBUG main.close - Closing client for session: 0x15ad7740271bb85
2017-04-06 13:33:19,362 DEBUG main-SendThread(dwh-mst-dev01.stor.nccourts.org:2181).readResponse - Reading reply sessionid:0x15ad7740271bb85, packet:: clientPath:null serverPath:null finished:false header:: 4,-11 replyHeader:: 4,55835606664,0 request:: null response:: null
2017-04-06 13:33:19,363 DEBUG main.disconnect - Disconnecting client for session: 0x15ad7740271bb85
2017-04-06 13:33:19,363 DEBUG main-SendThread(dwh-mst-dev01.stor.nccourts.org:2181).run - An exception was thrown while closing send thread for session 0x15ad7740271bb85 : Unable to read additional data from server sessionid 0x15ad7740271bb85, likely server has closed socket
2017-04-06 13:33:19,363 INFO main-EventThread.run - EventThread shut down
2017-04-06 13:33:19,363 INFO main.close - Session: 0x15ad7740271bb85 closed
Exception in thread "main" org.apache.solr.common.cloud.ZooKeeperException:
    at org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:270)
    at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:548)
    at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
    at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:116)
    at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:102)
    at org.nccourts.hadoop.solr.SolrJPopulater.main(SolrJPopulater.java:52)
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /clusterstate.json
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
    at org.apache.solr.common.cloud.SolrZkClient$10.execute(SolrZkClient.java:507)
    at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
    at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:504)
    at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:461)
    at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:448)
    at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:435)
    at org.apache.solr.common.cloud.ZkCmdExecutor.ensureExists(ZkCmdExecutor.java:94)
    at org.apache.solr.common.cloud.ZkCmdExecutor.ensureExists(ZkCmdExecutor.java:84)
    at org.apache.solr.common.cloud.ZkStateReader.createClusterStateWatchersAndUpdate(ZkStateReader.java:295)
    at org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:261)
    ... 5 more
```
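For what it's worth, a ZooKeeper connect string conventionally carries the chroot once, after the last host, not after every host; the NoNode path in the trace suggests the extra "/solr" suffixes were folded into the lookup path. A hedged sketch of that form, using the host names from the post (everything else is illustrative):

```java
// Chroot (/solr) appears once, at the end of the ensemble list.
final String zkHostString =
    "dwh-mst-dev01.stor.nccourts.org:2181,"
  + "dwh-mst-dev02.stor.nccourts.org:2181,"
  + "dwh-mst-dev03.stor.nccourts.org:2181/solr";
final CloudSolrServer solr = new CloudSolrServer(zkHostString);
solr.setDefaultCollection("party_name");
```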
Labels: Cloudera Search
04-03-2017 07:53 AM
Your comment gave me the clue: when I generated the script, I missed the clause that follows ROW FORMAT DELIMITED, namely FIELDS TERMINATED BY ','. So the correct create statement is:

```sql
CREATE EXTERNAL TABLE IF NOT EXISTS ccce_apl (
  APL_LNK INT,
  UPDT_DTTM CHAR(26),
  UPDT_USER CHAR(8),
  RLS_ORDR_MOD_CD CHAR(12),
  RLS_ORDR_MOD_TXT VARCHAR(255)
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/hdfs/data-lake/master/criminal/csv/ccce_apl';
```

Thanks.
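A quick spot-check that the delimiter fix took effect, assuming the table was recreated with the statement above:

```sql
-- Columns should now show real values instead of all NULLs.
SELECT apl_lnk, updt_dttm, updt_user FROM ccce_apl LIMIT 5;
```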
03-31-2017 03:01 PM
I Sqooped several tables from DB2 to HDFS. The data landed fine. I created Hive tables using the following format (an example table create follows):

```sql
-- CCCE_APL
CREATE EXTERNAL TABLE IF NOT EXISTS ccce_apl_csv (
  APL_LNK INT,
  UPDT_DTTM CHAR(26),
  UPDT_USER CHAR(8),
  RLS_ORDR_MOD_CD CHAR(12),
  RLS_ORDR_MOD_TXT VARCHAR(255)
)
ROW FORMAT DELIMITED
STORED AS TEXTFILE
LOCATION '/hdfs/data-lake/master/criminal/csv/ccce_apl';
```

The table is successfully created. Now when I execute `select * from ccce_apl_csv` in Hue, I only see NULLs in all the columns (the result grid shows the crct_cs_csv.* columns):

```
    cs_lnk  yr    tp    seq   cnty_geo_lnk  updt_user  updt_dttm  num_gnrtn_cd  inttng_doc_lnk
1   NULL    NULL  NULL  NULL  NULL          NULL       NULL       NULL          NULL
2   NULL    NULL  NULL  NULL  NULL          NULL       NULL       NULL          NULL
3   NULL    NULL  NULL  NULL  NULL          NULL       NULL       NULL          NULL
4   NULL    NULL  NULL  NULL  NULL          NULL       NULL       NULL          NULL
5   NULL    NULL  NULL  NULL  NULL          NULL       NULL       NULL          NULL
6   NULL    NULL  NULL  NULL  NULL          NULL       NULL       NULL          NULL
```

Any ideas what went wrong? Again, the data on HDFS looks fine; below is sample data:

```
1,2014-06-15 12:49:50.06,ICPSRK ,RMN SM ,\N
2,2014-06-15 15:15:42.424,ICPSRK ,RMN SM ,\N
3,2014-06-16 18:26:29.515,ICPSRK ,RMN SM ,\N
4,2014-06-17 08:20:52.825,C79AJL ,\N,\N
5,2014-06-17 08:56:04.507,C79TEW ,RMN SM ,\N
6,2014-06-17 09:00:02.569,C79TEW ,RMN SM ,\N
7,2014-06-17 10:42:15.601,C79EDN ,OTHR ,\N
8,2014-06-17 11:12:56.218,C79EDN ,OTHR ,SEE FILE #13CR56266
9,2014-06-17 12:43:40.972,C79WBS ,OTHR ,OTHER-$5000 SEC BOND;NUPILLCS/ALC;
22,2014-06-19 14:42:22.799,C79AJL ,RMN SM ,\N
```
Labels: Apache Hive
03-30-2017 06:51 AM
Is there an alternative way to index HDFS files for use by Solr, other than the MapReduceIndexerTool? For example, a plain MapReduce Java program. Any samples that can be shared are welcome.
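As a sketch of the kind of alternative being asked about, here is a plain SolrJ client that streams an HDFS file line by line into a collection, with no MapReduce involved. The class name, ZooKeeper address, and column handling are illustrative assumptions; batching and error handling are omitted for brevity:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

/** Illustrative sketch: index an HDFS CSV into Solr without MapReduceIndexerTool. */
public class HdfsSolrIndexer {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        CloudSolrServer solr = new CloudSolrServer("zkhost:2181/solr"); // assumed ZK ensemble
        solr.setDefaultCollection("party_name");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(fs.open(new Path(args[0]))))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.split(",");
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", f[0]);   // column layout assumed; adapt to the real CSV
                doc.addField("county", f[1]);
                solr.add(doc);              // production code would batch adds
            }
        }
        solr.commit();
        solr.shutdown();
    }
}
```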
03-29-2017 04:50 AM
Frankly, I am at a loss. There was some mismatch between my morphline fields and the schema, but I fixed that, and there are no columns unaccounted for. Here is my schema, which I confirmed by pulling it from the Solr web interface:

```xml
<uniqueKey>id</uniqueKey>
<field name="county" type="text_general" indexed="false" stored="true"/>
<field name="year" type="int" indexed="false" stored="true"/>
<field name="court_type" type="text_general" indexed="false" stored="true"/>
<field name="seq_num" type="int" indexed="false" stored="true"/>
<field name="party_role" type="text_general" indexed="false" stored="true"/>
<field name="party_num" type="int" indexed="false" stored="true"/>
<field name="party_status" type="text_general" indexed="false" stored="true"/>
<field name="biz_name" type="text_general" indexed="true" stored="true"/>
<field name="prefix" type="text_general" indexed="false" stored="true"/>
<field name="last_name" type="text_general" indexed="true" stored="true"/>
<field name="first_name" type="text_general" indexed="true" stored="true"/>
<field name="middle_name" type="text_general" indexed="true" stored="true"/>
<field name="suffix" type="text_general" indexed="false" stored="true"/>
<field name="in_regards_to" type="string" indexed="false" stored="true"/>
<field name="case_status" type="string" indexed="false" stored="true"/>
<field name="row_of_origin" type="string" indexed="false" stored="true"/> And here is the fields as defined in readCSV.conf: columns : [id,county,year,court_type,seq_num,party_role,party_num,party_status,biz_name,prefix,last_name,first_name,middle_name,suffix,in_regards_to,case_status,row_of_origin] They are identical. Still same exception. Any other advise is appreciated.
03-28-2017 09:15 AM
The key being a string is not an issue, as there will be no searches based on the timestamp. Is there a way in the morphline to specify that the field is indeed a string and not a timestamp? Below is the full stack trace:

```
Error: java.io.IOException: Batch Write Failure
    at org.apache.solr.hadoop.BatchWriter.throwIf(BatchWriter.java:239)
    at org.apache.solr.hadoop.BatchWriter.queueBatch(BatchWriter.java:181)
    at org.apache.solr.hadoop.SolrRecordWriter.close(SolrRecordWriter.java:275)
    at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:550)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:629)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.solr.common.SolrException: ERROR: [doc=1966-05-19 10:36:59.365118] unknown field 'file_length'
    at org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:185)
    at org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:78)
    at org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:238)
    at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:164)
    at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:940)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1095)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:701)
    at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:247)
    at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:174)
    at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:99)
    at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:2135)
    at org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:150)
    at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
    at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
    at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)
    at org.apache.solr.hadoop.BatchWriter.runUpdate(BatchWriter.java:135)
    at org.apache.solr.hadoop.BatchWriter$Batch.run(BatchWriter.java:90)
    at org.apache.solr.hadoop.BatchWriter.queueBatch(BatchWriter.java:180)
    ... 9 more
98871 [main] ERROR org.apache.solr.hadoop.MapReduceIndexerTool - Job failed! jobName: org.apache.solr.hadoop.MapReduceIndexerTool/MorphlineMapper, jobId: job_1489673434857_0012
```