Member since: 01-19-2017
Posts: 75
Kudos Received: 4
Solutions: 8
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 5413 | 02-25-2018 09:38 PM |
 | 6219 | 10-25-2017 06:38 AM |
 | 2749 | 09-15-2017 02:54 AM |
 | 3360 | 09-15-2017 02:51 AM |
 | 2675 | 08-01-2017 12:35 AM |
12-22-2017
08:54 AM
Did you find a solution to this issue? We are facing the same problem and are planning to restart the cluster. Will a restart solve the problem, or will it create a problem with the active shard? We are afraid of restarting the cluster. Please help.
12-21-2017
03:42 AM
Hi,
Please help us; this is a very critical issue: one of the collection's replicas is down.
When we looked into the log trace, we found the error below.
WARN org.apache.solr.update.UpdateLog: Log replay finished. recoveryInfo=RecoveryInfo{adds=0 deletes=0 deleteByQuery=0 errors=1 positionOfStart=3411522}
DEBUG org.apache.solr.cloud.RecoveryStrategy: eeeeee:8983_solr replayed 4910970
INFO org.apache.solr.cloud.RecoveryStrategy: Replication Recovery was successful. core=xxxxxx_collection_shard1_replica1
INFO org.apache.solr.cloud.ZkController: publishing core=xxxxx_collection_shard1_replica1 state=active collection=xxxx_collection
DEBUG org.apache.zookeeper.ClientCnxn: Reading reply sessionid:0x15fda16a22c5a7d, packet:: clientPath:null serverPath:null finished:false header:: 18701,4 replyHeader:: 18701,343598899621,0 request:: '/solr/collections/xxxxx_collection/leader_initiated_recovery/shard1/core_node1,F response:: #7ba2020227374617465223a22646f776e222ca2020226372656174656442794e6f64654e616d65223a22626463656430362e73672e6762732e70726f3a383938335f736f6c72227d,s{343598889249,343598899525,1513852552137,1513855791909,45,0,0,0,73,0,343598889249}
ERROR org.apache.solr.cloud.RecoveryStrategy: Could not publish as ACTIVE after succesful recovery
org.apache.solr.common.SolrException: Cannot publish state of core 'xxxx_collection_shard1_replica1' as active without recovering first!
    at org.apache.solr.cloud.ZkController.publish(ZkController.java:1093)
    at org.apache.solr.cloud.ZkController.publish(ZkController.java:1056)
    at org.apache.solr.cloud.ZkController.publish(ZkController.java:1052)
    at org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:496)
    at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:237)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
ERROR org.apache.solr.cloud.RecoveryStrategy: Recovery failed
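For anyone hitting the same state: the ZooKeeper reply in this trace reads back a leader_initiated_recovery (LIR) znode whose payload decodes to state "down", which is what blocks the publish-as-ACTIVE step. A hedged way to inspect that znode with the stock ZooKeeper CLI; the host and the redacted collection/shard names below are placeholders:

# zkCli.sh ships with ZooKeeper; point it at your ensemble.
zkCli.sh -server zkhost:2181
# Inside the shell, read the LIR znode for the affected replica:
get /solr/collections/xxxxx_collection/leader_initiated_recovery/shard1/core_node1
# A payload containing "state":"down" means the leader still marks this replica down.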
Labels:
- Apache Solr
- Apache Zookeeper
11-10-2017
05:54 AM
Since it is a production cluster, we can't do this operation. The last time we restarted it, it took almost 20 hours for the collection to become normal again. We have around 2.9 billion records, which is about 8 TB of data.
11-03-2017
10:05 AM
We have a 6-node cluster, with Solr installed on 4 of the machines. One of the collections holds 2 TB of data. We recently upgraded the cluster from 5.4.8 to 5.9.2. After the upgrade, the collection's shards went into the recovery state; the 2 leader shards recovered and became active, but the 2 replica shards ended in "recovery failed" after a day in the recovering state. How can we recover the two replica shards from the failed state? Thanks in advance, Ganesh
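Not from this thread, but one technique commonly tried for a replica stuck in failed recovery is asking the core to attempt recovery again through the CoreAdmin API's REQUESTRECOVERY action. A hedged sketch; the host, port, and core name are placeholders:

# Ask one specific core to re-enter recovery (CoreAdmin REQUESTRECOVERY action).
curl "http://solr-host:8983/solr/admin/cores?action=REQUESTRECOVERY&core=xxxx_collection_shard1_replica1"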
Labels:
- Apache Solr
11-02-2017
04:19 AM
I am running a Hive query which moves data from one table to another table.
First table: number of split files in HDFS --> 12 files.
Second table: number of split files in HDFS --> 17 files.
For the second table, each file is about 870 MB in size.
I have set these properties for the Hive import statement:
set mapreduce.input.fileinputformat.split.maxsize=858993459;
set mapreduce.input.fileinputformat.split.minsize=858993459;
When querying the second table, it takes 51 mappers and 211 reducers and occupies all of the YARN resources.
I want to restrict the number of mappers and reducers for the Hive query. Please help me solve it; a sketch of the relevant knobs follows.
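Not part of the original question, but as a starting point for readers: the mapper count follows the split size (bigger splits mean fewer mappers), and the reducer count can be capped directly or steered via bytes-per-reducer. A hedged sketch of the relevant session settings; the values and the table names target_table/source_table are illustrative placeholders, not tuned recommendations:

hive <<'EOF'
-- Bigger max split size => larger splits => fewer mappers.
SET mapreduce.input.fileinputformat.split.maxsize=2147483648;
SET mapreduce.input.fileinputformat.split.minsize=1073741824;
-- Hard cap on the reducer count, plus an upper bound Hive will not exceed.
SET mapreduce.job.reduces=20;
SET hive.exec.reducers.max=20;
-- Alternatively, raise bytes-per-reducer so Hive plans fewer reducers.
SET hive.exec.reducers.bytes.per.reducer=2147483648;
-- Placeholder statement standing in for the actual table-to-table move.
INSERT OVERWRITE TABLE target_table SELECT * FROM source_table;
EOF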
Labels:
- Apache Hive
- MapReduce
10-25-2017
06:38 AM
With the help of the link below, I solved the issue. http://blog.csdn.net/mtj66/article/details/52746066
10-25-2017
01:17 AM
Hi,
My Cloudera cluster is not Kerberos-configured, and when I try to create grants for a user I get the error below in HBase:

ERROR: DISABLED: Security features are not available

Here is some help for this command:
Grant users specific rights.
Syntax: grant <user>, <permissions> [, <@namespace> [, <table> [, <column family> [, <column qualifier>]]]]
permissions is zero or more letters from the set "RWXCA": READ('R'), WRITE('W'), EXEC('X'), CREATE('C'), ADMIN('A').
Note: Groups and users are granted access in the same way, but groups are prefixed with an '@' character. In the same way, tables and namespaces are specified, but namespaces are prefixed with an '@' character.
For example:
hbase> grant 'bobsmith', 'RWXCA'
hbase> grant '@admins', 'RWXCA'
hbase> grant 'bobsmith', 'RWXCA', '@ns1'
hbase> grant 'bobsmith', 'RW', 't1', 'f1', 'col1'
hbase> grant 'bobsmith', 'RW', 'ns1:t1', 'f1', 'col1'

Does grant work only in a Kerberos-authenticated environment? Does authorization work only after the Kerberos-based authentication mechanism is enabled?
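General background rather than an answer from the thread: the grant command needs the AccessController coprocessor, which HBase activates only when authorization is enabled; without it the shell reports "Security features are not available". Authorization can technically be switched on without Kerberos, but it gives no real protection because client identities are then unauthenticated. A hedged sketch of the hbase-site.xml entries involved (the property names are standard HBase settings, but verify them against your CDH release):

<property>
  <name>hbase.security.authorization</name>
  <value>true</value>
</property>
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>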
Labels:
- Apache HBase
09-17-2017
10:38 PM
I'd like to add a new field to an existing collection which has about 1 million documents. I don't want to lose the existing documents.

Sample existing document:
id : 123
field1 : 'sample1' ---------------- indexed
field2 : 'sample 2' ---------------- stored

Now I want to add a new field to schema.xml which has only the stored attribute and does not need to be indexed:

<field name="field3" type="string" indexed="false" stored="true" required="false" multiValued="false"/>

I know this can be achieved with the solrctl reload option. But my doubt is: after adding the new field, is it necessary to reindex all the documents? And what does reindexing mean? My understanding is that I would have to load the data into Solr again, and I don't have a backup of this data anywhere. Please help me with this. Thanks in advance.
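General background, not from this thread: adding a stored-only field to schema.xml does not by itself require reindexing; existing documents simply won't carry the field until they are updated. If values need to be filled in later, Solr's atomic updates can set the new field on individual documents without resending everything, provided the update log is enabled and the other fields are stored. A hedged sketch; the host, collection name, and values are placeholders:

# Set field3 on document id 123 via an atomic update (JSON "set" modifier).
# Assumes <updateLog/> in solrconfig.xml and stored="true" on the other fields.
curl "http://solr-host:8983/solr/xxxx_collection/update?commit=true" \
  -H "Content-Type: application/json" \
  -d '[{"id":"123","field3":{"set":"new stored value"}}]'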
Labels:
- Apache Solr
09-15-2017
02:54 AM
I solved this issue by updating the correct config files using the commands below:

solrctl --zk ${ZOOKEEPER_CONNECT} instancedir --update collection_name ${WORKINGDIR}/collectionIndex
solrctl --zk ${ZOOKEEPER_CONNECT} collection --reload collection_name
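For anyone reusing these commands, a hedged example of what the two environment variables might contain; the hostnames and path are placeholders, not values from the original post:

# Placeholder values; substitute your own ZooKeeper ensemble and local config dir.
export ZOOKEEPER_CONNECT=zk01.example.com:2181,zk02.example.com:2181/solr
export WORKINGDIR=/home/solr/configs
solrctl --zk ${ZOOKEEPER_CONNECT} instancedir --update collection_name ${WORKINGDIR}/collectionIndex
solrctl --zk ${ZOOKEEPER_CONNECT} collection --reload collection_name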
09-15-2017
02:51 AM
The errors occurred because of a mistake in the schema.xml file. After I corrected it, everything works fine.