Member since: 01-19-2017
Posts: 75
Kudos Received: 4
Solutions: 8
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5410 | 02-25-2018 09:38 PM
 | 6205 | 10-25-2017 06:38 AM
 | 2749 | 09-15-2017 02:54 AM
 | 3360 | 09-15-2017 02:51 AM
 | 2668 | 08-01-2017 12:35 AM
03-07-2022
01:37 AM
Hello @ganeshkumarj, thanks for using the Cloudera Community. Based on your post, you are migrating from Cloudera Search (CDH 5.9.3) to standalone Apache Solr 4.10.3. As your team noted, the error indicates that the manually copied index was written with a newer Lucene version than the target Solr expects [1]. Your team can confirm the Lucene version via the "solrconfig.xml" of the collection "sample_collection" on CDH. If matching the Lucene version isn't feasible, re-indexing is the only way forward.

That said, there are a few points on which our help in this post is limited: (I) CDH 5.9.3 has been End-of-Support for a long time, and internally we have a very limited setup for investigating your team's concerns further. (II) Your team is performing the migration onto standalone Apache Solr 4.10.3; Cloudera packages Solr as Search (in CDH) and Solr (in CDP), so we have limited input on any open-source deployment outside the Cloudera products.

Our team would be happy to assist your team with a migration from CDH 5.9.3 to CDP, if required. We have internally tested documentation for migrating from CDH Search to CDP Solr, and your team would receive support assistance for any issues as well. Regards, Smarak [1] https://lucene.apache.org/core/7_1_0/core/org/apache/lucene/index/IndexFormatTooNewException.html
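For reference, a minimal sketch of checking the configured Lucene version, assuming the collection config has first been fetched from ZooKeeper to a local directory (the path and collection name are illustrative):

```
# Hypothetical local path; on CDH the config can be downloaded from ZooKeeper
# first, e.g. with: solrctl instancedir --get sample_collection /tmp/sample_collection
grep luceneMatchVersion /tmp/sample_collection/conf/solrconfig.xml
```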
01-21-2019
10:29 PM
The problem is due to the Python version on your node: an incompatibility between the Python 3 version and the Python 2 version. The default Solr commands use Python 2, so here we need to remove the global Python environment variables rather than the Python 3-specific ones. Thanks & Regards, J.Ganesh Kumar.
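A minimal sketch of the idea, assuming the CDH Solr helper scripts pick up Python from the PATH and the generic environment variables (the variable names and paths below are illustrative and depend on how Python 3 was made the default on the node):

```
# Clear the global Python environment overrides so the Solr scripts fall back
# to the system Python 2, then retry the failing command.
unset PYTHONPATH PYTHONHOME
export PATH=/usr/bin:$PATH      # assumes /usr/bin/python is the Python 2 binary
solrctl collection --list       # any Solr command, re-run here to verify
```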
11-18-2018
04:48 PM
Have you found any way to run the YARN container as the user who launched it? I have also set these two and have all the nodes synced with LDAP, but it still runs as nobody, despite the fact that I can see it says the yarn user request is ...
10-13-2018
03:54 AM
1 Kudo
Hi Harsh, Thanks a lot for your support, really appreciated. I was able to make HBase stable by adding the line you mentioned, but one change was required: for -Dzookeeper.skipACL=yes we need to give "yes", not "true". It worked for me. Thanks for making my cluster happy. Regards, Ayush
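As an illustration, a minimal sketch of applying the flag, assuming it goes on the ZooKeeper server JVM options (in Cloudera Manager this would normally be set through the ZooKeeper Java configuration options safety valve; the env-file line below is for package installs):

```
# Append the system property to the ZooKeeper server JVM options, then restart
# ZooKeeper so HBase can access the znodes it was previously blocked from.
export JVMFLAGS="$JVMFLAGS -Dzookeeper.skipACL=yes"
```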
09-19-2018
10:32 PM
Hello sparamas, Below are the steps you must follow when submitting the Oozie job from the Oozie server, as sketched after this list:
1. Kinit with the principal and keytab [ kinit user_principle -k -t key_tab ]
2. Use the FQDN along with the Oozie server name in the command: oozie job -oozie http://machine_name@domain:11000/oozie -config xxxx -run
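A minimal sketch of that sequence (the principal, keytab path, Oozie host and properties file are placeholders):

```
# 1. Authenticate with the user's Kerberos principal and keytab.
kinit -kt /path/to/user.keytab user_principal
# 2. Submit the job, using the Oozie server's fully qualified domain name.
oozie job -oozie http://oozie-host.example.com:11000/oozie -config job.properties -run
```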
03-16-2018
10:52 PM
The command is only for non-Cloudera-Manager deployments, as the documentation notes: """ In non-managed deployments, you can start a Lily HBase Indexer Daemon manually on the local host with the following command: sudo service hbase-solr-indexer restart """ If you use Cloudera Manager, just add a new service of type "Key-Value Store Indexer" from the Clusters page (it is shown in the new service list), then proceed with configuring it from CM and starting it.
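In other words, the manual restart applies only to package-based (non-managed) installs:

```
# Non-Cloudera-Manager (package) deployments only; on a managed cluster, add the
# "Key-Value Store Indexer" service from the Clusters page in CM instead.
sudo service hbase-solr-indexer restart
```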
02-25-2018
09:38 PM
Hello pdvorak, Thanks for your reply. We solved this issue by uploading the collection's config files back to ZooKeeper using the update and reload options in SolrCloud. After executing those commands we restarted Solr; during the restart, Solr elects the collection leaders and places that information under /solr/collection in ZooKeeper. Just for my understanding, could you please answer the queries below? What do you mean by "zk data dir"? Does it create a snapshot of the config files in ZooKeeper? As far as I know, ZooKeeper holds only the config files rather than the data, so what does the "zk data dir" hold?
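For reference, a minimal sketch of the upload-and-reload step, assuming solrctl (Cloudera Search) was the tool used; the collection name and local config path are placeholders:

```
# Re-upload the collection's config to ZooKeeper, then reload the collection.
solrctl instancedir --update sample_collection /path/to/sample_collection/conf
solrctl collection --reload sample_collection
# A Solr restart afterwards triggers leader election, as described above.
```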
01-16-2018
01:33 AM
Hi, It's been a while! If I remember correctly, we did not find any solution back then (with CDH 5.3.0), at least other than recreating the collection and re-indexing the data. But after upgrading CDH to a version of Solr that supports the "ADDREPLICA" and "DELETEREPLICA" functions in the Collections API, you can add another replica and then delete the one that is down. Regards, mathieu
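As an illustration, the Collections API calls look roughly like this (host, collection, shard and replica names are placeholders):

```
# Add a new replica for the affected shard, then delete the replica that is down.
curl "http://solr-host:8983/solr/admin/collections?action=ADDREPLICA&collection=my_collection&shard=shard1"
curl "http://solr-host:8983/solr/admin/collections?action=DELETEREPLICA&collection=my_collection&shard=shard1&replica=core_node3"
```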
12-22-2017
08:59 AM
Planning to restart the cluster to make a down replica become active. Is that the right approach? Can anyone help me with this issue?
11-10-2017
05:54 AM
Since it is a production cluster, we can't do this operation. Last time when we restarted it, it took almost 20 hours for the collection to become normal. We have around 2.9 billion records, which amount to around 8 TB of data.