Member since: 01-18-2016
Posts: 164
Kudos Received: 32
Solutions: 19
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1401 | 04-06-2018 09:24 PM
 | 1426 | 05-02-2017 10:43 PM
 | 3913 | 01-24-2017 08:21 PM
 | 23908 | 12-05-2016 10:35 PM
 | 6576 | 11-30-2016 10:33 PM
11-30-2016
06:53 PM
I'm glad it worked! You might want to post something on HCC about your Hive issue. The way to figure out what is wrong is to look carefully at the logs. You can see some logs in Ambari from the restart, but also use Ambari to figure out which sub-component is not running (or whether all of them are down). Then look on that host under /var/log for the component's logs, for example /var/log/hive/ or /var/log/hive-hcatalog/. Do an ls -ltr on the directory and look at the most recent files (both those ending in .log and .out). Look carefully near the bottom of those files for errors and clues. Good luck.
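A minimal sketch of that check (the log file name is an assumption; use whatever ls -ltr shows as newest):

# list the log directory with the newest files last
ls -ltr /var/log/hive/
# scan the tail of the newest log for errors (hiveserver2.log is illustrative)
tail -n 200 /var/log/hive/hiveserver2.log | grep -iE "error|exception"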
11-30-2016
06:04 PM
@Raf Mohammed - Click on the "Cloud" icon in the Admin console; your collection should no longer appear in the graph (there will be no graph at all if no collections exist). If that's the case, then yes, you can recreate the collection now, and it will create the directories it needs.
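If you would rather check from the command line, the Collections API can report the cluster state (host and port here are assumptions):

http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS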
11-30-2016
03:55 PM
1 Kudo
@Raf Mohammed I recreated your error and tested it, although I was not using HDFS; that should not make any difference. To delete the corrupt collection you can follow these steps (a rough sketch of the commands follows below):
1. Shut down Solr.
2. Delete all of the shard directories for the collection (e.g. hdfs dfs -rm -r /solr/tweets/core_node*), or just move them to another directory outside of the "dataDir" directory.
3. Start Solr.
4. Issue the collection DELETE command (e.g. http://localhost:8984/solr/admin/collections?action=DELETE&name=collection2).
5. Recreate the collection.
I recommend shutting down Solr and HDFS gracefully to avoid index corruption. Also, you can create backup copies if you want to recover in the future. Having replicas will also probably help with this situation.
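A rough sketch of steps 2-5, assuming the collection is named "tweets" and Solr listens on port 8984 (the collection name, port, and CREATE parameters are assumptions):

# move the shard data aside instead of deleting it outright
hdfs dfs -mkdir /solr/TODELETE
hdfs dfs -mv /solr/tweets/core_node* /solr/TODELETE/
# after restarting Solr, drop and recreate the collection
curl "http://localhost:8984/solr/admin/collections?action=DELETE&name=tweets"
curl "http://localhost:8984/solr/admin/collections?action=CREATE&name=tweets&numShards=2&replicationFactor=2&collection.configName=tweets"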
11-29-2016
10:30 PM
@Raf Mohammed - it looks like something is badly wrong with your HDFS. As for the deleted collection, I don't recall the exact behavior, but you should be able to look at the list of collections in Solr and it should be gone. Also, if you try to recreate the collection, it should tell you that it already exists if it was not deleted. I don't recall whether the data is removed when the collection is deleted, but I think it is.
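One way to confirm the collection is gone (host and port are assumptions):

http://localhost:8983/solr/admin/collections?action=LIST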
11-29-2016
09:48 PM
@Raf Mohammed - Also, if you really don't care about the data, you can delete the collection and then recreate it: http://localhost:8983/solr/admin/collections?action=DELETE&name=tweets
11-29-2016
09:44 PM
@Raf Mohammed - Assuming your index is stored in HDFS rather than the local file system where Solr is running:
hdfs dfs -mkdir /solr/DELETEME_core_node1
hdfs dfs -mv /solr/tweets/core_node1/data/tlog/tlog.* /solr/DELETEME_core_node1
When you're ready to delete the files, run this command:
hdfs dfs -rm -r /solr/DELETEME_core_node1
11-29-2016
08:19 PM
@Raf Mohammed - This is a transaction log file: /solr/tweets/core_node1/data/tlog/tlog.0000000000000000338, so just delete them. (I always prefer to move files I'm going to delete into another directory called "TODELETE", in case I actually need them; once things look good, I delete them.) You may need to restart Solr if you delete these files out from under it. A manual commit looks like this, assuming your collection is named "tweets": http://localhost:8983/solr/tweets/update?commit=true
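The same commit can be issued from a shell (host and collection name taken from above):

curl "http://localhost:8983/solr/tweets/update?commit=true"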
11-28-2016
06:26 PM
Note that the above deletes older files based on file modification time, not the timestamp in the filename. I did use a filename containing a timestamp, which probably makes the example confusing; the command can be used with any kind of file, such as keeping the last 5 copies of your backup files. Also, if you use logrotate (e.g. where log4j rolling files is not an option), you can use the maxage option, which also uses modification time. From the logrotate man page: maxage count
Remove rotated logs older than <count> days. The age is only checked if the logfile is to be rotated. The files are mailed to the configured address if maillast and mail are configured.
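A minimal logrotate sketch using maxage (the path and counts are illustrative):

/var/log/myapp/*.log {
    daily
    rotate 5
    maxage 30
    compress
    missingok
}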
11-28-2016
06:13 PM
1 Kudo
@Avijeet Dash, The suggestion from Sunile is great. But where you can't do that, here is a solution. If you need to manually delete all but the last X files matching a certain pattern (*.zip, files*.log, etc.), you can run something like this command, which finds all but the most recent 5 matching files:
find MY_LOG_DIR -type f -name "FILE_PATTERN" -printf "%T+\t%p\n" | sort | awk '{print $2}' | head -n -5 | xargs -i CMD_FOR_EACH_FILE {}
Replace the capitalized placeholders (MY_LOG_DIR, FILE_PATTERN, CMD_FOR_EACH_FILE) as needed. For example, the following command finds all but the most recent 5 files matching the pattern *.log.20##-##-## and deletes them. Since this is a delete command, before running something so drastic, test first by replacing the "rm" with "ls -l", or do a "mv" instead. Test, test, test.
find /var/log/hive -type f -name "*.log.20[0-9][0-9]-[0-2][0-9]-[0-9][0-9]" -printf "%T+\t%p\n" | sort | awk '{print $2}' | head -n -5 | xargs -i rm {}
There are always many ways to solve a problem and I'm sure there is a more elegant solution.
11-28-2016
05:41 PM
1 Kudo
@Bilal Arshad All of the files in ATLAS_HOME/conf/solr are probably needed in a directory on the Solr host, so that when you run the solr create command it will upload those files into ZooKeeper for Solr to access for that collection (no matter which host Solr is running on). The files in this directory are as follows (from the link you provided):
|- solr
   |- currency.xml
   |- lang
   |- stopwords_en.txt
   |- protowords.txt
   |- schema.xml
   |- solrconfig.xml
   |- stopwords.txt
   |- synonyms.txt
solrconfig.xml has configuration parameters such as which REST endpoints are available for the collection. schema.xml describes the fields and how they are handled (indexed, stored, and so forth). The other files are used by schema.xml (assuming they are used at all), as they should be listed/referenced there. So, for this collection, I assume the following are referenced from the schema:
synonyms.txt -- words that can be searched and considered equivalent (e.g. car and automobile)
stopwords.txt -- highly common words that will not be indexed, such as "the" and "a"
protowords.txt -- words that should not be stemmed (broken into equivalent root words)
stopwords_en.txt -- same as stopwords.txt above, but specific to English
currency.xml -- money exchange rates
Even if not all of these files are used, including them shouldn't hurt anything.
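A sketch of the create step, assuming the files were copied to /tmp/atlas-solr-conf on the Solr host and the collection is named vertex_index (both names are assumptions):

# upload the config directory to ZooKeeper and create the collection in one step
bin/solr create -c vertex_index -d /tmp/atlas-solr-conf -shards 2 -replicationFactor 2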