Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2001 | 06-15-2020 05:23 AM |
|  | 16477 | 01-30-2020 08:04 PM |
|  | 2149 | 07-07-2019 09:06 PM |
|  | 8349 | 01-27-2018 10:17 PM |
|  | 4739 | 12-31-2017 10:12 PM |
12-04-2017
04:41 AM
Hi @Michael Bronson, you can refer to the following link on how to automatically restart/recover HDP services via Ambari: How do I enable automatic restart / recovery of HDP services via Ambari? Hope this helps.
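For reference, here is a minimal sketch of enabling auto-recovery for a single component through the Ambari REST API; AMBARI_HOST, CLUSTER_NAME, the admin credentials and the DATANODE component name are placeholders, not values from this thread:

```bash
# Sketch only: replace AMBARI_HOST, CLUSTER_NAME, the credentials and the
# component name (DATANODE here) with your own values.
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/components?ServiceComponentInfo/component_name=DATANODE" \
  -d '{"ServiceComponentInfo": {"recovery_enabled": "true"}}'
```

Newer Ambari releases also expose an equivalent Service Auto Start setting in the web UI.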
12-03-2017
02:26 PM
We can abort the deletion by running the following command in ZooKeeper: rmr /admin/delete_topics/gtom.poli.pri.procis. Note that this will only prevent the deletion from proceeding if it has not already started. If anything has already been deleted, it's gone.
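For context, this is roughly how that command can be run with the zookeeper-shell.sh that ships with the HDP Kafka broker; zk-host:2181 is a placeholder for your ZooKeeper host and port:

```bash
# Sketch only: zk-host:2181 stands in for your ZooKeeper quorum address.
/usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh zk-host:2181 \
  rmr /admin/delete_topics/gtom.poli.pri.procis
```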
12-02-2017
07:00 PM
@Jordan - regarding what you said about purging the ZooKeeper records: do you mean deleting the topic by running rmr /brokers/topics/hgpo.llo.prmt.processed on the ZooKeeper server?
12-01-2017
06:19 PM
@Michael Bronson The steps you described look good. If you have Ambari running against this cluster, you should be able to find an option called "Maintenance Mode" in the menus. Here is some documentation about that: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-operations/content/setting_maintenance_mode.html It is not needed for replacing your disks, but it will avoid spurious alerts in your system.
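Maintenance mode can also be toggled per host through the Ambari REST API; below is a hedged sketch, where AMBARI_HOST, CLUSTER_NAME, WORKER_HOST and the admin credentials are placeholders:

```bash
# Sketch only: put the host whose disks are being replaced into maintenance mode.
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/WORKER_HOST" \
  -d '{"RequestInfo":{"context":"Turn on Maintenance Mode"},"Body":{"Host":{"maintenance_state":"ON"}}}'
```

Set maintenance_state back to "OFF" once the disk replacement is done.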
01-30-2018
03:49 PM
@Michael Bronson, I changed the script since it wasn't parsing the consumer (btw, great script - thanks):

topico="entrada"
# list all consumer groups registered in ZooKeeper
for i in $(/usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh sr-hadctl-xt01:2181 ls /consumers 2>&1 | grep consumer | cut -d "[" -f2 | cut -d "]" -f1 | tr ',' "\n")
do
  # keep only the consumer groups that have offsets stored for the topic
  /usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh sr-hadctl-xt01:2181 ls /consumers/$i/offsets 2>&1 | grep -q "$topico"
  if [ $? -eq 0 ]
  then
    echo $i
  fi
done
11-30-2017
10:39 AM
@Michael Bronson Yes, setting the parameter to 60 minutes will cause deleted content to be cleared from the trash after 60 minutes. Example: if we delete a file named "/home/admin/test.txt" at 1:00 PM, then with a 60-minute trash interval that file will be cleared from the .Trash directory at 2:00 PM. But if you want immediate deletion, the -skipTrash option is best, as it bypasses the trash entirely.
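To make the two options concrete, here is a small sketch; the file path is the example above and the property value is illustrative:

```bash
# fs.trash.interval (in minutes, set in core-site.xml) controls how long deleted
# files stay in .Trash; a value of 60 gives the 60-minute example above.

# Normal delete: the file is moved to the user's .Trash and expires after the interval.
hdfs dfs -rm /home/admin/test.txt

# Immediate delete: -skipTrash bypasses the trash entirely.
hdfs dfs -rm -skipTrash /home/admin/test.txt
```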
11-26-2017
01:08 PM
From the article "How to identify what is consuming space in HDFS" (link: https://community.hortonworks.com/articles/16846/how-to-identify-what-is-consuming-space-in-hdfs.html), by running the script from the article we can see what takes the most space. In our case spark-history took the most space, and we deleted the logs/files from the Ambari GUI.
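For readers without the article at hand, a quick hedged alternative (not the article's script) that lists the size of each top-level HDFS directory, largest first:

```bash
# Sketch only: sizes are printed in bytes; adjust the path to drill down further.
hdfs dfs -du / | sort -nr -k1 | head -20
```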
11-26-2017
01:10 PM
The problem was solved. We found a wrong configuration in the hosts file /etc/hosts (wrong host IP address); by editing the hosts file we also fixed the DNS configuration, and this solved the problem.
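As an illustration only (the hostname and IP below are placeholders, not the actual values from this cluster), the corrected /etc/hosts entry and a quick check after the fix might look like this:

```bash
# /etc/hosts should map each node to its correct IP, for example:
#   192.168.1.21   worker01.example.com   worker01

# Verify that the name now resolves to the intended address:
getent hosts worker01.example.com
hostname -f
```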
11-26-2017
01:14 PM
@Jay, first of all, thanks a lot for the great support. We actually solved it by re-configuring the worker with its previous IP and then restarting the worker host. After the server came up, the DataNode showed as alive on all workers and the worker is part of the cluster again.
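A quick hedged check that matches what is described above, confirming the DataNodes report as live after the restart:

```bash
# Summarize live vs. dead DataNodes as seen by the NameNode.
hdfs dfsadmin -report | grep -E 'Live datanodes|Dead datanodes'
```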
11-23-2017
05:54 PM
Hi Jay, I am really at a loss here. What can we do next?