Member since: 01-16-2018
Posts: 613
Kudos Received: 48
Solutions: 109
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 775 | 04-08-2025 06:48 AM |
| | 951 | 04-01-2025 07:20 AM |
| | 913 | 04-01-2025 07:15 AM |
| | 961 | 05-06-2024 06:09 AM |
| | 1500 | 05-06-2024 06:00 AM |
11-01-2021
12:43 PM
Hello @SteffenMangold, kindly let us know whether the post was helpful for your team. If yes, please mark the post as solved. If not, do share the outcome of the command shared in our post, and we shall review and get back to you. Regards, Smarak
11-01-2021
12:40 PM
Hello @Ma_FF, kindly let us know whether the post was helpful for identifying the RegionServer thread spiking the CPU usage of the RegionServer JVM. If yes, please mark the post as solved. Regards, Smarak
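For reference, a common way to map a hot CPU thread to a JVM thread (the technique implied above) is to take the OS thread id of the busiest thread from `top -H` and match it against the hexadecimal `nid` in a `jstack` dump. A minimal sketch; the pid is a placeholder:

```shell
# 1) Find the hottest native threads of the RegionServer process:
#      top -H -p <regionserver-pid>
# 2) jstack reports thread ids as hex "nid" values, so convert the
#    decimal thread id taken from top into hex before grepping:
printf 'nid=0x%x\n' 12345
# 3) Locate that thread's stack in the dump:
#      jstack <regionserver-pid> | grep -A 20 'nid=0x3039'
```

Repeating the jstack sample a few times and checking whether the same stack stays hot is usually enough to name the culprit thread.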
11-01-2021
12:39 PM
Hello @paresh, thanks for using the Cloudera Community. Based on the post, the HBase service is impacted, per the trace you shared along with the solutions attempted. While complete logs would help, it appears the replay of region "1595e783b53d99cd5eef43b6debb2682" via "recovered.wals" is being interrupted by an EOFException. When a region is opened, any contents of "recovered.wals" are replayed by reading them and pushing the edits to the MemStore. Once the edits are persisted from the MemStore to an HFile, the "recovered.wals" files are removed.

Accepting the possibility of data loss, you may attempt: stop HBase > sideline the recovered edits at "hdfs://ha-cluster:8020/hbase/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals" and the MasterProcWALs (to avoid any replay of the associated PID) > start HBase > verify the status. Note that the potential data loss comes from removing recovered-edits files whose contents have not been persisted yet. Additionally, you may use WALPlayer [1] to replay the contents of the recovered edits. Regards, Smarak [1] https://hbase.apache.org/book.html#walplayer
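The sideline step above can be sketched as follows. The WAL path follows the one quoted in the post; the sideline directory name and the MasterProcWALs location are assumptions to verify against your cluster's `hbase.rootdir` before running:

```
# Stop HBase first. Sideline (move) rather than delete, so the WALs can
# still be replayed later with WALPlayer if needed.
# /hbase-sideline is a hypothetical target directory for illustration.
hdfs dfs -mkdir -p /hbase-sideline
hdfs dfs -mv "hdfs://ha-cluster:8020/hbase/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals" /hbase-sideline/
hdfs dfs -mv /hbase/MasterProcWALs /hbase-sideline/MasterProcWALs
# Start HBase and verify the status (e.g. HMaster UI, or: hbase hbck)
# If needed later, replay the sidelined edits into a table:
#   hbase org.apache.hadoop.hbase.mapreduce.WALPlayer /hbase-sideline/recovered.wals <tablename>
```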
10-23-2021
08:14 AM
Hello @amit_hadoop, thanks for using the Cloudera Community. It appears you asked the same question in the post at [1], and we have replied to you there. To avoid duplication, we shall mark this post as closed. Regards, Smarak [1] https://community.cloudera.com/t5/Support-Questions/org-apache-hadoop-hbase-ipc-ServerNotRunningYetException/m-p/328376#M230250
10-23-2021
08:09 AM
Hello @Malvin, we are marking the post as solved, as RegionServer grouping is the only way to segregate region assignment at the RegionServer level. If your team encounters any issues, feel free to update the post and we shall get back to you accordingly. Thank you again for using the Cloudera Community. Regards, Smarak
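For reference, RegionServer grouping is driven from the HBase shell once the rsgroup feature is enabled; a minimal sketch, where the group, server, and table names are hypothetical:

```
# Create a group, move servers into it (host:port as registered with the
# HMaster), then pin tables to that group so their regions are only
# assigned to those servers.
add_rsgroup 'app1_group'
move_servers_rsgroup 'app1_group', ['rs1.example.com:16020']
move_tables_rsgroup 'app1_group', ['app1_table']
get_rsgroup 'app1_group'
```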
10-23-2021
07:54 AM
Hello @Rjkoop, as stated by @willx, visibility labels aren't supported, and Will has shared the relevant link. As such, we are marking the post as solved. Having said that, your team may post any further concerns here, and we shall review and get back to you accordingly. Thanks for using the Cloudera Community. Regards, Smarak
10-21-2021
06:18 AM
1 Kudo
Hello @Sayed016, thanks for the response. In short, your team identified GC pauses during the concerned period, during which the Solr service was losing connectivity with ZooKeeper. Your team increased zkClientTimeout to 30 seconds, and the leader is now elected for the collection. In other words, the Solr service was impacted by GC pauses of the Solr JVM, and your team has addressed the issue. Thank you for sharing your experience on the community; it will help fellow community users as well. Let us know if we are good to close the post as solved. Regards, Smarak
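For reference, a sketch of where such a timeout is typically raised. The exact mechanism depends on the deployment; in many Solr installations it is the `ZK_CLIENT_TIMEOUT` variable in `solr.in.sh`, or an equivalent JVM system property:

```shell
# solr.in.sh -- raise the ZooKeeper client/session timeout to 30 seconds
# so that transient GC pauses do not expire the Solr session in ZooKeeper.
ZK_CLIENT_TIMEOUT="30000"
# Alternatively, as a JVM system property:
# SOLR_OPTS="$SOLR_OPTS -DzkClientTimeout=30000"
```

Raising the timeout masks the symptom; tuning the JVM heap/GC so pauses stay well below the session timeout addresses the cause.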
10-21-2021
12:16 AM
1 Kudo
Hello @amit_hadoop, thanks for using the Cloudera Community. Based on the post, you installed HDP Sandbox 3.0 and observed Timeline Service v2 Reader reporting [1]. If you look at the port mentioned, it says "17020", while the HBase RegionServer runs on port "16020" by default. The YARN Timeline Service uses an embedded HBase service, which is not the same as the HBase service we see in the Ambari UI. Kindly let us know the outcome of the steps below:

1. Check whether restarting the YARN Timeline Service (which restarts the embedded HBase service) helps.
2. Within the YARN logs, there should be embedded HBase logs for the HMaster and RegionServer. Reviewing those logs may offer additional clues about the embedded HBase concerns.
3. Review [2] for a similar concern, wherein the ZNode was cleared and the service restarted.
4. Review [3] for documentation on deleting the ZNode and data. Restarting the YARN Timeline Service should then initialise the HBase setup afresh.

Regards, Smarak

[1] Error trace: 2021-09-29 05:47:58,587 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=6, retries=36, started=5385 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server sandbox-hdp.hortonworks.com,17020,1632894462010 is not running yet details=row 'prod.timelineservice.entity' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=sandbox-hdp.hortonworks.com,17020,1628801419880, seqNum=-1
[2] https://community.cloudera.com/t5/Support-Questions/Yarn-Timeline-Service-V2-not-starting/td-p/281368
[3] https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/data-operating-system/content/remove_ats_hbase_before_switching_between_clusters.html
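The ZNode-and-data cleanup referenced above can be sketched as follows. The znode name varies with the security setup (commonly /atsv2-hbase-unsecure, or /atsv2-hbase-secure on kerberized clusters), and the HDFS path is an assumption based on HDP 3.x defaults; verify both against [3] before running:

```
# Stop the YARN Timeline Service first, then clear the embedded HBase state.
# Remove the ats-hbase znode (unsecure variant shown):
zookeeper-client -server sandbox-hdp.hortonworks.com:2181 rmr /atsv2-hbase-unsecure
# Remove the embedded HBase data directory in HDFS (verify the path first):
hdfs dfs -rm -R -skipTrash /atsv2/hbase
# Restart the YARN Timeline Service; the embedded HBase should initialise afresh.
```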
09-08-2021
02:43 PM
This whole series is really insightful and helpful!