Member since: 01-16-2018
Posts: 613
Kudos Received: 48
Solutions: 109

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 785 | 04-08-2025 06:48 AM |
| | 966 | 04-01-2025 07:20 AM |
| | 917 | 04-01-2025 07:15 AM |
| | 966 | 05-06-2024 06:09 AM |
| | 1506 | 05-06-2024 06:00 AM |
11-13-2021
06:51 AM
Hello @xgxshtc We noticed you have raised this question in a new post [1], since the current post is ~4 years old. While the current post remains unresolved, we shall wait for your team's review on [1] before confirming the solution on this post as well. Regards, Smarak [1] https://community.cloudera.com/t5/Support-Questions/Hbase-regionserver-shutdown-after-few-hours/m-p/330070/highlight/false#M230589
11-13-2021
06:46 AM
Hello @xgxshtc Thanks for using Cloudera Community. Based on the post, the RegionServer is reporting an EOFException and "Bad DataNode" while replaying WALs under "/hbase/WALs", which suggests the underlying HDFS blocks have issues. To fix the issue, your team can sideline the contents of "/hbase/WALs" (specifically "/hbase/WALs/jfhbase03,60020,1636456250380") and restart the affected RegionServer. If all RegionServers are impacted, sideline each RegionServer's directory under "/hbase/WALs". Note that the WALs hold edits not yet persisted to disk, so sidelining the WAL directories (one WAL directory per RegionServer) may incur data loss. Additionally, run an HDFS fsck over "/hbase/WALs" and fix any corrupt/missing blocks. Once the fsck reports the WAL files as healthy, your team can restart the HBase RegionServers. Regards, Smarak
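The sidelining steps above can be sketched as shell commands. This is a dry-run sketch: the RegionServer directory name comes from this thread, while the sideline target path is an assumption to adapt for your cluster; each command is only printed, so remove the leading `echo` to execute for real.

```shell
# Sideline the affected RegionServer's WAL directory, then check block health.
# Dry-run: commands are printed, not executed. Remove "echo" to run them.
WAL_DIR="/hbase/WALs/jfhbase03,60020,1636456250380"
SIDELINE_DIR="/hbase/WALs-sidelined"   # assumed target path; choose your own

echo hdfs dfs -mkdir -p "$SIDELINE_DIR"
echo hdfs dfs -mv "$WAL_DIR" "$SIDELINE_DIR/"
# Verify no corrupt/missing blocks remain under the WAL area:
echo hdfs fsck /hbase/WALs -files -blocks -locations
```

Restart the RegionServer only after the fsck output is healthy, since the move discards unflushed edits.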
11-11-2021
09:18 AM
Hello @Faizan_Ali Thanks for using Cloudera Community. Based on the post, the ZooKeeper service is failing to start on CDH v5.16. The screenshot covers the "stdout" file; kindly share the "stderr" file or the role log file so we can review the ZooKeeper issue. If your team has already solved the issue, kindly share the cause and the steps performed to mitigate it. Your assistance would help fellow Community users. Regards, Smarak
11-11-2021
09:14 AM
Hello @drgenious Thanks for using Cloudera Community. We hope the response by @balajip was helpful. Additionally, we wish to share a few details. Your question asks how to make the query faster. Impala executes a query in parallel as fragments spread across the executors, so the first step should be reviewing the Impala query profile of the SQL to identify the time taken in each phase of execution; refer to [1] and [2] for material on reading the Impala query profile. Once the phase taking the most time is identified, fine-tune accordingly. Simply adding Impala executor daemons or using a dedicated coordinator may not help unless the SQL's slow fragment(s) are identified first. Kindly review and let us know if you have any further questions in the post. Regards, Smarak [1] https://cloudera.ericlin.me/2018/09/impala-query-profile-explained-part-1/ [2] https://docs.cloudera.com/runtime/7.2.10/impala-reference/topics/impala-profile.html
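As a minimal sketch of the first step above: in impala-shell, running `profile;` immediately after a statement prints the execution profile of that statement, which breaks down the time spent per phase and per fragment. The coordinator hostname and the SQL below are placeholders, and the command is only echoed rather than executed.

```shell
# Print the invocation that runs a query and then dumps its profile in the
# same impala-shell session. Hostname and SQL are placeholders to replace.
IMPALAD="coordinator.example.com:21000"
SQL="SELECT COUNT(*) FROM my_table"
echo impala-shell -i "$IMPALAD" -q "${SQL}; profile;"
```

In the profile output, look for the phase or fragment with the largest total time before changing any cluster sizing.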
11-10-2021
09:53 PM
Hello @yacine_ Thanks for sharing the solution along with the root cause. We shall mark the post as solved so fellow Community users can make use of the solution as well. Regards, Smarak
11-10-2021
09:48 PM
Hello @vishal6196 Thanks for using Cloudera Community, and thank you for sharing the HDP and Phoenix versions. Based on the post, the Phoenix DELETE command reports more records deleted than SELECT COUNT(*) returns. A few queries for your team: (1) Is the observation made for all tables or only selective table(s)? (2) What is the EXPLAIN output of the SELECT and the DELETE SQL? (3) Are there any indexes on the Phoenix table being deleted from? (4) Does performing a major compaction on the table before deleting (just as a sanity check) show any difference in the rows deleted via the DELETE SQL? Regards, Smarak
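For query (2) above, a minimal sketch of the two EXPLAIN statements to run in sqlline.py follows; "MY_TABLE" and the predicate are placeholders for your actual table and filter, and the statements are only echoed here rather than executed against a cluster.

```shell
# Print the two EXPLAIN statements to paste into sqlline.py, so the plans of
# the SELECT COUNT(*) and the DELETE can be compared side by side.
# Table name and predicate are placeholders.
PREDICATE="CREATED_DATE < DATE '2020-01-01'"
echo "EXPLAIN SELECT COUNT(*) FROM MY_TABLE WHERE ${PREDICATE};"
echo "EXPLAIN DELETE FROM MY_TABLE WHERE ${PREDICATE};"
```

If the two plans scan different ranges, or the DELETE plan touches an index table the SELECT does not, that difference would explain the mismatched row counts.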
11-10-2021
09:38 PM
Hello @RafaelDiaz Thanks for using Cloudera Community. Based on the post, CREATE TABLE works via Hue yet fails in R with the "dbExecute" function. While I haven't tried the "dbExecute" function in R, the error shows a HiveAccessControlException denying the user on "default/iris". You haven't stated the CDH/CDP version being used, so I assume your team is using CDP with Ranger. First, kindly check whether passing the database name "cld_ml_bi_eng" with the table name "iris" in the "dbExecute" function works; since I haven't used the "dbExecute" function in R, let us know if qualifying the table as <DBName.TableName> is feasible. Secondly, kindly check in Ranger that the user "RDIAZ" has the "CREATE" privilege; try widening the database scope to "*" and confirm which database the table is created in ("default" or "cld_ml_bi_eng"). Accordingly, we can proceed further. Kindly review and share the outcome of the above two suggestions. If your team has fixed the issue already, we would appreciate your sharing the details for fellow Community users as well. Regards, Smarak
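A hypothetical sketch of the first suggestion above: qualify the table with the database name (DBName.TableName) in the DDL string handed to DBI::dbExecute in R, instead of relying on the "default" database. The column list below is invented for illustration, and the R call is only printed, not executed.

```shell
# Print the qualified-DDL form of the R call for comparison with the failing
# unqualified one. Columns are illustrative placeholders.
DDL="CREATE TABLE cld_ml_bi_eng.iris (sepal_length DOUBLE, species STRING)"
echo "dbExecute(con, \"${DDL}\")"
```

If the qualified form succeeds while the unqualified one fails, the Ranger policy is scoped to "cld_ml_bi_eng" and the fix is in the DDL, not the policy.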
11-01-2021
12:43 PM
Hello @SteffenMangold Kindly let us know if the post was helpful for your team. If yes, please mark the post as solved. If not, do share the outcome of the command shared in our post; accordingly, we shall review and get back to you. Regards, Smarak
11-01-2021
12:40 PM
Hello @Ma_FF Kindly let us know if the post helped identify the RegionServer thread spiking the CPU usage of the RegionServer JVM. If yes, please mark the post as solved. Regards, Smarak
11-01-2021
12:39 PM
Hello @paresh Thanks for using Cloudera Community. Based on the post, the HBase service is impacted as per the trace you shared along with the solutions already attempted. While complete logs would help, it appears the replay of region "1595e783b53d99cd5eef43b6debb2682" via "recovered.wals" is being interrupted by an EOFException. When a region is opened, any contents of "recovered.wals" are replayed by reading them and pushing the edits into the MemStore; once the edits are persisted from the MemStore to an HFile, the "recovered.wals" are removed. Accepting the possibility of data loss, you may stop HBase > sideline the recovered edits "hdfs://ha-cluster:8020/hbase/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals" and the MasterProcWALs (to avoid any replay of the associated PID) > start HBase > verify the status. Note the data loss would come from removing recovered-edits files whose contents were not yet persisted. Additionally, you may use WALPlayer [1] to replay the contents of the recovered edits. Regards, Smarak [1] https://hbase.apache.org/book.html#walplayer
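The stop > sideline > start sequence above can be sketched as a dry run. The store path comes from this thread, while the sideline target and the "/hbase/MasterProcWALs" location are assumptions to verify on your cluster; every command is only printed, so remove the leading `echo` to execute, keeping the stated data-loss risk in mind.

```shell
# Dry-run sketch: sideline the region's recovered.wals and the MasterProcWALs
# while HBase is stopped, then start HBase and verify. Remove "echo" to run.
STORE_DIR="hdfs://ha-cluster:8020/hbase/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682"
echo hdfs dfs -mkdir -p /hbase/sideline
echo hdfs dfs -mv "$STORE_DIR/recovered.wals" /hbase/sideline/
echo hdfs dfs -mv /hbase/MasterProcWALs /hbase/sideline/   # assumed location
# Optionally replay the sidelined edits into a table with WALPlayer
# ("my_table" is a placeholder):
echo hbase org.apache.hadoop.hbase.mapreduce.WALPlayer /hbase/sideline/recovered.wals my_table
```

Running WALPlayer on the sidelined directory afterwards is the way to recover the edits instead of simply discarding them.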