Member since 01-16-2018 · 541 Posts · 33 Kudos Received · 82 Solutions
11-10-2021
09:48 PM
Hello @vishal6196 Thanks for using Cloudera Community. Based on the Post, the Phoenix DELETE Command is reporting more Records deleted than SELECT COUNT(*) returns. Thank You for sharing the HDP & Phoenix Versions as well. A few Queries for your Team:
1. Is the Observation made for all Tables or only selective Table(s)?
2. Share the EXPLAIN Output of the SELECT & DELETE SQL (a Sketch of the Commands follows below).
3. Does the Phoenix Table being deleted from have any Index?
4. Does performing a Major Compaction on the Table before deleting (just as a Sanity Check) show any Difference in the Rows deleted via the DELETE SQL?
Regards, Smarak
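A minimal Sketch of the requested EXPLAIN Commands, assuming a placeholder Table Name MY_TABLE, a placeholder ZooKeeper Host & the usual HDP Client Path (adjust all three for your Environment):

    # Hypothetical names: MY_TABLE and zk-host are placeholders, not from the post.
    /usr/hdp/current/phoenix-client/bin/sqlline.py zk-host:2181 <<'EOF'
    EXPLAIN SELECT COUNT(*) FROM MY_TABLE;
    EXPLAIN DELETE FROM MY_TABLE;
    EOF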
11-10-2021
09:38 PM
Hello @RafaelDiaz Thanks for using Cloudera Community. Based on the Post, Create Table via Hue works yet fails in R with the "dbExecute" Function. While I haven't tried the "dbExecute" Function in R, the Error shows HiveAccessControlException denying the User on "default/iris". You haven't stated the CDH/CDP Version being used, yet I assume your Team is using CDP with Ranger.
1. Kindly check if passing the DB Name "cld_ml_bi_eng" with the Table Name "iris" in the "dbExecute" Function works. Since I haven't used the "dbExecute" Function in R, Let us know if passing the DB Name with the Table Name like <DBName.TableName> is feasible (a Sketch follows below).
2. Kindly check in Ranger the "CREATE" Privilege of User "RDIAZ". Try widening the DB Scope to "*" & confirm the Database in which the Table gets created ("default" or "cld_ml_bi_eng").
Accordingly, We can proceed further. Kindly review & share the Outcome of the above 2 Suggestions. If your Team has fixed the Issue already, We would appreciate your Team sharing the details for fellow Community Users as well. Regards, Smarak
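A minimal Sketch of Suggestion 1, testing the fully-qualified Table Name from outside R first; the beeline URL & Column List are hypothetical Placeholders:

    # Hypothetical: connection URL and column definition are placeholders.
    beeline -u "jdbc:hive2://hs2-host:10000/default" \
      -e "CREATE TABLE cld_ml_bi_eng.iris (sepal_length DOUBLE, species STRING)"
    # If this succeeds, pass the same qualified name in the SQL sent via dbExecute from R.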
11-01-2021
12:43 PM
Hello @SteffenMangold Kindly let us know if the Post was Helpful for your Team. If Yes, Please mark the Post as Solved. If No, Do share the Outcome of the Command shared in our Post. Accordingly, We shall review & get back to you. Regards, Smarak
11-01-2021
12:40 PM
Hello @Ma_FF Kindly let us know if the Post was Helpful to Identify the RegionServer Thread spiking the CPU Usage of the RegionServer JVM. If Yes, Please mark the Post as Solved. Regards, Smarak
11-01-2021
12:39 PM
Hello @paresh Thanks for using Cloudera Community. Based on the Post, the HBase Service is being impacted, per the Trace shared by you in the Post along with the Solutions attempted. While Complete Logs help, It appears the Replay of Region "1595e783b53d99cd5eef43b6debb2682" via "recovered.wals" is being interrupted owing to an EOFException. When a Region is being opened, any Contents in the "recovered.wals" are replayed by reading them & pushing them to the MemStore. Once the Edits are persisted from the MemStore to an HFile, the "recovered.wals" are removed. Accepting the Possibility of DataLoss, You may attempt: Stop HBase > Sideline the RecoveredEdits "hdfs://ha-cluster:8020/hbase/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals" & the MasterProcWALs (To avoid any Replay of the associated PID) > Start HBase > Verify the Status (a Sketch follows below). Note the "DataLoss" arises because the sidelined RecoveredEdits Files may hold Edits not yet persisted. Additionally, You may use WALPlayer [1] for replaying the Contents of the RecoveredEdits as well. Regards, Smarak [1] https://hbase.apache.org/book.html#walplayer
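A minimal Sketch of the Sideline Steps; the recovered.wals Path is from the Post, while the /tmp Sideline Directory, the MasterProcWALs Path & the Table Name in the WALPlayer Line are Assumptions to verify for your Cluster:

    # Stop HBase first. Edits in the sidelined files are lost unless replayed.
    hdfs dfs -mkdir -p /tmp/sideline
    hdfs dfs -mv hdfs://ha-cluster:8020/hbase/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals /tmp/sideline/
    hdfs dfs -mv /hbase/MasterProcWALs /tmp/sideline/MasterProcWALs
    # Start HBase and verify status. Optionally replay the sidelined WALs later
    # (MY_TABLE is a placeholder target table):
    hbase org.apache.hadoop.hbase.mapreduce.WALPlayer /tmp/sideline/recovered.wals MY_TABLE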
10-23-2021
08:14 AM
Hello @amit_hadoop Thanks for using the Cloudera Community. It appears you have asked the same Question on the Post [1] as well. We have replied to you on the concerned Post. To avoid Duplication, We shall mark this Post as Closed. Regards, Smarak [1] https://community.cloudera.com/t5/Support-Questions/org-apache-hadoop-hbase-ipc-ServerNotRunningYetException/m-p/328376#M230250
10-23-2021
08:09 AM
Hello @Malvin We are marking the Post as Solved as RegionServer Grouping is the Only Way to segregate the Region Assignment at the RegionServer Level. If your Team encounters any issues, Feel free to update the Post & we shall get back to your Team accordingly. Thank You again for using Cloudera Community. Regards, Smarak
10-23-2021
08:04 AM
Hello @rahuledavalath We are marking the Post as Solved. If your Team faces any issues with either of the Recommendations shared [Migrate the System Tables Or Create a Phoenix Table afresh on Top of the HBase Table], Please update the Post. Thank You for using Cloudera Community. Regards, Smarak
10-23-2021
07:54 AM
Hello @Rjkoop As stated by @willx, Visibility Labels aren't supported & Will has shared the Link. As such, We are marking the Post as Solved. Having said that, Your Team may post any further concerns in the Post. We shall review & get back to your Team accordingly. Thanks for using Cloudera Community. Regards, Smarak
10-23-2021
07:50 AM
1 Kudo
Hello @Ma_FF Thanks for using Cloudera Community. Based on the Post, 1 RegionServer is using High CPU. As requested by @PrathapKumar, Review the same. Additionally, Your Team can perform the below (a Sketch follows below):
(I) When the RegionServer JVM reports High CPU, Open the "top" Command for the RegionServer PID,
(II) Use "Shift H" to open the Thread View of the PID. This shows the Threads within the RegionServer JVM with their CPU Usage,
(III) Monitor the Thread View & Identify the Thread hitting the Max CPU Usage,
(IV) Take a Thread Dump | JStack of the RegionServer PID & Compare the Thread with the "top" Thread View consuming the Highest CPU.
The above Process would allow you to identify the Thread contributing towards the CPU Usage. Compare the same with the other RegionServers & your Team can make a Conclusive Call on the Reason for the CPU Utilization. However the Logs are reviewed, Narrowing the Focus to the JVM Review would assist in identifying the Cause. Review the shared Links [1][2][3] for additional Reference. Kindly review & share your Observation in the Post. Regards, Smarak [1] https://www.infoworld.com/article/3336222/java-challengers-6-thread-behavior-in-the-jvm.html [2] https://blogs.manageengine.com/application-performance-2/appmanager/2011/02/09/identify-java-code-consuming-high-cpu-in-linux-linking-jvm-thread-and-linux-pid.html [3] https://blog.jamesdbloom.com/JVMInternals.html
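A minimal Sketch of Steps (I)-(IV), assuming the RegionServer PID is found via pgrep & the Thread Id 12345 is a Placeholder you read off the top Output:

    RS_PID=$(pgrep -f HRegionServer)        # RegionServer JVM PID (assumes one RS per host)
    top -H -b -n 1 -p "$RS_PID" | head -30  # per-thread view, same as top + "Shift H"
    # Suppose thread 12345 tops the CPU column; convert its id to hex:
    printf '0x%x\n' 12345                   # -> 0x3039
    # Match the hex id against the "nid=" field in the thread dump:
    jstack "$RS_PID" | grep -A 15 'nid=0x3039'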
10-21-2021
10:32 AM
Hello @Sam7 We hope the SQL shared by @BennyZ helped your Team meet the Use-Case. As such, We are marking the Post as Solved. Regards, Smarak
10-21-2021
06:18 AM
1 Kudo
Hello @Sayed016 Thanks for the Response. In short, your Team identified GC Pauses during the concerned Period wherein the Solr Service was losing Connectivity with ZooKeeper. Your Team increased the zkClientTimeout to 30 Seconds & now, the Leader is elected for the Collection. In other words, the Solr Service was impacted owing to GC Pauses of the Solr JVM & the same has been addressed by your Team. Thank You for sharing your Experience on the Community, which would help our fellow Community Users as well. Let us know if we are good to close the Post as Solved. Regards, Smarak
10-21-2021
12:22 AM
1 Kudo
Hello @Malvin Thanks for using Cloudera Community. Based on the Post, You wish to have RegionServers hold a different Number of Regions depending on the Hardware Spec of the Host. Note that the RegionServer JVM runs with a Heap Allocation, which is expected to be Uniform across the RegionServer(s). You may consider using RegionServer Grouping [1] to ensure Regions from Tables with a Large Region Count are mapped to a few RegionServers (a Sketch follows below). The Link offers Implementation & Use-Case details as well. Regards, Smarak [1] https://hbase.apache.org/book.html#rsgroup
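A minimal Sketch of RegionServer Grouping in the HBase Shell, assuming placeholder Group/Host/Table Names & that the RSGroup Feature is enabled per [1]:

    hbase shell <<'EOF'
    add_rsgroup 'big_tables'                                # new group
    move_servers_rsgroup 'big_tables', ['bighost1:16020']   # host:port of the RegionServer
    move_tables_rsgroup 'big_tables', ['LARGE_TABLE']       # pin the large table to the group
    EOF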
10-21-2021
12:16 AM
1 Kudo
Hello @amit_hadoop Thanks for using Cloudera Community. Based on the Post, You installed HDP Sandbox 3.0 & observed Timeline Service v2 Reader reporting [1]. If you observe the Port mentioned, It says "17020", while the HBase RegionServer runs on Port "16020" by default. The YARN Timeline Service uses an Embedded HBase Service, which isn't the HBase Service we see in the Ambari UI. Kindly let us know the Outcome of the below Steps (a Sketch of Step 4 follows below):
1. Whether restarting the YARN Timeline Service (Which restarts the Embedded HBase Service) helps,
2. Within the YARN Logs, We should have Embedded HBase Logs for the HMaster & RegionServer. Reviewing the Logs may offer an additional Clue into the Reason for the Embedded HBase concerns,
3. Review [2] for similar concerns wherein the ZNode was cleared & the Service restarted,
4. Review [3] for Documentation on deleting the ZNode & Data. Restarting the YARN Timeline Service should initialise the HBase Setup afresh.
Regards, Smarak [1] Error Trace: 2021-09-29 05:47:58,587 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=6, retries=36, started=5385 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server sandbox-hdp.hortonworks.com,17020,1632894462010 is not running yet details=row 'prod.timelineservice.entity' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=sandbox-hdp.hortonworks.com,17020,1628801419880, seqNum=-1 [2] https://community.cloudera.com/t5/Support-Questions/Yarn-Timeline-Service-V2-not-starting/td-p/281368 [3] https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/data-operating-system/content/remove_ats_hbase_before_switching_between_clusters.html
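A minimal Sketch of Step 4 for an unsecured Sandbox; the ZNode & HDFS Paths are Assumptions to verify against [3] before running, with the Timeline Service stopped first:

    # ZNode name differs on secure clusters (/atsv2-hbase-secure); verify per [3].
    zookeeper-client rmr /atsv2-hbase-unsecure
    # Sideline (rather than delete) the embedded HBase data; path is an assumption.
    hdfs dfs -mv /atsv2/hbase /tmp/atsv2-hbase.bak
    # Restart YARN Timeline Service v2 Reader; the embedded HBase initialises afresh.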
10-21-2021
12:01 AM
1 Kudo
Hello @rahuledavalath Thanks for using Cloudera Community. As suggested by @9een, Kindly use the Article [1] created by @willx for migrating the Tables from HDP to CDP. In short, Your Team migrated the Table, yet Phoenix relies on its "System" Tables just as HBase Tables rely on "hbase:meta". As such, Your Team needs to either use [1] for exporting the Phoenix "System" Tables Or follow [2], wherein your Team creates a Phoenix Table on top of the exported HBase Table. Either way, We are ensuring the Phoenix "System" Tables reflect the Table's Metadata as well. Regards, Smarak [1] https://community.cloudera.com/t5/Community-Articles/Phoenix-tables-migration-from-HDP-to-CDP/ta-p/323933 [2] https://phoenix.apache.org/faq.html#How_I_map_Phoenix_table_to_an_existing_HBase_table
10-20-2021
11:50 PM
Hello @SteffenMangold Thanks for using Cloudera Community. Based on the Post, Your Team wishes to assign a Region (Which isn't assigned to any RegionServer) without any HBase Restart. We wish to confirm whether your Team tried the following Approaches (a Sketch follows below):
1. HBCK2 "assigns" as documented via [1]. The Command requires the Region ID as an Argument & returns a PID, which can be viewed via the HMaster UI "Procedures & Locks" or "list_procedures" via the HBase Shell for Success/Failure.
2. Within the HBase Shell, the "assign" Or "move" Commands, the latter allowing a Destination RegionServer to be specified as well.
Your Team may have tried the above Commands, yet their Outcome would confirm whether there is any underlying concern with the Region requiring additional Intervention like an HBase Service Restart (Or an HMaster Restart, which should spawn an Assignment Manager Thread afresh). Regards, Smarak [1] https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2#running-hbck2
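A minimal Sketch of both Approaches; the HBCK2 Jar Path, Encoded Region Name & Server Name are Placeholders:

    # Approach 1: HBCK2 assigns (jar path and region hash are placeholders).
    hbase hbck -j /path/to/hbase-hbck2.jar assigns d1b1b2b30dd65f1ba1d9c9a4a3f742b0
    echo "list_procedures" | hbase shell      # check the returned PID's outcome
    # Approach 2: hbase shell move, with an explicit target server 'host,port,startcode':
    echo "move 'd1b1b2b30dd65f1ba1d9c9a4a3f742b0', 'host1,16020,1632894462010'" | hbase shell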
10-20-2021
11:39 PM
1 Kudo
Hello @Sayed016 Thanks for using Cloudera Community. Based on the Post, Your Team is experiencing Exception [1] for RangerAudits Shard1 Replica1, followed by a successful Connect with ZooKeeper, eventually failing with [2]. Since the Logs show RangerAudits Shard1 has no further Replica, It's feasible the Issue arises from Consistency concerns between Solr & ZooKeeper. There are a few things we wish to verify with your assistance (a Sketch follows below):
1. When Solr reports [1], Whether the ZooKeeper Quorum is Healthy Or there are any issues with the ZooKeeper Server to which the Solr ZooKeeperClient is connected,
2. What happens after [2] for RangerAudits Shard1 Replica1, i.e. Whether the Replica enters the "Down" State as opposed to the "Active" State,
3. Whether restarting the Solr Service on the Host wherein the "ranger_audits_shard1_replica_n1" Core is hosted helps in mitigating "ClusterState says we are the leader, but locally we don't think so",
4. The HDP, CDH, CDP Version being discussed in the Post.
Regards, Smarak [1] 2021-10-17 04:05:57.006 ERROR (qtp1916575798-2477) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Cannot talk to ZooKeeper - Updates are disabled. [2] 2021-10-17 04:14:37.504 ERROR (qtp1916575798-2325) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: ClusterState says we are the leader (https://192.168.0.17:8985/solr/ranger_audits_shard1_replica_n1), but locally we don't think so. Request came from null
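A minimal Sketch for checking the Replica State (Item 2) via the standard Solr Collections API, reusing the Host/Port from [2]; add Authentication Options as your Setup requires:

    # -k tolerates a self-signed certificate, common on Infra-Solr; verify for your setup.
    curl -sk "https://192.168.0.17:8985/solr/admin/collections?action=CLUSTERSTATUS&collection=ranger_audits"
    # Inspect the "state" of core_node2 in the JSON response: "active" vs "down"/"recovering".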
08-06-2021
08:05 PM
Hello @JB0000000000001 As we haven't heard from your side, We assume the Queries posted by you have been addressed & are marking the Post as Solved. When you have the time, Feel free to share your Observations with respect to studying or implementing HBase on Cloud Storage. Thanks again for sharing your thoughts on Cloudera Community. - Smarak
07-28-2021
03:42 AM
Follow-up Update to tag @KR_IQ @sppandita85BLR as the Original Post is Old.
07-28-2021
03:41 AM
Hello @a_gulshani Thanks for using Cloudera Community. Based on the Post, the RangerAudits Collection has issues, which is causing the Message "Error running solr query, please check solr configs. Could not find a healthy node to handle the request". The Date Exception shared by you refers to "audit_logs_shard0_replica1", which isn't related to the RangerAudits Collection. The Ranger Audit UI relies on the RangerAudits Collection & if the Shards of the RangerAudits Collection aren't available, We get the above Message.
1. Open the Infra-Solr UI & verify the State of the RangerAudits Collection via Solr UI > Cloud > Graph. If the Shards associated with the RangerAudits Collection aren't Active, the Error is expected.
2. Next, Review the Infra-Solr Service Logs & confirm the Reason for the Shard Unavailability. There can be multiple Reasons for Shard Unavailability, hence the Logs would be the best place to review.
3. For a quicker Solution & if you are willing to lose the RangerAudits, You can delete the RangerAudits Collection & restart the RangerAdmin Service to ensure the Collection is created afresh (a Sketch follows below).
Kindly review & let us know your Observation. - Smarak
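A minimal Sketch of Step 3, assuming a placeholder Infra-Solr Host & the default Ambari Infra-Solr Port 8886; this discards all existing Ranger Audits, and Kerberos/SSL Options must be added as your Setup requires:

    # Hypothetical host; the DELETE action is the standard Solr Collections API.
    curl -s "http://infra-solr-host:8886/solr/admin/collections?action=DELETE&name=ranger_audits"
    # Then restart Ranger Admin so the ranger_audits collection is recreated afresh.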
07-26-2021
01:15 AM
Hello @michalm_ Thanks for using Cloudera Community. While I haven't performed such a Task, I wish to check if you have reviewed the 3rd-Party Script via [1], which uses a User-Defined Time Threshold to find Impala Queries & optionally kill them as well. Let us know if it helps. - Smarak [1] https://github.com/onefoursix/kill-long-running-impala-queries
07-26-2021
01:07 AM
Hello @Joe685 Thanks for using Cloudera Community. Your Question is with respect to using a Filter for visualizing the Datapoints. Your ask is Clear, yet we wish to confirm the Platform. You mentioned "Public": Does that mean you are reviewing Data Visualization on CDP Public Cloud? If Yes, We wish to check if [1] fits your Requirement. If Not, any additional details on the Platform, as requested earlier, can assist us in reviewing internally & getting back to you accordingly. - Smarak [1] https://docs.cloudera.com/data-visualization/cloud/filter-shelf/topics/viz-filter-shelf-range-date.html
07-23-2021
04:20 AM
Hello @mahfooz-iiitian Please note that CDH Releases are End-Of-Support & CDP Releases don't support the Hortonworks Spark-HBase Connector, in favor of the HBase-Spark Connector, as documented below. - Smarak [1] https://issues.apache.org/jira/browse/HBASE-25326 [2] https://github.com/LucaCanali/Miscellaneous/blob/master/Spark_Notes/Spark_HBase_Connector.md [3] https://docs.cloudera.com/cdp-private-cloud/latest/data-migration/topics/cdp-data-migration-hbase-prepare-data-migration.html
07-23-2021
03:58 AM
Hello @Chandresh Hope you are doing well. We wish to confirm if you have identified the Cause of the issue. If Yes, Kindly share the same to benefit our fellow Community Users as well. If no further assistance required, Please mark the Post as Solved. - Smarak
07-23-2021
03:41 AM
Hello @ryu As mentioned by @arunek95, we assume Phoenix is enabled for the Cluster. If not, Kindly enable Phoenix & try the Command again. The Logging indicates HDP v2.6.1.0 with Phoenix v4.7. The Directory "/usr/lib/phoenix/" has the Phoenix Client & you mentioned the same Directory has the Phoenix Server Jar as well. Kindly verify the Permission on the JAR is correct & confirm via "jar -tvf" on the Phoenix Server Jar that the Class "MetaDataEndpointImpl" is included in the same (a Sketch follows below). The Error indicates Phoenix is failing while creating the SYSTEM Tables (upon the 1st Connection to Phoenix). In our Internal Setup, We see the Phoenix-Server Jar is present in the HBase Lib Path as well, pointing to the Phoenix-Server Jar in the Phoenix Lib Path as a SymLink: /usr/hdp/<Version>/hbase/lib/phoenix-server.jar -> /usr/hdp/<Version>/phoenix/phoenix-server.jar Kindly ensure the Phoenix Server JAR is present in the HBase Lib Directory as well. Additionally, Review the Master Logs to check for the Error Message at the HBase Level as well. - Smarak
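A minimal Sketch of the Checks above; the first Path matches the Post's Directory, the Glob in the last Line stands in for the elided HDP Version:

    ls -l /usr/lib/phoenix/phoenix-server.jar                  # permissions on the jar
    jar -tvf /usr/lib/phoenix/phoenix-server.jar | grep MetaDataEndpointImpl
    ls -l /usr/hdp/*/hbase/lib/phoenix-server.jar              # should symlink to the Phoenix lib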
07-23-2021
03:08 AM
Hello @gurucgi This is an Old Post, yet we wish to check if the Issue has been addressed by you. If Yes, Please share the Steps to ensure fellow Community Users can benefit from your Experience. Based on the Post, SystemCatalogRegionObserver is reported as a Class not being loaded. The concerned Error is received on Phoenix v5.1.1, while Cloudera ships Phoenix v5.1.0 at the Time of Writing. I am not sure if you are being impacted via [1]. In short, the Issue seems to require the Phoenix-Server Jar to be placed correctly. - Smarak [1] https://issues.apache.org/jira/browse/PHOENIX-6330
07-23-2021
02:39 AM
Hello @Satya_Singh Do let us know if your issue has been resolved. If Yes, Please share the Mitigation Steps followed by you to ensure other Community Users can benefit from your experience & mark the Post as Resolved as well. - Smarak
07-23-2021
02:38 AM
Hello @krishpuvi Thanks for using Cloudera Community. Based on the Post, You wish to run HBase Major Compaction on a particular Queue to ensure the Compaction Activity doesn't impact other Resources in the Cluster. Please note that HBase Major Compaction doesn't use any YARN-managed Resources & as such, We can't map Major Compaction to run on any YARN Queue. You can restrict Major Compaction to use less Bandwidth via [1] & [2], which explain the "PressureAwareCompactionThroughputController" with Min & Max Speed along with the Parameter to disable any Speed Limit via "NoLimitThroughputController". Additionally, You can disable automatic Major Compaction & run Compaction manually during Off-Business-Hours at the Table/ColumnFamily Level as well (a Sketch follows below). - Smarak [1] https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/configuring-hbase/topics/hbase-limit-the-speed-of-compactions.html [2] https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareCompactionThroughputController.html
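A minimal Sketch of both Options; the Property Names come from [1]/[2], while the Bound Values & the Table/ColumnFamily Names are illustrative Placeholders:

    # Throttling via hbase-site.xml (values are examples, not recommendations):
    #   hbase.regionserver.throughput.controller =
    #     org.apache.hadoop.hbase.regionserver.throttle.PressureAwareCompactionThroughputController
    #   hbase.hstore.compaction.throughput.lower.bound  = 52428800    # 50 MB/s
    #   hbase.hstore.compaction.throughput.higher.bound = 104857600   # 100 MB/s
    # Manual major compaction during off-business hours, via hbase shell:
    echo "major_compact 'MY_TABLE'" | hbase shell
    echo "major_compact 'MY_TABLE', 'MY_CF'" | hbase shell   # column-family level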
07-23-2021
02:31 AM
Hello @JB0000000000001 We wish to follow up with you on the Post & confirm if you have any additional Observations to share with respect to studying or implementing HBase on Cloud Storage, Or whether our Response to your Post was helpful in covering the possible Cloud Storage Latencies. - Smarak
07-23-2021
02:27 AM
Hello @proble As we haven't received any Response from your side, We shall be marking the Post as Resolved. Having said that, Please do share your Experience with handling the concerned Issue & whether the details shared by us assisted you. In short, the HBase Table Creation is failing as the Master hasn't completed Initialisation. You have to use the HBCK2 Tool to assign hbase:meta & hbase:namespace (Whichever isn't assigned as per the Master Logs); a Sketch follows below. Link [1] covers our Response to the Post on 2021/05/31 with the Log Trace to be verified in the Master Logs & the Steps to be followed. We hope the Post was helpful to you & assisted in resolving the Issue. - Smarak [1] https://community.cloudera.com/t5/Support-Questions/Error-while-creating-table-in-hbase/m-p/317429/highlight/true#M227161
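A minimal Sketch of the HBCK2 Assignments; the Jar Path & the hbase:namespace Encoded Region Name are Placeholders, while 1588230740 is the fixed Encoded Name of hbase:meta:

    # Assign hbase:meta first if the Master log shows it unassigned:
    hbase hbck -j /path/to/hbase-hbck2.jar assigns 1588230740
    # Then hbase:namespace; read its encoded region name from the Master log:
    hbase hbck -j /path/to/hbase-hbck2.jar assigns <namespace-region-encoded-name>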