Member since: 01-16-2018
Posts: 613
Kudos Received: 48
Solutions: 109

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1443 | 04-08-2025 06:48 AM |
|  | 1714 | 04-01-2025 07:20 AM |
|  | 1714 | 04-01-2025 07:15 AM |
|  | 1358 | 05-06-2024 06:09 AM |
|  | 2083 | 05-06-2024 06:00 AM |
10-23-2021
08:04 AM
Hello @rahuledavalath We are marking the Post as Solved. If your Team has any further concerns, or faces any issues with either of the Recommendations shared [Migrate the System Tables Or Create a Phoenix Table afresh on Top of the HBase Table], Please update the Post. Thank You for using Cloudera Community. Regards, Smarak
10-23-2021
07:54 AM
Hello @Rjkoop As stated by @willx, Visibility Labels aren't supported & Will has shared the Link. As such, We are marking the Post as Solved. Having said that, Your Team may post any further concerns in the Post. We shall review & get back to your Team accordingly. Thanks for using Cloudera Community. Regards, Smarak
10-21-2021
10:32 AM
Hello @Sam7 We hope the SQL shared by @BennyZ helped your Team meet the Use-Case. The Output from the shared SQL was verified on a Sample Table. As such, We are marking the Post as Solved. Regards, Smarak
10-21-2021
06:18 AM
1 Kudo
Hello @Sayed016 Thanks for the response. In short, your Team identified GC Pauses during the concerned period wherein the Solr Service was losing Connectivity with ZooKeeper. Your Team increased the zkClientTimeout to 30 Seconds & now, the Leader is elected for the Collection. In summary, the Solr Service was impacted owing to GC Pauses of the Solr JVM & the same has been addressed by your Team. Thank You for sharing your experience on the Community, which would help our fellow Community Users as well. Let us know if we are good to close the Post as Solved. Regards, Smarak
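For fellow Users hitting similar GC-induced ZooKeeper Session Expiry, a minimal Sketch of raising the Timeout follows. The File Path & the sed Edit below are Assumptions based on a standalone Solr Service Installation; on CDP/HDP Clusters, the equivalent Setting is exposed via Cloudera Manager/Ambari instead, so treat this purely as an Illustration.

```bash
# Hypothetical sketch: raise the Solr-to-ZooKeeper client timeout to 30 seconds.
# In a standalone Solr service install, solr.in.sh typically lives at
# /etc/default/solr.in.sh; on CDP/HDP the same knob is managed by CM/Ambari.
sudo sed -i 's/^#ZK_CLIENT_TIMEOUT=.*/ZK_CLIENT_TIMEOUT="30000"/' /etc/default/solr.in.sh

# Restart Solr so the new timeout takes effect.
sudo systemctl restart solr
```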
10-21-2021
12:22 AM
1 Kudo
Hello @Malvin Thanks for using Cloudera Community. Based on the Post, You wish to have RegionServers hold a different number of Regions depending on the Hardware Spec of the Host. Note that the RegionServer JVM runs with a Heap Allocation, which is expected to be Uniform across the RegionServers. You may consider using RegionServer Grouping [1] to ensure Regions from Tables with a Large Region Count are mapped to a few dedicated RegionServers; a minimal Shell Sketch is shared below. The Link offers Implementation & Use-Case details as well. Regards, Smarak [1] https://hbase.apache.org/book.html#rsgroup
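As an Illustration of [1], a minimal HBase Shell Sketch follows. The Group, Host & Table Names are hypothetical Placeholders; RegionServer Grouping must first be enabled via the rsgroup Coprocessor as documented in [1].

```bash
# Hypothetical sketch; group/host/table names are placeholders.
# Requires the rsgroup coprocessor to be enabled per the HBase Book [1].
hbase shell <<'EOF'
# Create a dedicated RegionServer group for the higher-spec hosts.
add_rsgroup 'bighw_group'

# Move the bigger RegionServers into the group.
# Servers are referenced as hostname:port of the RegionServer.
move_servers_rsgroup 'bighw_group', ['bigbox1.example.com:16020', 'bigbox2.example.com:16020']

# Pin the table(s) with a large Region count onto that group.
move_tables_rsgroup 'bighw_group', ['large_table']

# Verify the assignment.
list_rsgroups
EOF
```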
10-21-2021
12:16 AM
1 Kudo
Hello @amit_hadoop Thanks for using Cloudera Community. Based on the Post, You installed HDP Sandbox 3.0 & observed Timeline Service v2 Reader reporting [1]. If you observe the Port mentioned, It says "17020", while the HBase RegionServer runs on Port "16020" by default. YARN Timeline Service uses an Embedded HBase Service, which isn't equivalent to the HBase Service we see in the Ambari UI. Kindly let us know the Outcome of the below Steps:
1. Whether Restarting the YARN Timeline Service (which restarts the Embedded HBase Service) helps,
2. Within the YARN Logs, We should have Embedded HBase Logs for the HMaster & RegionServer. Reviewing the Logs may offer additional clues into the reasoning for the Embedded HBase concerns,
3. Review [2] for similar concerns wherein the ZNode was Cleared & the Service Restarted,
4. Review [3] for Documentation on Deleting the ZNode & Data. Restarting the YARN Timeline Service should initialise the HBase Setup afresh (a hedged Sketch of Steps 3 & 4 is shared below).
Regards, Smarak
[1] Error Trace: 2021-09-29 05:47:58,587 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=6, retries=36, started=5385 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server sandbox-hdp.hortonworks.com,17020,1632894462010 is not running yet details=row 'prod.timelineservice.entity' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=sandbox-hdp.hortonworks.com,17020,1628801419880, seqNum=-1
[2] https://community.cloudera.com/t5/Support-Questions/Yarn-Timeline-Service-V2-not-starting/td-p/281368
[3] https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/data-operating-system/content/remove_ats_hbase_before_switching_between_clusters.html
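For Steps 3 & 4, a hedged Sketch follows. The ZNode Name below is the Default for an unsecure Cluster, & the HDFS Data Directory is an Assumption for the Sandbox; Kindly verify both against [3] & your yarn-site Configuration before deleting anything.

```bash
# Hypothetical sketch for an unsecure HDP 3 sandbox; verify paths per [3] first.
# 1. Stop the YARN Timeline Service v2 Reader from Ambari, then clear the
#    embedded HBase znode (default name on unsecure clusters):
zookeeper-client -server sandbox-hdp.hortonworks.com:2181 rmr /atsv2-hbase-unsecure

# 2. Optionally remove the embedded HBase data so it initialises afresh
#    (assumed default location; confirm the actual directory in yarn-site first):
hdfs dfs -rm -R -skipTrash /atsv2/hbase

# 3. Restart the YARN Timeline Service from Ambari so the embedded HBase
#    setup is recreated from scratch.
```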
10-21-2021
12:01 AM
1 Kudo
Hello @rahuledavalath Thanks for using Cloudera Community. As suggested by @9een, Kindly use the Article [1] created by @willx for migrating the Tables from HDP to CDP. In short, Your Team migrated the Table, yet Phoenix relies on its "SYSTEM" Tables just as HBase Tables rely on "hbase:meta". As such, Your Team needs to either use [1] for Exporting the Phoenix "SYSTEM" Tables Or follow [2], wherein your Team creates a Phoenix Table on top of the Exported HBase Table; a minimal Sketch of [2] is shared below. Either way, We are ensuring the Phoenix "SYSTEM" Tables reflect the Table's Metadata as well. Regards, Smarak [1] https://community.cloudera.com/t5/Community-Articles/Phoenix-tables-migration-from-HDP-to-CDP/ta-p/323933 [2] https://phoenix.apache.org/faq.html#How_I_map_Phoenix_table_to_an_existing_HBase_table
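For Option [2], a minimal Sketch follows. The Table, Column Family & Column Names are hypothetical Placeholders; per the Phoenix FAQ [2], the Phoenix Table Name must match the existing HBase Table Name & the Column Families/Qualifiers must line up with the HBase Schema.

```bash
# Hypothetical sketch: map a Phoenix table onto an existing HBase table "t1"
# with column family "cf1". All names below are placeholders for illustration.
cat > map_table.sql <<'EOF'
-- Per the Phoenix FAQ [2], creating a table of the same name maps it onto
-- the existing HBase table and records it in the Phoenix SYSTEM tables.
CREATE TABLE "t1" (
  "pk"          VARCHAR PRIMARY KEY,
  "cf1"."col1"  VARCHAR
);
EOF

# Run it through sqlline against the cluster's ZooKeeper quorum.
phoenix-sqlline zk-host:2181 map_table.sql
```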
10-20-2021
11:39 PM
1 Kudo
Hello @Sayed016 Thanks for using Cloudera Community. Based on the Post, Your Team is experiencing Exception [1] for the RangerAudits Shard1 Replica1, followed by a Successful Connect with ZooKeeper, eventually failing with [2]. Since the Logs show RangerAudits Shard1 has no Replica, It's feasible the Issue arises from Consistency concerns between Solr & ZooKeeper. There are a few things we wish to verify with your assistance:
1. When Solr reports [1], Whether the ZooKeeper Quorum is Healthy Or there are any issues with the ZooKeeper Server to which the Solr ZooKeeperClient is connected,
2. What happens after [2] for RangerAudits Shard1 Replica1, i.e. Whether the Collection enters a "Down" State as opposed to an "Active" State,
3. Whether Restarting the Solr Service on the Host wherein the "ranger_audits_shard1_replica_n1" Core is hosted helps in mitigating the "ClusterState Says We Are Leader, But Locally We Don't Think So" Error,
4. The HDP, CDH, CDP Version being discussed in the Post.
Regards, Smarak
[1] 2021-10-17 04:05:57.006 ERROR (qtp1916575798-2477) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Cannot talk to ZooKeeper - Updates are disabled.
[2] 2021-10-17 04:14:37.504 ERROR (qtp1916575798-2325) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: ClusterState says we are the leader (https://192.168.0.17:8985/solr/ranger_audits_shard1_replica_n1), but locally we don't think so. Request came from null
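To assist with the Checks above, a hedged Sketch of inspecting the Collection State via the Collections API follows. The Host & Port are taken from the Error Trace [2]; adjust for your Environment, & add Kerberos/TLS Options as applicable.

```bash
# Compare ZooKeeper's view of the ranger_audits collection (replica states,
# leader flags) with what the local core believes. Host/port taken from [2].
curl -sk "https://192.168.0.17:8985/solr/admin/collections?action=CLUSTERSTATUS&collection=ranger_audits" \
  | python -m json.tool
```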
08-06-2021
08:05 PM
Hello @JB0000000000001 As we haven't heard from your side, We assume the Queries posted by you have been addressed & are marking the Post as Solved. When you have the time, Feel free to share your Observations with respect to studying or implementing HBase on Cloud Storage. Thanks again for sharing your thoughts on Cloudera Community. - Smarak
07-26-2021
01:15 AM
Hello @michalm_ Thanks for using Cloudera Community. While I haven't performed such a Task, I wish to check if you have reviewed the 3rd-Party Script via [1], which uses a User-Defined Time Threshold to find long-running Impala Queries & optionally Kill them as well; a minimal hedged Sketch of the underlying API Calls is shared below. Let us know if it helps. - Smarak [1] https://github.com/onefoursix/kill-long-running-impala-queries
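The Script in [1] drives the Cloudera Manager API; a minimal hedged Equivalent using curl is below. The CM Host, Credentials, Cluster/Service Names, API Version & Query Id are all hypothetical Placeholders, & the Filter Syntax should be confirmed against the CM API Documentation for your Version.

```bash
# Hypothetical sketch using the Cloudera Manager API; every value below
# (host, credentials, cluster, service, API version, query id) is a placeholder.
CM="http://cm-host.example.com:7180/api/v19"
CLUSTER="Cluster1"; SERVICE="impala"

# List currently executing Impala queries (filter syntax per CM API docs).
curl -s -u admin:admin \
  "$CM/clusters/$CLUSTER/services/$SERVICE/impalaQueries?filter=(executing=true)"

# Cancel a specific query using its queryId taken from the listing above.
curl -s -u admin:admin -X POST \
  "$CM/clusters/$CLUSTER/services/$SERVICE/impalaQueries/<queryId>/cancel"
```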