Member since
01-16-2018
553
Posts
37
Kudos Received
91
Solutions
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 95 | 03-10-2023 07:36 AM |
 | 65 | 03-10-2023 07:17 AM |
 | 83 | 02-28-2023 09:04 PM |
 | 65 | 02-28-2023 08:53 PM |
 | 64 | 02-28-2023 08:43 PM |
12-07-2021
09:58 PM
Hello @AWlodarczyk, Thanks for using Cloudera Community. The link [1] covers the minimum requirement for both rows in their respective families: the Azul JDK 8 family (8.56.0.21 and above) and the Azul JDK 11 family (11.50.19 and above). These minimums don't apply to the Azul JDK 13, 15, and 17 families [2] for now. In [3], your team can see the JDK and the respective JDK family version being referred to. Let us know if the above answers your queries. Regards, Smarak [1] https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/release-guide/topics/cdpdc-java-requirements.html? [2] https://www.azul.com/downloads/?package=jdk [3] https://supportmatrix.cloudera.com/
12-07-2021
01:21 PM
Hello @lbourgeois, Apologies for the delayed response, and thank you for the details. Internally, I could reproduce "HTTP ERROR 403 Forbidden" by removing the DEAdmin and DEUser roles from the Environment associated with the CDE Service for the user whose [User]:[Pass] is being passed. Once those privileges were added back to the user at the Environment level and the "Synchronize Users" operation completed successfully, the token was available (wait ~5 minutes before retrying the curl command). Kindly review and let us know if the above step works for you. Regards, Smarak
12-07-2021
12:59 PM
Hello @rootuser, Thanks for using Cloudera Community. Based on the post, you are using CopyTable to copy HBase table(s) from one cluster to another, and only one mapper is being observed. As far as I recall, CopyTable runs one mapper per region, so it is likely the source table has only one region. Please confirm: whether the source table has a single region; whether CopyTable on a table with more than one region (say, 5 regions) creates 1 mapper or 5 mappers; the HBase version your team is using; and the exact timeout being observed. If the source table does have a single region, increasing the region count via pre-splitting, or increasing the timeout, should help. Regards, Smarak
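The checks above can be sketched as below; the table name, column family, split points, and ZooKeeper quorum are placeholders, not values from the post:

```shell
# CopyTable launches one MapReduce mapper per region of the source
# table, so a single-region table yields a single mapper.
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --peer.adr=dest-zk1,dest-zk2,dest-zk3:2181:/hbase \
  source_table

# Pre-splitting spreads the data (and CopyTable's mappers) across
# several regions; this creates a table with 5 regions:
hbase shell <<'EOF'
create 'source_table_presplit', 'cf', SPLITS => ['2', '4', '6', '8']
EOF
```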
12-01-2021
12:46 AM
Hello @RafaelDiaz, We hope the post was helpful for you and are marking it as Resolved. If your team continues to face the HiveAccessControlException issue, do update the post and we can check accordingly. Regards, Smarak
11-30-2021
08:19 AM
Hello @lbourgeois, Thanks for using Cloudera Community. Based on the post, you are following [1] to get a CDE API access token and the command just hangs. In short, you entered the workload password after calling your environment's CDE base URL followed by the KnoxToken endpoint: curl -u <Your-Workload-User> <Your-CDE-Base-URL>/gateway/authtkn/knoxtoken/api/v1/token. Kindly confirm whether the behavior is consistent across all CDE Services and all users, whether the Data Lake (FreeIPA and IDBroker) is up and running, and the CDE version being used. Regards, Smarak [1] https://docs.cloudera.com/data-engineering/cloud/api-access/topics/cde-api-get-access-token.html
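As a rough sketch, the token request can be scripted as below; the workload user is a placeholder, and the assumption that the Knox response JSON carries an "access_token" field should be verified against [1]:

```shell
# -s silences curl's progress output; curl prompts for the workload
# password. jq extracts the token field (assumed name) from the JSON.
curl -s -u my-workload-user \
  "<Your-CDE-Base-URL>/gateway/authtkn/knoxtoken/api/v1/token" \
  | jq -r '.access_token'
```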
11-16-2021
10:36 PM
Hello @xgxshtc, Thanks for the update. If you try to access the WAL file (for which the DFSOutputStream reports premature EOF) via "hdfs dfs -cat" or "hdfs dfs -head", does the command run successfully? The FSCK output can also be modified to include "-openforwrite" to show the details of the 10 files currently open. Regards, Smarak
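The two checks can be run as below; the WAL path components are placeholders for the file reported in the DFSOutputStream error:

```shell
# Can the WAL file be read end-to-end at all?
hdfs dfs -cat /hbase/WALs/<regionserver-dir>/<wal-file> > /dev/null

# List files currently open for write, with their block details:
hdfs fsck /hbase/WALs -openforwrite -files -blocks
```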
11-15-2021
12:05 AM
Hello @xgxshtc, Thanks for the update. By "sideline", we meant ensuring the HBase WAL directory "/hbase/WALs" is empty. Let us know if the below steps help: stop the HBase RegionServers; sideline the WAL directory contents, i.e. there shouldn't be any directories within "/hbase/WALs"; restart the HBase RegionServers. Additionally, you reinstalled the cluster and still observed the same issue, which likely indicates the HDFS state is unhealthy. Any chance you can run HDFS FSCK on the HBase WAL directory [1] to confirm whether the blocks associated with the HBase WAL files are healthy? Regards, Smarak [1] https://hadoop.apache.org/docs/r1.2.1/commands_manual.html#fsck
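The sideline steps can be sketched as below; the sideline directory is a path of our choosing, and the move must only be run with the RegionServers stopped:

```shell
# 1. Stop all HBase RegionServers first (via Cloudera Manager/Ambari).

# 2. Move the WAL contents aside rather than deleting them:
hdfs dfs -mkdir -p /hbase/WALs-sidelined
hdfs dfs -mv '/hbase/WALs/*' /hbase/WALs-sidelined/

# 3. Confirm the directory is empty, then restart the RegionServers:
hdfs dfs -ls /hbase/WALs
```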
11-13-2021
06:51 AM
Hello @xgxshtc, We observed you have posted the concerned ask in a new post [1], as this post is ~4 years old. While the current post is unresolved, we shall wait for your team's review on [1] before confirming the solution on the current post as well. Regards, Smarak [1] https://community.cloudera.com/t5/Support-Questions/Hbase-regionserver-shutdown-after-few-hours/m-p/330070/highlight/false#M230589
11-13-2021
06:46 AM
Hello @xgxshtc, Thanks for using Cloudera Community. Based on the post, the RegionServer is reporting an EOFException | "Bad DataNode" while replaying WALs from "/hbase/WALs", which suggests the underlying HDFS blocks have issues. To fix this, your team can sideline the contents of "/hbase/WALs" (specifically "/hbase/WALs/jfhbase03,60020,1636456250380") and restart the concerned RegionServer. If all RegionServers are impacted, sideline each of the RegionServer directories under "/hbase/WALs". Note that the WALs hold edits not yet persisted to disk, so sidelining the WAL directories (one WAL directory per RegionServer) may incur data loss. Additionally, review the HDFS FSCK output for "/hbase/WALs" and fix any corrupt or missing blocks. After ensuring the FSCK report for "/hbase/WALs" is healthy, your team can restart the HBase RegionServers. Regards, Smarak
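A sketch of the check and the per-RegionServer sideline step; the sideline target directory is illustrative, and the commands should run only with the RegionServer stopped:

```shell
# Check block health of the WAL files:
hdfs fsck /hbase/WALs -files -blocks -locations

# Sideline only the affected RegionServer's WAL directory:
hdfs dfs -mkdir -p /hbase/WALs-sidelined
hdfs dfs -mv /hbase/WALs/jfhbase03,60020,1636456250380 \
  /hbase/WALs-sidelined/
```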
11-11-2021
09:18 AM
Hello @Faizan_Ali, Thanks for using Cloudera Community. Based on the post, the ZooKeeper service is failing to start on CDH 5.16, and the screenshot covers the "stdout" file. Kindly share the "stderr" file or the role log file so we can review the issue with the ZooKeeper service. If your team has already solved the post, kindly share the issue and the steps performed to mitigate it; your assistance would help fellow Community users. Regards, Smarak
11-11-2021
09:14 AM
Hello @drgenious, Thanks for using Cloudera Community. We hope the response by @balajip was helpful. Additionally, we wish to share a few details. Your question is essentially "how to make the query faster". Impala parallelises a query into fragments executed across the executors, so the first review should be the Impala query profile of the SQL, to identify the time taken in each phase of execution; refer to [1] and [2] for links on reading query profiles. Once the phase taking the most time is identified, fine-tune accordingly. Simply adding executor daemons or using a dedicated coordinator may not help unless the SQL's slow fragments are identified first. Kindly review and let us know if you have any further asks on the post. Regards, Smarak [1] https://cloudera.ericlin.me/2018/09/impala-query-profile-explained-part-1/ [2] https://docs.cloudera.com/runtime/7.2.10/impala-reference/topics/impala-profile.html
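A minimal sketch of pulling a query profile from impala-shell; the coordinator host and table name are hypothetical:

```shell
# PROFILE prints the runtime profile of the most recent query,
# including per-fragment and per-phase timings.
impala-shell -i coordinator-host:21000 \
  -q "SELECT count(*) FROM my_table; PROFILE;"
```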
11-10-2021
09:53 PM
Hello @yacine_, Thanks for sharing the solution along with the root cause. We shall mark the post as Solved so fellow Community users can use the solution as well. Regards, Smarak
11-10-2021
09:48 PM
Hello @vishal6196, Thanks for using Cloudera Community. Based on the post, the Phoenix DELETE command reports more records deleted than SELECT COUNT(*) returns. Thank you for sharing the HDP and Phoenix versions as well. A few queries for your team: is the observation made for all tables or only selective table(s); what is the EXPLAIN output of the SELECT and the DELETE SQL; does the Phoenix table being deleted from have any indexes; and does performing a major compaction on the table before deleting (just as a sanity check) change the number of rows deleted by the DELETE SQL? Regards, Smarak
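The EXPLAIN outputs requested above can be gathered via sqlline; the client path, ZooKeeper host, and table name are placeholders:

```shell
# EXPLAIN shows the plan for each statement (e.g. whether an index
# table is involved), which often explains row-count mismatches.
/usr/hdp/current/phoenix-client/bin/sqlline.py zk-host:2181 <<'EOF'
EXPLAIN SELECT COUNT(*) FROM MY_TABLE;
EXPLAIN DELETE FROM MY_TABLE;
EOF
```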
11-10-2021
09:38 PM
Hello @RafaelDiaz, Thanks for using Cloudera Community. Based on the post, creating the table via Hue works, yet it fails in R with the "dbExecute" function. While I haven't tried the "dbExecute" function in R, the error shows a HiveAccessControlException denying the user on "default/iris". You haven't stated the CDH/CDP version being used, but I assume your team is using CDP with Ranger. First, kindly check whether passing the database name "cld_ml_bi_eng" with the table name "iris" in the "dbExecute" call works; since I haven't used "dbExecute" in R, let us know whether passing <DBName.TableName> is feasible. Secondly, kindly check in Ranger the "CREATE" privilege for user "RDIAZ"; try widening the database scope to "*" and confirm which database the table is created in ("default" or "cld_ml_bi_eng"). Accordingly, we can proceed further. Kindly review and share the outcome of the above two suggestions. If your team has already fixed the issue, we would appreciate your sharing the details for fellow Community users as well. Regards, Smarak
11-01-2021
12:43 PM
Hello @SteffenMangold, Kindly let us know if the post was helpful for your team. If yes, please mark the post as Solved. If not, do share the outcome of the command shared in our post; accordingly, we shall review and get back to you. Regards, Smarak
11-01-2021
12:40 PM
Hello @Ma_FF, Kindly let us know if the post was helpful in identifying the RegionServer thread spiking the CPU usage of the RegionServer JVM. If yes, please mark the post as Solved. Regards, Smarak
11-01-2021
12:39 PM
Hello @paresh, Thanks for using Cloudera Community. Based on the post, the HBase service is impacted, per the trace and the attempted solutions you shared. While complete logs would help, it appears the replay of region "1595e783b53d99cd5eef43b6debb2682" via "recovered.wals" is being interrupted by an EOFException. When a region is opened, any contents in "recovered.wals" are replayed by reading them and pushing them to the MemStore; once the edits are persisted from the MemStore to an HFile, the "recovered.wals" files are removed. Accepting the possibility of data loss, you may attempt: stop HBase > sideline the recovered edits "hdfs://ha-cluster:8020/hbase/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals" and the MasterProcWALs (to avoid any replay of the associated PID) > start HBase > verify the status. Note the potential data loss comes from removing recovered-edits files whose contents were not yet persisted. Alternatively, you may use WALPlayer [1] to replay the contents of the recovered edits. Regards, Smarak [1] https://hbase.apache.org/book.html#walplayer
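If replay is preferred over accepting data loss, WALPlayer can be pointed at the sidelined directory; the sideline path and table name below are illustrative:

```shell
# WALPlayer replays the edits found in the given WAL/recovered-edits
# directory into the named table(s).
hbase org.apache.hadoop.hbase.mapreduce.WALPlayer \
  hdfs://ha-cluster:8020/sidelined/recovered.wals my_table
```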
10-23-2021
08:14 AM
Hello @amit_hadoop, Thanks for using the Cloudera Community. It appears you are using post [1] to ask the same question. We have replied to you on that post, so to avoid duplication, we shall mark this post as Closed. Regards, Smarak [1] https://community.cloudera.com/t5/Support-Questions/org-apache-hadoop-hbase-ipc-ServerNotRunningYetException/m-p/328376#M230250
10-23-2021
08:09 AM
Hello @Malvin, We are marking the post as Solved, as RegionServer grouping is the only way to segregate region assignment at the RegionServer level. If your team encounters any issues, feel free to update the post and we shall get back to you accordingly. Thank you again for using Cloudera Community. Regards, Smarak
10-23-2021
08:04 AM
Hello @rahuledavalath, We are marking the post as Solved. If your team has any further concerns, or faces any issues with either of the recommendations shared (migrate the System tables, or create a Phoenix table afresh on top of the HBase table), please update the post. Thank you for using Cloudera Community. Regards, Smarak
10-23-2021
07:54 AM
Hello @Rjkoop, As stated by @willx, visibility labels aren't supported, and Will has shared the link. As such, we are marking the post as Solved. Having said that, your team may post any further concerns here; we shall review and get back to you accordingly. Thanks for using Cloudera Community. Regards, Smarak
10-23-2021
07:50 AM
1 Kudo
Hello @Ma_FF, Thanks for using Cloudera Community. Based on the post, one RegionServer is using high CPU. As requested by @PrathapKumar, review the same. Additionally, your team can perform the below: (I) when the RegionServer JVM reports high CPU, open "top" for the RegionServer PID; (II) use "Shift+H" to switch to the thread view of the PID, which shows the per-thread CPU usage within the RegionServer JVM; (III) monitor the thread view and identify the thread hitting the maximum CPU usage; (IV) take a thread dump (jstack) of the RegionServer PID and match the hottest thread from the "top" thread view against it. This process lets you identify the thread contributing to the CPU usage; compare it with another RegionServer, and your team can make a conclusive call on the cause of the CPU utilisation. However the logs are reviewed, narrowing the focus of the JVM review will help identify the cause. Review the shared links for additional reference. Kindly review and share your observations in the post. Regards, Smarak [1] https://www.infoworld.com/article/3336222/java-challengers-6-thread-behavior-in-the-jvm.html [2] https://blogs.manageengine.com/application-performance-2/appmanager/2011/02/09/identify-java-code-consuming-high-cpu-in-linux-linking-jvm-thread-and-linux-pid.html [3] https://blog.jamesdbloom.com/JVMInternals.html
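Steps (I)-(IV) can be sketched as below; <RS_PID>, <TID>, and <hex> are placeholders to fill in as you go:

```shell
# (I)   Per-process view of the RegionServer JVM:
top -p <RS_PID>
# (II)  Press Shift+H inside top (or start with -H) for per-thread view:
top -H -p <RS_PID>
# (III) Convert the hottest thread's TID to hex; jstack prints native
#       thread ids as hex in the "nid=" field:
printf '%x\n' <TID>
# (IV)  Take the thread dump and locate the matching thread:
jstack <RS_PID> > rs_threads.txt
grep "nid=0x<hex>" rs_threads.txt
```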
10-21-2021
10:32 AM
Hello @Sam7, We hope the SQL shared by @BennyZ helped your team meet the use case. As such, we are marking the post as Solved. Regards, Smarak
10-21-2021
06:18 AM
1 Kudo
Hello @Sayed016, Thanks for the response. In short, your team identified GC pauses during the concerned period, wherein the Solr service was losing connectivity with ZooKeeper. Your team increased zkClientTimeout to 30 seconds, and now a leader is elected for the collection. In other words, the Solr service was impacted owing to GC pauses of the Solr JVM, and your team has addressed the same. Thank you for sharing your experience on the Community; it will help fellow Community users as well. Let us know if we are good to close the post as Solved. Regards, Smarak
10-21-2021
12:22 AM
1 Kudo
Hello @Malvin, Thanks for using Cloudera Community. Based on the post, you wish to have RegionServers hold different numbers of regions depending on the hardware spec of the host. Note that the RegionServer JVM runs with a heap allocation that is expected to be uniform across the RegionServers. You may consider using RegionServer grouping [1] to ensure regions from tables with large region counts are mapped to a few specific RegionServers. The link covers implementation details and use cases as well. Regards, Smarak [1] https://hbase.apache.org/book.html#rsgroup
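A sketch of the rsgroup workflow in the HBase shell; the group, host, and table names are placeholders, and the server port should match your RegionServer port:

```shell
# Create a group, move a server into it, then pin a table to it so
# that table's regions are only assigned within the group.
hbase shell <<'EOF'
add_rsgroup 'big_tables'
move_servers_rsgroup 'big_tables', ['host1.example.com:16020']
move_tables_rsgroup 'big_tables', ['large_table']
EOF
```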
10-21-2021
12:16 AM
1 Kudo
Hello @amit_hadoop, Thanks for using Cloudera Community. Based on the post, you installed HDP Sandbox 3.0 and observed Timeline Service v2 Reader reporting [1]. Note the port mentioned: it says "17020", while an HBase RegionServer runs on port "16020" by default. The YARN Timeline Service uses an embedded HBase service, which is not the HBase service you see in the Ambari UI. Kindly let us know the outcome of the below steps: whether restarting the YARN Timeline Service (which restarts the embedded HBase service) helps; within the YARN logs, there should be embedded HBase logs for the HMaster and RegionServer, and reviewing them may offer additional clues into the embedded HBase concerns; review [2] for a similar concern wherein the ZNode was cleared and the service restarted; review [3] for documentation on deleting the ZNode and data. Restarting the YARN Timeline Service should then initialise the HBase setup afresh. Regards, Smarak [1] Error trace: 2021-09-29 05:47:58,587 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=6, retries=36, started=5385 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server sandbox-hdp.hortonworks.com,17020,1632894462010 is not running yet details=row 'prod.timelineservice.entity' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=sandbox-hdp.hortonworks.com,17020,1628801419880, seqNum=-1 [2] https://community.cloudera.com/t5/Support-Questions/Yarn-Timeline-Service-V2-not-starting/td-p/281368 [3] https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/data-operating-system/content/remove_ats_hbase_before_switching_between_clusters.html
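A sketch of the ZNode cleanup, assuming an unsecured sandbox; verify the exact znode name against the documentation in [3] before deleting anything:

```shell
# Stop YARN Timeline Service first, then remove the embedded
# ATS-HBase znode so the service re-initialises on restart:
zookeeper-client -server sandbox-hdp.hortonworks.com:2181 rmr /atsv2-hbase-unsecure
```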
10-21-2021
12:01 AM
1 Kudo
Hello @rahuledavalath, Thanks for using Cloudera Community. As suggested by @9een, kindly use the article [1] created by @willx for migrating the tables from HDP to CDP. In short, your team migrated the table, yet Phoenix relies on its "System" tables just as HBase relies on "hbase:meta". As such, your team needs to either use [1] to export the Phoenix "System" tables, or follow [2] and create a Phoenix table on top of the exported HBase table. Either way, we are ensuring the Phoenix "System" tables reflect the table's metadata as well. Regards, Smarak [1] https://community.cloudera.com/t5/Community-Articles/Phoenix-tables-migration-from-HDP-to-CDP/ta-p/323933 [2] https://phoenix.apache.org/faq.html#How_I_map_Phoenix_table_to_an_existing_HBase_table
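The approach in [2] amounts to declaring Phoenix metadata over the existing HBase table; the client path, ZooKeeper host, table, column family, and column names below are hypothetical:

```shell
# A Phoenix VIEW mapped onto an existing HBase table populates the
# Phoenix SYSTEM tables with that table's metadata.
/usr/hdp/current/phoenix-client/bin/sqlline.py zk-host:2181 <<'EOF'
CREATE VIEW "my_hbase_table" (
  pk VARCHAR PRIMARY KEY,
  "cf"."col1" VARCHAR
);
EOF
```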
10-20-2021
11:50 PM
Hello @SteffenMangold, Thanks for using Cloudera Community. Based on the post, your team wishes to assign a region (which isn't assigned to any RegionServer) without an HBase restart. We wish to confirm whether your team tried the following approaches: the HBCK2 "assigns" command documented in [1], which takes the region ID as an argument and returns a PID that can be tracked via the HMaster UI "Locks & Procedures" page or "list_procedures" in the HBase shell for success/failure; and, within the HBase shell, the "assign" or "move" commands, the latter allowing a destination RegionServer to be specified. Your team may have tried these already, yet their outcome would confirm whether there is an underlying concern with the region requiring additional intervention, such as an HBase service restart (or an HMaster restart, which spawns a fresh AssignmentManager thread). Regards, Smarak [1] https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2#running-hbck2
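Both approaches can be sketched as below; the jar path, encoded region name, and server name are placeholders:

```shell
# HBCK2: returns a procedure ID trackable under "Locks & Procedures":
hbase hbck -j /path/to/hbase-hbck2.jar assigns <encoded-region-name>

# HBase shell: assign, or move to a specific RegionServer
# (server name format is host,port,startcode):
hbase shell <<'EOF'
assign '<encoded-region-name>'
move '<encoded-region-name>', 'host1.example.com,16020,1600000000000'
EOF
```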
10-20-2021
11:39 PM
1 Kudo
Hello @Sayed016, Thanks for using Cloudera Community. Based on the post, your team is experiencing exception [1] for RangerAudits shard1 replica1, followed by a successful connect to ZooKeeper, eventually failing with [2]. Since the logs show RangerAudits shard1 has no other replica, it's feasible the issue arises from consistency concerns between Solr and ZooKeeper. There are a few things we wish to verify with your assistance: when Solr reports [1], whether the ZooKeeper quorum is healthy, or whether there are issues with the ZooKeeper server the Solr ZooKeeper client is connected to; what is observed after [2] for RangerAudits shard1 replica1, i.e. whether the replica enters the "Down" state as opposed to "Active"; whether restarting the Solr service on the host where the "ranger_audits_shard1_replica_n1" core is hosted mitigates the "ClusterState says we are the leader, but locally we don't think so" error; and the HDP/CDH/CDP version being discussed in the post. Regards, Smarak [1] 2021-10-17 04:05:57.006 ERROR (qtp1916575798-2477) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Cannot talk to ZooKeeper - Updates are disabled. [2] 2021-10-17 04:14:37.504 ERROR (qtp1916575798-2325) [c:ranger_audits s:shard1 r:core_node2 x:ranger_audits_shard1_replica_n1] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: ClusterState says we are the leader (https://192.168.0.17:8985/solr/ranger_audits_shard1_replica_n1), but locally we don't think so. Request came from null
08-06-2021
08:05 PM
Hello @JB0000000000001, As we haven't heard from your side, we assume the queries you posted have been addressed, and we are marking the post as Solved. When you have the time, feel free to share your observations from studying or implementing HBase on cloud storage. Thanks again for sharing your thoughts on Cloudera Community. - Smarak