Member since: 07-30-2020
Posts: 219
Kudos Received: 45
Solutions: 60
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 429 | 11-20-2024 11:11 PM |
| | 486 | 09-26-2024 05:30 AM |
| | 1081 | 10-26-2023 08:08 AM |
| | 1852 | 09-13-2023 06:56 AM |
| | 2126 | 08-25-2023 06:04 AM |
07-21-2022
07:24 AM
2 Kudos
Hello @loridigia , It seems the outage caused multiple ServerCrashProcedures to be created for the Region servers. The dead Region servers with the same names are different instances of the Region servers with different epoch timestamps. As the HBase Master was also down, it may not have been able to process the expiration of those Region servers. You might see some crash procedures waiting to finish under the "Procedures & Locks" section of the Active HBase Master Web UI. Since you have already solved a similar issue in the past involving ZooKeeper, you can try this:
1. Stop HBase.
2. Log in to ZooKeeper using hbase zkcli (with a valid hbase ticket).
3. Delete the /hbase-secure znode: rmr /hbase-secure
4. Sideline the entries under the HDFS dir: hdfs dfs -mv /hbase/MasterProcWALs/* /tmp (not sure if this was done earlier).
5. Start HBase.
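As a runbook, the steps above might look like the following shell session. This is a sketch, not a verified procedure: the keytab path is an example, and steps 1 and 5 (stop/start HBase) are done through your cluster tooling, not shown here. Run it only with HBase fully stopped.

```shell
# 1. Stop HBase first (via Cloudera Manager or your cluster tooling).

# 2. Obtain a valid hbase ticket (keytab path is an example, adjust to your env).
kinit -kt /etc/security/keytabs/hbase.keytab hbase/$(hostname -f)

# 3. Open the HBase ZooKeeper CLI and remove the /hbase-secure znode.
hbase zkcli
#   ... then, inside the zkcli prompt:
#   rmr /hbase-secure
#   quit

# 4. Sideline the Master procedure WALs so stale ServerCrashProcedures
#    are not replayed on startup.
hdfs dfs -mv /hbase/MasterProcWALs/* /tmp

# 5. Start HBase and watch "Procedures & Locks" in the Active Master UI.
```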
07-18-2022
02:00 AM
Hello @KPG1 , The time taken to mark a Datanode as stale is given by dfs.namenode.stale.datanode.interval, with a default of 30 seconds (the value itself is set in milliseconds). If this is happening with a specific Datanode, check whether there are network issues between the Datanode and the Namenode, or whether the Datanode has JVM pauses reported in its logs. As a stopgap, you can bump up the above parameter until the underlying problem is solved.
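For reference, the property lives in hdfs-site.xml and is expressed in milliseconds; a minimal sketch of the stopgap (the 60-second value here is only illustrative):

```xml
<property>
  <name>dfs.namenode.stale.datanode.interval</name>
  <!-- Default is 30000 ms (30 s); raised to 60 s as a temporary band-aid
       until the network/JVM-pause issue is fixed. -->
  <value>60000</value>
</property>
```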
07-13-2022
10:55 AM
Hello, Based on the test above, I suspect you are hitting HBASE-21852, which is still unresolved upstream.
07-13-2022
07:35 AM
Hello, The encoded value for \x would be %5Cx, so can you try using that in the URL? Is it connecting via Knox? Also, please upload the curl command output.
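As a quick sanity check that the backslash percent-encodes to %5C, here is a minimal shell illustration (the literal \x is just the example from the post):

```shell
# Percent-encode the backslash in '\x' so it is safe inside a URL path.
raw='\x'
encoded=$(printf '%s' "$raw" | sed 's/\\/%5C/g')
echo "$encoded"    # prints: %5Cx
```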
07-04-2022
06:21 AM
2 Kudos
@stale , It looks like a mismatch between the encryption types in your krb5.conf and those supported by AD is causing this. Check the two Cloudera articles below to see if they help resolve the issue. https://my.cloudera.com/knowledge/ERRORquotCaused-by-GSSException-Failure-unspecified-at-GSS-API?id=272836 https://my.cloudera.com/knowledge/ErrorquotCaused-by-Failure-unspecified-at-GSS-API-level?id=273436
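The mismatch typically shows up in the [libdefaults] enctype settings of /etc/krb5.conf. A hedged sketch of aligning them with AD (AES types, which most current AD deployments support; confirm against your domain controllers before applying):

```ini
[libdefaults]
  # Advertise only encryption types the AD domain controllers actually support.
  default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
  default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
  permitted_enctypes   = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
```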
06-30-2022
03:41 AM
Hello @Grumash , I believe user=cdp_svc_fc_03 is the Spark user, whose home dir no longer exists. So when you try to move the file into the trash folder in the home dir, it fails to create the home dir. You need to create the home dir as the superuser (hdfs), then chown it to cdp_svc_fc_03; then it should work.
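Sketched as shell commands run by the HDFS superuser (the /user/<name> home-dir path and the group name follow common convention; adjust for your cluster):

```shell
# As the hdfs superuser: recreate the missing home directory...
hdfs dfs -mkdir -p /user/cdp_svc_fc_03
# ...then hand ownership to the service account so the Trash move succeeds.
hdfs dfs -chown cdp_svc_fc_03:cdp_svc_fc_03 /user/cdp_svc_fc_03
```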
06-30-2022
03:30 AM
1 Kudo
Hello, There is no one-click solution in CDP to disable Kerberos. As you pointed out, it is not recommended to disable Kerberos once it is configured: without it, Ranger and other services may not work properly, as Kerberos is the core of CDP security. You can follow the community post below to check if it helps. https://community.cloudera.com/t5/Support-Questions/Disabling-Kerberos/m-p/19934#M38077
06-09-2022
10:33 AM
Hi @KiranMagdum , The last failure noticed was on April 28th. So if you are not seeing any permission/access issues for the disk /disk3/dfs/dn in the Datanode log of 10.204.8.11, can you restart this Datanode role and check whether the Namenode UI still reports the volume as failed?
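One hedged way to scan the Datanode log for the access issue before restarting (the log path is a common CDP default and may differ on your host):

```shell
# Look for recent I/O or permission errors against the reported volume
# in the Datanode role logs on 10.204.8.11.
grep -iE '/disk3/dfs/dn|volume.*fail' /var/log/hadoop-hdfs/*DATANODE*.log* | tail -20
```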
06-09-2022
12:58 AM
Hi @Jessica_cisco , The hbase:meta (system) table is not online, so the HBase Master has not come out of its initialisation phase. We need to assign the region for the hbase:meta table, as this table contains the mapping of which region is hosted on which Region server; the workflow is failing because this table is not assigned/online. Assigning the region requires the hbck2 jar (obtained via a support ticket), so the best path is to open a Cloudera Support ticket. https://docs.cloudera.com/runtime/7.2.10/troubleshooting-hbase/topics/hbase_fix_issues_hbck.html Alternatively (this does not work every time), you can restart the Region server data-02.novalocal followed by the HBase Master, to see if the Master is then able to assign the meta table. Regards, Robin
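If you do obtain the HBCK2 jar from support, assigning the meta region typically looks like the one-liner below (1588230740 is the well-known encoded region name of hbase:meta; the jar path is an example):

```shell
# Assign the hbase:meta region with the HBCK2 tool.
hbase hbck -j /path/to/hbase-hbck2.jar assigns 1588230740
```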
06-08-2022
03:16 AM
Hi @enirys , You will need to add the host entries to the DNS records if FreeIPA is used to manage DNS. You can compare against the host entries for the other, working Datanode in FreeIPA. Every node in a Data Lake, Data Hub, and CDP data service should be configured to look up the FreeIPA DNS service for name resolution within the cluster. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/adding-host-entry
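When FreeIPA manages DNS, adding the missing records can be sketched with the ipa CLI. The zone, hostname, and IP below are placeholders, not values from the post:

```shell
# Forward A record for the new Datanode host (zone, name, IP are examples).
ipa dnsrecord-add example.internal datanode-05 --a-rec=10.0.0.15

# Optional reverse PTR record, if a matching reverse zone exists.
ipa dnsrecord-add 0.0.10.in-addr.arpa 15 --ptr-rec=datanode-05.example.internal.
```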