Member since
07-30-2020
216
Posts
40
Kudos Received
59
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 297 | 09-26-2024 05:30 AM
 | 1002 | 10-26-2023 08:08 AM
 | 1758 | 09-13-2023 06:56 AM
 | 1998 | 08-25-2023 06:04 AM
 | 1464 | 08-17-2023 12:51 AM
07-13-2022
07:35 AM
Hello, The URL-encoded value for `\x` is `%5Cx`, so please try using that in the URL. Is the connection going through Knox? Also, please upload the output of the curl command.
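As a quick sketch of the encoding step: a backslash percent-encodes to `%5C`, so a literal `\x` in a query parameter becomes `%5Cx`. The Knox host, port, and path below are placeholders, not values from this thread:

```shell
# "\" percent-encodes to %5C, so a literal "\x" becomes %5Cx in a URL.
encoded='%5Cx'
# Hypothetical Knox-proxied WebHDFS URL; host, port, and path are placeholders.
url="https://knox-host:8443/gateway/cdp-proxy-api/webhdfs/v1/tmp?filter=${encoded}"
echo "$url"
```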
07-04-2022
06:21 AM
2 Kudos
@stale , It looks like a mismatch between the encryption types in your krb5.conf and those offered by AD is causing this. Please check the two Cloudera articles below to see if they help resolve the issue. https://my.cloudera.com/knowledge/ERRORquotCaused-by-GSSException-Failure-unspecified-at-GSS-API?id=272836 https://my.cloudera.com/knowledge/ErrorquotCaused-by-Failure-unspecified-at-GSS-API-level?id=273436
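For reference, the encryption-type settings involved live in the `[libdefaults]` section of /etc/krb5.conf. The list below is only a sketch with assumed values; the actual list must match the enctypes your AD domain offers:

```ini
[libdefaults]
  # These lists must include enctypes the AD KDC supports (assumed values).
  default_tkt_enctypes = aes256-cts aes128-cts rc4-hmac
  default_tgs_enctypes = aes256-cts aes128-cts rc4-hmac
  permitted_enctypes   = aes256-cts aes128-cts rc4-hmac
```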
06-30-2022
03:41 AM
Hello @Grumash , I believe `user=cdp_svc_fc_03` is the Spark user, which no longer exists, so when you try to move the file into the trash folder under its home directory, the home directory cannot be created and the operation fails. You need to create the home directory as the superuser (hdfs) and then chown it to cdp_svc_fc_03; after that it should work.
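The steps above can be sketched as the following commands, run on a cluster node. The `/user/<name>` home-directory layout and the group name `cdp_svc_fc_03` are assumptions about this cluster:

```shell
# Create the user's HDFS home directory as the superuser (hdfs), then
# hand ownership to the service user. Path and group name are assumptions.
sudo -u hdfs hdfs dfs -mkdir -p /user/cdp_svc_fc_03
sudo -u hdfs hdfs dfs -chown cdp_svc_fc_03:cdp_svc_fc_03 /user/cdp_svc_fc_03
```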
06-30-2022
03:30 AM
1 Kudo
Hello, There is no one-click solution in CDP to disable Kerberos. As you pointed out, disabling Kerberos once it is configured is not recommended. Without Kerberos, Ranger and other services may not work properly, since Kerberos is the core of security in CDP. You can try the community post below to see if it helps. https://community.cloudera.com/t5/Support-Questions/Disabling-Kerberos/m-p/19934#M38077
06-09-2022
10:33 AM
Hi @KiranMagdum , The last failure was noticed on April 28th. If you do not see any permission/access issue for the disk /disk3/dfs/dn in the DataNode log on 10.204.8.11, can you restart this DataNode role and check whether the NameNode UI still reports the volume as failed?
06-09-2022
12:58 AM
Hi @Jessica_cisco, The hbase:meta (system) table is not online, so the HBase Master has not come out of its initialization phase. We need to assign the region for the hbase:meta table, since this table holds the mapping of which region is hosted on which RegionServer; the workflow is failing precisely because hbase:meta is not assigned/online. Assigning the region requires the hbck2 jar (which we get via a support ticket), so the best approach is to open a Cloudera Support ticket. https://docs.cloudera.com/runtime/7.2.10/troubleshooting-hbase/topics/hbase_fix_issues_hbck.html Otherwise (this does not work every time), you can try restarting the RegionServer data-02.novalocal followed by a restart of the HBase Master, to see whether the Master is able to assign the meta table. Regards, Robin
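Once the hbck2 jar is in hand, the assignment step typically looks like the command below. The jar path is a placeholder; `1588230740` is the fixed encoded region name of hbase:meta:

```shell
# Hypothetical jar path; the HBCK2 jar is normally provided by Cloudera Support.
# 1588230740 is the well-known encoded region name of the hbase:meta region.
hbase hbck -j /tmp/hbase-hbck2.jar assigns 1588230740
```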
06-08-2022
03:16 AM
Hi @enirys , If FreeIPA is used to manage DNS, you will need to add the host entries to the DNS records. You can compare against the host entries of the other, working DataNode in FreeIPA. Every node in a Data Lake, Data Hub, or CDP data service should be configured to look up the FreeIPA DNS service for name resolution within the cluster. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/adding-host-entry
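If FreeIPA manages the zone, the missing records can be added with the `ipa` CLI on a FreeIPA-enrolled host. The zone, hostname, and IP below are placeholder assumptions; mirror the entries of a working DataNode:

```shell
# Authenticate as an admin, then add forward (A) and reverse (PTR) records.
# Zone, hostname, and IP address are hypothetical examples.
kinit admin
ipa dnsrecord-add example.internal datanode3 --a-rec=10.0.0.13
ipa dnsrecord-add 0.0.10.in-addr.arpa 13 --ptr-rec=datanode3.example.internal.
```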
06-07-2022
11:25 AM
Hi @enirys , It looks like a DNS resolution issue. Could you check whether it is resolved by following this article? https://my.cloudera.com/knowledge/ERROR-quot-is-not-authorized-for-protocol-interface?id=304462
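A quick way to sanity-check resolution from the affected host is to query the system resolver, which is the same lookup path the Hadoop daemons use. `localhost` below is only a runnable stand-in; substitute the cluster hostname that fails to resolve:

```shell
# Check that the name resolves through the system resolver (NSS);
# an empty result here points to a DNS or /etc/hosts problem.
getent hosts localhost
```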
06-07-2022
11:11 AM
1 Kudo
Hi @Jessica_cisco , As per the screenshot, the RegionServer on this host has failed to start. You can log in to that host and confirm whether a RegionServer process is running:

# ps -ef | grep regionserver

If you don't see any process, try restarting this RegionServer from CM; if it still fails, please check the stderr and the role log of this RegionServer for more clues.
06-07-2022
01:30 AM
Hello @Jessica_cisco , Check whether cloudera-manager.repo is present on this host under /etc/yum.repos.d/. If it is not, copy the repo file from a working node. Running the command below on this host shows the repo from which the agent package will be downloaded:

# yum whatprovides cloudera-manager-agent
cloudera-manager-agent-7.6.1-24046616.el7.x86_64 : The Cloudera Manager Agent
Repo : @cloudera-manager

Once that is confirmed, you can follow the instructions in this doc: https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/installation/topics/cdpdc-manually-install-cm-agent-packages.html