Member since: 10-11-2022
Posts: 83
Kudos Received: 23
Solutions: 7
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 379 | 09-17-2024 08:10 PM |
 | 851 | 07-31-2024 10:04 AM |
 | 551 | 07-28-2024 10:56 PM |
 | 1966 | 06-12-2024 12:40 AM |
 | 1907 | 04-21-2024 11:25 PM |
04-17-2024
01:51 AM
1 Kudo
Hi @snm1523, correct, you have to manually delete those files in HDFS. We are aware of it and are currently working on that issue.
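For reference, the cleanup would look something like this (a rough sketch; the path is only a placeholder for wherever the leftover files sit):
# hdfs dfs -ls <path-to-leftover-files>
# hdfs dfs -rm -r -skipTrash <path-to-leftover-files>
Review the listing before running the delete, since -skipTrash removes the files permanently.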
04-15-2024
09:11 AM
Hi @snm1523, those configs are for spool directories and won't help you here. The audit logs are stored in HDFS through the Solr collections: if your Solr is configured to keep its collections in HDFS, then by default all the auditing ends up in HDFS. You can refer to the doc below to check where and how the Solr collections are stored. https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/security-ranger-auditing/topics/security-ranger-audit-migrating-storage-data-location.html
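One way to verify this (a sketch, assuming the default Solr port 8983 and the default ranger_audits collection name; the exact output depends on your Solr version) is to pull the collection config and look for the HDFS directory factory:
# curl -s "http://<solr-host>:8983/solr/ranger_audits/config" | grep -i HdfsDirectoryFactory
If HdfsDirectoryFactory shows up in the output, the audit index is being stored in HDFS.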
04-15-2024
01:08 AM
1 Kudo
Hi @soumM, can you please check whether both cluster nodes are listed in the /etc/hosts file on each node? We would also need the full error stack to debug this.
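Something along these lines on each node should confirm it (the hostnames are placeholders):
# cat /etc/hosts
# getent hosts <node1-fqdn>
# getent hosts <node2-fqdn>
Both nodes should resolve to the same IP addresses from every host.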
04-07-2024
03:31 AM
1 Kudo
Hi @josr89, you need to look at the NameNode logs for the Ranger plugin sync-up. Basically, every service runs a Ranger policy refresher that syncs the policies from Ranger; it is a pull architecture, so each service pulls the policies from Ranger and stores them in its local cache. Try looking at the NameNode logs and searching for the policy refresher entries; that should give you some idea.
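For example, something like the below on the NameNode host (the log path is an assumption and depends on your deployment; the refresher messages come from Ranger's PolicyRefresher class):
# grep -i "PolicyRefresher" /var/log/hadoop-hdfs/*NAMENODE*.log*
The timestamps there tell you when the plugin last pulled the policies and whether any pull failed.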
04-07-2024
02:45 AM
1 Kudo
Hi @schrippe, can you please run the ldapsearch command against this particular OU, "OU=Zentral,OU=Gruppen,DC=bk,DC=datev,DC=de", and check whether your missing group shows up there? It could be that the group sits at a different OU level. Since this OU is your Group Search Base config, run ldapsearch against it and verify the output.
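A sketch of such a query (the server and bind DN are placeholders for your environment; -W prompts for the bind password):
# ldapsearch -H ldap://<ad-server> -D "<bind-dn>" -W -b "OU=Zentral,OU=Gruppen,DC=bk,DC=datev,DC=de" "(objectClass=group)" cn
If the missing group is not in the output, it most likely lives under a different OU and the search base needs to be widened.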
04-07-2024
02:40 AM
1 Kudo
Hi @Juanes, this might also be coming from the DNS side. Check the agent logs for any DNS test failures, and try an nslookup from another node against this node's IP address and then its FQDN to see whether the DNS server is resolving the hostname and IP address correctly. As @upadhyayk04 suggested, try the # hostname -f or # hostnamectl commands to get the correct FQDN. # nslookup <IP-address>/FQDN - if this does not resolve correctly, then you may need to check what is happening on the DNS server.
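To illustrate, the forward and reverse lookups should agree with each other (the values are placeholders):
# hostname -f
# nslookup <this-node-fqdn>
# nslookup <this-node-ip-address>
If the two lookups return different hostname/IP pairs, the DNS record (or a stale /etc/hosts entry) is the likely culprit.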
04-06-2024
01:31 AM
1 Kudo
Hi @yagoaparecidoti, please try the below API call with the view parameter set to FULL_WITH_HEALTH_CHECK_EXPLANATION. Change the cluster name, host IP, API version, etc. to match your environment; this should show all the health tests running on the cluster.
# curl -X GET "http://10.129.116.234:7180/api/v54/clusters/xsczx/services?view=FULL_WITH_HEALTH_CHECK_EXPLANATION" -H "accept: application/json"
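If you only want the health-check section of the response, a rough sketch of filtering it (assuming jq is available and the API user is passed with -u):
# curl -s -u <admin-user> "http://10.129.116.234:7180/api/v54/clusters/xsczx/services?view=FULL_WITH_HEALTH_CHECK_EXPLANATION" | jq '.items[] | {name, healthChecks}'
The jq expression keeps just the service name and its health checks from each item.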
03-26-2024
11:57 AM
Hi @datafiber, it seems like your NameNode is in safe mode. I am not sure why it went into safe mode, but you can try taking it out manually, then retry the operation and monitor the logs. Run the below commands from the NameNode host.
# hdfs dfsadmin -safemode leave
# hdfs dfsadmin -safemode get
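Before forcing it out, it is also worth checking why the NameNode entered safe mode; missing or under-replicated blocks are the usual cause. A quick, rough check:
# hdfs fsck / | grep -iE "missing|corrupt|under.replicated"
If that shows missing blocks, leaving safe mode alone will not fix the underlying problem.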
03-26-2024
11:51 AM
Hi @user2024, I don't think the canary file is going to cause this issue. The blocks that are corrupt/missing are now lost and cannot be recovered. You can identify the affected files with the command below and manually delete them, then run the HDFS balancer so that the NameNode balances the new blocks across the cluster. # hdfs fsck / -list-corruptfileblocks You can also refer to the below article. https://stackoverflow.com/questions/19205057/how-to-fix-corrupt-hdfs-files
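Once you have reviewed that list, a rough sketch of the cleanup (double-check the output first, because -delete removes the corrupted files permanently):
# hdfs fsck / -list-corruptfileblocks
# hdfs fsck / -delete
# hdfs balancer
The balancer step matches the suggestion above and simply spreads the remaining blocks more evenly across the DataNodes.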
03-20-2024
10:54 PM
1 Kudo
Hi @josr89, you can grant WRITE access for user "userA" on the below path under the "cm_hdfs" repository in Ranger, wait for the plugins to sync, and then rerun the operation. path: /apps/hbase/data/staging
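If you prefer to script it rather than use the Ranger UI, a rough sketch against the Ranger public REST API (assuming the default admin port 6080 and admin credentials; the policy name is just a placeholder):
# curl -u admin -X POST "http://<ranger-host>:6080/service/public/v2/api/policy" -H "Content-Type: application/json" -d '{"service":"cm_hdfs","name":"hbase-staging-write-userA","resources":{"path":{"values":["/apps/hbase/data/staging"],"isRecursive":true}},"policyItems":[{"users":["userA"],"accesses":[{"type":"write","isAllowed":true}]}]}'
The plugins will pick the new policy up on their next refresh cycle.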