Member since
04-20-2021
17
Posts
3
Kudos Received
2
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2069 | 01-17-2022 05:51 AM
 | 2780 | 01-15-2022 11:32 PM
02-15-2022
11:57 PM
Can you run fsck with the -blocks option to get the datanode address?

hadoop fsck /user/oozie/tmp/test2/workflow.xml -files -blocks

Log in to the datanode and grep for that particular block ID/filename in the datanode log. Also grep for the block ID/filename in the namenode log.
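On a live cluster, the steps above might look like the following sketch (the block ID and log paths are placeholders; adjust them to your install):

```shell
# List the file's blocks and the datanodes holding them (path from the post above)
hadoop fsck /user/oozie/tmp/test2/workflow.xml -files -blocks -locations

# On the reported datanode host, grep its log for the block ID printed by fsck
# (blk_... and the log path below are illustrative, not from the original post)
grep "blk_1073741825" /var/log/hadoop-hdfs/hadoop-hdfs-datanode-*.log

# Likewise, grep the namenode log for the block ID or filename
grep "workflow.xml" /var/log/hadoop-hdfs/hadoop-hdfs-namenode-*.log
```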
02-12-2022
10:42 PM
INFO: Exception in thread "main" java.lang.IllegalArgumentException: Required AM memory

The above error is for the AM, not for the executors, so you need to set the AM memory: spark.yarn.am.memory=2g
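A sketch of how that setting can be passed on the command line (the application class and jar below are placeholders, not from the original post):

```shell
# Raise the YARN application master memory for a client-mode Spark job;
# spark.executor.memory controls the executors separately
spark-submit \
  --master yarn \
  --deploy-mode client \
  --conf spark.yarn.am.memory=2g \
  --class com.example.MyApp \
  myapp.jar
```

Note that spark.yarn.am.memory only applies in client mode; in cluster mode the AM runs the driver, so its memory is set via spark.driver.memory instead.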
02-10-2022
02:48 PM
1 Kudo
You need to install the openldap-clients Linux package, which includes the ldapsearch tool: yum install openldap-clients. You should also pay attention to this documentation while you are enabling Kerberos: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cm_sg_intro_kerb.html#xd_583c10bfdbd326ba--6eed2fb8-14349d04bee--76dd
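For example, after installing the package, a quick connectivity check might look like this (the server URI, bind DN, and base DN are placeholders for your environment):

```shell
# Install the LDAP client tools, which include ldapsearch
yum install -y openldap-clients

# Simple-bind query to verify the LDAP server is reachable and the bind works;
# -W prompts for the bind password interactively
ldapsearch -x -H ldap://ldap.example.com \
  -D "cn=admin,dc=example,dc=com" -W \
  -b "dc=example,dc=com" "(uid=testuser)"
```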
01-26-2022
09:54 PM
Can you check the user limit of the queue and the maximum AM resource percentage? RM UI -> Scheduler -> expand your queue (take a screenshot and attach it to this case).
01-25-2022
04:25 AM
Hi, it was able to access the SCHEMA_VERSION table and get the record from the table again. I solved the problem by downgrading the DB version. Regards
01-18-2022
01:57 AM
I suspect that your datanodes' block reports are slow. After a namenode restart you also trigger datanode restarts, so it takes time for the datanodes to come back up and send their reports; during that interval you can expect missing blocks. This is an intermittent issue, so wait a few more minutes and check the namenode UI. If the issue persists, copy the logs from the time of the issue and share them. Make sure to mark the answer as the accepted solution if it resolves your issue!
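To confirm whether the block reports have caught up, a couple of standard checks can help (run as the HDFS superuser):

```shell
# Show how many datanodes have re-registered and are reporting in,
# plus per-node capacity and last-contact times
hdfs dfsadmin -report

# Summarize filesystem health; the trailing summary shows missing and
# under-replicated block counts, which should shrink as reports arrive
hdfs fsck / | tail -n 20
```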
01-18-2022
12:14 AM
Hi @Meepoljd, glad to know that your issue was fixed. Can you please accept @Amithsha's response as a solution? It will make it easier for others to find the answer in the future.
01-17-2022
07:59 PM
This particular datanode has been excluded from the write operation. Why was it excluded? You need to check the namenode log and the datanode log; you can share the logs to debug further. Also check the namenode UI and the Datanodes link for errors.
01-16-2022
12:02 AM
Running Hadoop hosts with different OS versions is not recommended, because a newer OS version may ship with a newer Java version, and that can cause inconsistencies: since all Hadoop processes run in isolated JVM environments, you can see performance differences (improvement or degradation) at both the Java and OS level. You can still run a cluster with mixed OS versions, but make sure each OS is supported: https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_os_requirements.html#os_requirements Make sure to mark the answer as the accepted solution if it resolves your issue!
01-15-2022
11:32 PM
Hi, YARN graceful decommission will wait for running jobs to complete. You can pass a timeout value so that YARN starts the decommission after x seconds; if no jobs are running within those x seconds, YARN starts the decommission immediately, without waiting for the timeout to expire.

CM -> Clusters -> YARN -> Configuration -> in the search bar, find yarn.resourcemanager.nodemanager-graceful-decommission-timeout-secs. Set the value, save the configuration, and restart to deploy the configs.

To decommission one or more specific hosts: CM -> Clusters -> YARN -> Instances (select the hosts you want to decommission) -> Actions for selected hosts -> Decommission.

If you want to decommission all the roles of a host, follow this doc: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cm_mc_host_maint.html#decomm_host

Make sure to mark the answer as the accepted solution if it resolves your issue!
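For reference, on a cluster managed outside CM the same graceful decommission can be triggered from the command line; this is a sketch, assuming the target nodes are already listed in the ResourceManager's exclude file and that your Hadoop version supports the -g flag:

```shell
# Graceful decommission with a 600-second timeout (example value);
# -client keeps polling from the client side until the nodes drain or time out
yarn rmadmin -refreshNodes -g 600 -client
```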