Member since
04-20-2021
17
Posts
3
Kudos Received
2
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2142 | 01-17-2022 05:51 AM
 | 2925 | 01-15-2022 11:32 PM
02-15-2022
11:57 PM
Can you run fsck with the -files -blocks options to get the datanode addresses? hadoop fsck /user/oozie/tmp/test2/workflow.xml -files -blocks Then log in to the datanode and grep for that particular block ID/filename in the datanode log. Also grep for the block ID/filename in the namenode log.
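The steps above can be sketched as a short command sequence. The block ID and the log file paths below are assumptions for illustration; substitute the block ID that fsck actually reports and your cluster's log directory.

```shell
# List the file's blocks and the datanodes holding them
# (run as a user with HDFS superuser/read access).
hadoop fsck /user/oozie/tmp/test2/workflow.xml -files -blocks -locations

# On the datanode reported above, search its log for the block ID.
# blk_1073741825 and the log path are placeholders; adjust for your install.
grep "blk_1073741825" /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log

# Repeat on the namenode log for the same block ID or filename.
grep "workflow.xml" /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log
```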
02-12-2022
10:42 PM
INFO: Exception in thread "main" java.lang.IllegalArgumentException: Required AM memory The above error is about the AM, not the executors, so you need to set the AM memory: spark.yarn.am.memory=2g
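As a hedged example, the config can be passed at submit time like this. The application class and JAR names are placeholders; the 2g value matches the suggestion above. Note that spark.yarn.am.memory only applies in YARN client mode; in cluster mode the driver runs inside the AM, so --driver-memory is the relevant setting instead.

```shell
# Set the YARN Application Master memory for a client-mode Spark job.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --conf spark.yarn.am.memory=2g \
  --class com.example.MyApp \
  myapp.jar
# com.example.MyApp and myapp.jar are placeholders for your application.
```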
02-10-2022
02:39 AM
2 Kudos
Hi, the error is "ldapsearch: command not found". Make sure the ldapsearch command is installed on your node.
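A quick sketch of how to verify and install it; the package name depends on your distribution (these two are the common ones).

```shell
# Check whether ldapsearch is on the PATH.
command -v ldapsearch || echo "ldapsearch not installed"

# Install the OpenLDAP client tools.
sudo yum install -y openldap-clients    # RHEL/CentOS
# sudo apt-get install -y ldap-utils    # Debian/Ubuntu
```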
01-27-2022
03:57 AM
Yes, the config changes are reflected in the services as expected. Can you post the complete error?
01-27-2022
02:22 AM
Ideally, increasing yarn.scheduler.maximum-allocation-mb should solve this. But from your comments I understand that the changes are not being reflected in the YARN service. To confirm, open http://active_rm_hostname:8088/conf, search for yarn.scheduler.maximum-allocation-mb, and check the value. Make sure the client configs are deployed via Ambari, and check the status of the YARN service in Ambari.
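The /conf check above can also be done from a terminal; replace active_rm_hostname with your actual ResourceManager host.

```shell
# Fetch the ResourceManager's effective configuration and look up
# the scheduler's maximum container allocation (value is in MB).
curl -s "http://active_rm_hostname:8088/conf" \
  | grep -A 1 "yarn.scheduler.maximum-allocation-mb"
```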
01-27-2022
01:59 AM
Can you try passing this Spark config to your spark-shell/spark-submit: spark.yarn.am.memory=1g If it resolves your issue, please mark the answer as the accepted solution!
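For example, on spark-shell the config can be passed inline (this setting takes effect in YARN client mode, which is what spark-shell uses):

```shell
# Start spark-shell on YARN with 1g allocated to the Application Master.
spark-shell --master yarn --conf spark.yarn.am.memory=1g
```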
01-26-2022
09:54 PM
Can you check the user limit of the queue and the max AM resource percentage? RM UI -> Scheduler -> expand your queue (take a screenshot and attach it to this case).
01-26-2022
09:37 PM
AM memory comes from the property yarn.app.mapreduce.am.resource.mb. You can set the AM memory by tuning the value of that property. If it resolves your issue, please mark the answer as the accepted solution!
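As a sketch, the property can be overridden per job on the command line for any MapReduce job that uses ToolRunner (the examples JAR, paths, and the 2048 MB value here are illustrative assumptions):

```shell
# Override the MapReduce Application Master container memory (in MB)
# for a single job run, without changing mapred-site.xml.
hadoop jar hadoop-mapreduce-examples.jar wordcount \
  -D yarn.app.mapreduce.am.resource.mb=2048 \
  /input /output
```

Setting it in mapred-site.xml instead makes the change apply cluster-wide to all MapReduce jobs.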
01-18-2022
01:57 AM
I suspect that your datanode block reports are slow. After a namenode restart you are triggering datanode restarts, so it takes time for the datanodes to come back up and send their block reports; during that interval you can expect missing blocks. This should be an intermittent issue, so wait a few more minutes and check the namenode UI. Otherwise, copy the logs from the time of the issue and share them. If it resolves your issue, please mark the answer as the accepted solution!
01-17-2022
07:59 PM
That particular datanode has been excluded from the write operation. Why was it excluded? You need to check the namenode log and the datanode log; you can share the logs to debug further. Also check the namenode UI and its Datanodes link for errors.
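A sketch of the log check, assuming a typical log layout; the datanode hostname and log paths are placeholders for your environment.

```shell
# On the namenode: search for mentions of the excluded datanode around
# the time of the failed write. datanode01.example.com is a placeholder.
grep -i "datanode01.example.com" \
  /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log | tail -50

# On that datanode: look for recent errors (disk failures, timeouts, etc.).
grep -iE "error|exception" \
  /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log | tail -50
```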