Member since: 09-20-2018
Posts: 354
Kudos Received: 0
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 2415 | 05-14-2019 10:47 AM |
10-11-2019 04:17 AM

Hi,

We understand that all the required properties are enabled, and you can adjust the interval and max-age values to suit your requirements. Also, could you tell us how old the files that are not being deleted are? If they are very old, you may need to manually delete the .inprogress files from that location.

Thanks,
AKR
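Given the mention of interval, max-age, and .inprogress files, the properties in question are most likely the Spark History Server event-log cleaner settings (an assumption, since the original thread is not shown). A minimal sketch for spark-defaults.conf might look like this:

```properties
# Hypothetical values; tune the retention window to your requirements.
spark.history.fs.cleaner.enabled=true
# How often the cleaner scans the event-log directory
spark.history.fs.cleaner.interval=1d
# Event logs older than this are removed on the next scan
spark.history.fs.cleaner.maxAge=7d
```

Note that .inprogress files left behind by applications that crashed before completing may not be cleaned up automatically, which is why a manual delete can still be needed.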
10-11-2019 03:59 AM

Hi,

Have you tried taking a backup of the folder /var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state, deleting all of its contents, and restarting the affected NodeManager? Please share an update.

Thanks,
AKR
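The steps above can be sketched as the following untested administrative commands, assuming the state path from the post matches yarn.nodemanager.recovery.dir on the affected host; run them as a user with write access to that directory:

```shell
# NodeManager recovery state directory (from the post; verify against
# yarn.nodemanager.recovery.dir on your host)
STATE_DIR=/var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state
BACKUP="${STATE_DIR}.bak-$(date +%Y%m%d%H%M%S)"

# 1. Back up the recovery state before deleting anything
cp -a "$STATE_DIR" "$BACKUP"

# 2. Clear the state directory contents
rm -rf "${STATE_DIR:?}"/*

# 3. Restart the affected NodeManager. The exact command is
#    distribution-specific (e.g. via Cloudera Manager, or on a
#    systemd host something like:
#    systemctl restart hadoop-yarn-nodemanager)
```

Keeping the backup alongside the original makes it easy to restore the state if the restart does not resolve the issue.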
10-11-2019 03:09 AM

Hi,

Are you still getting the same error even after increasing the overhead memory? Could you please share the error messages you see after increasing the overhead, executor, and driver memory?

Thanks,
AKR
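For reference, the settings being discussed would typically be raised together in spark-defaults.conf (or via --conf on spark-submit). The sizes below are illustrative assumptions, not recommendations:

```properties
# Hypothetical sizes; raise these from your current settings and retest.
spark.driver.memory=4g
spark.executor.memory=8g
# Off-heap headroom per container; on Spark < 2.3 these were named
# spark.yarn.driver.memoryOverhead / spark.yarn.executor.memoryOverhead
# (values in MiB).
spark.driver.memoryOverhead=1g
spark.executor.memoryOverhead=2g
```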
09-08-2019 07:22 AM

Hi,

Could you please share the entire application log so we can analyse this further?

Thanks,
AKR
09-08-2019 06:58 AM

Hi,

Can you share the error message from the Spark History Server logs from when the Spark history UI crashes?

Thanks,
AKR
09-05-2019 08:37 AM

Hi,

Exit code 143 can happen for multiple reasons, one of which is memory/GC pressure. Your default mapper/reducer memory settings may not be sufficient to process a large data set, so try setting higher AM, map, and reducer memory when a large YARN job is invoked. For more details, please refer to this link: https://stackoverflow.com/questions/42972908/container-killed-by-the-applicationmaster-exit-code-is-...

Thanks,
AKR
09-05-2019 08:34 AM

Hi,

Exit code 143 is related to memory/GC issues. Your default mapper/reducer memory settings may not be sufficient to process a large data set, so try setting higher AM, map, and reducer memory when a large YARN job is invoked. For more details, please refer to this link: https://stackoverflow.com/questions/42972908/container-killed-by-the-applicationmaster-exit-code-is-143

Thanks,
AKR
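The AM, map, and reducer memory settings mentioned above are standard MapReduce properties. A sketch with assumed sizes (the values here are placeholders, and the JVM -Xmx should stay around 80% of the container size):

```properties
# Hypothetical sizes; can go in mapred-site.xml or be passed per job
# with -D on the command line.
yarn.app.mapreduce.am.resource.mb=2048
mapreduce.map.memory.mb=4096
mapreduce.map.java.opts=-Xmx3276m
mapreduce.reduce.memory.mb=8192
mapreduce.reduce.java.opts=-Xmx6553m
```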
09-04-2019 10:31 AM

Hi,

Check the total number of applications in the application history path. If the number of files is large, try increasing the heap size and see whether that helps. Also check the Spark History Server logs for any errors.

Thanks,
AKR
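One way to raise the History Server heap, assuming a standalone Spark install (on a Cloudera cluster this would instead be set through the management UI), is via spark-env.sh; the size here is an assumed example:

```shell
# In spark-env.sh on the History Server host.
# SPARK_DAEMON_MEMORY sets the heap for Spark daemons, including the
# History Server (default is 1g).
export SPARK_DAEMON_MEMORY=4g
```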
09-03-2019 11:19 AM

Hi,

We need to review the ResourceManager logs for any errors. We also need to check the ResourceManager web UI for the per-queue resource and memory utilization of the submitted jobs.

Thanks,
AKR
09-03-2019 11:09 AM

Hi,

To view the Spark logs of a completed application, run the command below:

yarn logs -applicationId application_xxxxxxxxxxxxx_yyyyyy -appOwner <userowner> > application_xxxxxxxxxxxxx_yyyyyy.log

Thanks,
AKR