Member since: 08-29-2018
Posts: 133
Kudos Received: 3
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 4479 | 11-14-2019 02:54 AM |
|  | 13770 | 11-05-2019 07:51 PM |
11-05-2019
08:21 PM
Hi @wert_1311, Thanks for your response, and I appreciate you confirming the solution. I'm glad it helped you 🙂
11-05-2019
07:51 PM
Hi @wert_1311, There is an option to stop just a single NodeManager (NM) and clean the usercache there, so no other applications on the cluster are affected. However, it is worth keeping in mind that even stopping a single NodeManager has some effect on currently running jobs: the containers running on that NM will be stopped and restarted on another NM, so those jobs will run longer than expected because their containers have to start again elsewhere. A rough sketch of the per-node procedure is below. Hope this helps.
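A minimal sketch, assuming the NodeManager is started with yarn-daemon.sh rather than managed by Cloudera Manager (on a CM-managed cluster you would stop/start the NodeManager role from the CM UI instead); the hostname and local-dirs path are placeholders:

```bash
# Stop a single NodeManager, clean its usercache, and bring it back online.
# Assumes yarn-daemon.sh is on the remote user's PATH; hostname and the
# yarn.nodemanager.local-dirs path below are placeholders.
NM_HOST=worker01.example.com        # hypothetical NodeManager host
LOCAL_DIRS=/data1/yarn/nm           # whatever yarn.nodemanager.local-dirs points to

ssh "$NM_HOST" 'yarn-daemon.sh stop nodemanager'      # stop only this NM
ssh "$NM_HOST" "rm -rf ${LOCAL_DIRS}/usercache/*"     # clean its usercache
ssh "$NM_HOST" 'yarn-daemon.sh start nodemanager'     # bring it back online
```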
11-01-2019
02:50 AM
Hi @wert_1311, Thanks for asking. Currently, yarn.nodemanager.localizer.cache.target-size-mb and yarn.nodemanager.localizer.cache.cleanup.interval-ms trigger the deletion service only for non-running containers. For containers that are still running and spilling data to ${yarn.nodemanager.local-dirs}/usercache/<user>/appcache/<app_id>, the deletion service does not come into action; as a result, the filesystem fills up, nodes are marked unhealthy, and the application gets stuck. I suggest you refer to a community article [1] which describes something similar. The upstream JIRA [YARN-4540] [2] has this documented and is still unresolved. The general recommendation is to make that filesystem big enough and, if it still gets full, debug the job that writes too much data into it. It is also OK to delete the usercache directory. Use the following steps (expanded in the sketch below):
1. Stop the YARN service.
2. Log in to all nodes and delete the content of the usercache directories, for example: for i in `cat list_of_nodes_in_cluster`; do ssh $i rm -rf /data?/yarn/nm/usercache/* ; done
3. Verify all usercache directories on all nodes are empty.
4. Start the YARN service.
Please let us know if this is helpful.
[1] https://community.cloudera.com/t5/Support-Questions/yarn-usercache-folder-became-with-huge-size/td-p/178648
[2] https://issues.apache.org/jira/browse/YARN-4540
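A slightly expanded sketch of steps 2 and 3, assuming a plain-text file list_of_nodes_in_cluster with one hostname per line and the /data?/yarn/nm/usercache layout used in this thread; run it only while the YARN service is stopped:

```bash
# Wipe and then verify the usercache on every node in the list.
# ssh -n keeps ssh from consuming the node list on stdin.
while read -r node; do
  echo "Cleaning usercache on ${node}"
  ssh -n "$node" 'rm -rf /data?/yarn/nm/usercache/*'          # step 2: delete contents
  ssh -n "$node" 'find /data?/yarn/nm/usercache -mindepth 1'  # step 3: prints nothing if empty
done < list_of_nodes_in_cluster
```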
10-31-2019
10:56 AM
Hey, I think deleting container logs may be a good option to save space. However, if you would like to grab the YARN logs to analyse old jobs, you will need those container logs, and such analysis is typically only required when a job fails. So, if you think those jobs will not be dug into again for historic insights, feel free to clear them.
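If you do want to keep a job's logs around for later analysis before clearing them, here is a small sketch, assuming YARN log aggregation is enabled (yarn.log-aggregation-enable=true); the application ID is a placeholder:

```bash
APP_ID=application_1572000000000_0001                   # hypothetical application ID
yarn logs -applicationId "$APP_ID" > "${APP_ID}.log"    # dump the aggregated logs to a file
gzip "${APP_ID}.log"                                    # compress before archiving
```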
10-24-2019
09:00 AM
Hey, Can you share the configuration value for "hadoop_authorized_users"? Is it left at the default value, or has it been modified?
10-22-2019
03:33 AM
Hey @axk , Thanks for letting us know. I'm glad it was helpful 🙂
10-21-2019
11:27 AM
1 Kudo
Hey, Can you review whether you have configured the HBase Service dependency (in the Hive service) [1]? I have come across scenarios where, if that dependency is not configured, errors such as [2] can occur. [1] https://docs.cloudera.com/documentation/enterprise/5-16-x/topics/cdh_ig_hive_hbase.html [2] org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the locations
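As a quick sanity check (my own sketch, not part of the linked guide), you can confirm from the HiveServer2 host that HBase itself is reachable; if this already fails, Hive-on-HBase queries will hit the same RetriesExhaustedException:

```bash
# Run on the HiveServer2 host as a user with an HBase gateway/client configuration.
echo "status" | hbase shell    # should print region server status, not connection errors
```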
10-15-2019
07:48 AM
1 Kudo
Hey, I just came across this link [1], which discusses the NiFi configuration needed to run Apache NiFi behind an AWS load balancer. Hope it is useful. [1] https://everymansravings.wordpress.com/2018/07/27/apache-nifi-behind-an-aws-load-balancer-w-minifi/
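As a starting point (my own sketch, not a summary of the linked post), these are the nifi.properties entries I would check first on a NiFi 1.5+ install sitting behind a load balancer; the conf path is a placeholder:

```bash
NIFI_PROPS=/opt/nifi/conf/nifi.properties    # hypothetical install path

# nifi.web.proxy.host should list the load balancer DNS name (and port) when
# NiFi is accessed through a proxy; the http host/port are what the LB targets.
grep -E '^nifi\.web\.(proxy\.host|http\.host|http\.port)=' "$NIFI_PROPS"
```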
08-26-2019
06:20 AM
I have faced the same error and these steps have worked for me.
08-19-2019
08:29 AM
Hey Sankar, Can you tell me whether this user had permissions before and you have now reinstated access for the test-user, or whether this is the first time you are granting him access? Thanks, Thina