Member since: 05-29-2017
Posts: 408
Kudos Received: 123
Solutions: 9
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2787 | 09-01-2017 06:26 AM |
| | 1699 | 05-04-2017 07:09 AM |
| | 1460 | 09-12-2016 05:58 PM |
| | 2069 | 07-22-2016 05:22 AM |
| | 1626 | 07-21-2016 07:50 AM |
05-30-2016 11:29 AM
@Sowmya Ramesh @Benjamin Leonhardi, I found the solution to this issue. Before the upgrade, the value of "oozie.wf.rerun.failnodes" was "false". After upgrading to HDP-2.3.4, the value of "oozie.wf.rerun.failnodes" is "true", so on a rerun only the failed action nodes of the Oozie workflow instance are executed and the successful actions are not rerun. To get the previous behavior back, the following property needs to be set in the properties section of the Process entity.
<property name="oozie.wf.rerun.failnodes" value="false"/>
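For context, here is a minimal sketch of where that property sits inside a Falcon Process entity definition. The process name, cluster, and workflow path are hypothetical placeholders; only the <properties> block is the actual fix from this post, and the surrounding element layout is from memory, so check it against your own entity:

<process name="sample-process" xmlns="uri:falcon:process:0.1">
    <clusters>
        <cluster name="primary-cluster">
            <validity start="2016-01-01T00:00Z" end="2099-12-31T00:00Z"/>
        </cluster>
    </clusters>
    <parallel>1</parallel>
    <order>FIFO</order>
    <frequency>days(1)</frequency>
    <properties>
        <!-- rerun all actions (pre-HDP-2.3.4 behavior), not only the failed ones -->
        <property name="oozie.wf.rerun.failnodes" value="false"/>
    </properties>
    <workflow engine="oozie" path="/apps/sample/workflow.xml"/>
</process>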
05-18-2016 10:49 AM
@Ravi Mutyala: I checked, and as given above we already have this value comma-separated across the disks. So I am not sure what is going wrong and why it fills up even for a single big job.
05-18-2016 06:06 AM
@Jitendra Yadav: We have the following value for the property mentioned above:
yarn.nodemanager.local-dirs=/grid01/hadoop/yarn/log,/grid03/hadoop/yarn/log,/grid04/hadoop/yarn/log,/grid05/hadoop/yarn/log,/grid06/hadoop/yarn/log,/grid07/hadoop/yarn/log,/grid08/hadoop/yarn/log,/grid09/hadoop/yarn/log,/grid10/hadoop/yarn/log
And I could not find any value for hadoop.tmp.dir.
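For reference, this is how those two NodeManager properties are typically laid out in yarn-site.xml. The /yarn/log mount points are copied from the list above (grid05-grid10 omitted for brevity); the /yarn/local paths are only an assumption, to illustrate keeping container scratch space and container logs in separate directories per disk:

<!-- yarn-site.xml: spread container scratch space and logs across all data mounts -->
<property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/grid01/hadoop/yarn/local,/grid03/hadoop/yarn/local,/grid04/hadoop/yarn/local</value>
</property>
<property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/grid01/hadoop/yarn/log,/grid03/hadoop/yarn/log,/grid04/hadoop/yarn/log</value>
</property>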
05-17-2016 05:25 PM
@Jitendra Yadav: Let me explain my issue a little more. We have 52 worker nodes in total, and each node has 100 GB dedicated to /var/log. Users run a very big Hive query (with 20 or more left or right joins), and during a single run it creates around 100 GB of intermediate data across many containers. This is the cause of the issue and it triggers the alerts. Once the job fails or completes, the logs are cleaned up immediately.
05-17-2016 03:35 PM
@Ravi Mutyala: Is there any Hortonworks doc or reference where this recommendation is given?
05-17-2016 03:17 PM
@Jitendra Yadav: Yes, we have allocated 100 GB on all worker nodes, and the data is cleaned up once a job completes or fails. But my concern is that many users run very big queries, and their jobs consume the whole 100 GB or even more. Because of this, jobs are failing.
05-17-2016 01:41 PM
@Ravi Mutyala: Yes, I can do that, but the problem is that whenever one of the configured dirs reaches 100% utilization, we immediately start getting alert emails. And we cannot disable those alerts, because we have to keep monitoring.
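As a side note, and purely an assumption on my part since it was not discussed in this thread: the NodeManager itself also stops using a dir once it crosses a utilization threshold, which is controlled by a separate yarn-site.xml property and may account for some of the job failures when a dir fills up:

<!-- yarn-site.xml: dirs above this utilization are marked unhealthy and skipped (default 90.0) -->
<property>
    <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
    <value>90.0</value>
</property>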
05-17-2016 01:38 PM
Thanks @Jitendra Yadav. I have done it, but even after that it is getting 100% used.
05-17-2016 01:06 PM
Team, there are multiple jobs running on our servers, and while running they create a lot of staging data in the local /var/log/yarn/log dir. I understand this is because of the containers and the yarn.nodemanager.log-dirs property. We have 100 GB for this location, but it still gets full, so is there any way we can redirect it to HDFS? Thanks in advance.
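For what it is worth, a minimal sketch of the usual way to get container logs onto HDFS is YARN log aggregation. One caveat: aggregation only copies logs to HDFS after an application finishes, so it does not cap local usage while a long job is still running; the remote dir below is just an example path:

<!-- yarn-site.xml: copy finished application logs from the local log-dirs to HDFS -->
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>
<property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/app-logs</value>
</property>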
Labels:
- Apache YARN
- HDFS