Member since: 02-02-2016
Posts: 583
Kudos Received: 518
Solutions: 98

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3407 | 09-16-2016 11:56 AM
 | 1445 | 09-13-2016 08:47 PM
 | 5776 | 09-06-2016 11:00 AM
 | 3366 | 08-05-2016 11:51 AM
 | 5415 | 08-03-2016 02:58 PM
05-18-2016
05:19 PM
1 Kudo
@Avraha Zilberma Try setting the zookeeper.znode.parent property to match your cluster configuration; it should help. conf.set("zookeeper.znode.parent", "VALUE") Thanks
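For reference, a minimal sketch of where that setting goes in an HBase Java client, assuming the HBase 1.x client API; the ZooKeeper quorum hosts, the /hbase-unsecure znode, and the table name are placeholders that must match your cluster:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ZnodeParentExample {
    public static void main(String[] args) throws Exception {
        // Start from any hbase-site.xml on the classpath, then override explicitly.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com,zk3.example.com"); // placeholder hosts
        // Must match the znode HBase registers under on your cluster, e.g. /hbase-unsecure on unsecured HDP.
        conf.set("zookeeper.znode.parent", "/hbase-unsecure");

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            // Simple sanity check that the client can reach the cluster (placeholder table name).
            System.out.println("my_table exists: " + admin.tableExists(TableName.valueOf("my_table")));
        }
    }
}
```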
05-18-2016
11:15 AM
@Saurabh Kumar Then I can only think of increasing the yarn.nodemanager.log-dirs capacity by adding multiple mount points. But I still suspect that something else is also occupying the space.
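As a sketch, in yarn-site.xml (or the equivalent field in Ambari) that would look like the snippet below; the mount points are placeholders for real disks on each NodeManager:

```xml
<!-- yarn-site.xml: comma-separated list, one entry per mount point (paths are placeholders) -->
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>/grid/0/hadoop/yarn/log,/grid/1/hadoop/yarn/log,/grid/2/hadoop/yarn/log</value>
</property>
```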
05-18-2016
11:08 AM
@Artem Ervits How can this be an answer? It doesn't provide any info on Spark SQL.
05-17-2016
08:12 PM
4 Kudos
@Saumil Mayani
Please try setting the parameters below and see if that fixes the issue.
export HADOOP_USER_CLASSPATH_FIRST=true
export HADOOP_CLASSPATH=/full-jar-path/xyz.jar
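A hedged shell sketch of the same idea; /full-jar-path/xyz.jar is the placeholder from above, and the last line just confirms the jar now shows up on the effective classpath before the job is launched:

```sh
# Make the user-supplied jar win over the bundled Hadoop classes (placeholder jar path).
export HADOOP_USER_CLASSPATH_FIRST=true
export HADOOP_CLASSPATH=/full-jar-path/xyz.jar

# Verify the jar is picked up.
hadoop classpath | tr ':' '\n' | grep xyz.jar
```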
05-17-2016
07:30 PM
@Saurabh Kumar Can you please share the values of the parameters below? yarn.nodemanager.local-dirs, hadoop.tmp.dir
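If it helps, a quick way to pull those values on a node, assuming the usual /etc/hadoop/conf location and Ambari-style formatting where the name and value elements sit on consecutive lines; adjust the paths if your layout differs:

```sh
# Print the configured values (file locations are typical HDP defaults).
grep -A1 'yarn.nodemanager.local-dirs' /etc/hadoop/conf/yarn-site.xml
grep -A1 'hadoop.tmp.dir' /etc/hadoop/conf/core-site.xml
```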
05-17-2016
03:40 PM
OK, so if I roughly calculate size vs. number of jobs, i.e. if each job generates 100 MB of logs on each node, then you could have up to 1000 jobs running at the same time. Is that the case? Otherwise:
1. Either some other log is occupying space in the same partition.
2. Or the YARN job logs are not getting cleaned up fast enough.
3. Or you have a big cluster where you are running hundreds of jobs with some extra debugging enabled. If this is the case, then you need to reorganize the logging configuration and consider increasing/adding space in the yarn.nodemanager.log-dirs partition.
Can you share the disk usage of the parent directories in that 100 GB partition? @Saurabh Kumar
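Something along these lines would show where the space is actually going; /hadoop/yarn/log is only an example path, so substitute whatever yarn.nodemanager.log-dirs points to on your nodes:

```sh
# Overall usage of the partition holding the YARN log dir (example path).
df -h /hadoop/yarn/log

# Per-directory breakdown one level up, largest last, to spot what else shares the partition.
du -sh /hadoop/yarn/* 2>/dev/null | sort -h
```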
05-17-2016
03:05 PM
@Mark Thorson Can you please set the property below in your workflow and see if that works? oozie.use.system.libpath=true
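A sketch of where that property typically lives, in the job.properties submitted with the workflow; the nameNode/jobTracker values and the application path are placeholders:

```
# job.properties (host names and paths are placeholders)
nameNode=hdfs://<namenode>:8020
jobTracker=<resourcemanager>:8050
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/${user.name}/my-workflow
```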
05-17-2016
02:30 PM
@Mark Thorson Please run the command below and see if it resolves your issue. Reference doc HERE.
/usr/hdp/current/oozie/bin/oozie-setup.sh sharelib create -fs hdfs://<namenode>:8020
To verify that the sharelibs extracted correctly, run the following command:
oozie admin -oozie http://<oozie server host address>:11000/oozie -shareliblist
05-17-2016
01:48 PM
@Saurabh Kumar Oh, so you mean you have 100 GB dedicated on each NM node for YARN logs, YARN log aggregation is also enabled, and you are still facing this issue with the local log dir? I think we need to check why that location is not getting cleared after job completion; maybe something else is occupying the space?
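A couple of hedged checks that usually narrow this down; the config path and log dir below are typical HDP defaults and may differ on your cluster:

```sh
# Is anything deliberately delaying cleanup of local container logs/dirs?
grep -A1 -E 'yarn.log-aggregation-enable|yarn.nodemanager.delete.debug-delay-sec' /etc/hadoop/conf/yarn-site.xml

# What is actually holding the space in the local log dir? (example path)
du -sh /hadoop/yarn/log/* 2>/dev/null | sort -h | tail -20
```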