
YARN job fails with "Not able to initialize user directories in any of the configured local directories"

Contributor

I am trying to run a benchmark job with the following command:

yarn jar /path/to/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient-tests.jar TestDFSIO -read -nrFiles 10 -fileSize 1000 -resFile /tmp/TESTDFSio.txt

but the job fails with the following error messages:

16/01/22 15:08:47 INFO mapreduce.Job: Task Id : attempt_1453395961197_0017_m_000008_2, Status : FAILED Application application_1453395961197_0017 initialization failed (exitCode=255) with output: main : command provided 0

main : user is foo

main : requested yarn user is foo

Path /mnt/sdb1/yarn/local/usercache/foo/appcache/application_1453395961197_0017 has permission 700 but needs permission 750.

Path /var/hadoop/yarn/local/usercache/foo/appcache/application_1453395961197_0017 has permission 700 but needs permission 750. Did not create any app directories
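
The flagged directories can be inspected directly on the nodes (paths taken from the error above):

ls -ld /mnt/sdb1/yarn/local/usercache/foo
ls -ld /var/hadoop/yarn/local/usercache/foo
# the per-application directories named in the error are created under appcache by the NodeManager for each job
ls -ld /mnt/sdb1/yarn/local/usercache/foo/appcache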

Even when I change these directories' permissions to 750, I still get errors. Also, these caches don't get cleaned up after a job finishes, which causes collisions when the next job runs. Any insights?

12 REPLIES

Master Mentor

@Anilkumar Panda Can you run service checks for MapReduce2, YARN, and HDFS? You should also restart the YARN service; it will fix the permissions as necessary unless there are other issues, in which case we need to check the umask and the mount options on your disks.
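
To check those, run something like this on each NodeManager node (the grep pattern below is just an example, adjust it to wherever your YARN local dirs live):

# umask for the current shell and for the yarn user (assuming the yarn user has a login shell)
umask
su - yarn -c umask
# mount options (e.g. noexec, nosuid, ro) for the disks that back the YARN local dirs
mount | grep -E '/mnt/sdb1|/var'
cat /etc/fstab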

Contributor

@Artem Ervits The service checks run fine. We have also restarted the services many times, but the issue still persists. The umask value on all nodes is set to 0022.

What mount options should we check?

Master Mentor

@Anilkumar Panda Please paste the directory screenshots and your /etc/fstab.

Master Mentor
@Anilkumar Panda

Try this and see if it helps.

chmod -R 750 /mnt/sdb1/yarn

chmod -R 750 /var/hadoop/yarn
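
Afterwards the top-level directories should show 750 (drwxr-x---); you can verify with:

ls -ld /mnt/sdb1/yarn /var/hadoop/yarn
ls -ld /mnt/sdb1/yarn/local/usercache /var/hadoop/yarn/local/usercache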

Contributor

I have tried that. The issue is that when a new folder is created, the permissions don't apply, so the job starts failing again.

Some cleanup is not happening correctly, but I am unable to locate the issue 😞

Master Mentor

@Anilkumar Panda

Let's try this:

Check yarn.nodemanager.local-dirs.

For user foo, delete everything under usercache for that user on all NodeManager (data) nodes; a sketch of the commands follows the listing below.

[root@phdns02 conf]# ls -l /hadoop/yarn/local/usercache/foo/

total 8

drwxr-xr-x. 2 yarn hadoop 4096 Jan 23 15:06 appcache

drwxr-xr-x. 2 yarn hadoop 4096 Jan 23 14:01 filecache

[root@phdns02 conf]#
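
For example, assuming the standard /etc/hadoop/conf location and the two local dirs from your error output (run this on every node, ideally with the NodeManager stopped):

# confirm which directories the NodeManager uses as local dirs
grep -A1 'yarn.nodemanager.local-dirs' /etc/hadoop/conf/yarn-site.xml

# remove the cached data for user foo under each local dir
rm -rf /mnt/sdb1/yarn/local/usercache/foo/*
rm -rf /var/hadoop/yarn/local/usercache/foo/*

The NodeManager re-initializes these directories the next time an application for that user launches a container.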

Contributor
@Neeraj Sabharwal

Deleting the directory makes the job work once, but afterwards it fails again.

Master Mentor

@Anilkumar Panda Sounds like a bug... Please open a support ticket.

Master Mentor

@Anilkumar Panda See this, it may ring a bell:

http://hortonworks.com/blog/resource-localization-in-yarn-deep-dive/

  • yarn.nodemanager.localizer.cache.cleanup.interval-ms: the interval at which the NodeManager checks the localized file cache and deletes entries that are no longer in use once the cache grows past its target size.
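
If the localized cache is what keeps filling up, this and the related target-size setting are the knobs to check; a quick look at the current values (assuming the standard /etc/hadoop/conf location; if nothing is printed, the defaults from yarn-default.xml apply):

grep -A1 -E 'yarn.nodemanager.localizer.cache.(cleanup.interval-ms|target-size-mb)' /etc/hadoop/conf/yarn-site.xml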