<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question org.apache.hadoop.util.DiskChecker$DiskErrorException(No space available in any of the local directories.) in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/org-apache-hadoop-util-DiskChecker-DiskErrorException-No/m-p/361437#M238583</link>
    <description>&lt;P&gt;This error has been reported earlier:&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.cloudera.com/t5/Support-Questions/org-apache-hadoop-util-DiskChecker-DiskErrorException-No/m-p/361337#M238564" target="_blank"&gt;https://community.cloudera.com/t5/Support-Questions/org-apache-hadoop-util-DiskChecker-DiskErrorException-No/m-p/361337#M238564&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have a 3-node Hadoop cluster for my research on medical side effects.&lt;/P&gt;&lt;P&gt;Each node runs Ubuntu 18.04.5 LTS.&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;CDH 6.3.4-1.cdh6.3.4.p0.6751098&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Basically, I have not been able to run any MR jobs or Hive queries/jobs.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;A plain Hive select * works, of course, because select * does not launch an MR job. However, any query that does launch a job fails, even on a Hive table with just 2 columns and 4 rows.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;hive -e "show create table beatles"&lt;BR /&gt;&lt;BR /&gt;CREATE EXTERNAL TABLE `beatles`(&lt;BR /&gt;`id` int,&lt;BR /&gt;`name` string)&lt;BR /&gt;COMMENT 'Beatles Group'&lt;BR /&gt;ROW FORMAT SERDE&lt;BR /&gt;'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'&lt;BR /&gt;WITH SERDEPROPERTIES (&lt;BR /&gt;'field.delim'=',',&lt;BR /&gt;'line.delim'='\n',&lt;BR /&gt;'serialization.format'=',')&lt;BR /&gt;STORED AS INPUTFORMAT&lt;BR /&gt;'org.apache.hadoop.mapred.TextInputFormat'&lt;BR /&gt;OUTPUTFORMAT&lt;BR /&gt;'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'&lt;BR /&gt;LOCATION&lt;BR /&gt;'hdfs-//hp8300one:8020/data/demo'&lt;BR /&gt;TBLPROPERTIES (&lt;BR /&gt;'transient_lastDdlTime'='1673760903')&lt;BR /&gt;Time taken: 1.711 seconds, Fetched: 18 row(s)&lt;BR /&gt;&lt;BR /&gt;&lt;/PRE&gt;&lt;PRE&gt;hive -e "select * from beatles"&lt;BR /&gt;OK&lt;BR /&gt;1 john&lt;BR /&gt;2 paul&lt;BR /&gt;3 george&lt;BR /&gt;4 
ringo&lt;BR /&gt;Time taken: 1.983 seconds, Fetched: 4 row(s)&lt;/PRE&gt;&lt;PRE&gt;hive -e "select * from beatles where id &amp;gt; 0"&lt;BR /&gt;WARNING: Use "yarn jar" to launch YARN applications.&lt;BR /&gt;&lt;BR /&gt;Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-6.3.4-1.cdh6.3.4.p0.6751098/jars/hive-common-2.1.1-cdh6.3.4.jar!/hive-log4j2.properties Async: false&lt;BR /&gt;Query ID = sanjay_20230116101202_b81b4798-e6e9-485f-9689-db33f6b313ec&lt;BR /&gt;Total jobs = 1&lt;BR /&gt;Launching Job 1 out of 1&lt;BR /&gt;Number of reduce tasks is set to 0 since there's no reduce operator&lt;BR /&gt;23/01/16 10:12:05 INFO client.RMProxy: Connecting to ResourceManager at hp8300one/10.0.0.3:8032&lt;BR /&gt;23/01/16 10:12:05 INFO client.RMProxy: Connecting to ResourceManager at hp8300one/10.0.0.3:8032&lt;BR /&gt;Starting Job = job_1673779576753_0003, Tracking URL = http://hp8300one:8088/proxy/application_1673779576753_0003/&lt;BR /&gt;Kill Command = /opt/cloudera/parcels/CDH-6.3.4-1.cdh6.3.4.p0.6751098/lib/hadoop/bin/hadoop job -kill job_1673779576753_0003&lt;BR /&gt;Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0&lt;BR /&gt;2023-01-16 10:12:10,441 Stage-1 map = 0%, reduce = 0%&lt;BR /&gt;Ended Job = job_1673779576753_0003 with errors&lt;BR /&gt;Error during job, obtaining debugging information...&lt;BR /&gt;FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask&lt;BR /&gt;MapReduce Jobs Launched:&lt;BR /&gt;Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 HDFS EC Read: 0 FAIL&lt;BR /&gt;Total MapReduce CPU Time Spent: 0 msec&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Error in the Logs&lt;/STRONG&gt;&lt;/P&gt;&lt;PRE&gt;2023-01-15 02:47:37,463 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1673779576753_0001 transitioned from INITING to RUNNING
2023-01-15 02:47:37,467 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1673779576753_0001_01_000001 transitioned from NEW to LOCALIZING
2023-01-15 02:47:37,467 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1673779576753_0001
2023-01-15 02:47:37,476 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Downloading public resource: { hdfs://hp8300one:8020/user/yarn/mapreduce/mr-framework/3.0.0-cdh6.3.4-mr-framework.tar.gz, 1672446065301, ARCHIVE, null }
2023-01-15 02:47:37,479 ERROR org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Local path for public localization is not found.  May be disks failed.
org.apache.hadoop.util.DiskChecker$DiskErrorException: No space available in any of the local directories.
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:400)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:152)
	at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:589)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$PublicLocalizer.addResource(ResourceLocalizationService.java:883)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker.handle(ResourceLocalizationService.java:781)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker.handle(ResourceLocalizationService.java:723)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
	at java.lang.Thread.run(Thread.java:750)
2023-01-15 02:47:37,479 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1673779576753_0001_01_000001
2023-01-15 02:47:37,481 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Localizer failed for container_1673779576753_0001_01_000001
org.apache.hadoop.util.DiskChecker$DiskErrorException: No space available in any of the local directories.
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:400)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:152)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:133)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:117)
	at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:584)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1205)
2023-01-15 02:47:37,481 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1673779576753_0001_01_000001 transitioned from LOCALIZING to LOCALIZATION_FAILED
2023-01-15 02:47:37,482 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalResourcesTrackerImpl: Container container_1673779576753_0001_01_000001 sent RELEASE event on a resource request { hdfs://hp8300one:8020/user/yarn/mapreduce/mr-framework/3.0.0-cdh6.3.4-mr-framework.tar.gz, 1672446065301, ARCHIVE, null } not present in cache.
2023-01-15 02:47:37,482 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=sanjay	OPERATION=Container Finished - Failed	TARGET=ContainerImpl	RESULT=FAILURE	DESCRIPTION=Container failed with state: LOCALIZATION_FAILED	APPID=application_1673779576753_0001	CONTAINERID=container_1673779576753_0001_01_000001
2023-01-15 02:47:37,483 WARN org.apache.hadoop.util.concurrent.ExecutorHelper: Execution exception when running task in DeletionService #1
2023-01-15 02:47:37,483 WARN org.apache.hadoop.util.concurrent.ExecutorHelper: Caught exception in thread DeletionService #1: 
java.lang.NullPointerException: path cannot be null
	at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
	at org.apache.hadoop.fs.FileContext.fixRelativePart(FileContext.java:270)
	at org.apache.hadoop.fs.FileContext.delete(FileContext.java:768)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.deletion.task.FileDeletionTask.run(FileDeletionTask.java:109)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
2023-01-15 02:47:37,489 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1673779576753_0001_01_000001 transitioned from LOCALIZATION_FAILED to DONE&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Mon, 16 Jan 2023 18:14:47 GMT</pubDate>
    <dc:creator>sanjaysubs</dc:creator>
    <dc:date>2023-01-16T18:14:47Z</dc:date>
    <item>
      <title>org.apache.hadoop.util.DiskChecker$DiskErrorException(No space available in any of the local directories.)</title>
      <link>https://community.cloudera.com/t5/Support-Questions/org-apache-hadoop-util-DiskChecker-DiskErrorException-No/m-p/361437#M238583</link>
      <description>&lt;P&gt;This error has been reported earlier:&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.cloudera.com/t5/Support-Questions/org-apache-hadoop-util-DiskChecker-DiskErrorException-No/m-p/361337#M238564" target="_blank"&gt;https://community.cloudera.com/t5/Support-Questions/org-apache-hadoop-util-DiskChecker-DiskErrorException-No/m-p/361337#M238564&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have a 3-node Hadoop cluster for my research on medical side effects.&lt;/P&gt;&lt;P&gt;Each node runs Ubuntu 18.04.5 LTS.&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;CDH 6.3.4-1.cdh6.3.4.p0.6751098&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Basically, I have not been able to run any MR jobs or Hive queries/jobs.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;A plain Hive select * works, of course, because select * does not launch an MR job. However, any query that does launch a job fails, even on a Hive table with just 2 columns and 4 rows.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;hive -e "show create table beatles"&lt;BR /&gt;&lt;BR /&gt;CREATE EXTERNAL TABLE `beatles`(&lt;BR /&gt;`id` int,&lt;BR /&gt;`name` string)&lt;BR /&gt;COMMENT 'Beatles Group'&lt;BR /&gt;ROW FORMAT SERDE&lt;BR /&gt;'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'&lt;BR /&gt;WITH SERDEPROPERTIES (&lt;BR /&gt;'field.delim'=',',&lt;BR /&gt;'line.delim'='\n',&lt;BR /&gt;'serialization.format'=',')&lt;BR /&gt;STORED AS INPUTFORMAT&lt;BR /&gt;'org.apache.hadoop.mapred.TextInputFormat'&lt;BR /&gt;OUTPUTFORMAT&lt;BR /&gt;'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'&lt;BR /&gt;LOCATION&lt;BR /&gt;'hdfs-//hp8300one:8020/data/demo'&lt;BR /&gt;TBLPROPERTIES (&lt;BR /&gt;'transient_lastDdlTime'='1673760903')&lt;BR /&gt;Time taken: 1.711 seconds, Fetched: 18 row(s)&lt;BR /&gt;&lt;BR /&gt;&lt;/PRE&gt;&lt;PRE&gt;hive -e "select * from beatles"&lt;BR /&gt;OK&lt;BR /&gt;1 john&lt;BR /&gt;2 paul&lt;BR /&gt;3 george&lt;BR /&gt;4 
ringo&lt;BR /&gt;Time taken: 1.983 seconds, Fetched: 4 row(s)&lt;/PRE&gt;&lt;PRE&gt;hive -e "select * from beatles where id &amp;gt; 0"&lt;BR /&gt;WARNING: Use "yarn jar" to launch YARN applications.&lt;BR /&gt;&lt;BR /&gt;Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-6.3.4-1.cdh6.3.4.p0.6751098/jars/hive-common-2.1.1-cdh6.3.4.jar!/hive-log4j2.properties Async: false&lt;BR /&gt;Query ID = sanjay_20230116101202_b81b4798-e6e9-485f-9689-db33f6b313ec&lt;BR /&gt;Total jobs = 1&lt;BR /&gt;Launching Job 1 out of 1&lt;BR /&gt;Number of reduce tasks is set to 0 since there's no reduce operator&lt;BR /&gt;23/01/16 10:12:05 INFO client.RMProxy: Connecting to ResourceManager at hp8300one/10.0.0.3:8032&lt;BR /&gt;23/01/16 10:12:05 INFO client.RMProxy: Connecting to ResourceManager at hp8300one/10.0.0.3:8032&lt;BR /&gt;Starting Job = job_1673779576753_0003, Tracking URL = http://hp8300one:8088/proxy/application_1673779576753_0003/&lt;BR /&gt;Kill Command = /opt/cloudera/parcels/CDH-6.3.4-1.cdh6.3.4.p0.6751098/lib/hadoop/bin/hadoop job -kill job_1673779576753_0003&lt;BR /&gt;Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0&lt;BR /&gt;2023-01-16 10:12:10,441 Stage-1 map = 0%, reduce = 0%&lt;BR /&gt;Ended Job = job_1673779576753_0003 with errors&lt;BR /&gt;Error during job, obtaining debugging information...&lt;BR /&gt;FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask&lt;BR /&gt;MapReduce Jobs Launched:&lt;BR /&gt;Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 HDFS EC Read: 0 FAIL&lt;BR /&gt;Total MapReduce CPU Time Spent: 0 msec&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Error in the Logs&lt;/STRONG&gt;&lt;/P&gt;&lt;PRE&gt;2023-01-15 02:47:37,463 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1673779576753_0001 transitioned from INITING to RUNNING
2023-01-15 02:47:37,467 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1673779576753_0001_01_000001 transitioned from NEW to LOCALIZING
2023-01-15 02:47:37,467 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1673779576753_0001
2023-01-15 02:47:37,476 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Downloading public resource: { hdfs://hp8300one:8020/user/yarn/mapreduce/mr-framework/3.0.0-cdh6.3.4-mr-framework.tar.gz, 1672446065301, ARCHIVE, null }
2023-01-15 02:47:37,479 ERROR org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Local path for public localization is not found.  May be disks failed.
org.apache.hadoop.util.DiskChecker$DiskErrorException: No space available in any of the local directories.
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:400)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:152)
	at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:589)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$PublicLocalizer.addResource(ResourceLocalizationService.java:883)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker.handle(ResourceLocalizationService.java:781)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker.handle(ResourceLocalizationService.java:723)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
	at java.lang.Thread.run(Thread.java:750)
2023-01-15 02:47:37,479 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1673779576753_0001_01_000001
2023-01-15 02:47:37,481 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Localizer failed for container_1673779576753_0001_01_000001
org.apache.hadoop.util.DiskChecker$DiskErrorException: No space available in any of the local directories.
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:400)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:152)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:133)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:117)
	at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:584)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1205)
2023-01-15 02:47:37,481 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1673779576753_0001_01_000001 transitioned from LOCALIZING to LOCALIZATION_FAILED
2023-01-15 02:47:37,482 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalResourcesTrackerImpl: Container container_1673779576753_0001_01_000001 sent RELEASE event on a resource request { hdfs://hp8300one:8020/user/yarn/mapreduce/mr-framework/3.0.0-cdh6.3.4-mr-framework.tar.gz, 1672446065301, ARCHIVE, null } not present in cache.
2023-01-15 02:47:37,482 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=sanjay	OPERATION=Container Finished - Failed	TARGET=ContainerImpl	RESULT=FAILURE	DESCRIPTION=Container failed with state: LOCALIZATION_FAILED	APPID=application_1673779576753_0001	CONTAINERID=container_1673779576753_0001_01_000001
2023-01-15 02:47:37,483 WARN org.apache.hadoop.util.concurrent.ExecutorHelper: Execution exception when running task in DeletionService #1
2023-01-15 02:47:37,483 WARN org.apache.hadoop.util.concurrent.ExecutorHelper: Caught exception in thread DeletionService #1: 
java.lang.NullPointerException: path cannot be null
	at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
	at org.apache.hadoop.fs.FileContext.fixRelativePart(FileContext.java:270)
	at org.apache.hadoop.fs.FileContext.delete(FileContext.java:768)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.deletion.task.FileDeletionTask.run(FileDeletionTask.java:109)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
2023-01-15 02:47:37,489 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1673779576753_0001_01_000001 transitioned from LOCALIZATION_FAILED to DONE&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 16 Jan 2023 18:14:47 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/org-apache-hadoop-util-DiskChecker-DiskErrorException-No/m-p/361437#M238583</guid>
      <dc:creator>sanjaysubs</dc:creator>
      <dc:date>2023-01-16T18:14:47Z</dc:date>
    </item>
    <item>
      <title>Re: org.apache.hadoop.util.DiskChecker$DiskErrorException(No space available in any of the local directories.)</title>
      <link>https://community.cloudera.com/t5/Support-Questions/org-apache-hadoop-util-DiskChecker-DiskErrorException-No/m-p/361482#M238605</link>
      <description>&lt;P class="commentBody"&gt;Spark, YARN, and Hive jobs use local directories for localization: the data an application needs is staged in these directories while the application is running.&lt;/P&gt;&lt;P class="commentBody"&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="commentBody"&gt;Please manually delete the older data from the local /&lt;STRONG&gt;tmp&lt;/STRONG&gt; directory.&lt;/P&gt;&lt;P class="commentBody"&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="commentBody"&gt;Also, please follow the steps in the following article to clear the local file cache and user cache for YARN: &lt;A href="https://community.cloudera.com/t5/Community-Articles/How-to-clear-local-file-cache-and-user-cache-for-yarn/ta-p/245160" target="_blank" rel="noopener"&gt;https://community.cloudera.com/t5/Community-Articles/How-to-clear-local-file-cache-and-user-cache-for-yarn/ta-p/245160&lt;/A&gt;&lt;/P&gt;&lt;P class="commentBody"&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="commentBody"&gt;&lt;BR /&gt;&lt;FONT color="#FF6600"&gt;If one of the responses resolved your issue, please take a moment to log in and click Accept as Solution below that response.&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jan 2023 07:50:29 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/org-apache-hadoop-util-DiskChecker-DiskErrorException-No/m-p/361482#M238605</guid>
      <dc:creator>Kartik_Agarwal</dc:creator>
      <dc:date>2023-01-17T07:50:29Z</dc:date>
    </item>
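The answer above is about YARN's local-dir disk health check. A minimal shell sketch of what is going on (the two paths below are this thread's example local dirs and are assumptions on any other cluster): the NodeManager marks a local dir unusable once its filesystem usage crosses yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage (default 90), which then surfaces as "No space available in any of the local directories" even when the failing job's own data is tiny.

```shell
# Report filesystem usage for each YARN local dir; anything at or above the
# disk-health-checker threshold (default 90%) is rejected for localization.
dir_usage_pct() {
  # Print the filesystem usage percentage (integer, without '%') for a path.
  df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}

for d in /media/sanjay/hdd02/yarn/nm /media/sanjay/hdd03/yarn/nm; do
  if [ -d "$d" ]; then
    echo "$d: $(dir_usage_pct "$d")% used"
  fi
done
```

If every configured local dir is over the threshold, freeing space (or raising the threshold) is the only fix; the error is about the local filesystems, not HDFS.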
    <item>
      <title>Re: org.apache.hadoop.util.DiskChecker$DiskErrorException(No space available in any of the local directories.)</title>
      <link>https://community.cloudera.com/t5/Support-Questions/org-apache-hadoop-util-DiskChecker-DiskErrorException-No/m-p/361560#M238621</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/75395"&gt;@Kartik_Agarwal&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Unfortunately this does not solve the issue. I would just like a clear, step-by-step explanation of where I need to specify all the variables needed to run an MR or Hive job. I have been using CDH since 2011, but this is the first time I cannot even run a WordCount program successfully. Very disappointed and discouraged.&amp;nbsp;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When I open the configuration for the failed job in the Web UI, it shows me this for the property:&lt;/P&gt;&lt;PRE&gt;&lt;SPAN class="html-tag"&gt;&amp;lt;property&amp;gt;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN class="html-tag"&gt;&amp;lt;name&amp;gt;&lt;/SPAN&gt;&lt;SPAN&gt;yarn.nodemanager.local-dirs&lt;/SPAN&gt;&lt;SPAN class="html-tag"&gt;&amp;lt;/name&amp;gt;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN class="html-tag"&gt;&amp;lt;value&amp;gt;&lt;/SPAN&gt;&lt;SPAN&gt;${hadoop.tmp.dir}/nm-local-dir&lt;/SPAN&gt;&lt;SPAN class="html-tag"&gt;&amp;lt;/value&amp;gt;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN class="html-tag"&gt;&amp;lt;final&amp;gt;&lt;/SPAN&gt;&lt;SPAN&gt;false&lt;/SPAN&gt;&lt;SPAN class="html-tag"&gt;&amp;lt;/final&amp;gt;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN class="html-tag"&gt;&amp;lt;source&amp;gt;&lt;/SPAN&gt;&lt;SPAN&gt;yarn-default.xml&lt;/SPAN&gt;&lt;SPAN class="html-tag"&gt;&amp;lt;/source&amp;gt;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN class="html-tag"&gt;&amp;lt;/property&amp;gt;&lt;/SPAN&gt;&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;However, in Cloudera Manager the values are shown differently:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screen Shot 2023-01-17 at 10.47.07 AM.png" style="width: 999px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/36625iC0106ACE49E68BFD/image-size/large?v=v2&amp;amp;px=999" role="button" 
title="Screen Shot 2023-01-17 at 10.47.07 AM.png" alt="Screen Shot 2023-01-17 at 10.47.07 AM.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Also, the local directories specified by&amp;nbsp;"yarn.nodemanager.local-dirs" are empty:&lt;/P&gt;&lt;PRE&gt;ls -latr /media/sanjay/hdd0[23]/yarn/nm&lt;BR /&gt;/media/sanjay/hdd03/yarn/nm:&lt;BR /&gt;total 8&lt;BR /&gt;drwxrwxrwx 3 yarn hadoop 4096 Jan 2 11:25 ..&lt;BR /&gt;drwxr-xr-x 2 yarn hadoop 4096 Jan 2 11:25 .&lt;BR /&gt;&lt;BR /&gt;/media/sanjay/hdd02/yarn/nm:&lt;BR /&gt;total 8&lt;BR /&gt;drwxrwxrwx 3 yarn hadoop 4096 Jan 2 11:25 ..&lt;BR /&gt;drwxr-xr-x 2 yarn hadoop 4096 Jan 2 11:25 .&lt;BR /&gt;&lt;BR /&gt; &lt;/PRE&gt;&lt;P&gt;My "hadoop.tmp.dir" is defined here:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screen Shot 2023-01-17 at 10.50.02 AM.png" style="width: 999px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/36626i2B00708AFAFD853E/image-size/large?v=v2&amp;amp;px=999" role="button" title="Screen Shot 2023-01-17 at 10.50.02 AM.png" alt="Screen Shot 2023-01-17 at 10.50.02 AM.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jan 2023 18:53:00 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/org-apache-hadoop-util-DiskChecker-DiskErrorException-No/m-p/361560#M238621</guid>
      <dc:creator>sanjaysubs</dc:creator>
      <dc:date>2023-01-17T18:53:00Z</dc:date>
    </item>
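The confusion in the post above is that the job's Web UI shows the client-side default from yarn-default.xml, while the NodeManager daemon runs with the value Cloudera Manager generated for it. One way to cross-check is to read the property out of the daemon's own generated yarn-site.xml rather than the defaults. A hedged sketch; the helper name is hypothetical, and the CM process path mentioned in the comment is the usual layout, not verified for this cluster:

```shell
# Hypothetical helper (grep/sed sketch, not a full XML parser): pull one
# property value out of a Hadoop *-site.xml file. On Cloudera Manager hosts
# the NodeManager's effective config is typically the newest
# /var/run/cloudera-scm-agent/process/*NODEMANAGER*/yarn-site.xml (assumed
# path; adjust for your install).
get_hadoop_prop() {
  local file=$1 key=$2
  # Assumes <name> and <value> sit on consecutive lines, as Hadoop writes them.
  grep -A1 "<name>${key}</name>" "$file" | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
}

# e.g.: get_hadoop_prop /etc/hadoop/conf/yarn-site.xml yarn.nodemanager.local-dirs
```

If the daemon-side value points at the /media/sanjay/hdd0[23]/yarn/nm dirs, the empty listings there mean localization never succeeded, not that the setting was ignored.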
    <item>
      <title>Re: org.apache.hadoop.util.DiskChecker$DiskErrorException(No space available in any of the local directories.)</title>
      <link>https://community.cloudera.com/t5/Support-Questions/org-apache-hadoop-util-DiskChecker-DiskErrorException-No/m-p/367292#M239855</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/75395"&gt;@Kartik_Agarwal&lt;/a&gt;'s suggestion works on my side.&lt;/P&gt;&lt;P&gt;I had a problem like yours when trying to insert a row into a table.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="jjjjanine_0-1680174447386.png" style="width: 400px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/37126iE5C1D98A206825C1/image-size/medium?v=v2&amp;amp;px=400" role="button" title="jjjjanine_0-1680174447386.png" alt="jjjjanine_0-1680174447386.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;I removed the /data/tmp/hadoop/mapred directory from all the nodes and restarted the service, and then it worked.&lt;/P&gt;</description>
      <pubDate>Thu, 30 Mar 2023 11:09:32 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/org-apache-hadoop-util-DiskChecker-DiskErrorException-No/m-p/367292#M239855</guid>
      <dc:creator>jjjjanine</dc:creator>
      <dc:date>2023-03-30T11:09:32Z</dc:date>
    </item>
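The cleanup that the linked article and the replies above describe can be sketched as a small helper. It assumes the standard YARN local-dir layout (usercache/, filecache/ and nmPrivate/ under each entry of yarn.nodemanager.local-dirs); stop the NodeManager on the host before running it, and start it again afterwards:

```shell
# Clear YARN's localization caches under one local dir. Run only while the
# NodeManager is stopped, or it may race with in-flight localizations.
clear_yarn_local_cache() {
  local dir=$1
  # usercache/ holds per-user localized resources, filecache/ the public
  # cache, and nmPrivate/ the NodeManager's internal localization state.
  rm -rf "${dir}/usercache/"* "${dir}/filecache/"* "${dir}/nmPrivate/"*
}

# e.g.: clear_yarn_local_cache /media/sanjay/hdd02/yarn/nm
```

This frees the space the disk health checker complains about without touching HDFS data; YARN re-localizes whatever a new job needs.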
    <item>
      <title>Re: org.apache.hadoop.util.DiskChecker$DiskErrorException(No space available in any of the local directories.)</title>
      <link>https://community.cloudera.com/t5/Support-Questions/org-apache-hadoop-util-DiskChecker-DiskErrorException-No/m-p/367293#M239856</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/104323"&gt;@jjjjanine&lt;/a&gt;&amp;nbsp;Thanks for providing your valuable input.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 30 Mar 2023 11:11:36 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/org-apache-hadoop-util-DiskChecker-DiskErrorException-No/m-p/367293#M239856</guid>
      <dc:creator>Kartik_Agarwal</dc:creator>
      <dc:date>2023-03-30T11:11:36Z</dc:date>
    </item>
  </channel>
</rss>

