<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Problem in YARN (distcp) - sometimes gives error message &amp;quot;Operation not permitted&amp;quot; in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Problem-in-YARN-distcp-sometime-gives-error-message-quot/m-p/22905#M42556</link>
    <description>Support Questions thread: DistCp / MapReduce jobs intermittently fail during YARN container localization with &amp;quot;EPERM: Operation not permitted&amp;quot;.</description>
    <pubDate>Fri, 16 Sep 2022 09:16:06 GMT</pubDate>
    <dc:creator>sphinxid</dc:creator>
    <dc:date>2022-09-16T09:16:06Z</dc:date>
    <item>
      <title>Problem in YARN (distcp) - sometimes gives error message "Operation not permitted"</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Problem-in-YARN-distcp-sometime-gives-error-message-quot/m-p/22905#M42556</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'm not sure why this error message sometimes appears while running MapReduce jobs, including distcp.&lt;/P&gt;&lt;P&gt;I would really appreciate some help pinpointing this problem.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;# Below is the error message.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;14/12/19 07:07:24 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[hdfs://##########/data/events/video/20141212], targetPath=hdfs://kishimoto/data/events/video, targetPathExists=true, preserveRawXattrs=false}&lt;BR /&gt;14/12/19 07:07:24 INFO client.RMProxy: Connecting to ResourceManager at XXXXXXXXXX/10.66.27.131:8032&lt;BR /&gt;14/12/19 07:07:25 INFO Configuration.deprecation: io.sort.mb is deprecated. Instead, use mapreduce.task.io.sort.mb&lt;BR /&gt;14/12/19 07:07:25 INFO Configuration.deprecation: io.sort.factor is deprecated. 
Instead, use mapreduce.task.io.sort.factor&lt;BR /&gt;14/12/19 07:07:25 INFO client.RMProxy: Connecting to ResourceManager at XXXXXXXXXX/10.66.27.131:8032&lt;BR /&gt;14/12/19 07:07:25 INFO mapreduce.JobSubmitter: number of splits:6&lt;BR /&gt;14/12/19 07:07:26 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1418971058569_0022&lt;BR /&gt;14/12/19 07:07:26 INFO impl.YarnClientImpl: Submitted application application_1418971058569_0022&lt;BR /&gt;14/12/19 07:07:26 INFO mapreduce.Job: The url to track the job: &lt;A href="http://XXXXXXXXXX:8088/proxy/application_1418971058569_0022/" target="_blank"&gt;http://XXXXXXXXXX:8088/proxy/application_1418971058569_0022/&lt;/A&gt;&lt;BR /&gt;14/12/19 07:07:26 INFO tools.DistCp: DistCp job-id: job_1418971058569_0022&lt;BR /&gt;14/12/19 07:07:26 INFO mapreduce.Job: Running job: job_1418971058569_0022&lt;BR /&gt;14/12/19 07:07:41 INFO mapreduce.Job: Job job_1418971058569_0022 running in uber mode : false&lt;BR /&gt;14/12/19 07:07:41 INFO mapreduce.Job: map 0% reduce 0%&lt;BR /&gt;14/12/19 07:07:48 INFO mapreduce.Job: Task Id : attempt_1418971058569_0022_m_000002_0, Status : FAILED&lt;BR /&gt;Application application_1418971058569_0022 initialization failed (exitCode=1) with output: main : command provided 0&lt;BR /&gt;main : user is yarn&lt;BR /&gt;main : requested yarn user is centos&lt;BR /&gt;EPERM: Operation not permitted&lt;BR /&gt;at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)&lt;BR /&gt;at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:228)&lt;BR /&gt;at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:642)&lt;BR /&gt;at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:434)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:1063)&lt;BR /&gt;at org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:162)&lt;BR /&gt;at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:197)&lt;BR 
/&gt;at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:721)&lt;BR /&gt;at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:717)&lt;BR /&gt;at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)&lt;BR /&gt;at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:717)&lt;BR /&gt;at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createDir(ContainerLocalizer.java:383)&lt;BR /&gt;at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.initDirs(ContainerLocalizer.java:369)&lt;BR /&gt;at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:129)&lt;BR /&gt;at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:347)&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;14/12/19 07:07:52 INFO mapreduce.Job: map 33% reduce 0%&lt;BR /&gt;14/12/19 07:07:53 INFO mapreduce.Job: map 83% reduce 0%&lt;BR /&gt;14/12/19 07:07:56 INFO mapreduce.Job: Task Id : attempt_1418971058569_0022_m_000002_1, Status : FAILED&lt;BR /&gt;Application application_1418971058569_0022 initialization failed (exitCode=1) with output: main : command provided 0&lt;BR /&gt;main : user is yarn&lt;BR /&gt;main : requested yarn user is centos&lt;BR /&gt;EPERM: Operation not permitted&lt;BR /&gt;at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)&lt;BR /&gt;at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:228)&lt;BR /&gt;at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:642)&lt;BR /&gt;at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:434)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:1063)&lt;BR /&gt;at org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:162)&lt;BR /&gt;at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:197)&lt;BR /&gt;at 
org.apache.hadoop.fs.FileContext$4.next(FileContext.java:721)&lt;BR /&gt;at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:717)&lt;BR /&gt;at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)&lt;BR /&gt;at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:717)&lt;BR /&gt;at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createDir(ContainerLocalizer.java:383)&lt;BR /&gt;at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.initDirs(ContainerLocalizer.java:369)&lt;BR /&gt;at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:129)&lt;BR /&gt;at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:347)&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;14/12/19 07:08:02 INFO mapreduce.Job: map 100% reduce 0%&lt;BR /&gt;14/12/19 07:08:03 INFO mapreduce.Job: Job job_1418971058569_0022 completed successfully&lt;BR /&gt;14/12/19 07:08:03 INFO mapreduce.Job: Counters: 34&lt;BR /&gt;File System Counters&lt;BR /&gt;FILE: Number of bytes read=0&lt;BR /&gt;FILE: Number of bytes written=665190&lt;BR /&gt;FILE: Number of read operations=0&lt;BR /&gt;FILE: Number of large read operations=0&lt;BR /&gt;FILE: Number of write operations=0&lt;BR /&gt;HDFS: Number of bytes read=62556154&lt;BR /&gt;HDFS: Number of bytes written=62552365&lt;BR /&gt;HDFS: Number of read operations=105&lt;BR /&gt;HDFS: Number of large read operations=0&lt;BR /&gt;HDFS: Number of write operations=23&lt;BR /&gt;Job Counters&lt;BR /&gt;Failed map tasks=2&lt;BR /&gt;Launched map tasks=8&lt;BR /&gt;Other local map tasks=8&lt;BR /&gt;Total time spent by all maps in occupied slots (ms)=0&lt;BR /&gt;Total time spent by all reduces in occupied slots (ms)=0&lt;BR /&gt;Total time spent by all map tasks (ms)=43559&lt;BR /&gt;Total vcore-seconds taken by all map tasks=87118&lt;BR /&gt;Total 
megabyte-seconds taken by all map tasks=44604416&lt;BR /&gt;Map-Reduce Framework&lt;BR /&gt;Map input records=6&lt;BR /&gt;Map output records=0&lt;BR /&gt;Input split bytes=708&lt;BR /&gt;Spilled Records=0&lt;BR /&gt;Failed Shuffles=0&lt;BR /&gt;Merged Map outputs=0&lt;BR /&gt;GC time elapsed (ms)=100&lt;BR /&gt;CPU time spent (ms)=6890&lt;BR /&gt;Physical memory (bytes) snapshot=1396334592&lt;BR /&gt;Virtual memory (bytes) snapshot=10175578112&lt;BR /&gt;Total committed heap usage (bytes)=3029336064&lt;BR /&gt;File Input Format Counters&lt;BR /&gt;Bytes Read=3081&lt;BR /&gt;File Output Format Counters&lt;BR /&gt;Bytes Written=0&lt;BR /&gt;org.apache.hadoop.tools.mapred.CopyMapper$Counter&lt;BR /&gt;BYTESCOPIED=62552365&lt;BR /&gt;BYTESEXPECTED=62552365&lt;BR /&gt;COPY=6&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 09:16:06 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Problem-in-YARN-distcp-sometime-gives-error-message-quot/m-p/22905#M42556</guid>
      <dc:creator>sphinxid</dc:creator>
      <dc:date>2022-09-16T09:16:06Z</dc:date>
    </item>
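The stack trace in the question fails inside NativeIO$POSIX.chmod with EPERM while the localizer reports "user is yarn / requested yarn user is centos". A plausible reading, sketched as a toy model below, is the POSIX rule that only a file's owner may chmod it: if the local usercache directory was created as one user but the ContainerLocalizer runs as the requested user, its chmod raises EPERM. The user names come from the log; the function is an illustration, not Hadoop code.

```python
# Toy model of the localizer's EPERM (assumption: ownership mismatch is
# the cause). POSIX permits chmod only for the file's owner; the names
# "yarn" and "centos" are taken from the log in the question.
import errno


def chmod_as(caller: str, owner: str) -> None:
    """Simulate the POSIX rule: chmod succeeds only for the owner."""
    if caller != owner:
        raise PermissionError(errno.EPERM, "Operation not permitted")


# The scenario from the log: localizer runs as "centos", but the local
# directory was created by (and is owned by) "yarn".
try:
    chmod_as(caller="centos", owner="yarn")
except PermissionError as e:
    print(f"EPERM: {e.strerror}")  # mirrors the line in the stack trace
```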
    <item>
      <title>Re: Problem in YARN (distcp) - sometimes gives error message "Operation not permitted"</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Problem-in-YARN-distcp-sometime-gives-error-message-quot/m-p/22929#M42557</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;OK, I found the problem. I use CDH 5.2.1.&lt;/P&gt;&lt;P&gt;In my case, it happened when YARN was managing Impala resources (the default configuration).&lt;/P&gt;&lt;P&gt;With that setting, those&amp;nbsp;error messages appeared.&lt;/P&gt;</description>
      <pubDate>Sat, 20 Dec 2014 06:51:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Problem-in-YARN-distcp-sometime-gives-error-message-quot/m-p/22929#M42557</guid>
      <dc:creator>sphinxid</dc:creator>
      <dc:date>2014-12-20T06:51:18Z</dc:date>
    </item>
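The failure signature in the question's log is three consecutive lines: the localizer's actual user, the requested user, and the EPERM message. A minimal sketch of spotting that signature in a container log follows; the sample text is quoted from the post, and the parsing is illustrative only, not an official Cloudera diagnostic.

```python
# Minimal sketch: pick the localizer-failure signature out of a log.
# The sample lines are copied verbatim from the question's output.
log = """main : user is yarn
main : requested yarn user is centos
EPERM: Operation not permitted"""

PREFIX = "main : requested yarn user is "
requested = None
for line in log.splitlines():
    if line.startswith(PREFIX):
        requested = line.removeprefix(PREFIX)

eperm_lines = [l for l in log.splitlines() if "EPERM" in l]
print(requested)         # centos
print(len(eperm_lines))  # 1
```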
  </channel>
</rss>

