<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Cloudera Support, Very strange issue: system user can't invoke Container, other users can. in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Cloudera-Support-Very-strange-issue-system-user-can-t-invoke/m-p/21035#M3455</link>
    <description>&lt;P&gt;I dug deeper today and found something.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please have a look at the information below:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;[root@datanode03 usercache]# pwd&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;/yarn/nm/usercache&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;[root@datanode03 usercache]# ls&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;hive hue jlwang test&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We can see there are four directories in /yarn/nm/usercache; these users can run the example MapReduce job or Sqoop successfully. As I have said, system users such as hdfs and yarn can't invoke containers. At the beginning I thought it was because their user IDs are below 1000, but I have checked the settings many times and there is no problem there, because I have set min.user.id = 0 and added hdfs, mapred, and yarn to the allowed user list.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;At this moment I assume it may be because there are no hdfs and yarn directories in /yarn/nm/usercache. Basically, every user should get their own directory under /yarn/nm/usercache when they run a MapReduce job, but hdfs and yarn did not create such directories. Why?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;Test begins (confirm that when a new user runs a MapReduce job, a directory is created in usercache):&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1) Create a new user named iamfromsky, with a user ID below 1000.&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;useradd -u 600 iamfromsky&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;2) addprinc iamfromsky@DDS.COM&lt;/P&gt;&lt;P&gt;3) Log in to the Linux system as 
iamfromsky, and execute map reduce job.&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;[iamfromsky@datanode03 hadoop-0.20-mapreduce]$ hadoop jar hadoop-examples.jar pi 10 10&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;Number of Maps = 10&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Samples per Map = 10&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #1&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #2&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #3&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #4&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #5&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #6&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #7&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #8&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #9&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Starting Job&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:00 INFO client.RMProxy: Connecting to ResourceManager at namenode01.hadoop/10.32.87.9:8032&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:00 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 91 for iamfromsky on ha-hdfs:cluster&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:00 INFO security.TokenCache: Got dt for hdfs://cluster; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:cluster, Ident: (HDFS_DELEGATION_TOKEN token 91 for iamfromsky)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:00 INFO input.FileInputFormat: Total input paths to process : 10&lt;/SPAN&gt;&lt;BR 
/&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:00 INFO mapreduce.JobSubmitter: number of splits:10&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:00 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1414638687299_0013&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:00 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:cluster, Ident: (HDFS_DELEGATION_TOKEN token 91 for iamfromsky)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:03 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414638687299_0013 is still in NEW&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:05 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414638687299_0013 is still in NEW&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:07 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414638687299_0013 is still in NEW&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:09 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414638687299_0013 is still in NEW&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:11 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414638687299_0013 is still in NEW&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:11 INFO impl.YarnClientImpl: Submitted application application_1414638687299_0013&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:11 INFO mapreduce.Job: The url to track the job: &lt;A href="http://namenode01.hadoop:8088/proxy/application_1414638687299_0013/" 
target="_blank"&gt;http://namenode01.hadoop:8088/proxy/application_1414638687299_0013/&lt;/A&gt;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:11 INFO mapreduce.Job: Running job: job_1414638687299_0013&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:55 INFO mapreduce.Job: Job job_1414638687299_0013 running in uber mode : false&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:55 INFO mapreduce.Job: map 0% reduce 0%&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:11:05 INFO mapreduce.Job: map 30% reduce 0%&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:11:24 INFO mapreduce.Job: map 50% reduce 0%&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:11:34 INFO mapreduce.Job: map 70% reduce 0%&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:11:36 INFO mapreduce.Job: map 100% reduce 0%&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:11:44 INFO mapreduce.Job: map 100% reduce 100%&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:11:44 INFO mapreduce.Job: Job job_1414638687299_0013 completed successfully&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:11:44 INFO mapreduce.Job: Counters: 49&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;File System Counters&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;FILE: Number of bytes read=89&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;FILE: Number of bytes written=1204297&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;FILE: Number of read operations=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;FILE: Number of large read operations=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;FILE: Number of write operations=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;HDFS: Number of bytes 
read=2630&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;HDFS: Number of bytes written=215&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;HDFS: Number of read operations=43&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;HDFS: Number of large read operations=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;HDFS: Number of write operations=3&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Job Counters &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Launched map tasks=10&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Launched reduce tasks=1&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Data-local map tasks=10&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Total time spent by all maps in occupied slots (ms)=264122&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Total time spent by all reduces in occupied slots (ms)=3223&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Total time spent by all map tasks (ms)=264122&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Total time spent by all reduce tasks (ms)=3223&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Total vcore-seconds taken by all map tasks=264122&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Total vcore-seconds taken by all reduce tasks=3223&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Total megabyte-seconds taken by all map tasks=270460928&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Total megabyte-seconds taken by all reduce tasks=3300352&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Map-Reduce Framework&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Map input records=10&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Map output records=20&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Map output bytes=180&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: 
#0000ff;"&gt;Map output materialized bytes=339&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Input split bytes=1450&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Combine input records=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Combine output records=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Reduce input groups=2&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Reduce shuffle bytes=339&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Reduce input records=20&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Reduce output records=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Spilled Records=40&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Shuffled Maps =10&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Failed Shuffles=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Merged Map outputs=10&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;GC time elapsed (ms)=423&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;CPU time spent (ms)=6450&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Physical memory (bytes) snapshot=4420325376&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Virtual memory (bytes) snapshot=16857763840&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Total committed heap usage (bytes)=4029153280&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Shuffle Errors&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;BAD_ID=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;CONNECTION=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;IO_ERROR=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;WRONG_LENGTH=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;WRONG_MAP=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;WRONG_REDUCE=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: 
#0000ff;"&gt;File Input Format Counters &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Bytes Read=1180&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;File Output Format Counters &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Bytes Written=97&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Job Finished in 104.169 seconds&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Estimated value of Pi is 3.20000000000000000000&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;4) check usercache directory&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;[yarn@datanode03 usercache]$ cd /yarn/nm/usercache/&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;[yarn@datanode03 usercache]$ ls&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;&amp;nbsp;hive hue iamfromsky jlwang test&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;we can see, the iamfromsky directory has been created. &amp;nbsp;then i am going to create hdfs directory manually, and try again, mission failed.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;so i don't think the root cause is hdfs can't create directory, i think there are some reasons to cause can't create hdfs directory.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;who can give me some advises ?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Thu, 30 Oct 2014 06:29:44 GMT</pubDate>
    <dc:creator>iamfromsky</dc:creator>
    <dc:date>2014-10-30T06:29:44Z</dc:date>
    <item>
      <title>Cloudera Support, Very strange issue: system user can't invoke Container, other users can.</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Cloudera-Support-Very-strange-issue-system-user-can-t-invoke/m-p/21008#M3453</link>
      <description>&lt;P&gt;Everybody, and Cloudera Support:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have run into an issue that is very strange to me, and I am not sure whether there is a parameter that can resolve it. Please be patient, since this will be a long story.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;System users such as hdfs, yarn, mapred, and hue can't invoke containers, but other users that I created manually can.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;My environment is: CDH 5.2 (the latest version) + Kerberos + Sentry + OpenLDAP.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Yesterday I was creating an Oozie workflow to import data from MySQL into Hive; the Sqoop job is:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;sqoop import -connect jdbc:mysql://10.32.87.4:3306/xxxx -username admin -password xxxxxxxx -table t_phone -hive-table t_phone -hive-database xxxx -hive-import -hive-overwrite -hive-drop-import-delims -m 1&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;But this job failed. The errors are as below:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #ff0000;"&gt;INFO mapreduce.Job: Job job_1414579088733_0016 failed with state FAILED due to: Application application_1414579088733_0016 failed 2 times due to AM Container for appattempt_1414579088733_0016_000002 exited with exitCode:&lt;SPAN style="color: #333333;"&gt; -1000 &lt;/SPAN&gt;due to: Application application_1414579088733_0016 initialization failed (exitCode=139) with output: &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #ff0000;"&gt;.Failing this attempt.. Failing the application.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have no doubt about the Sqoop job itself, since it works in our PRD env (our PRD env is CDH 5.1). 
&amp;nbsp;Then I went to the OS level and ran this Sqoop script as the hdfs user; it failed too, with the same error as above. Then I searched Google and found a few similar issues, but their error codes were 1 or something else, and the suggested solution was to set HADOOP_YARN_HOME or HADOOP_MAPRED_HOME. So I tried setting HADOOP_YARN_HOME and HADOOP_MAPRED_HOME and ran it again; it failed too.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;At that moment I assumed it might be a file or directory permission issue (since I have run into this kind of issue before), so I deleted /tmp, /user/history, /var/log, etc., restarted the whole cluster, and tried again; it failed yet again.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;OK, I was out of ideas, so I went home to cook dinner, watch a movie, enjoy some music, and get a good night's sleep.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This morning I didn't try Sqoop anymore, since I had lost confidence in it; instead I tested the example MapReduce job.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;the command is: &amp;nbsp;hadoop jar hadoop-examples.jar pi 10 10&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;It failed, with the same errors. As you can see, the error includes a message like: &lt;SPAN&gt;exited with exitCode:&lt;/SPAN&gt;&lt;SPAN style="color: #333333;"&gt; -1000.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN style="color: #333333;"&gt;When I saw 1000, I remembered there is a YARN setting that means users with IDs below 1000 can't invoke containers by default; we should change that 1000 to 0 or add users with IDs below 1000 to the allowed user list. 
&amp;nbsp;Then I checked these settings again, and everything was OK.&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN style="color: #333333;"&gt;Why? Why? Why? I asked myself many times, but found no answer. Still, I believe this 1000 is connected to that 1000.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;Test begins:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN style="color: #333333;"&gt;I created a user with my own name, user ID 1500, and executed the Sqoop scripts successfully. That made me believe my assumption even more strongly, since my own user could import data with Sqoop successfully.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN style="color: #333333;"&gt;Then I created another user named test, with user ID 999; the data import succeeded.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #000000;"&gt;&lt;SPAN style="color: #333333;"&gt;And I tried the example MapReduce job: SUCCESSFUL...&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So I went back to the hdfs user and tried Sqoop and MapReduce; both failed. 
Then I tried yarn and hue; they failed as well.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So I think no system user can invoke containers, but other users can, no matter what their user IDs are.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Later, I opened&amp;nbsp;&lt;A href="http://10.32.87.9:8088/cluster/nodes" target="_blank"&gt;http://10.32.87.9:8088/cluster/nodes&lt;/A&gt;&amp;nbsp;and&amp;nbsp;&lt;A href="http://10.32.87.49:8042/node/allContainers" target="_blank"&gt;http://10.32.87.49:8042/node/allContainers&lt;/A&gt;&amp;nbsp;to monitor container activity. If the user is my own user, the container is invoked and runs normally, but if the user is hdfs or another system user, the container can't be invoked (no container ever reaches the running state).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Just to show you an example; please look carefully at the highlighted word, which is the Linux user.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[&lt;SPAN style="color: #ff0000;"&gt;hdfs&lt;/SPAN&gt;@datanode01 hadoop-0.20-mapreduce]$ hadoop jar hadoop-examples.jar pi 10 10&lt;/P&gt;&lt;P&gt;Number of Maps = 10&lt;BR /&gt;Samples per Map = 10&lt;/P&gt;&lt;P&gt;Wrote input for Map #0&lt;BR /&gt;Wrote input for Map #1&lt;BR /&gt;Wrote input for Map #2&lt;BR /&gt;Wrote input for Map #3&lt;BR /&gt;Wrote input for Map #4&lt;BR /&gt;Wrote input for Map #5&lt;BR /&gt;Wrote input for Map #6&lt;BR /&gt;Wrote input for Map #7&lt;BR /&gt;Wrote input for Map #8&lt;BR /&gt;Wrote input for Map #9&lt;BR /&gt;Starting Job&lt;BR /&gt;14/10/29 22:23:25 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 65 for hdfs on ha-hdfs:cluster&lt;BR /&gt;14/10/29 22:23:25 INFO security.TokenCache: Got dt for hdfs://cluster; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:cluster, Ident: (HDFS_DELEGATION_TOKEN token 65 for hdfs)&lt;BR /&gt;14/10/29 22:23:25 INFO input.FileInputFormat: Total input paths to process : 
10&lt;BR /&gt;14/10/29 22:23:25 INFO mapreduce.JobSubmitter: number of splits:10&lt;BR /&gt;14/10/29 22:23:25 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1414579088733_0010&lt;BR /&gt;14/10/29 22:23:25 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:cluster, Ident: (HDFS_DELEGATION_TOKEN token 65 for hdfs)&lt;BR /&gt;14/10/29 22:23:27 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414579088733_0010 is still in NEW&lt;BR /&gt;14/10/29 22:23:29 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414579088733_0010 is still in NEW&lt;BR /&gt;14/10/29 22:23:31 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414579088733_0010 is still in NEW&lt;BR /&gt;14/10/29 22:23:33 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414579088733_0010 is still in NEW&lt;BR /&gt;14/10/29 22:23:35 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414579088733_0010 is still in NEW&lt;BR /&gt;14/10/29 22:23:36 INFO impl.YarnClientImpl: Submitted application application_1414579088733_0010&lt;BR /&gt;14/10/29 22:23:36 INFO mapreduce.Job: The url to track the job: &lt;A href="http://namenode01.hadoop:8088/proxy/application_1414579088733_0010/" target="_blank"&gt;http://namenode01.hadoop:8088/proxy/application_1414579088733_0010/&lt;/A&gt;&lt;BR /&gt;14/10/29 22:23:36 INFO mapreduce.Job: Running job: job_1414579088733_0010&lt;BR /&gt;14/10/29 22:24:00 INFO mapreduce.Job: Job job_1414579088733_0010 running in uber mode : false&lt;BR /&gt;14/10/29 22:24:00 INFO mapreduce.Job: map 0% reduce 0%&lt;BR /&gt;14/10/29 22:24:00 INFO mapreduce.Job: Job job_1414579088733_0010 failed with state FAILED due to: Application application_1414579088733_0010 failed 2 times due to AM Container for 
appattempt_1414579088733_0010_000002 exited with exitCode: -1000 due to: Application application_1414579088733_0010 initialization failed (exitCode=139) with output:&lt;BR /&gt;.Failing this attempt.. Failing the application.&lt;BR /&gt;14/10/29 22:24:00 INFO mapreduce.Job: Counters: 0&lt;BR /&gt;Job Finished in 35.389 seconds&lt;BR /&gt;java.io.FileNotFoundException: File does not exist: hdfs://cluster/user/hdfs/QuasiMonteCarlo_1414592602966_1277483233/out/reduce-out&lt;BR /&gt;at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1083)&lt;BR /&gt;at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1075)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)&lt;BR /&gt;at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1075)&lt;BR /&gt;at org.apache.hadoop.io.SequenceFile$Reader.&amp;lt;init&amp;gt;(SequenceFile.java:1749)&lt;BR /&gt;at org.apache.hadoop.io.SequenceFile$Reader.&amp;lt;init&amp;gt;(SequenceFile.java:1773)&lt;BR /&gt;at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)&lt;BR /&gt;at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)&lt;BR /&gt;at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)&lt;BR /&gt;at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:606)&lt;BR /&gt;at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)&lt;BR /&gt;at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:145)&lt;BR /&gt;at 
org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:606)&lt;BR /&gt;at org.apache.hadoop.util.RunJar.main(RunJar.java:212)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[&lt;SPAN style="color: #ff0000;"&gt;test&lt;/SPAN&gt;@datanode01 hadoop-0.20-mapreduce]$ hadoop jar hadoop-examples.jar pi 10 10&lt;BR /&gt;Number of Maps = 10&lt;BR /&gt;Samples per Map = 10&lt;BR /&gt;Wrote input for Map #0&lt;BR /&gt;Wrote input for Map #1&lt;BR /&gt;Wrote input for Map #2&lt;BR /&gt;Wrote input for Map #3&lt;BR /&gt;Wrote input for Map #4&lt;BR /&gt;Wrote input for Map #5&lt;BR /&gt;Wrote input for Map #6&lt;BR /&gt;Wrote input for Map #7&lt;BR /&gt;Wrote input for Map #8&lt;BR /&gt;Wrote input for Map #9&lt;BR /&gt;Starting Job&lt;BR /&gt;14/10/29 22:29:45 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 66 for test on ha-hdfs:cluster&lt;BR /&gt;14/10/29 22:29:45 INFO security.TokenCache: Got dt for hdfs://cluster; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:cluster, Ident: (HDFS_DELEGATION_TOKEN token 66 for test)&lt;BR /&gt;14/10/29 22:29:45 INFO input.FileInputFormat: Total input paths to process : 10&lt;BR /&gt;14/10/29 22:29:45 INFO mapreduce.JobSubmitter: number of splits:10&lt;BR /&gt;14/10/29 22:29:45 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1414579088733_0011&lt;BR /&gt;14/10/29 22:29:45 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:cluster, Ident: (HDFS_DELEGATION_TOKEN token 66 for test)&lt;BR /&gt;14/10/29 22:29:47 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414579088733_0011 is still in 
NEW&lt;BR /&gt;14/10/29 22:29:49 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414579088733_0011 is still in NEW&lt;BR /&gt;14/10/29 22:29:51 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414579088733_0011 is still in NEW&lt;BR /&gt;14/10/29 22:29:53 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414579088733_0011 is still in NEW&lt;BR /&gt;14/10/29 22:29:55 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414579088733_0011 is still in NEW&lt;BR /&gt;14/10/29 22:29:56 INFO impl.YarnClientImpl: Submitted application application_1414579088733_0011&lt;BR /&gt;14/10/29 22:29:56 INFO mapreduce.Job: The url to track the job: &lt;A href="http://namenode01.hadoop:8088/proxy/application_1414579088733_0011/" target="_blank"&gt;http://namenode01.hadoop:8088/proxy/application_1414579088733_0011/&lt;/A&gt;&lt;BR /&gt;14/10/29 22:29:56 INFO mapreduce.Job: Running job: job_1414579088733_0011&lt;BR /&gt;14/10/29 22:30:40 INFO mapreduce.Job: Job job_1414579088733_0011 running in uber mode : false&lt;BR /&gt;14/10/29 22:30:40 INFO mapreduce.Job: map 0% reduce 0%&lt;BR /&gt;14/10/29 22:30:50 INFO mapreduce.Job: map 30% reduce 0%&lt;BR /&gt;14/10/29 22:31:09 INFO mapreduce.Job: map 50% reduce 0%&lt;BR /&gt;14/10/29 22:31:18 INFO mapreduce.Job: map 70% reduce 0%&lt;BR /&gt;14/10/29 22:31:21 INFO mapreduce.Job: map 100% reduce 0%&lt;BR /&gt;14/10/29 22:31:30 INFO mapreduce.Job: map 100% reduce 100%&lt;BR /&gt;14/10/29 22:31:30 INFO mapreduce.Job: Job job_1414579088733_0011 completed successfully&lt;BR /&gt;14/10/29 22:31:30 INFO mapreduce.Job: Counters: 50&lt;BR /&gt;File System Counters&lt;BR /&gt;FILE: Number of bytes read=92&lt;BR /&gt;FILE: Number of bytes written=1235676&lt;BR /&gt;FILE: Number of read operations=0&lt;BR /&gt;FILE: Number of large read 
operations=0&lt;BR /&gt;FILE: Number of write operations=0&lt;BR /&gt;HDFS: Number of bytes read=2570&lt;BR /&gt;HDFS: Number of bytes written=215&lt;BR /&gt;HDFS: Number of read operations=43&lt;BR /&gt;HDFS: Number of large read operations=0&lt;BR /&gt;HDFS: Number of write operations=3&lt;BR /&gt;Job Counters&lt;BR /&gt;Launched map tasks=10&lt;BR /&gt;Launched reduce tasks=1&lt;BR /&gt;Data-local map tasks=9&lt;BR /&gt;Rack-local map tasks=1&lt;BR /&gt;Total time spent by all maps in occupied slots (ms)=268763&lt;BR /&gt;Total time spent by all reduces in occupied slots (ms)=3396&lt;BR /&gt;Total time spent by all map tasks (ms)=268763&lt;BR /&gt;Total time spent by all reduce tasks (ms)=3396&lt;BR /&gt;Total vcore-seconds taken by all map tasks=268763&lt;BR /&gt;Total vcore-seconds taken by all reduce tasks=3396&lt;BR /&gt;Total megabyte-seconds taken by all map tasks=275213312&lt;BR /&gt;Total megabyte-seconds taken by all reduce tasks=3477504&lt;BR /&gt;Map-Reduce Framework&lt;BR /&gt;Map input records=10&lt;BR /&gt;Map output records=20&lt;BR /&gt;Map output bytes=180&lt;BR /&gt;Map output materialized bytes=339&lt;BR /&gt;Input split bytes=1390&lt;BR /&gt;Combine input records=0&lt;BR /&gt;Combine output records=0&lt;BR /&gt;Reduce input groups=2&lt;BR /&gt;Reduce shuffle bytes=339&lt;BR /&gt;Reduce input records=20&lt;BR /&gt;Reduce output records=0&lt;BR /&gt;Spilled Records=40&lt;BR /&gt;Shuffled Maps =10&lt;BR /&gt;Failed Shuffles=0&lt;BR /&gt;Merged Map outputs=10&lt;BR /&gt;GC time elapsed (ms)=423&lt;BR /&gt;CPU time spent (ms)=7420&lt;BR /&gt;Physical memory (bytes) snapshot=4415447040&lt;BR /&gt;Virtual memory (bytes) snapshot=16896184320&lt;BR /&gt;Total committed heap usage (bytes)=4080009216&lt;BR /&gt;Shuffle Errors&lt;BR /&gt;BAD_ID=0&lt;BR /&gt;CONNECTION=0&lt;BR /&gt;IO_ERROR=0&lt;BR /&gt;WRONG_LENGTH=0&lt;BR /&gt;WRONG_MAP=0&lt;BR /&gt;WRONG_REDUCE=0&lt;BR /&gt;File Input Format Counters&lt;BR /&gt;Bytes Read=1180&lt;BR /&gt;File Output 
Format Counters&lt;BR /&gt;Bytes Written=97&lt;BR /&gt;Job Finished in 105.199 seconds&lt;BR /&gt;Estimated value of Pi is 3.20000000000000000000&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Could you give me some suggestions on how to fix this issue? I have already opened a thread about it; the link is:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.cloudera.com/t5/Batch-Processing-and-Workflow/example-Mapreduce-FAILED-after-upgrade-from-5-1-to-5-2/m-p/20993#U20993" target="_blank"&gt;http://community.cloudera.com/t5/Batch-Processing-and-Workflow/example-Mapreduce-FAILED-after-upgrade-from-5-1-to-5-2/m-p/20993#U20993&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please ignore that link; if anybody has an idea, please post your solution here. Thanks very much.&lt;/P&gt;</description>
      <pubDate>Tue, 21 Apr 2026 14:00:22 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Cloudera-Support-Very-strange-issue-system-user-can-t-invoke/m-p/21008#M3453</guid>
      <dc:creator>iamfromsky</dc:creator>
      <dc:date>2026-04-21T14:00:22Z</dc:date>
    </item>
    <item>
      <title>Re: Cloudera Support, Very strange issue: system user can't invoke Container, other users can.</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Cloudera-Support-Very-strange-issue-system-user-can-t-invoke/m-p/21010#M3454</link>
      <description>&lt;P&gt;I forgot to mention: I can run SELECT queries in Hive and Impala through Hue without any problem. Those queries also run as MapReduce jobs, yet they succeed, which is why I say this issue is strange.&lt;/P&gt;</description>
      <pubDate>Wed, 29 Oct 2014 16:05:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Cloudera-Support-Very-strange-issue-system-user-can-t-invoke/m-p/21010#M3454</guid>
      <dc:creator>iamfromsky</dc:creator>
      <dc:date>2014-10-29T16:05:27Z</dc:date>
    </item>
    <item>
      <title>Re: Cloudera Support, Very strange issue: system user can't invoke Container, other users can.</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Cloudera-Support-Very-strange-issue-system-user-can-t-invoke/m-p/21035#M3455</link>
      <description>&lt;P&gt;I dug deeper today and found something.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please have a look at the information below:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;[root@datanode03 usercache]# pwd&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;/yarn/nm/usercache&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;[root@datanode03 usercache]# ls&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;hive hue jlwang test&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;There are four directories in /yarn/nm/usercache, and those four users can run the example MapReduce job or Sqoop successfully. As I said, system users such as hdfs and yarn cannot launch containers. At first I suspected it was because their user IDs are below 1000, but I have checked the settings many times and they look correct: I set min.user.id = 0 and added hdfs, mapred, and yarn to the allowed user list.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;At this point my assumption is that it is because there are no hdfs or yarn directories in /yarn/nm/usercache. Normally, every user gets a directory created under /yarn/nm/usercache the first time they run a MapReduce job, but no such directory was ever created for hdfs or yarn. Why?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;Test (confirm that when a new user runs a MapReduce job, a directory is created in usercache):&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1) Create a new user named iamfromsky, with a user ID below 1000:&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;useradd -u 600 iamfromsky&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;2) addprinc iamfromsky@DDS.COM&lt;/P&gt;&lt;P&gt;3) Log in to the Linux system as 
iamfromsky, and execute map reduce job.&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;[iamfromsky@datanode03 hadoop-0.20-mapreduce]$ hadoop jar hadoop-examples.jar pi 10 10&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;Number of Maps = 10&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Samples per Map = 10&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #1&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #2&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #3&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #4&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #5&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #6&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #7&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #8&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Wrote input for Map #9&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Starting Job&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:00 INFO client.RMProxy: Connecting to ResourceManager at namenode01.hadoop/10.32.87.9:8032&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:00 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 91 for iamfromsky on ha-hdfs:cluster&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:00 INFO security.TokenCache: Got dt for hdfs://cluster; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:cluster, Ident: (HDFS_DELEGATION_TOKEN token 91 for iamfromsky)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:00 INFO input.FileInputFormat: Total input paths to process : 10&lt;/SPAN&gt;&lt;BR 
/&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:00 INFO mapreduce.JobSubmitter: number of splits:10&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:00 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1414638687299_0013&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:00 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:cluster, Ident: (HDFS_DELEGATION_TOKEN token 91 for iamfromsky)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:03 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414638687299_0013 is still in NEW&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:05 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414638687299_0013 is still in NEW&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:07 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414638687299_0013 is still in NEW&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:09 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414638687299_0013 is still in NEW&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:11 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1414638687299_0013 is still in NEW&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:11 INFO impl.YarnClientImpl: Submitted application application_1414638687299_0013&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:11 INFO mapreduce.Job: The url to track the job: &lt;A href="http://namenode01.hadoop:8088/proxy/application_1414638687299_0013/" 
target="_blank"&gt;http://namenode01.hadoop:8088/proxy/application_1414638687299_0013/&lt;/A&gt;&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:11 INFO mapreduce.Job: Running job: job_1414638687299_0013&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:55 INFO mapreduce.Job: Job job_1414638687299_0013 running in uber mode : false&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:10:55 INFO mapreduce.Job: map 0% reduce 0%&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:11:05 INFO mapreduce.Job: map 30% reduce 0%&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:11:24 INFO mapreduce.Job: map 50% reduce 0%&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:11:34 INFO mapreduce.Job: map 70% reduce 0%&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:11:36 INFO mapreduce.Job: map 100% reduce 0%&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:11:44 INFO mapreduce.Job: map 100% reduce 100%&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:11:44 INFO mapreduce.Job: Job job_1414638687299_0013 completed successfully&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;14/10/30 14:11:44 INFO mapreduce.Job: Counters: 49&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;File System Counters&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;FILE: Number of bytes read=89&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;FILE: Number of bytes written=1204297&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;FILE: Number of read operations=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;FILE: Number of large read operations=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;FILE: Number of write operations=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;HDFS: Number of bytes 
read=2630&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;HDFS: Number of bytes written=215&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;HDFS: Number of read operations=43&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;HDFS: Number of large read operations=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;HDFS: Number of write operations=3&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Job Counters &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Launched map tasks=10&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Launched reduce tasks=1&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Data-local map tasks=10&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Total time spent by all maps in occupied slots (ms)=264122&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Total time spent by all reduces in occupied slots (ms)=3223&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Total time spent by all map tasks (ms)=264122&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Total time spent by all reduce tasks (ms)=3223&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Total vcore-seconds taken by all map tasks=264122&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Total vcore-seconds taken by all reduce tasks=3223&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Total megabyte-seconds taken by all map tasks=270460928&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Total megabyte-seconds taken by all reduce tasks=3300352&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Map-Reduce Framework&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Map input records=10&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Map output records=20&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Map output bytes=180&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: 
#0000ff;"&gt;Map output materialized bytes=339&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Input split bytes=1450&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Combine input records=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Combine output records=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Reduce input groups=2&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Reduce shuffle bytes=339&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Reduce input records=20&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Reduce output records=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Spilled Records=40&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Shuffled Maps =10&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Failed Shuffles=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Merged Map outputs=10&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;GC time elapsed (ms)=423&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;CPU time spent (ms)=6450&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Physical memory (bytes) snapshot=4420325376&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Virtual memory (bytes) snapshot=16857763840&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Total committed heap usage (bytes)=4029153280&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Shuffle Errors&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;BAD_ID=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;CONNECTION=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;IO_ERROR=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;WRONG_LENGTH=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;WRONG_MAP=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;WRONG_REDUCE=0&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: 
#0000ff;"&gt;File Input Format Counters &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Bytes Read=1180&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;File Output Format Counters &lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Bytes Written=97&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Job Finished in 104.169 seconds&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;Estimated value of Pi is 3.20000000000000000000&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;4) check usercache directory&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;[yarn@datanode03 usercache]$ cd /yarn/nm/usercache/&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN style="color: #0000ff;"&gt;[yarn@datanode03 usercache]$ ls&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #0000ff;"&gt;&amp;nbsp;hive hue iamfromsky jlwang test&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;we can see, the iamfromsky directory has been created. &amp;nbsp;then i am going to create hdfs directory manually, and try again, mission failed.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;so i don't think the root cause is hdfs can't create directory, i think there are some reasons to cause can't create hdfs directory.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;who can give me some advises ?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 30 Oct 2014 06:29:44 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Cloudera-Support-Very-strange-issue-system-user-can-t-invoke/m-p/21035#M3455</guid>
      <dc:creator>iamfromsky</dc:creator>
      <dc:date>2014-10-30T06:29:44Z</dc:date>
    </item>
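A quick way to sanity-check the settings this post describes (min.user.id and the allowed user list) is to parse container-executor.cfg directly. The sketch below is illustrative only: it writes a sample file to a temp path, since on a Cloudera Manager cluster the effective copy is rendered into the NodeManager's process directory rather than edited by hand, and the sample values are assumptions, not this cluster's.

```shell
#!/bin/sh
# Illustrative check of the LinuxContainerExecutor settings discussed in
# this thread. The cfg contents below are a sample, not this cluster's.
CFG=$(mktemp)
printf '%s\n' \
  'yarn.nodemanager.linux-container-executor.group=yarn' \
  'banned.users=bin' \
  'min.user.id=0' \
  'allowed.system.users=nobody,impala,hive,hdfs,yarn,mapred' > "$CFG"

min_uid=$(sed -n 's/^min\.user\.id=//p' "$CFG")
banned=$(sed -n 's/^banned\.users=//p' "$CFG")
allowed=$(sed -n 's/^allowed\.system\.users=//p' "$CFG")

echo "min.user.id=$min_uid"
# For hdfs to launch containers it must be absent from banned.users and,
# while its uid is below min.user.id, present in allowed.system.users.
case ",$banned," in
  *,hdfs,*) echo "hdfs is banned" ;;
  *)        echo "hdfs is not banned" ;;
esac
case ",$allowed," in
  *,hdfs,*) echo "hdfs is in allowed.system.users" ;;
esac
```

Note that banned.users takes precedence: a user listed there is refused even if it also appears in allowed.system.users, which is one reason adding hdfs to the allow list alone may not be enough.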
    <item>
      <title>Re: Cloudera Support, Very strange issue: system user can't invoke Container, other users can.</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Cloudera-Support-Very-strange-issue-system-user-can-t-invoke/m-p/21061#M3456</link>
      <description>Can anyone offer some advice? This issue has been bothering me greatly.</description>
      <pubDate>Thu, 30 Oct 2014 17:27:20 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Cloudera-Support-Very-strange-issue-system-user-can-t-invoke/m-p/21061#M3456</guid>
      <dc:creator>iamfromsky</dc:creator>
      <dc:date>2014-10-30T17:27:20Z</dc:date>
    </item>
    <item>
      <title>Re: Cloudera Support, Very strange issue: system user can't invoke Container, other users can.</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Cloudera-Support-Very-strange-issue-system-user-can-t-invoke/m-p/21066#M3457</link>
      <description>&amp;gt; why ? why ? why ? i ask myself for many times, but no answer. but i believe this 1000 has connection to that 1000.&lt;BR /&gt;&lt;BR /&gt;No. One is the default min user id; the other is an exit code. They happen to have the same numeric value, but there is no relationship.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; then i am going to create hdfs directory manually&lt;BR /&gt;&lt;BR /&gt;You're not supposed to mess with /yarn/nm/usercache.&lt;BR /&gt;&lt;BR /&gt;---&lt;BR /&gt;&lt;BR /&gt;First of all, why do you want to run a job as user `hdfs' or `yarn'? And are you using Cloudera Manager?&lt;BR /&gt;&lt;BR /&gt;Let's say you have a legitimate reason to use the `hdfs' user. Did you restart YARN after modifying container-executor.cfg? What is the container log output on such a failed launch?</description>
      <pubDate>Thu, 30 Oct 2014 17:55:42 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Cloudera-Support-Very-strange-issue-system-user-can-t-invoke/m-p/21066#M3457</guid>
      <dc:creator>bcwalrus</dc:creator>
      <dc:date>2014-10-30T17:55:42Z</dc:date>
    </item>
    <item>
      <title>Re: Cloudera Support, Very strange issue: system user can't invoke Container, other users can.</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Cloudera-Support-Very-strange-issue-system-user-can-t-invoke/m-p/21126#M3458</link>
      <description>&lt;P&gt;As I said, it cannot even launch a container, so there is no container log at all.&lt;/P&gt;</description>
      <pubDate>Sun, 02 Nov 2014 10:14:05 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Cloudera-Support-Very-strange-issue-system-user-can-t-invoke/m-p/21126#M3458</guid>
      <dc:creator>iamfromsky</dc:creator>
      <dc:date>2014-11-02T10:14:05Z</dc:date>
    </item>
    <item>
      <title>Re: Cloudera Support, Very strange issue: system user can't invoke Container, other users can.</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Cloudera-Support-Very-strange-issue-system-user-can-t-invoke/m-p/21192#M3459</link>
      <description>&lt;P&gt;Please refer to the following page as part of the Kerberos setup:&amp;nbsp; &lt;A href="http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cm_sg_s7_prepare_cluster.html" target="_blank"&gt;http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cm_sg_s7_prepare_cluster.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;By default, the &lt;EM&gt;mapred&lt;/EM&gt;, &lt;EM&gt;hdfs&lt;/EM&gt;, and &lt;EM&gt;bin&lt;/EM&gt; user accounts are prevented from submitting and executing jobs.&lt;/P&gt;</description>
      <pubDate>Tue, 04 Nov 2014 21:39:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Cloudera-Support-Very-strange-issue-system-user-can-t-invoke/m-p/21192#M3459</guid>
      <dc:creator>CZezula</dc:creator>
      <dc:date>2014-11-04T21:39:18Z</dc:date>
    </item>
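The defaults CZezula refers to live in container-executor.cfg on each NodeManager. A representative file, shown purely as an illustration (Cloudera Manager generates the real one; these values are assumptions, not copied from this cluster), with the system accounts re-allowed the way the poster attempted:

```
# container-executor.cfg (illustrative sample, not from this cluster)
yarn.nodemanager.linux-container-executor.group=yarn
# users refused outright, regardless of the settings below
banned.users=bin
# refuse uids below this value unless listed in allowed.system.users
min.user.id=0
allowed.system.users=nobody,impala,hive,hdfs,yarn,mapred
```

For a user that is banned by default, removing it from banned.users is required in addition to the allow-list entry, and the NodeManagers must be restarted for the change to take effect.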
    <item>
      <title>Re: Cloudera Support, Very strange issue: system user can't invoke Container, other users can.</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Cloudera-Support-Very-strange-issue-system-user-can-t-invoke/m-p/21225#M3460</link>
      <description>&lt;P&gt;I have resolved this issue. Truth be told, I know that hdfs, yarn, and mapred are prevented from submitting jobs by default, but as you also know, min.user.id and the allowed user list exist precisely for this case, so the issue was not about the user or the job.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I monitored this many times: only one container would start, and it died automatically after a few seconds, whereas in the normal state my environment launches three or four containers. So I am sure the issue is that the containers could not work normally.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;But why? Since one container did start normally, I could check that container's log, but I found nothing useful; the errors were the same as those I showed above. As I also explained, when Sqoop runs normally it creates a directory under usercache, but when the Sqoop job fails it does not, so I guessed this directory might have some problem, though of course I did not know the exact reason.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Then I removed NameNode HA, leaving just one NameNode and one Secondary NameNode as in the default setup, and ran Sqoop again. It failed too, but this time the log was more readable and showed a "NOT INITALIZE CONTAINER" error. That log made me more confident that the job really could not launch a container.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Finally, I stopped the whole cluster, deleted /yarn/* on the DataNodes and NameNode, and started the cluster again. It works fine now.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I still do not know why hdfs or yarn could not launch containers, but the problem has been resolved.&lt;/P&gt;</description>
      <pubDate>Wed, 05 Nov 2014 14:29:56 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Cloudera-Support-Very-strange-issue-system-user-can-t-invoke/m-p/21225#M3460</guid>
      <dc:creator>iamfromsky</dc:creator>
      <dc:date>2014-11-05T14:29:56Z</dc:date>
    </item>
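The fix the poster landed on (stop everything, delete /yarn/*, restart) works because the NodeManager rebuilds its local directories on startup with fresh ownership. Below is a rehearsal of that wipe in a scratch directory; the directory names (usercache, filecache, nmPrivate) are the CDH defaults, not verified against this cluster, and the rm should never be run against the real /yarn while services are up.

```shell
#!/bin/sh
# Rehearse the cleanup from this thread in a scratch directory that
# stands in for a NodeManager local dir such as /yarn/nm.
SCRATCH=$(mktemp -d)
mkdir -p "$SCRATCH/usercache/hive" "$SCRATCH/usercache/hue" \
         "$SCRATCH/filecache" "$SCRATCH/nmPrivate"

# Step 1: stop YARN (and any running jobs) -- not shown here.
# Step 2: wipe the local dirs on every node:
rm -rf "$SCRATCH"/usercache "$SCRATCH"/filecache "$SCRATCH"/nmPrivate
# Step 3: restart the services; the NodeManager recreates this skeleton
# with correct ownership, which is what cleared the stale state here.

ls -A "$SCRATCH"   # nothing remains before the restart
```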
  </channel>
</rss>

