<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: map reduce 2.0 throwing error after enabling kerberos security in cloudera 5.4.3. in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/map-reduce-2-0-throwing-error-after-enabling-kerberos/m-p/30915#M6927</link>
    <description>&lt;P&gt;Depending on how you have set up YARN, "hive" should be part of the "allowed.system.users" list on the NodeManagers; that list whitelists system users whose IDs are below "min.user.id".&lt;/P&gt;&lt;P&gt;There is also a "banned.users" list of users that are never allowed to run containers.&lt;/P&gt;&lt;P&gt;All three settings need to be in sync to allow a container to run.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The hdfs user should not be allowed, since it is the superuser and could circumvent the HDFS access permissions.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When you execute a job from Hue, authentication is taken care of by Hue: it makes sure that the required Kerberos initialisation is performed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Wilfred&lt;/P&gt;</description>
    <pubDate>Wed, 19 Aug 2015 06:17:36 GMT</pubDate>
    <dc:creator>Wilfred</dc:creator>
    <dc:date>2015-08-19T06:17:36Z</dc:date>
    <item>
      <title>map reduce 2.0 throwing error after enabling kerberos security in cloudera 5.4.3.</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/map-reduce-2-0-throwing-error-after-enabling-kerberos/m-p/30633#M6924</link>
      <description>&lt;P&gt;I am getting the below error after enabling kerberos security in CDH 5.4.3.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;hive&amp;gt; select count(*) from hive_test;&lt;BR /&gt;Query ID = hdfs_20150810165757_e7420efe-67e7-4a75-bf78-0d1383f7cc09&lt;BR /&gt;Total jobs = 1&lt;BR /&gt;Launching Job 1 out of 1&lt;BR /&gt;Number of reduce tasks determined at compile time: 1&lt;BR /&gt;In order to change the average load for a reducer (in bytes):&lt;BR /&gt;&amp;nbsp; set hive.exec.reducers.bytes.per.reducer=&amp;lt;number&amp;gt;&lt;BR /&gt;In order to limit the maximum number of reducers:&lt;BR /&gt;&amp;nbsp; set hive.exec.reducers.max=&amp;lt;number&amp;gt;&lt;BR /&gt;In order to set a constant number of reducers:&lt;BR /&gt;&amp;nbsp; set mapreduce.job.reduces=&amp;lt;number&amp;gt;&lt;BR /&gt;Starting Job = job_1439195152382_0012, Tracking URL = &lt;A href="http://hdp-poc2.tcshydnextgen.com:8088/proxy/application_1439195152382_0012/" target="_blank"&gt;http://hdp-poc2.tcshydnextgen.com:8088/proxy/application_1439195152382_0012/&lt;/A&gt;&lt;BR /&gt;Kill Command = /usr/lib/hadoop/bin/hadoop job&amp;nbsp; -kill job_1439195152382_0012&lt;BR /&gt;Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0&lt;BR /&gt;2015-08-10 16:57:19,271 Stage-1 map = 0%,&amp;nbsp; reduce = 0%&lt;BR /&gt;Ended Job = job_1439195152382_0012 with errors&lt;BR /&gt;Error during job, obtaining debugging information...&lt;BR /&gt;FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask&lt;BR /&gt;MapReduce Jobs Launched:&lt;BR /&gt;Stage-Stage-1:&amp;nbsp; HDFS Read: 0 HDFS Write: 0 FAIL&lt;BR /&gt;Total MapReduce CPU Time Spent: 0 msec&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 09:37:25 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/map-reduce-2-0-throwing-error-after-enabling-kerberos/m-p/30633#M6924</guid>
      <dc:creator>soma123</dc:creator>
      <dc:date>2022-09-16T09:37:25Z</dc:date>
    </item>
    <item>
      <title>Re: map reduce 2.0 throwing error after enabling kerberos security in cloudera 5.4.3.</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/map-reduce-2-0-throwing-error-after-enabling-kerberos/m-p/30712#M6925</link>
      <description>&lt;P&gt;There is something really strange with this job.&lt;/P&gt;&lt;P&gt;The job should at least do something, but the submitted job shows: "number of mappers: 0; number of reducers: 0".&lt;/P&gt;&lt;P&gt;A count(*) should have at least 1 mapper (to read over the input) and 1 reducer (to sum it up).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Does this happen for every hive job?&lt;/P&gt;&lt;P&gt;Can you run a simple MapReduce example, such as pi, on the cluster to make sure that the cluster works at that level?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Wilfred&lt;/P&gt;</description>
      <pubDate>Wed, 12 Aug 2015 06:42:37 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/map-reduce-2-0-throwing-error-after-enabling-kerberos/m-p/30712#M6925</guid>
      <dc:creator>Wilfred</dc:creator>
      <dc:date>2015-08-12T06:42:37Z</dc:date>
    </item>
    <item>
      <title>Re: map reduce 2.0 throwing error after enabling kerberos security in cloudera 5.4.3.</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/map-reduce-2-0-throwing-error-after-enabling-kerberos/m-p/30880#M6926</link>
      <description>&lt;P&gt;Hi Wilfred,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Many thanks for the reply.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;-bash-4.1$ hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi 10 100&lt;BR /&gt;Number of Maps&amp;nbsp; = 10&lt;BR /&gt;Samples per Map = 100&lt;BR /&gt;Wrote input for Map #0&lt;BR /&gt;Wrote input for Map #1&lt;BR /&gt;Wrote input for Map #2&lt;BR /&gt;Wrote input for Map #3&lt;BR /&gt;Wrote input for Map #4&lt;BR /&gt;Wrote input for Map #5&lt;BR /&gt;Wrote input for Map #6&lt;BR /&gt;Wrote input for Map #7&lt;BR /&gt;Wrote input for Map #8&lt;BR /&gt;Wrote input for Map #9&lt;BR /&gt;Starting Job&lt;BR /&gt;15/08/18 15:57:14 INFO client.RMProxy: Connecting to ResourceManager at hdp-poc2.tcshydnextgen.com/10.138.90.72:8032&lt;BR /&gt;15/08/18 15:57:14 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 40 for hdfs on ha-hdfs:nameservice1&lt;BR /&gt;15/08/18 15:57:14 INFO security.TokenCache: Got dt for hdfs://nameservice1; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (HDFS_DELEGATION_TOKEN token 40 for hdfs)&lt;BR /&gt;15/08/18 15:57:14 INFO input.FileInputFormat: Total input paths to process : 10&lt;BR /&gt;15/08/18 15:57:14 INFO mapreduce.JobSubmitter: number of splits:10&lt;BR /&gt;15/08/18 15:57:15 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1439544552504_0019&lt;BR /&gt;15/08/18 15:57:15 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (HDFS_DELEGATION_TOKEN token 40 for hdfs)&lt;BR /&gt;15/08/18 15:57:15 INFO impl.YarnClientImpl: Submitted application application_1439544552504_0019&lt;BR /&gt;15/08/18 15:57:15 INFO mapreduce.Job: The url to track the job: &lt;A href="http://hdp-poc2.tcshydnextgen.com:8088/proxy/application_1439544552504_0019/" target="_blank"&gt;http://hdp-poc2.tcshydnextgen.com:8088/proxy/application_1439544552504_0019/&lt;/A&gt;&lt;BR /&gt;15/08/18 
15:57:15 INFO mapreduce.Job: Running job: job_1439544552504_0019&lt;BR /&gt;15/08/18 15:57:17 INFO mapreduce.Job: Job job_1439544552504_0019 running in uber mode : false&lt;BR /&gt;15/08/18 15:57:17 INFO mapreduce.Job:&amp;nbsp; map 0% reduce 0%&lt;BR /&gt;15/08/18 15:57:17 INFO mapreduce.Job: Job job_1439544552504_0019 failed with state FAILED due to: Application application_1439544552504_0019 failed 2 times due to AM Container for appattempt_1439544552504_0019_000002 exited with&amp;nbsp; exitCode: -1000&lt;BR /&gt;For more detailed output, check application tracking page:&lt;A href="http://hdp-poc2.tcshydnextgen.com:8088/proxy/application_1439544552504_0019/" target="_blank"&gt;http://hdp-poc2.tcshydnextgen.com:8088/proxy/application_1439544552504_0019/&lt;/A&gt; Then, click on links to logs of each attempt.&lt;BR /&gt;Diagnostics: Application application_1439544552504_0019 initialization failed (exitCode=255) with output: Requested user hdfs is not whitelisted and has id 493,which is below the minimum allowed 1000&lt;BR /&gt;&lt;BR /&gt;Failing this attempt. 
Failing the application.&lt;BR /&gt;15/08/18 15:57:17 INFO mapreduce.Job: Counters: 0&lt;BR /&gt;Job Finished in 3.528 seconds&lt;BR /&gt;java.io.FileNotFoundException: File does not exist: hdfs://nameservice1/user/hdfs/QuasiMonteCarlo_1439893630990_2073090894/out/reduce-out&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1132)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1124)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1124)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at org.apache.hadoop.io.SequenceFile$Reader.&amp;lt;init&amp;gt;(SequenceFile.java:1750)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at org.apache.hadoop.io.SequenceFile$Reader.&amp;lt;init&amp;gt;(SequenceFile.java:1774)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at java.lang.reflect.Method.invoke(Method.java:601)&lt;BR 
/&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at java.lang.reflect.Method.invoke(Method.java:601)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at org.apache.hadoop.util.RunJar.run(RunJar.java:221)&lt;BR /&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;at org.apache.hadoop.util.RunJar.main(RunJar.java:136)&lt;BR /&gt;-bash-4.1$&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Note: I am a newbie learning Hadoop, and as part of that learning I am executing various jobs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Here are the actions performed:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1) The MR job was executed as the hdfs user: su was used to log in as hdfs, and kinit hdfs was executed before the MR job command was run.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The error above was then observed.&lt;/P&gt;&lt;P&gt;From it, it is clear that hdfs is not a whitelisted user, since its uidNumber is &amp;lt; 1000.&lt;/P&gt;&lt;P&gt;Is this whitelist error responsible for the hive query failure? When the hive query is executed as the hdfs user, error2 is reported.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If my observation is correct, may I know why the hive query output does not give the exact root cause of the failure, i.e. the whitelist error message that was observed during the MR job execution?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The following is not related to the query above, but could you please answer it as well:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2) With a local user, a hive query was executed from hue and also from hive directly. Without issuing a kinit command, the hive query succeeds from hue but not from hive.&lt;/P&gt;&lt;P&gt;So, my question is:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Are operations performed from hue independent of kerberos security? If yes, may I know the reason for this?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 18 Aug 2015 11:30:28 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/map-reduce-2-0-throwing-error-after-enabling-kerberos/m-p/30880#M6926</guid>
      <dc:creator>soma123</dc:creator>
      <dc:date>2015-08-18T11:30:28Z</dc:date>
    </item>
    <item>
      <title>Re: map reduce 2.0 throwing error after enabling kerberos security in cloudera 5.4.3.</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/map-reduce-2-0-throwing-error-after-enabling-kerberos/m-p/30915#M6927</link>
      <description>&lt;P&gt;Depending on how you have set up YARN, "hive" should be part of the "allowed.system.users" list on the NodeManagers; that list whitelists system users whose IDs are below "min.user.id".&lt;/P&gt;&lt;P&gt;There is also a "banned.users" list of users that are never allowed to run containers.&lt;/P&gt;&lt;P&gt;All three settings need to be in sync to allow a container to run.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The hdfs user should not be allowed, since it is the superuser and could circumvent the HDFS access permissions.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When you execute a job from Hue, authentication is taken care of by Hue: it makes sure that the required Kerberos initialisation is performed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Wilfred&lt;/P&gt;</description>
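      <!--
      A minimal sketch of the three NodeManager settings the answer above mentions, as they
      appear in container-executor.cfg on a secure (Kerberized) cluster. The values below are
      illustrative assumptions, not taken from this thread; adjust them to your own cluster.

      ```
      # container-executor.cfg (illustrative values)
      # group the LinuxContainerExecutor binary runs as
      yarn.nodemanager.linux-container-executor.group=yarn
      # users that are never allowed to launch containers;
      # hdfs is banned here because it is the HDFS superuser
      banned.users=hdfs,yarn,mapred,bin
      # users with a UID below this are rejected unless whitelisted below
      min.user.id=1000
      # system users allowed to run containers despite a UID below min.user.id
      allowed.system.users=hive,nobody
      ```

      With this configuration, the pi job in the thread fails exactly as shown: hdfs has
      UID 493 (below min.user.id) and is not in allowed.system.users, so container
      initialization is refused; running the same job as hive or a local user with
      UID >= 1000 would be accepted.
      -->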
      <pubDate>Wed, 19 Aug 2015 06:17:36 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/map-reduce-2-0-throwing-error-after-enabling-kerberos/m-p/30915#M6927</guid>
      <dc:creator>Wilfred</dc:creator>
      <dc:date>2015-08-19T06:17:36Z</dc:date>
    </item>
  </channel>
</rss>

