<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Yarn jobs are failing after enabling MIT-Kerberos in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/290041#M214631</link>
    <description>&lt;P&gt;Yes, you are in the right direction. You can set&amp;nbsp;min.user.id to a lower value such as 500 and then re-submit the job.&lt;/P&gt;</description>
    <pubDate>Wed, 19 Feb 2020 06:12:08 GMT</pubDate>
    <dc:creator>venkatsambath</dc:creator>
    <dc:date>2020-02-19T06:12:08Z</dc:date>
    <item>
      <title>Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/289964#M214578</link>
      <description>&lt;P&gt;Hello Team,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I have enabled MIT Kerberos, integrated it with my cluster, and initialized the principals for hdfs, hbase, and yarn.&lt;/P&gt;
&lt;P&gt;I am able to access HDFS and the HBase tables.&lt;/P&gt;
&lt;P&gt;But when I try to run a sample MapReduce job, it fails. Find the error logs below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;==&amp;gt;&lt;/STRONG&gt; yarn jar /opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/jars/hadoop-examples.jar teragen 500000000 /tmp/teragen2&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Logs:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;WARN security.UserGroupInformation: PriviledgedActionException as:HTTP/hostname.org@FQDN.COM (auth:KERBEROS) cause:org.apache.hadoop.security.AccessControlException: Permission denied: user=HTTP, access=WRITE, inode="/user":mcaf:supergroup:drwxr-xr-x&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=HTTP, access=WRITE, inode="/user":mcaf:supergroup:drwxr-xr-x&lt;/P&gt;
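For reference, the denial above follows the POSIX-style permission check that HDFS applies to a path (owner bits, then group bits, then other bits). The following is a minimal illustrative sketch of that rule, not actual HDFS code; the names are taken from the error message above:

```python
# Illustrative POSIX-style permission check, the same rule HDFS applies.
# This is a simplified sketch, not the actual HDFS implementation.
def can_write(mode, owner, group, user, user_groups):
    """mode is an ls-style string such as 'drwxr-xr-x'."""
    if user == owner:
        bits = mode[1:4]      # owner bits, e.g. "rwx"
    elif group in user_groups:
        bits = mode[4:7]      # group bits, e.g. "r-x"
    else:
        bits = mode[7:10]     # other bits, e.g. "r-x"
    return "w" in bits

# /user is owned by mcaf:supergroup with mode drwxr-xr-x, so:
print(can_write("drwxr-xr-x", "mcaf", "supergroup", "HTTP", []))  # False
print(can_write("drwxr-xr-x", "mcaf", "supergroup", "mcaf", []))  # True
```

Since HTTP is neither the owner nor in the supergroup group, it falls through to the "other" bits, which lack write permission.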
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;hostname.org:~:HADOOP QA]$ klist&lt;BR /&gt;Ticket cache: FILE:/tmp/krb5cc_251473&lt;BR /&gt;Default principal: HTTP/hostname.org@FQDN.COM&lt;/P&gt;
&lt;P&gt;Valid starting Expires Service principal&lt;BR /&gt;02/18/20 01:55:32 02/19/20 01:55:32 krbtgt/FQDN.COM@FQDN.COM&lt;BR /&gt;renew until 02/23/20 01:55:32&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Can someone please look into the issue and help us?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks &amp;amp; Regards,&lt;/P&gt;
&lt;P&gt;Vinod&lt;/P&gt;</description>
      <pubDate>Tue, 18 Feb 2020 07:21:11 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/289964#M214578</guid>
      <dc:creator>kvinod</dc:creator>
      <dc:date>2020-02-18T07:21:11Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/289967#M214579</link>
      <description>&lt;P&gt;The klist output shows you are submitting the job as the&amp;nbsp;HTTP user:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;hostname.org:~:HADOOP QA]$ klist&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;Ticket cache: FILE:/tmp/krb5cc_251473&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;Default principal: HTTP/hostname.org@FQDN.COM&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;WARN security.UserGroupInformation: PriviledgedActionException as:HTTP/hostname.org@FQDN.COM (auth:KERBEROS) cause:org.apache.hadoop.security.AccessControlException: Permission denied: user=HTTP, access=WRITE, inode="/user":mcaf:supergroup:drwxr-xr-x&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The above error simply means the HTTP user does not have write permission on the /user directory. So you can either grant write permission to "others" on&amp;nbsp;/user in HDFS so that the HTTP user can write, or run the job after you kinit as the user&amp;nbsp;mcaf, which has write permission.&lt;/P&gt;</description>
      <pubDate>Tue, 18 Feb 2020 07:55:32 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/289967#M214579</guid>
      <dc:creator>venkatsambath</dc:creator>
      <dc:date>2020-02-18T07:55:32Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/289975#M214587</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/13587"&gt;@venkatsambath&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you for your response!&lt;/P&gt;&lt;P&gt;Actually we use mcaf as the user to execute the jobs, so why is the HTTP user coming into the picture?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;hostname.com:~:HADOOP QA]$ groups&lt;BR /&gt;mcaf supergroup&lt;BR /&gt;hostname.com:~:HADOOP QA]$ users&lt;BR /&gt;mcaf&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;hostname.com:~:HADOOP QA]$ hadoop fs -ls /&lt;BR /&gt;Found 4 items&lt;BR /&gt;drwx------ - hbase supergroup 0 2020-02-18 02:46 /hbase&lt;BR /&gt;drwxr-xr-x - hdfs supergroup 0 2015-02-04 11:44 /system&lt;BR /&gt;drwxrwxrwt - hdfs supergroup 0 2020-02-17 05:07 /tmp&lt;BR /&gt;drwxr-xr-x - mcaf supergroup 0 2019-03-28 03:12 /user&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;hostname.com:~:HADOOP QA]$ getent group supergroup&lt;BR /&gt;supergroup:x:25290:hbase,mcaf,zookeeper,hdfs&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;hostname.com:~:HADOOP QA]$ getent group hadoop&lt;BR /&gt;hadoop:x:497:mapred,yarn,hdfs&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can you please have a look and suggest what to do?&lt;BR /&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt; I am trying to enable Kerberos first, and once it is running without any interruptions or issues, we are planning to integrate it with AD.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Vinod&lt;/P&gt;</description>
      <pubDate>Tue, 18 Feb 2020 10:33:59 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/289975#M214587</guid>
      <dc:creator>kvinod</dc:creator>
      <dc:date>2020-02-18T10:33:59Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/290024#M214622</link>
      <description>&lt;P&gt;Actually we use mcaf as the user to execute the jobs, so why is the HTTP user coming into the picture?&lt;/P&gt;&lt;P&gt;--&amp;gt; By this do you mean that you switch to the mcaf unix user [su - mcaf] and then run the job? If yes, that alone is not enough. After enabling Kerberos, HDFS and YARN identify the user by the TGT, not by the unix user id. So even if you su to mcaf but hold a TGT for a different user [say HTTP], yarn/hdfs will identify you as that TGT user.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can you kinit as mcaf, then run klist [to ensure you have the mcaf TGT] and submit the job?&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 19 Feb 2020 03:42:11 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/290024#M214622</guid>
      <dc:creator>venkatsambath</dc:creator>
      <dc:date>2020-02-19T03:42:11Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/290039#M214629</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/13587"&gt;@venkatsambath&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;First, I verified whether I could access HDFS before doing "kinit mcaf", and access failed.&lt;/P&gt;&lt;P&gt;Then I did kinit mcaf, verified HDFS access, and was able to list files and create directories.&lt;/P&gt;&lt;P&gt;Next I triggered a sample yarn job:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;hostname.com:~:HADOOP QA]$ yarn jar /opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/jars/hadoop-examples.jar teragen 500000000 /tmp/teragen4&lt;/P&gt;&lt;P&gt;20/02/19 00:46:30 INFO client.RMProxy: Connecting to ResourceManager at resourcemanager/IP_ADDRESS:8032&lt;BR /&gt;20/02/19 00:46:30 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 8 for mcaf on ha-hdfs:nameservice1&lt;BR /&gt;20/02/19 00:46:30 INFO security.TokenCache: Got dt for hdfs://nameservice1; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (HDFS_DELEGATION_TOKEN token 8 for mcaf)&lt;BR /&gt;20/02/19 00:46:31 INFO terasort.TeraSort: Generating 500000000 using 2&lt;BR /&gt;20/02/19 00:46:31 INFO mapreduce.JobSubmitter: number of splits:2&lt;BR /&gt;20/02/19 00:46:31 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1582090413480_0002&lt;BR /&gt;20/02/19 00:46:31 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (HDFS_DELEGATION_TOKEN token 8 for mcaf)&lt;BR /&gt;20/02/19 00:46:32 INFO impl.YarnClientImpl: Submitted application application_1582090413480_0002&lt;BR /&gt;20/02/19 00:46:32 INFO mapreduce.Job: The url to track the job: http://resourcemanager:8088/proxy/application_1582090413480_0002/&lt;BR /&gt;20/02/19 00:46:32 INFO mapreduce.Job: Running job: job_1582090413480_0002&lt;BR /&gt;20/02/19 00:46:34 INFO mapreduce.Job: Job job_1582090413480_0002 running in uber mode : false&lt;BR /&gt;
20/02/19 00:46:34 INFO mapreduce.Job: map 0% reduce 0%&lt;BR /&gt;20/02/19 00:46:34 INFO mapreduce.Job: Job job_1582090413480_0002 failed with state FAILED due to: Application application_1582090413480_0002 failed 2 times due to AM Container for appattempt_1582090413480_0002_000002 exited with exitCode: -1000&lt;BR /&gt;For more detailed output, check application tracking page:http://resourcemanager:8088/proxy/application_1582090413480_0002/Then, click on links to logs of each attempt.&lt;BR /&gt;Diagnostics: Application application_1582090413480_0002 initialization failed (exitCode=255) with output: Requested user mcaf is not whitelisted and has id 779,which is below the minimum allowed 1000&lt;/P&gt;&lt;P&gt;Failing this attempt. Failing the application.&lt;BR /&gt;20/02/19 00:46:34 INFO mapreduce.Job: Counters: 0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can you please check it and let me know?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Vinod&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 19 Feb 2020 05:55:02 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/290039#M214629</guid>
      <dc:creator>kvinod</dc:creator>
      <dc:date>2020-02-19T05:55:02Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/290040#M214630</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/13587"&gt;@venkatsambath&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;FYI...&lt;/P&gt;&lt;DIV class="property-name"&gt;min.user.id is set to 1000 in my Yarn configurations.&lt;/DIV&gt;&lt;DIV class="property-name"&gt;&lt;DIV class="property-name"&gt;allowed.system.users is set to&amp;nbsp;impala,nobody,llama,hive in&amp;nbsp;my Yarn configurations.&lt;/DIV&gt;&lt;DIV class="property-name"&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class="property-name"&gt;Thanks,&lt;/DIV&gt;&lt;DIV class="property-name"&gt;Vinod&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Wed, 19 Feb 2020 06:07:41 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/290040#M214630</guid>
      <dc:creator>kvinod</dc:creator>
      <dc:date>2020-02-19T06:07:41Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/290041#M214631</link>
      <description>&lt;P&gt;Yes, you are in the right direction. You can set&amp;nbsp;min.user.id to a lower value such as 500 and then re-submit the job.&lt;/P&gt;</description>
      <pubDate>Wed, 19 Feb 2020 06:12:08 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/290041#M214631</guid>
      <dc:creator>venkatsambath</dc:creator>
      <dc:date>2020-02-19T06:12:08Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/290046#M214636</link>
      <description>&lt;P&gt;Thank you&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/13587"&gt;@venkatsambath&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;After modifying the min.user.id value to 500 I am able to run the sample MapReduce job, and I can see it under YARN applications in Cloudera Manager.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Now I have tried my regular job in the same cluster, but it is failing. Find the error messages below:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;ERROR 2020Feb19 02:01:21,086 main com.client.engineering.group.JOB.main.JOBMain: org.apache.hadoop.hbase.client.RetriesExhaustedException thrown: Can't get the location&lt;BR /&gt;org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location&lt;BR /&gt;at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:308) ~[JOB-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:149) ~[JOB-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:57) ~[JOB-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200) ~[JOB-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:293) ~[JOB-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:268) ~[JOB-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:140) ~[JOB-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.ClientScanner.&amp;lt;init&amp;gt;(ClientScanner.java:135) ~[JOB-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:888) ~[JOB-0.0.31.jar:0.0.31]&lt;BR /&gt;at 
com.client.engineering.group.JOB.main.JOBMain.hasStagingData(JOBMain.java:304) [JOB-0.0.31.jar:0.0.31]&lt;BR /&gt;at com.client.engineering.group.JOB.main.JOBMain.main(JOBMain.java:375) [JOB-0.0.31.jar:0.0.31]&lt;BR /&gt;Caused by: java.io.IOException: Broken pipe&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;ERROR 2020Feb19 02:01:30,198 main com.client.engineering.group.job.main.jobMain: _v.1.0.0a_ org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException thrown: Failed 1 action: IOException: 1 time,&lt;BR /&gt;org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;NOTE: I executed kinit mcaf before executing my job.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Do we need to execute 'kinit mcaf' every time before submitting a job?&lt;/P&gt;&lt;P&gt;And how can we configure scheduled jobs?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please help me understand.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Vinod&lt;/P&gt;</description>
      <pubDate>Wed, 19 Feb 2020 07:08:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/290046#M214636</guid>
      <dc:creator>kvinod</dc:creator>
      <dc:date>2020-02-19T07:08:12Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/290061#M214646</link>
      <description>&lt;LI-CODE lang="markup"&gt;ERROR 2020Feb19 02:01:21,086 main com.client.engineering.group.JOB.main.JOBMain: org.apache.hadoop.hbase.client.RetriesExhaustedException thrown: Can't get the location&lt;/LI-CODE&gt;&lt;P&gt;On this application which particular table are you trying to access? Did you validate if the user mcaf has permission to access the concerned table (&lt;A href="https://docs.cloudera.com/documentation/enterprise/5-14-x/topics/cdh_sg_hbase_authorization.html#topic_8_3_2" target="_blank"&gt;https://docs.cloudera.com/documentation/enterprise/5-14-x/topics/cdh_sg_hbase_authorization.html#topic_8_3_2&lt;/A&gt;&amp;nbsp;has the commands) If there is no permission for the concerned user, grant them required privileges.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you notice privileges required for mcaf are already provided. Then checking hbase master logs during the issue timeframe would give further clues.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Qn: And do we need to execute 'kinit mcaf' every time before submitting the job ? And how can we configure scheduled jobs ?&lt;/P&gt;&lt;P&gt;Ans: Yes and how are you scheduling the jobs? If its a shell script then you can include kinit command with mcaf's keytab which would avoid prompting for password&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 19 Feb 2020 09:01:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/290061#M214646</guid>
      <dc:creator>venkatsambath</dc:creator>
      <dc:date>2020-02-19T09:01:27Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/291126#M215302</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/13587"&gt;@venkatsambath&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As you said, I have put the kinit commands as the first step in my scripts, so whenever we execute the scripts the kinit commands also run. But I am still facing the same issue, except this time I can see zookeeper as the user.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The commands I am using:&amp;nbsp;&lt;/P&gt;&lt;P&gt;kinit -kt /home/mcaf/hdfs.keytab hdfs/hostname@Domain.ORG&lt;BR /&gt;kinit -kt /home/mcaf/hdfs.keytab HTTP/hostname@Domain.ORG&lt;/P&gt;&lt;P&gt;kinit -kt /home/mcaf/hbase.keytab hbase/hostname@Domain.ORG&lt;/P&gt;&lt;P&gt;kinit -kt /home/mcaf/yarn.keytab HTTP/hostname@Domain.ORG&lt;BR /&gt;kinit -kt /home/mcaf/yarn.keytab yarn/hostname@Domain.ORG&lt;/P&gt;&lt;P&gt;kinit -kt /home/mcaf/zookeeper.keytab zookeeper/hostname@Domain.org&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Error Logs:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;20/03/04 02:00:42 WARN security.UserGroupInformation: PriviledgedActionException as:zookeeper/hostname@Domain.ORG (auth:KERBEROS) cause:org.apache.hadoop.security.AccessControlException: Permission denied: user=zookeeper, access=WRITE, inode="/user":mcaf:supergroup:drwxr-xr-x&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:216)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:145)&lt;BR /&gt;at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6599)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6581)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6533)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4337)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4307)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4280)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:853)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:321)&lt;BR /&gt;at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:601)&lt;BR /&gt;at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)&lt;BR /&gt;at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)&lt;BR /&gt;at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)&lt;BR /&gt;at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)&lt;BR /&gt;at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:415)&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)&lt;BR /&gt;at 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)&lt;/P&gt;&lt;P&gt;org.apache.hadoop.security.AccessControlException: Permission denied: user=zookeeper, access=WRITE, inode="/user":mcaf:supergroup:drwxr-xr-x&lt;BR /&gt;org.apache.hadoop.security.AccessControlException: Permission denied: user=zookeeper, access=WRITE, inode="/user":mcaf:supergroup:drwxr-xr-x&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:216)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:145)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6599)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6581)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6533)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4337)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4307)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4280)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:853)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:321)&lt;BR /&gt;at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:601)&lt;BR /&gt;at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can you please help me on this issue?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Vinod&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 05 Mar 2020 08:29:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/291126#M215302</guid>
      <dc:creator>kvinod</dc:creator>
      <dc:date>2020-03-05T08:29:14Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/291161#M215309</link>
      <description>&lt;LI-CODE lang="markup"&gt;The commands i am using, 
kinit -kt /home/mcaf/hdfs.keytab hdfs/hostname@Domain.ORG
kinit -kt /home/mcaf/hdfs.keytab HTTP/hostname@Domain.ORG
kinit -kt /home/mcaf/hbase.keytab hbase/hostname@Domain.ORG
kinit -kt /home/mcaf/yarn.keytab HTTP/hostname@Domain.ORG
kinit -kt /home/mcaf/yarn.keytab yarn/hostname@Domain.ORG
kinit -kt /home/mcaf/zookeeper.keytab zookeeper/hostname@Domain.org&lt;/LI-CODE&gt;&lt;P&gt;You have to kinit as the user with which you want to access the data. In the above commands I see you are running kinit as&amp;nbsp;hdfs, HTTP, hbase, yarn and zookeeper sequentially. When you run&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;kinit -kt /home/mcaf/hdfs.keytab hdfs/hostname@Domain.ORG&lt;/LI-CODE&gt;&lt;P&gt;it writes a TGT to the location set by&amp;nbsp;&lt;STRONG&gt;KRB5CCNAME&lt;/STRONG&gt; (the default is /tmp/krb5cc_[uid]). When you run the next kinit, the TGT acquired by the previous command gets overwritten. In your case you run multiple kinits, and the last one is for the zookeeper user, so the remaining TGT belongs to zookeeper and the TGTs of all the users before it were overwritten. So use a single kinit command with the user intended for that application.&lt;/P&gt;</description>
      <pubDate>Thu, 05 Mar 2020 10:11:21 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/291161#M215309</guid>
      <dc:creator>venkatsambath</dc:creator>
      <dc:date>2020-03-05T10:11:21Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/292316#M216027</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/13587"&gt;@venkatsambath&lt;/a&gt;&amp;nbsp;Sorry for the late response.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you for your valuable response; I see where I was making the mistake.&lt;/P&gt;&lt;P&gt;Here I want to create a keytab file for a user so that the user can access all the services running in the cluster, such as hdfs and hbase.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have tried the following steps; please suggest with your inputs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;sudo ktutil&lt;/P&gt;&lt;P&gt;ktutil:&amp;nbsp; addent -password -p mcaf@Domain.ORG -k 1 -e RC4-HMAC&lt;/P&gt;&lt;P&gt;Password for mcaf@Domain.ORG:&lt;/P&gt;&lt;P&gt;ktutil:&amp;nbsp; wkt mcaf.keytab&lt;/P&gt;&lt;P&gt;ktutil:&amp;nbsp; q&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;klist -kt mcaf.keytab&lt;BR /&gt;Keytab name: FILE:mcaf.keytab&lt;BR /&gt;KVNO Timestamp Principal&lt;BR /&gt;---- ----------------- --------------------------------------------------------&lt;BR /&gt;1 03/23/20 11:58:38 mcaf@Domain.ORG&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;sudo kinit -kt mcaf.keytab mcaf@Domain.ORG&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;And I am able to access hdfs using,&lt;/P&gt;&lt;P&gt;hadoop fs -ls /&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;But in hbase, I am not able to see the tables.&lt;/P&gt;&lt;P&gt;hbase(main):001:0&amp;gt; list&lt;BR /&gt;TABLE&lt;BR /&gt;0 row(s) in 0.4090 seconds&lt;/P&gt;&lt;P&gt;=&amp;gt; []&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When I copied the latest keytab from the process directory for hbase-master,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;dayrhemwkq001:~:HADOOP QA]$ kinit -kt hbase.keytab hbase/dayrhemwkq001.enterprisenet.org@MWKRBCDH.ORG&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am able to see the tables.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;My question is: I want to give access to a user such that the user can access hbase, hdfs, and the other services running in the cluster.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please suggest with your inputs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Vinod&lt;/P&gt;</description>
      <pubDate>Mon, 23 Mar 2020 16:34:25 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/292316#M216027</guid>
      <dc:creator>kvinod</dc:creator>
      <dc:date>2020-03-23T16:34:25Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/292342#M216036</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/30470"&gt;@kvinod&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Your issue can be resolved by merging the keytabs in question.&lt;/P&gt;&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Merge keytab files&lt;/STRONG&gt;&lt;/U&gt;&lt;BR /&gt;If you have multiple keytab files that need to be in one place, you can merge the keys with the ktutil command.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The process differs depending on whether you are using MIT or Heimdal Kerberos; to merge keytab files using MIT Kerberos, use:&lt;/P&gt;&lt;P&gt;In the example below I am merging [&lt;STRONG&gt;mcaf.keytab&lt;/STRONG&gt;], [&lt;STRONG&gt;hbase.keytab&lt;/STRONG&gt;] and [&lt;STRONG&gt;zk.keytab&lt;/STRONG&gt;] into &lt;STRONG&gt;mcafmerged.keytab&lt;/STRONG&gt;. You can merge any number of keytabs, but you must ensure the executing user has the correct permissions; it can be a good idea to copy the keytabs and merge them from the user's home directory.&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;$ ktutil
  ktutil: read_kt mcaf.keytab
  ktutil: read_kt hbase.keytab
  ktutil: read_kt zk.keytab
  ktutil: write_kt 
  ktutil: quit&lt;/LI-CODE&gt;&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;To verify the merge&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Use&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;$ klist -k mcafmerged.keytab&lt;/LI-CODE&gt;&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Now to access hbase&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;$ sudo kinit -kt mcafmerged.keytab mcaf@Domain.ORG&lt;/LI-CODE&gt;&lt;P&gt;The keytab file is independent of the computer it's created on, its filename, and its location in the file system. Once it's created, you can rename it, move it to another location on the same compute.&lt;/P&gt;</description>
      <pubDate>Mon, 23 Mar 2020 19:14:11 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/292342#M216036</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2020-03-23T19:14:11Z</dc:date>
    </item>
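The interactive ktutil session shown in the reply above can also be driven by a script. A minimal sketch, assuming the keytab filenames from the post; `merge_keytabs` is a hypothetical helper (not a ktutil feature) that just assembles and prints the ktutil command sequence so it can be reviewed, then piped to `ktutil` on a Kerberos-enabled host:

```shell
# Sketch: build the ktutil command sequence for a non-interactive keytab merge.
# Assumption: source/target filenames are the ones from the post. On a real
# host you would run:
#   merge_keytabs mcafmerged.keytab mcaf.keytab hbase.keytab zk.keytab | ktutil
#   klist -k mcafmerged.keytab   # verify the merged entries

merge_keytabs() {
    out="$1"; shift                # target keytab, e.g. mcafmerged.keytab
    for kt in "$@"; do
        echo "read_kt $kt"         # load each source keytab into ktutil's buffer
    done
    echo "write_kt $out"           # write the combined entries to the target
    echo "quit"
}

# Dry run: print the commands that would be fed to ktutil.
merge_keytabs mcafmerged.keytab mcaf.keytab hbase.keytab zk.keytab
```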
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/292371#M216059</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/20288"&gt;@Shelton&lt;/a&gt;&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/13587"&gt;@venkatsambath&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As you mentioned above, I merged the keytab files (mcaf.keytab, yarn.keytab, and the other service keytabs).&lt;/P&gt;&lt;P&gt;I created mcafmerged.keytab and ran kinit -kt mcafmerged.keytab mcaf@Domain.ORG&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;After that I am able to access HDFS, to access HBase tables through the hbase shell, and to see the output of yarn application -list.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;But when I run the sample YARN job below,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;yarn jar /opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/jars/hadoop-examples.jar teragen 500000000 /tmp/teragen44&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I get the errors below:&lt;/P&gt;&lt;P&gt;Can't create directory /disk1/yarn/nm/usercache/mcaf/appcache/application_1585026002165_0001 - Permission denied&lt;BR /&gt;Can't create directory /disk2/yarn/nm/usercache/mcaf/appcache/application_1585026002165_0001 - Permission denied&lt;BR /&gt;Can't create directory /disk3/yarn/nm/usercache/mcaf/appcache/application_1585026002165_0001 - Permission denied&lt;BR /&gt;Can't create directory /disk4/yarn/nm/usercache/mcaf/appcache/application_1585026002165_0001 - Permission denied&lt;BR /&gt;Can't create directory /disk5/yarn/nm/usercache/mcaf/appcache/application_1585026002165_0001 - Permission denied&lt;BR /&gt;Did not create any app directories.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I also gave my application job a trial run, and it is failing with the errors below:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location&lt;BR /&gt;at 
org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:308) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:149) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:57) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:293) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:268) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:140) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.ClientScanner.&amp;lt;init&amp;gt;(ClientScanner.java:135) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:888) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at com.class.name.dmxsloader.main.DMXSLoaderMain.hasStagingData(DMXSLoaderMain.java:304) [DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at com.class.name.dmxsloader.main.DMXSLoaderMain.main(DMXSLoaderMain.java:375) [DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;Caused by: java.io.IOException: Broken pipe&lt;BR /&gt;at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[?:1.7.0_67]&lt;BR /&gt;at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) ~[?:1.7.0_67]&lt;BR /&gt;at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[?:1.7.0_67]&lt;BR /&gt;at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[?:1.7.0_67]&lt;BR /&gt;at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487) ~[?:1.7.0_67]&lt;BR /&gt;at 
org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) ~[hadoop-common-2.6.0-cdh5.4.7.jar:?]&lt;BR /&gt;at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) ~[hadoop-common-2.6.0-cdh5.4.7.jar:?]&lt;BR /&gt;at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) ~[hadoop-common-2.6.0-cdh5.4.7.jar:?]&lt;BR /&gt;at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) ~[hadoop-common-2.6.0-cdh5.4.7.jar:?]&lt;/P&gt;&lt;P&gt;at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[?:1.7.0_67]&lt;BR /&gt;at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[?:1.7.0_67]&lt;BR /&gt;at java.io.DataOutputStream.flush(DataOutputStream.java:123) ~[?:1.7.0_67]&lt;BR /&gt;at org.apache.hadoop.hbase.ipc.IPCUtil.write(IPCUtil.java:246) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.ipc.IPCUtil.write(IPCUtil.java:234) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:895) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:850) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1184) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:31865) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1580) 
~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1294) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1126) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:299) ~[DMXSLoader-0.0.31.jar:0.0.31]&lt;BR /&gt;... 10 more&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;NOTE: I have placed the command below in the first line of my application script, before launching the job:&lt;/P&gt;&lt;P&gt;kinit -kt mcafmerged.keytab&amp;nbsp; mcaf@MWKRBCDH.ORG&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please let me know what I am missing here.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks &amp;amp; Regards,&lt;/P&gt;&lt;P&gt;Vinod&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 24 Mar 2020 12:22:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/292371#M216059</guid>
      <dc:creator>kvinod</dc:creator>
      <dc:date>2020-03-24T12:22:18Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/292389#M216063</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/30470"&gt;@kvinod&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;That's great that the initial issue was resolved by the keytab merge. If I may ask, why did you merge all the keytabs into &lt;STRONG&gt;mcafmerged.keytab&lt;/STRONG&gt;? It would have been enough to merge only the HBase keytab and your &lt;STRONG&gt;mcaf keytab&lt;/STRONG&gt;. That said, your subsequent error is a permission issue on the directory&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;STRONG&gt;/disk1/yarn/nm/usercache/mcaf.&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Can you share the output of&lt;/SPAN&gt;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;$ ls  /disk1/yarn/nm/usercache&lt;/LI-CODE&gt;&lt;P&gt;&lt;SPAN&gt;and&lt;/SPAN&gt;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;$ ls /disk1/yarn/nm/usercache/mcaf&lt;/LI-CODE&gt;&lt;P&gt;Can you also try changing the ownership to the correct group for user mcaf, i.e. as the root user:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;# chown -R mcaf:{group}  /disk1/yarn/nm/usercache/mcaf&lt;/LI-CODE&gt;&lt;P&gt;Then rerun the teragen command; that should work.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Keep me posted&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 24 Mar 2020 10:10:37 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/292389#M216063</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2020-03-24T10:10:37Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/292415#M216078</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/20288"&gt;@Shelton&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for your immediate response.&lt;/P&gt;&lt;P&gt;Find the outputs below:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;HOSTNAME]$ ls /disk1/yarn/nm/usercache&lt;BR /&gt;mcaf&lt;BR /&gt;HOSTNAME]$ ls /disk1/yarn/nm/usercache/mcaf&lt;BR /&gt;appcache filecache&lt;BR /&gt;HOSTNAME]$ ls -lrt /disk1/yarn/nm/usercache/mcaf&lt;BR /&gt;total 20&lt;BR /&gt;drwx--x--- 397 yarn yarn 16384 Mar 4 01:18 filecache&lt;BR /&gt;drwx--x--- 2 yarn yarn 4096 Mar 4 02:22 appcache&lt;BR /&gt;HOSTNAME]$ ls -lrt /disk1/yarn/nm/usercache&lt;BR /&gt;total 4&lt;BR /&gt;drwxr-s--- 4 mcaf yarn 4096 Feb 24 01:26 mcaf&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Q1. If we enable Kerberos, do we need to modify the permissions on the above directory?&lt;/P&gt;&lt;P&gt;Also, mcaf has sudo access.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Q2. We are using two edge nodes. Can I use the above merged keytab on the other edge node?&lt;/P&gt;&lt;P&gt;Or do I need to generate it there the way I did on the current edge node?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Vinod&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 24 Mar 2020 12:30:31 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/292415#M216078</guid>
      <dc:creator>kvinod</dc:creator>
      <dc:date>2020-03-24T12:30:31Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/292423#M216086</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/30470"&gt;@kvinod&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;I can see the setgid bit (&lt;STRONG&gt;drwxr-s---&lt;/STRONG&gt;) is set, which alters the standard behavior so that files created inside the directory are group-owned not by the user who created them but by the group of the parent directory itself.&lt;/EM&gt;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;$ ls -lrt /disk1/yarn/nm/usercache
total 4
drwxr-s--- 4 mcaf yarn 4096 Feb 24 01:26 mcaf&lt;/LI-CODE&gt;&lt;P&gt;&lt;EM&gt;Can you remove the setgid bit as the root user?&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;# chmod -s /disk1/yarn/nm/usercache/mcaf&lt;/LI-CODE&gt;&lt;P&gt;&lt;EM&gt;Then rerun.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Question 1.&lt;/STRONG&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;You don't need to change file permissions explicitly when you enable Kerberos; it should work out of the box.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Question 2.&lt;/STRONG&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;You don't need to regenerate a new mcafmerged.keytab; just copy it to your other edge node and it should work, as that edge node is also part of the cluster.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Please report back.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 24 Mar 2020 14:33:09 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/292423#M216086</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2020-03-24T14:33:09Z</dc:date>
    </item>
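The effect of the suggested `chmod -s` can be rehearsed safely before touching the real NodeManager path. A minimal sketch on a throwaway directory (not the real /disk1/yarn/nm/usercache/mcaf), showing the setgid bit appearing and disappearing in the mode string:

```shell
# Sketch: reproduce the drwxr-s--- mode from the post on a temp directory,
# then clear the setgid bit the way the reply suggests.
d=$(mktemp -d)
chmod 2750 "$d"              # 2 = setgid bit, 750 = rwxr-x---
ls -ld "$d" | cut -c1-10     # -> drwxr-s---  (the 's' is the setgid bit)
chmod -s "$d"                # clear the setuid/setgid bits
ls -ld "$d" | cut -c1-10     # -> drwxr-x---
rmdir "$d"
```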
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/292443#M216104</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/20288"&gt;@Shelton&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I ran the command you shared to remove the 's' permission from the directory.&lt;/P&gt;&lt;P&gt;I then triggered the same sample YARN job and am facing the same issue.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;ERROR:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;20/03/24 12:29:24 INFO mapreduce.Job: Task Id : attempt_1585066027398_0003_m_000000_1, Status : FAILED&lt;BR /&gt;Application application_1585066027398_0003 initialization failed (exitCode=255) with output: main : command provided 0&lt;BR /&gt;main : user is mcaf&lt;BR /&gt;main : requested yarn user is mcaf&lt;BR /&gt;Can't create directory /disk1/yarn/nm/usercache/mcaf/appcache/application_1585066027398_0003 - Permission denied&lt;BR /&gt;Can't create directory /disk2/yarn/nm/usercache/mcaf/appcache/application_1585066027398_0003 - Permission denied&lt;BR /&gt;Can't create directory /disk3/yarn/nm/usercache/mcaf/appcache/application_1585066027398_0003 - Permission denied&lt;BR /&gt;Can't create directory /disk4/yarn/nm/usercache/mcaf/appcache/application_1585066027398_0003 - Permission denied&lt;BR /&gt;Can't create directory /disk5/yarn/nm/usercache/mcaf/appcache/application_1585066027398_0003 - Permission denied&lt;BR /&gt;Did not create any app directories&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Find the directory structure below.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;HOSTNAME]$ sudo ls -lrt /disk2/yarn/nm/usercache/mcaf/appcache&lt;BR /&gt;total 0&lt;BR /&gt;HOSTNAME]$ sudo ls -ld /disk2/yarn/nm/usercache/mcaf/appcache&lt;BR /&gt;drwx--x--- 2 yarn yarn 4096 Mar 4 02:22 /disk2/yarn/nm/usercache/mcaf/appcache&lt;BR /&gt;HOSTNAME]$ sudo ls -lrt /disk2/yarn/nm/usercache/mcaf&lt;BR /&gt;total 24&lt;BR /&gt;drwx--x--- 493 yarn yarn 20480 Mar 4 01:18 filecache&lt;BR /&gt;drwx--x--- 2 yarn yarn 4096 Mar 4 02:22 appcache&lt;BR /&gt;HOSTNAME]$ sudo ls -ld /disk2/yarn/nm/usercache/mcaf&lt;BR /&gt;drwxr-x--- 4 yarn yarn 4096 Feb 24 01:26 /disk2/yarn/nm/usercache/mcaf&lt;BR /&gt;HOSTNAME]$ sudo ls -ld /disk2/yarn/nm/usercache/&lt;BR /&gt;drwxr-xr-x 3 yarn yarn 4096 Feb 24 01:26 /disk2/yarn/nm/usercache/&lt;BR /&gt;HOSTNAME]$ sudo ls -lrt /disk2/yarn/nm/usercache&lt;BR /&gt;total 4&lt;BR /&gt;drwxr-x--- 4 yarn yarn 4096 Feb 24 01:26 mcaf&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;NOTE: I modified those permissions on all the servers.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Vinod&lt;/P&gt;</description>
      <pubDate>Tue, 24 Mar 2020 16:31:22 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/292443#M216104</guid>
      <dc:creator>kvinod</dc:creator>
      <dc:date>2020-03-24T16:31:22Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/292593#M216186</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/20288"&gt;@Shelton&lt;/a&gt;&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/13587"&gt;@venkatsambath&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can someone please help me fix the issue?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Vinod&lt;/P&gt;</description>
      <pubDate>Thu, 26 Mar 2020 04:14:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/292593#M216186</guid>
      <dc:creator>kvinod</dc:creator>
      <dc:date>2020-03-26T04:14:04Z</dc:date>
    </item>
    <item>
      <title>Re: Yarn jobs are failing after enabling MIT-Kerberos</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/292607#M216199</link>
      <description>&lt;P&gt;These appcache directories get auto-generated upon job submission, so can you remove them from the NodeManagers (so that they get created fresh with the required ACLs):&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;/disk{1,2,3,4,5}/yarn/nm/usercache/mcaf&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;and then re-submit the job&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 26 Mar 2020 06:06:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Yarn-jobs-are-failing-after-enabling-MIT-Kerberos/m-p/292607#M216199</guid>
      <dc:creator>venkatsambath</dc:creator>
      <dc:date>2020-03-26T06:06:14Z</dc:date>
    </item>
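The cleanup suggested above has to be repeated on each NodeManager disk, which is easy to script. A minimal sketch, assuming the /disk1..5 layout and the mcaf user from the thread; `ROOT` is a hypothetical prefix added here so the loop can be rehearsed against a scratch directory instead of the real disks:

```shell
# Sketch: remove the stale usercache dirs for user mcaf on every NodeManager
# disk so YARN recreates them with fresh ACLs on the next job submission.
# ROOT is empty by default (real /disk1..5 layout); set it to a scratch
# directory to rehearse safely.
ROOT="${ROOT:-}"
for i in 1 2 3 4 5; do
    dir="$ROOT/disk$i/yarn/nm/usercache/mcaf"
    if [ -d "$dir" ]; then
        rm -rf "$dir"            # NodeManager recreates this directory itself
        echo "removed $dir"
    fi
done
```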
  </channel>
</rss>

