<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: How to reissue a delegated token after max lifetime passes for a spark streaming application on a Kerberized cluster in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/How-to-reissue-a-delegated-token-after-max-lifetime-passes/m-p/240826#M202630</link>
    <description>&lt;P&gt;The spark-submit command looks fine. This issue will take more than a forum thread to resolve; I'd say it would require code and log analysis.&lt;/P&gt;&lt;P&gt;Meanwhile, I can only suggest passing "-Dsun.security.krb5.debug=true" via the extraJavaOptions. It would also help to set "log4j.logger.org.apache.spark.deploy.yarn.Client=DEBUG" in the log4j.properties file and then restart the application, in the hope that it prints more pointers. Also, if your KDC is an MIT KDC, double-check that your principal does not have a 'Maximum Renewal Time' of 00:00:00, as explained &lt;A href="http://championofcyrodiil.blogspot.com/2014/01/kinit-ticket-expired-while-renewing.html"&gt;here&lt;/A&gt;.&lt;/P&gt;&lt;P&gt;Another property worth trying, depending on your application's use case:&lt;/P&gt;&lt;P&gt;--conf mapreduce.job.complete.cancel.delegation.tokens=false&lt;/P&gt;</description>
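    <content:encoded>&lt;P&gt;For reference, a minimal sketch of what the resulting spark-submit could look like with the suggestions above combined. The master/deploy-mode, principal, keytab, class, and jar names are placeholders, not taken from the original thread; for a long-running streaming application on a Kerberized cluster you would substitute your own. The --principal/--keytab pair is the standard Spark-on-YARN mechanism for letting Spark re-obtain delegation tokens past their max lifetime:&lt;/P&gt;&lt;PRE&gt;spark-submit \
  --master yarn --deploy-mode cluster \
  --principal appuser@EXAMPLE.COM \
  --keytab /etc/security/keytabs/appuser.keytab \
  --conf "spark.driver.extraJavaOptions=-Dsun.security.krb5.debug=true" \
  --conf "spark.executor.extraJavaOptions=-Dsun.security.krb5.debug=true" \
  --conf mapreduce.job.complete.cancel.delegation.tokens=false \
  --class com.example.StreamingApp streaming-app.jar&lt;/PRE&gt;&lt;P&gt;And to check the principal's maximum renewable life on an MIT KDC (again with a placeholder principal name):&lt;/P&gt;&lt;PRE&gt;kadmin.local -q "getprinc appuser@EXAMPLE.COM" | grep -i renewable&lt;/PRE&gt;&lt;P&gt;If that reports a maximum renewable life of 00:00:00, tickets for the principal cannot be renewed at all, which matches the failure mode described above.&lt;/P&gt;</content:encoded>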
    <pubDate>Wed, 09 Jan 2019 03:48:24 GMT</pubDate>
    <dc:creator>dbompart</dc:creator>
    <dc:date>2019-01-09T03:48:24Z</dc:date>
  </channel>
</rss>

