Member since
09-26-2015
135
Posts
85
Kudos Received
26
Solutions
About
Steve's a Hadoop committer, mostly working on cloud integration
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3469 | 02-27-2018 04:47 PM |
| | 5933 | 03-03-2017 10:04 PM |
| | 3556 | 02-16-2017 10:18 AM |
| | 1886 | 01-20-2017 02:15 PM |
| | 11908 | 01-20-2017 02:02 PM |
01-14-2016
05:52 PM
Afraid not. The same keytab could be used if you had a local copy of it when you submitted work. Otherwise, when you submit a Spark job to the YARN cluster, it picks up your credentials, grabbing a Hive and HBase token if needed, and uses them for the duration of the job. Note that because those tokens expire after a day or two, you can't do long-lived applications that way. You will need a keytab, and Spark 1.5, which is where keytab-based Spark application support went in.
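For long-lived applications on a secure cluster, Spark 1.5+ on YARN can log in from the keytab itself and refresh tokens as they expire, via the `--principal` and `--keytab` options to `spark-submit`. A minimal sketch of such a submission; the principal, keytab path, class name, and jar below are placeholders, not values from this thread:

```shell
# Submit a long-lived Spark application on a kerberized YARN cluster.
# --principal / --keytab (Spark 1.5+ on YARN) let Spark itself log in
# and re-acquire delegation tokens, instead of relying on the
# submitter's short-lived tickets. All names/paths here are
# placeholders; substitute your own.
spark-submit \
  --master yarn-cluster \
  --principal myapp@EXAMPLE.COM \
  --keytab /etc/security/keytabs/myapp.keytab \
  --class com.example.MyLongLivedApp \
  myapp.jar
```

Without these options, the tokens picked up at submission time expire and the application fails once they can no longer be renewed.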
01-14-2016
05:04 PM
1 Kudo
java.net.NoRouteToHostException is treated as a failure that can be recovered from in any deployment with floating IP addresses. This was essentially the sole form of failover in Hadoop before NameNode HA (HADOOP-6667 added the check). I think we ought to revisit that decision.
01-08-2016
07:58 PM
Note that this is actually how to turn off the timeline server.
01-08-2016
04:37 PM
I've just commented on the JIRA. I think you could try configuring Hive not to use the timeline service; I've done that on other applications:
<property>
<name>yarn.timeline-service.enabled</name>
<value>false</value>
</property>
01-08-2016
04:33 PM
1 Kudo
Can I add that there's now a preview of Spark 1.6 on HDP; this one shouldn't overload the timeline server.
01-08-2016
04:19 PM
BTW, if you try that login & renew at startup, you should be able to fail fast rather than wait so long to find out things aren't working.
01-08-2016
04:18 PM
1 Kudo
Arvind, you may be using a combination of Hadoop & JDK that doesn't support keytab renewal. The later versions of Hadoop 2.6 (that's Apache 2.6.2+ and the HDP 2.2 maintenance releases) should work.
01-05-2016
03:36 PM
3 Kudos
1. The clickable link is: apache-spark-1-6-technical-preview-with-hdp-2-3 2. @vshukla: is there a tarball somewhere too? That's useful client-side.
01-05-2016
03:26 PM
4 Kudos
Hmm, that's my code at fault there. What it's saying is that the history server can't find the class org.apache.spark.deploy.history.yarn.server.YarnHistoryProvider, which is what it is configured to use for histories. The workaround is to switch to the classic filesystem history provider, which you can do by setting:

spark.history.provider org.apache.spark.deploy.history.FsHistoryProvider

Deleting the line entirely should also force the history server to revert to this. I'd also look at deleting the line:

spark.yarn.services org.apache.spark.deploy.yarn.history.YarnHistoryService

If the timeline server integration is not on the classpath, the publisher is unlikely to be there either. If that doesn't solve it, comment on this issue and I'll see what else we can do.
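Concretely, that change goes in spark-defaults.conf on the history server host. A sketch of the relevant fragment, assuming the event-log directory shown is a placeholder for wherever your applications actually write their logs:

```
# spark-defaults.conf: fall back to the classic filesystem history provider.
# The log directory below is a placeholder; point it at the directory
# your Spark applications write event logs to.
spark.history.provider        org.apache.spark.deploy.history.FsHistoryProvider
spark.history.fs.logDirectory hdfs:///spark-history
# Also remove any "spark.yarn.services ... YarnHistoryService" line,
# since that publisher belongs to the timeline-server integration.
```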
01-05-2016
12:16 PM
1 Kudo
What version of Java are you using? I ask as some changes in Java 1.7 stopped ticket renewal in Hadoop 2.6.0/HDP 2.2.0; it's been fixed in later Hadoop releases (2.6.2?) and in later versions of HDP 2.2 onwards.