Member since: 09-20-2016
Posts: 38
Kudos Received: 9
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2927 | 08-08-2018 01:31 PM
 | 3826 | 08-25-2017 06:13 AM
 | 2636 | 09-28-2016 10:43 AM
08-20-2018
09:03 AM
Setting spark.security.credentials.hiveserver2.enabled to false solved the problem. I can now use Spark with LLAP in both Java and Python; only R is missing now. I will try to find out how to do it there as well. Thanks for the help!
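For anyone landing here later, this is what that setting looks like in spark-defaults.conf form (the property name is the one from this thread; whether you set it cluster-wide like this or per job with --conf on spark-shell is up to you):

```properties
# Stops Spark from trying to obtain a HiveServer2 delegation token at
# submit time, which is the step that failed with HTTP 401 above.
spark.security.credentials.hiveserver2.enabled false
```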
08-17-2018
12:02 PM
Thanks for the answer. But I have verified those settings at least ten times now, and they are correct as far as I can see. This cluster worked with Spark + LLAP (even in Livy) on HDP 2.6.5, and most of these settings are the same.
08-17-2018
08:51 AM
I’m upgrading one of our clusters to HDP 3.0 right now, and the upgrade itself worked fine. But after the upgrade, I just can’t get Spark with LLAP to work. This is not a new feature for us; we have been using it for as long as the support has been there. As there are some changes in the configuration, I’ve followed and changed the config according to both
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/integrating-hive/content/hive_hivewarehouseconnector_for_handling_apache_spark_data.html
and
https://github.com/hortonworks-spark/spark-llap/tree/master

The test code I’m running is the following:

spark-shell --master yarn --deploy-mode client --jars /usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-1.0.0.3.0.0.0-1634.jar

import com.hortonworks.hwc.HiveWarehouseSession
import com.hortonworks.hwc.HiveWarehouseSession._
val hive = HiveWarehouseSession.session(spark).build()
hive.showDatabases().show(100)

The error I get is the following:

java.lang.RuntimeException: java.sql.SQLException: Cannot create PoolableConnectionFactory (Could not open client transport with JDBC Uri: jdbc:hive2://<server>:10501/;transportMode=http;httpPath=cliservice;auth=delegationToken: Could not establish connection to jdbc:hive2://<server>:10501/;transportMode=http;httpPath=cliservice;auth=delegationToken: HTTP Response code: 401)

The Hive server shows the following:

2018-08-17T07:28:50,759 INFO [HiveServer2-HttpHandler-Pool: Thread-175]: thrift.ThriftHttpServlet (ThriftHttpServlet.java:doPost(146)) - Could not validate cookie sent, will try to generate a new cookie
2018-08-17T07:28:50,759 INFO [HiveServer2-HttpHandler-Pool: Thread-175]: thrift.ThriftHttpServlet (ThriftHttpServlet.java:doKerberosAuth(399)) - Failed to authenticate with http/_HOST kerberos principal, trying with hive/_HOST kerberos principal
2018-08-17T07:28:50,760 ERROR [HiveServer2-HttpHandler-Pool: Thread-175]: thrift.ThriftHttpServlet (ThriftHttpServlet.java:doKerberosAuth(407)) - Failed to authenticate with hive/_HOST kerberos principal
2018-08-17T07:28:50,760 ERROR [HiveServer2-HttpHandler-Pool: Thread-175]: thrift.ThriftHttpServlet (ThriftHttpServlet.java:doPost(210)) - Error: org.apache.hive.service.auth.HttpAuthenticationException: java.lang.reflect.UndeclaredThrowableException
    at org.apache.hive.service.cli.thrift.ThriftHttpServlet.doKerberosAuth(ThriftHttpServlet.java:408) ~[hive-service-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
    at org.apache.hive.service.cli.thrift.ThriftHttpServlet.doPost(ThriftHttpServlet.java:160) [hive-service-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) [javax.servlet-api-3.1.0.jar:3.1.0]
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) [javax.servlet-api-3.1.0.jar:3.1.0]
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848) [jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:584) [jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224) [jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) [jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) [jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) [jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) [jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) [jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
    at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:493) [jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) [jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
    at org.eclipse.jetty.server.Server.handle(Server.java:534) [jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) [jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) [jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283) [jetty-io-9.3.20.v20170531.jar:9.3.20.v20170531]
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108) [jetty-io-9.3.20.v20170531.jar:9.3.20.v20170531]
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) [jetty-io-9.3.20.v20170531.jar:9.3.20.v20170531]
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) [jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) [jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) [jetty-runner-9.3.20.v20170531.jar:9.3.20.v20170531]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_112]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_112]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
Caused by: java.lang.reflect.UndeclaredThrowableException
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1706) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
    at org.apache.hive.service.cli.thrift.ThriftHttpServlet.doKerberosAuth(ThriftHttpServlet.java:405) ~[hive-service-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
    ... 25 more
Caused by: org.apache.hive.service.auth.HttpAuthenticationException: Kerberos authentication failed:
    at org.apache.hive.service.cli.thrift.ThriftHttpServlet$HttpKerberosServerAction.run(ThriftHttpServlet.java:464) ~[hive-service-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
    at org.apache.hive.service.cli.thrift.ThriftHttpServlet$HttpKerberosServerAction.run(ThriftHttpServlet.java:413) ~[hive-service-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
    at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
    at org.apache.hive.service.cli.thrift.ThriftHttpServlet.doKerberosAuth(ThriftHttpServlet.java:405) ~[hive-service-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
    ... 25 more
Caused by: org.ietf.jgss.GSSException: Defective token detected (Mechanism level: GSSHeader did not find the right tag)
    at sun.security.jgss.GSSHeader.<init>(GSSHeader.java:97) ~[?:1.8.0_112]
    at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:306) ~[?:1.8.0_112]
    at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:285) ~[?:1.8.0_112]
    at org.apache.hive.service.cli.thrift.ThriftHttpServlet$HttpKerberosServerAction.run(ThriftHttpServlet.java:452) ~[hive-service-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
    at org.apache.hive.service.cli.thrift.ThriftHttpServlet$HttpKerberosServerAction.run(ThriftHttpServlet.java:413) ~[hive-service-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
    at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
    at org.apache.hive.service.cli.thrift.ThriftHttpServlet.doKerberosAuth(ThriftHttpServlet.java:405) ~[hive-service-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
    ... 25 more

I can see that it complains about the Kerberos token, but I do have a valid ticket in my session. Any other Kerberos access, like beeline, works fine from the same session.

Does anybody have any clue about this error?
Labels:
- Apache Hive
- Apache Spark
08-08-2018
01:31 PM
Problem solved. If you have created an HBase or Phoenix table through Hive as an “internal” table, it will have been created as a managed table with the storage handler pointing at HBase/Phoenix. This is what causes the problem: managed tables with the HBase/Phoenix storage handler won’t work in the “Move Hive Tables” part of the upgrade (external tables work, of course). I had to manually remove those tables from the Hive metastore database, and then the “Move Hive Tables” part of the upgrade worked fine.
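In case it helps someone, here is a sketch of how such tables could be found up front. It assumes a MySQL metastore and the standard metastore schema, where the TBLS, DBS and TABLE_PARAMS tables hold the table metadata and the storage handler class is recorded under the 'storage_handler' parameter key; adjust names and credentials to your setup:

```sql
-- Managed tables that have a storage handler (e.g. the HBase/Phoenix
-- handlers) -- the ones "Move Hive Tables" chokes on.
SELECT d.NAME AS db_name,
       t.TBL_NAME,
       p.PARAM_VALUE AS storage_handler
FROM TBLS t
JOIN DBS d ON t.DB_ID = d.DB_ID
JOIN TABLE_PARAMS p ON t.TBL_ID = p.TBL_ID
WHERE t.TBL_TYPE = 'MANAGED_TABLE'
  AND p.PARAM_KEY = 'storage_handler';
```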
08-08-2018
11:14 AM
Update: It looks like it's related to one or more tables. I have around 50 databases in Hive. I selected each and every one of them with the -d flag (a regexp of the database name), and I only get the 255 exit code on 4 of the databases. All the others work fine. I will now try to pinpoint exactly which tables in those databases are causing the error and see if I can find anything strange about them.
08-08-2018
10:05 AM
I'm using MySQL for the metastore. No errors in the log.
08-08-2018
08:51 AM
I’m trying to migrate to HDP 3.0, but the upgrade hangs on “Move Hive Tables”. When I look at the log, the actual move command just exits without any error message and with exit code 255. This happens even when I run the command manually, so it’s kind of hard to understand what the real problem is, as I get no output at all from the command. The only time I get something in return is when I add -h and get the help output.

/usr/hdp/3.0.0.0-1634/hive/bin/hive --config /etc/hive/conf --service strictmanagedmigration --hiveconf hive.strict.managed.tables=true -m automatic --modifyManagedTables --oldWarehouseRoot /apps/hive/warehouse

Can anybody help me better understand why it’s exiting with code 255 and, if possible, how to solve it?
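One way to coax some output out of the same command is to force DEBUG logging to the console. This is a sketch: hive.root.logger is the standard Hive logging property, but whether the strictmanagedmigration service honours it is an assumption on my part.

```shell
# Same migration command, with DEBUG logging forced to the console
# so a silent exit-255 at least leaves a trace.
/usr/hdp/3.0.0.0-1634/hive/bin/hive --config /etc/hive/conf \
  --service strictmanagedmigration \
  --hiveconf hive.strict.managed.tables=true \
  --hiveconf hive.root.logger=DEBUG,console \
  -m automatic --modifyManagedTables \
  --oldWarehouseRoot /apps/hive/warehouse
echo "exit code: $?"
```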
08-25-2017
06:13 AM
4 Kudos
So, the main reason we see this error is that the default behaviour for SSL cert verification has changed in Python. If you take a look at /etc/python/cert-verification.cfg, you will see that in python-libs-2.7.5-34 the default value was “verify=disable”. But after upgrading that package to python-libs-2.7.5-58, the value is now “verify=platform_default”, and at least on our system that means enabled. After changing this back to “verify=disable”, the synchronization works again without the workaround I wrote about earlier. I have verified this on a non-upgraded system by changing it to enabled, and that also results in errors for the user synchronization.

This error also affects LLAP if you are running that. After the upgrade, LLAP won’t start because of cert verification. You will get a “[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)” error message. Changing the verify parameter described above fixes that problem as well.
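For reference, the change boils down to this in /etc/python/cert-verification.cfg (assuming the [https] section layout that RHEL's python-libs ships the file with):

```ini
# /etc/python/cert-verification.cfg
# was: verify=platform_default (which meant enabled on our system
# after the update to python-libs-2.7.5-58)
[https]
verify=disable
```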
08-24-2017
10:30 AM
1 Kudo
@Aaron Norton One way you can work around this problem is to change
"SERVER_API_HOST = '127.0.0.1'" in
/usr/lib/python2.6/site-packages/ambari_server/serverUtils.py so it points to
your server with the full hostname. That will work around the problem with SSL
that we see.
08-23-2017
12:54 PM
I just got the exact same error, and the problem appeared after a “yum update” on a Red Hat 7 server. I tested the synchronization just before I upgraded the OS and it worked fine; after the upgrade, I get the same error. I’ll post an answer once I find a solution to the problem.