Member since: 11-24-2017
Posts: 76
Kudos Received: 8
Solutions: 5

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3349 | 05-14-2018 10:28 AM |
|  | 6202 | 03-28-2018 12:19 AM |
|  | 3170 | 02-07-2018 02:54 AM |
|  | 3463 | 01-26-2018 03:41 AM |
|  | 4806 | 01-05-2018 02:06 AM |
04-06-2018
01:59 AM
Hello everyone,

I need to run a Spark-SQL action on a Hive table, but I am having problems with authentication (the cluster is Kerberos-secured).

I first tried with hive2 credentials, because they work with my other hive2 actions, but I got a failure (I suppose this type of credential can only be used with hive2 actions?):
2018-04-06 08:37:21,831 [Driver] INFO org.apache.hadoop.hive.ql.session.SessionState - No Tez session required at this point. hive.execution.engine=mr.
2018-04-06 08:37:22,117 [Driver] INFO hive.metastore - Trying to connect to metastore with URI thrift://trmas-fc2d552a.azcloud.local:9083
2018-04-06 08:37:22,153 [Driver] ERROR org.apache.thrift.transport.TSaslTransport - SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
[...]
I've also tried with hcat credentials, but with those the action went into a START_RETRY state with the following error:
JA009: org.apache.hive.hcatalog.common.HCatException : 9001 : Exception occurred while processing HCat request : TException while getting delegation token.. Cause : org.apache.thrift.transport.TTransportException
This is the workflow.xml:
<workflow-app xmlns="uri:oozie:workflow:0.5" name="oozie_spark_wf">
<credentials>
<credential name="hive2_credentials" type="hive2">
<property>
<name>hive2.jdbc.url</name>
<value>jdbc:hive2://trmas-fc2d552a.azcloud.local:10000/default;ssl=true</value>
</property>
<property>
<name>hive2.server.principal</name>
<value>hive/trmas-fc2d552a.azcloud.local@AZCLOUD.LOCAL</value>
</property>
</credential>
<credential name="hcat_cred" type="hcat">
<property>
<name>hcat.metastore.uri</name>
<value>thrift://trmas-fc2d552a.azcloud.local:9083</value>
</property>
<property>
<name>hcat.metastore.principal</name>
<value>hive/trmas-fc2d552a.azcloud.local@AZCLOUD.LOCAL</value>
</property>
</credential>
</credentials>
<start to="spark_action"/>
<action cred="hcat_cred" name="spark_action">
<spark xmlns="uri:oozie:spark-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<prepare>
<delete path="${nameNode}/user/icon0104/output"/>
</prepare>
<master>yarn-cluster</master>
<mode>cluster</mode>
<name>OozieSpark</name>
<class>my.Main</class>
<jar>/home/icon0104/oozie/ooziespark/lib/ooziespark-1.0.jar</jar>
<spark-opts>--files ${nameNode}/user/icon0104/oozie/ooziespark/hive-site.xml</spark-opts>
</spark>
<ok to="END_NODE"/>
<error to="KILL_NODE"/>
</action>
<kill name="KILL_NODE">
<message>${wf:errorMessage(wf:lastErrorNode())}</message>
</kill>
<end name="END_NODE"/>
</workflow-app>
This is the hive-site.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!--Autogenerated by Cloudera Manager-->
<configuration>
<property>
<name>hive.metastore.uris</name>
<value>thrift://trmas-fc2d552a.azcloud.local:9083</value>
</property>
<property>
<name>hive.metastore.client.socket.timeout</name>
<value>300</value>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<property>
<name>hive.warehouse.subdir.inherit.perms</name>
<value>true</value>
</property>
<property>
<name>hive.auto.convert.join</name>
<value>false</value>
</property>
<property>
<name>hive.auto.convert.join.noconditionaltask.size</name>
<value>20971520</value>
</property>
<property>
<name>hive.optimize.bucketmapjoin.sortedmerge</name>
<value>false</value>
</property>
<property>
<name>hive.smbjoin.cache.rows</name>
<value>10000</value>
</property>
<property>
<name>hive.server2.logging.operation.enabled</name>
<value>true</value>
</property>
<property>
<name>hive.server2.logging.operation.log.location</name>
<value>/var/log/hive/operation_logs</value>
</property>
<property>
<name>mapred.reduce.tasks</name>
<value>-1</value>
</property>
<property>
<name>hive.exec.reducers.bytes.per.reducer</name>
<value>67108864</value>
</property>
<property>
<name>hive.exec.copyfile.maxsize</name>
<value>33554432</value>
</property>
<property>
<name>hive.exec.reducers.max</name>
<value>1099</value>
</property>
<property>
<name>hive.vectorized.groupby.checkinterval</name>
<value>4096</value>
</property>
<property>
<name>hive.vectorized.groupby.flush.percent</name>
<value>0.1</value>
</property>
<property>
<name>hive.compute.query.using.stats</name>
<value>false</value>
</property>
<property>
<name>hive.vectorized.execution.enabled</name>
<value>true</value>
</property>
<property>
<name>hive.vectorized.execution.reduce.enabled</name>
<value>false</value>
</property>
<property>
<name>hive.merge.mapfiles</name>
<value>true</value>
</property>
<property>
<name>hive.merge.mapredfiles</name>
<value>false</value>
</property>
<property>
<name>hive.cbo.enable</name>
<value>false</value>
</property>
<property>
<name>hive.fetch.task.conversion</name>
<value>minimal</value>
</property>
<property>
<name>hive.fetch.task.conversion.threshold</name>
<value>268435456</value>
</property>
<property>
<name>hive.limit.pushdown.memory.usage</name>
<value>0.1</value>
</property>
<property>
<name>hive.merge.sparkfiles</name>
<value>true</value>
</property>
<property>
<name>hive.merge.smallfiles.avgsize</name>
<value>16777216</value>
</property>
<property>
<name>hive.merge.size.per.task</name>
<value>268435456</value>
</property>
<property>
<name>hive.optimize.reducededuplication</name>
<value>true</value>
</property>
<property>
<name>hive.optimize.reducededuplication.min.reducer</name>
<value>4</value>
</property>
<property>
<name>hive.map.aggr</name>
<value>true</value>
</property>
<property>
<name>hive.map.aggr.hash.percentmemory</name>
<value>0.5</value>
</property>
<property>
<name>hive.optimize.sort.dynamic.partition</name>
<value>false</value>
</property>
<property>
<name>hive.execution.engine</name>
<value>mr</value>
</property>
<property>
<name>spark.executor.memory</name>
<value>268435456</value>
</property>
<property>
<name>spark.driver.memory</name>
<value>268435456</value>
</property>
<property>
<name>spark.executor.cores</name>
<value>4</value>
</property>
<property>
<name>spark.yarn.driver.memoryOverhead</name>
<value>26</value>
</property>
<property>
<name>spark.yarn.executor.memoryOverhead</name>
<value>26</value>
</property>
<property>
<name>spark.dynamicAllocation.enabled</name>
<value>true</value>
</property>
<property>
<name>spark.dynamicAllocation.initialExecutors</name>
<value>1</value>
</property>
<property>
<name>spark.dynamicAllocation.minExecutors</name>
<value>1</value>
</property>
<property>
<name>spark.dynamicAllocation.maxExecutors</name>
<value>2147483647</value>
</property>
<property>
<name>hive.metastore.execute.setugi</name>
<value>true</value>
</property>
<property>
<name>hive.support.concurrency</name>
<value>true</value>
</property>
<property>
<name>hive.zookeeper.quorum</name>
<value>trmas-6b8bc78c.azcloud.local,trmas-c9471d78.azcloud.local,trmas-fc2d552a.azcloud.local</value>
</property>
<property>
<name>hive.zookeeper.client.port</name>
<value>2181</value>
</property>
<property>
<name>hive.zookeeper.namespace</name>
<value>hive_zookeeper_namespace_CD-HIVE-LTqXUcrR</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>trmas-6b8bc78c.azcloud.local,trmas-c9471d78.azcloud.local,trmas-fc2d552a.azcloud.local</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hive.cluster.delegation.token.store.class</name>
<value>org.apache.hadoop.hive.thrift.MemoryTokenStore</value>
</property>
<property>
<name>hive.server2.enable.doAs</name>
<value>false</value>
</property>
<property>
<name>hive.metastore.sasl.enabled</name>
<value>true</value>
</property>
<property>
<name>hive.server2.authentication</name>
<value>kerberos</value>
</property>
<property>
<name>hive.metastore.kerberos.principal</name>
<value>hive/_HOST@AZCLOUD.LOCAL</value>
</property>
<property>
<name>hive.server2.authentication.kerberos.principal</name>
<value>hive/_HOST@AZCLOUD.LOCAL</value>
</property>
<property>
<name>hive.server2.use.SSL</name>
<value>true</value>
</property>
<property>
<name>spark.shuffle.service.enabled</name>
<value>true</value>
</property>
</configuration>
In the Oozie configuration I have the following credential classes enabled:
hcat=org.apache.oozie.action.hadoop.HCatCredentials,hbase=org.apache.oozie.action.hadoop.HbaseCredentials,hive2=org.apache.oozie.action.hadoop.Hive2Credentials
Can anyone help? What am I missing?
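For context, the Spark driver (my.Main) boils down to something like the following minimal sketch. This is illustrative only: it assumes a Spark 1.6-style HiveContext, and the table name is a placeholder.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.hive.HiveContext;

public class Main {
    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf().setAppName("OozieSpark");
        JavaSparkContext sc = new JavaSparkContext(sparkConf);
        // In yarn-cluster mode, --files ships hive-site.xml into the container's
        // working directory, where HiveContext picks it up.
        HiveContext hiveContext = new HiveContext(sc.sc());
        // "some_table" is a placeholder for the real Hive table.
        hiveContext.sql("SELECT * FROM some_table").count();
        sc.stop();
    }
}

The metastore connection is only attempted when the first query runs, which is where the SASL/GSS failure in the log above shows up.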
Labels:
- Apache Hive
- Apache Oozie
- Apache Spark
- Kerberos
04-06-2018
01:43 AM
I've solved the JDBC issue by enabling SSL in the connection string:

jdbc:impala://trwor-dafb587f.azcloud.local:21050;SSL=1;AuthMech=1;KrbAuthType=0;KrbHostFQDN=trwor-dafb587f.azcloud.local;KrbServiceName=impala

Still no luck with the impala-shell connection. If I run "klist" I get:

Ticket cache: FILE:/tmp/krb5cc_699006375_ASnf44
Default principal: icon0104@AZCLOUD.LOCAL

Valid starting       Expires              Service principal
04/06/2018 08:38:44  04/06/2018 18:38:44  krbtgt/AZCLOUD.LOCAL@AZCLOUD.LOCAL
        renew until 04/13/2018 08:38:44

Thanks for the support.
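For reference, a minimal sketch wrapping that working connection string in a plain JDBC call (the jdbc41 driver class name is my assumption for this driver package):

import java.sql.Connection;
import java.sql.DriverManager;

public class ImpalaSslSmokeTest {
    public static void main(String[] args) throws Exception {
        Class.forName("com.cloudera.impala.jdbc41.Driver"); // assumed class name for the JDBC41 package
        Connection conn = DriverManager.getConnection(
            "jdbc:impala://trwor-dafb587f.azcloud.local:21050;SSL=1;AuthMech=1;"
            + "KrbAuthType=0;KrbHostFQDN=trwor-dafb587f.azcloud.local;KrbServiceName=impala");
        System.out.println("Connected: " + !conn.isClosed());
        conn.close();
    }
}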
04-02-2018
12:41 PM
Update: I've tried switching to the ClouderaImpalaJDBC_2.5.43.1063 driver (using JDBC41), with the following connection string (to infer authentication):

jdbc:impala://trwor-dafb587f.azcloud.local:21050;AuthMech=1;KrbAuthType=0;KrbHostFQDN=trwor-dafb587f.azcloud.local;KrbServiceName=impala

Now the error shown is the following:

java.sql.SQLException: [Simba][ImpalaJDBCDriver](500164) Error initialized or created transport for authentication: [Simba][ImpalaJDBCDriver](500169) Unable to connect to server: [Simba][ImpalaJDBCDriver](500591) Kerberos Authentication failed..
at com.cloudera.hivecommon.api.HiveServer2ClientFactory.createTransport(Unknown Source)
at com.cloudera.hivecommon.api.HiveServer2ClientFactory.createClient(Unknown Source)
at com.cloudera.hivecommon.core.HiveJDBCCommonConnection.establishConnection(Unknown Source)
at com.cloudera.impala.core.ImpalaJDBCConnection.establishConnection(Unknown Source)
at com.cloudera.jdbc.core.LoginTimeoutConnection.connect(Unknown Source)
at com.cloudera.jdbc.common.BaseConnectionFactory.doConnect(Unknown Source)
at com.cloudera.jdbc.common.AbstractDriver.connect(Unknown Source)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:233)
at ico.az.deploy.TestSuite.testTeradata(TestSuite.java:101)
at ico.az.deploy.TestSuite.run(TestSuite.java:314)
Caused by: com.cloudera.support.exceptions.GeneralException: [Simba][ImpalaJDBCDriver](500164) Error initialized or created transport for authentication: [Simba][ImpalaJDBCDriver](500169) Unable to connect to server: [Simba][ImpalaJDBCDriver](500591) Kerberos Authentication failed..
... 11 more
Caused by: java.lang.RuntimeException: [Simba][ImpalaJDBCDriver](500169) Unable to connect to server: [Simba][ImpalaJDBCDriver](500591) Kerberos Authentication failed.
at com.cloudera.hivecommon.api.HiveServerPrivilegedAction.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:356)
at com.cloudera.hivecommon.api.HiveServer2ClientFactory.createTransport(Unknown Source)
at com.cloudera.hivecommon.api.HiveServer2ClientFactory.createClient(Unknown Source)
at com.cloudera.hivecommon.core.HiveJDBCCommonConnection.establishConnection(Unknown Source)
at com.cloudera.impala.core.ImpalaJDBCConnection.establishConnection(Unknown Source)
at com.cloudera.jdbc.core.LoginTimeoutConnection.connect(Unknown Source)
at com.cloudera.jdbc.common.BaseConnectionFactory.doConnect(Unknown Source)
at com.cloudera.jdbc.common.AbstractDriver.connect(Unknown Source)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:233)
at ico.az.deploy.TestSuite.testTeradata(TestSuite.java:101)
at ico.az.deploy.TestSuite.run(TestSuite.java:314)
at ico.az.deploy.TestSuite.main(TestSuite.java:350)
Caused by: org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:178)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:258)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
... 15 more

Please let me know if there is anything else I can try.
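For anyone debugging the same thing: when the Simba driver reports "Kerberos Authentication failed" from plain Java, the JVM-level Kerberos setup is worth ruling out first. A hedged debugging sketch follows; these are standard JVM Kerberos system properties, not Simba-specific, and the driver class name is again my assumption for the JDBC41 package.

import java.sql.Connection;
import java.sql.DriverManager;

public class ImpalaKerberosDebug {
    public static void main(String[] args) throws Exception {
        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
        // Verbose Kerberos logging from the JDK's GSS/Krb5 layer
        System.setProperty("sun.security.krb5.debug", "true");
        // Allows GSS to fall back to the ticket cache populated by kinit
        System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
        Class.forName("com.cloudera.impala.jdbc41.Driver"); // assumed class name
        Connection conn = DriverManager.getConnection(
            "jdbc:impala://trwor-dafb587f.azcloud.local:21050;AuthMech=1;KrbAuthType=0;"
            + "KrbHostFQDN=trwor-dafb587f.azcloud.local;KrbServiceName=impala");
        System.out.println("Connected: " + !conn.isClosed());
        conn.close();
    }
}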
04-02-2018
11:15 AM
Hello everybody,

I am working on a CDH 5.13.2 cluster configured with Kerberos and LDAP authentication. I need to connect to Impala through JDBC and impala-shell, but I am having problems with both (Impala queries in HUE work fine).

For impala-shell I've tried:

impala-shell -k -i trwor-b9a4f2a7.azcloud.local
--->
Starting Impala Shell using Kerberos authentication
Using service name 'impala'
Error connecting: TTransportException, TSocket read 0 bytes
***********************************************************************************
Welcome to the Impala shell.
(Impala Shell v2.10.0-cdh5.13.2 (dc867db) built on Fri Feb 2 10:46:38 PST 2018)

I've also tried without Kerberos:

impala-shell -i trwor-b9a4f2a7.azcloud.local
--->
Starting Impala Shell without Kerberos authentication
Error connecting: TTransportException, TSocket read 0 bytes
Kerberos ticket found in the credentials cache, retrying the connection with a secure transport.
Error connecting: TTransportException, TSocket read 0 bytes
***********************************************************************************
Welcome to the Impala shell.
(Impala Shell v2.10.0-cdh5.13.2 (dc867db) built on Fri Feb 2 10:46:38 PST 2018)
In both cases I got a TTransportException.

I am also having trouble connecting to Impala through JDBC (using the Cloudera_ImpalaJDBC4_2.5.5.1007 driver):

String impalaConnectionUrl = "jdbc:impala://trwor-dafb587f.azcloud.local:21050;AuthMech=1;KrbRealm=AZCLOUD.LOCAL;KrbHostFQDN=trwor-dafb587f.azcloud.local;KrbServiceName=impala";
try {
    Connection impalaConn = DriverManager.getConnection(impalaConnectionUrl);
    [...]
}
catch (SQLException ex) {
    [...]
}

---->

java.sql.SQLException: [Simba][ImpalaJDBCDriver](500310) Invalid operation: Unable to connect to server:;
at com.cloudera.impala.hivecommon.api.HiveServer2ClientFactory.createTransport(HiveServer2ClientFactory.java:224)
at com.cloudera.impala.hivecommon.api.HiveServer2ClientFactory.createClient(HiveServer2ClientFactory.java:52)
at com.cloudera.impala.hivecommon.core.HiveJDBCConnection.connect(HiveJDBCConnection.java:597)
at com.cloudera.impala.jdbc.common.BaseConnectionFactory.doConnect(BaseConnectionFactory.java:219)
at com.cloudera.impala.jdbc.common.AbstractDriver.connect(AbstractDriver.java:216)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:233)
at ico.az.deploy.TestSuite.testTeradata(TestSuite.java:98)
at ico.az.deploy.TestSuite.run(TestSuite.java:311)
Caused by: com.cloudera.impala.support.exceptions.GeneralException: [Simba][ImpalaJDBCDriver](500310) Invalid operation: Unable to connect to server:;
... 9 more
Caused by: java.lang.RuntimeException: Unable to connect to server:
at com.cloudera.impala.hivecommon.api.HiveServer2ClientFactory$1.run(HiveServer2ClientFactory.java:150)
at com.cloudera.impala.hivecommon.api.HiveServer2ClientFactory$1.run(HiveServer2ClientFactory.java:141)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:356)
at com.cloudera.impala.hivecommon.api.HiveServer2ClientFactory.createTransport(HiveServer2ClientFactory.java:140)
at com.cloudera.impala.hivecommon.api.HiveServer2ClientFactory.createClient(HiveServer2ClientFactory.java:52)
at com.cloudera.impala.hivecommon.core.HiveJDBCConnection.connect(HiveJDBCConnection.java:597)
at com.cloudera.impala.jdbc.common.BaseConnectionFactory.doConnect(BaseConnectionFactory.java:219)
at com.cloudera.impala.jdbc.common.AbstractDriver.connect(AbstractDriver.java:216)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:233)
at ico.az.deploy.TestSuite.testTeradata(TestSuite.java:98)
at ico.az.deploy.TestSuite.run(TestSuite.java:311)
at ico.az.deploy.TestSuite.main(TestSuite.java:347)
Caused by: org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:178)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:258)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at com.cloudera.impala.hivecommon.api.HiveServer2ClientFactory$1.run(HiveServer2ClientFactory.java:146)
... 13 more

Regarding the connection string parameters:
- hostname: the host where an Impala daemon is running; I took this from Cloudera Manager -> Impala -> Instances -> Impala Daemon (there is one daemon running on each worker node, so I've just chosen the first one).
- port: taken from the Impala Daemon HiveServer2 Port property in the Impala configuration.
- AuthMech: according to the JDBC driver documentation, 1 means Kerberos authentication.
- KrbRealm: I took this from the default_realm parameter in /etc/krb5.conf on the edge node; is this correct?
- KrbHostFQDN: same as the Impala daemon hostname, correct?
- KrbServiceName: should be "impala", the default, and it is also the name of the Impala Kerberos principal in Cloudera Manager, correct?
These are the relevant properties I found in Cloudera Manager (read-only access) for Impala and Kerberos: [screenshot not shown]

I am trying Kerberos authentication because it seems LDAP authentication is disabled for Impala: [screenshot not shown]

What am I doing wrong?
Labels:
- Apache Impala
- Cloudera Manager
- Kerberos
03-28-2018
12:19 AM
2 Kudos
I've finally solved it using the executeUpdate method:

// invalidate metadata on Impala (executeUpdate, unlike executeQuery,
// does not expect the statement to return a ResultSet)
try {
    Statement stmt = impalaConn.createStatement();
    try {
        String query = "INVALIDATE METADATA;";
        int result = stmt.executeUpdate(query);
    }
    finally {
        stmt.close();
    }
}
catch (SQLException ex) {
    while (ex != null) {
        ex.printStackTrace();
        ex = ex.getNextException();
    }
    System.exit(1);
}

Thanks for the help!
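A variant worth noting (a sketch on my part, not from the driver docs): java.sql.Statement.execute() also handles statements that may or may not produce a ResultSet, so it works for commands like INVALIDATE METADATA as well. With Java 7+ try-with-resources:

// Variant sketch: Statement.execute() tolerates statements that
// return no ResultSet; for INVALIDATE METADATA it returns false.
try (Statement stmt = impalaConn.createStatement()) {
    boolean hasResultSet = stmt.execute("INVALIDATE METADATA;");
}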
03-25-2018
10:22 AM
I have a Java program where I need to run some Impala queries through JDBC, but I need to invalidate metadata before running those queries. How can I do that through the Cloudera Impala JDBC driver? I've tried the following:

// invalidate metadata and rebuild index on Impala
try {
Statement stmt = impalaConn.createStatement();
try {
String query = "INVALIDATE METADATA;";
ResultSet resultSet = stmt.executeQuery(query);
while (resultSet.next()) {
// do something
}
}
finally {
stmt.close();
}
}
catch(SQLException ex) {
while (ex != null)
{
ex.printStackTrace();
ex = ex.getNextException();
}
System.exit(1);
}

but I got this error:

java.sql.SQLDataException: [Simba][JDBC](11300) A ResultSet was expected but not generated from query "INVALIDATE METADATA;". Query not executed.
at com.cloudera.impala.exceptions.ExceptionConverter.toSQLException(ExceptionConverter.java:136)
at com.cloudera.impala.jdbc.common.SStatement.checkCondition(SStatement.java:2274)
at com.cloudera.impala.jdbc.common.SStatement.executeNoParams(SStatement.java:2704)
at com.cloudera.impala.jdbc.common.SStatement.executeQuery(SStatement.java:880)
at ico.az.deploy.TestSuite.testTeradata(TestSuite.java:103)
at ico.az.deploy.TestSuite.run(TestSuite.java:310)
at ico.az.deploy.TestSuite.main(TestSuite.java:345)

I am using the Cloudera_ImpalaJDBC4_2.5.5.1007 driver. Thanks for any help!
Labels:
- Apache Impala
02-27-2018
03:10 AM
1 Kudo
Not sure where to post this question; let me know if this is the wrong section.

I have an Oozie bundle with some coordinators inside, which import data from various sources and generate Hive tables with some transformations. This is scheduled once every day. I need to design a rollback procedure that brings the cluster back to the status of the previous day. I was thinking of adding these two operations before starting the daily import/transformation tasks (see the sketch after this list):

1. Make a snapshot (backup) of the HDFS Hive data in a backup folder.
2. Make a backup of the Hive Metastore database (MySQL).

Then, when I need to roll back, I can stop the current Oozie bundle, overwrite the HDFS Hive data with the data in the backup folder, and restore the Hive Metastore database.

My questions:
- Is this going to work? Or are there critical problems that I am not seeing?
- Which approach do you suggest to support rollback in a Cloudera environment?

Thanks for any information.
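For step 1, a minimal sketch using the Hadoop FileSystem snapshot API. My assumptions: an admin has already made the warehouse directory snapshottable (hdfs dfsadmin -allowSnapshot), and the path and snapshot name are illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WarehouseSnapshot {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // picks up core-site.xml/hdfs-site.xml from the classpath
        FileSystem fs = FileSystem.get(conf);
        Path warehouse = new Path("/user/hive/warehouse"); // illustrative path
        // Creates /user/hive/warehouse/.snapshot/pre-import-<millis>
        Path snapshot = fs.createSnapshot(warehouse, "pre-import-" + System.currentTimeMillis());
        System.out.println("Snapshot created at " + snapshot);
    }
}

HDFS snapshots record changes copy-on-write rather than duplicating the data, which makes them a better fit for a daily pre-import backup than a full copy of the warehouse folder.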
Labels:
- Apache Hive
- Apache Oozie
- Apache Sqoop
- HDFS
02-13-2018
03:41 AM
Hi everyone,

I am trying to get a JDBC connection to Hive working (I have to run some Hive queries from a Java program). I've successfully established a connection to Impala with the following driver and connection string:

Class.forName("com.cloudera.impala.jdbc4.Driver");
String url = "jdbc:impala://worker01:21050;AuthMech=0";

But I can't open a connection to Hive. I've tried the following:

Class.forName("com.cloudera.hive.jdbc4.HS2Driver");
String url = "jdbc:hive2://master:10000;UID=cloudera;PWD=cloudera";
// EXCEPTION ---->
java.sql.SQLException: [Simba][HiveJDBCDriver](500310) Invalid operation: Peer indicated failure: Error validating the login;
at com.cloudera.hive.hivecommon.api.HiveServer2ClientFactory.createTransport(HiveServer2ClientFactory.java:224)
at com.cloudera.hive.hive.api.ExtendedHS2Factory.createClient(ExtendedHS2Factory.java:38)
at com.cloudera.hive.hivecommon.core.HiveJDBCConnection.connect(HiveJDBCConnection.java:597)
at com.cloudera.hive.jdbc.common.BaseConnectionFactory.doConnect(BaseConnectionFactory.java:219)
at com.cloudera.hive.jdbc.common.AbstractDriver.connect(AbstractDriver.java:216)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)

Class.forName("com.cloudera.hive.jdbc4.HS2Driver");
String url = "jdbc:hive2://master:10000;AuthMech=0";
// EXCEPTION ---->
[Simba][HiveJDBCDriver](500150) Error setting/closing connection: Open Session Error.
at com.cloudera.hive.hive.api.ExtendedHS2Client.openSession(ExtendedHS2Client.java:1107)
at com.cloudera.hive.hivecommon.api.HS2Client.<init>(HS2Client.java:139)
at com.cloudera.hive.hive.api.ExtendedHS2Client.<init>(ExtendedHS2Client.java:474)
at com.cloudera.hive.hive.api.ExtendedHS2Factory.createClient(ExtendedHS2Factory.java:39)
at com.cloudera.hive.hivecommon.core.HiveJDBCConnection.connect(HiveJDBCConnection.java:597)
at com.cloudera.hive.jdbc.common.BaseConnectionFactory.doConnect(BaseConnectionFactory.java:219)
at com.cloudera.hive.jdbc.common.AbstractDriver.connect(AbstractDriver.java:216)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)

Class.forName("com.cloudera.hive.jdbc4.HS1Driver");
String url = "jdbc:hive://master:10000;AuthMech=0";
// EXCEPTION ---->
[Simba][HiveJDBCDriver](500151) Error setting/closing session: Server version error.
at com.cloudera.hive.hive.api.HS1Client.openSession(HS1Client.java:1090)
at com.cloudera.hive.hive.api.HS1Client.<init>(HS1Client.java:166)
at com.cloudera.hive.hive.api.HiveServer1ClientFactory.createClient(HiveServer1ClientFactory.java:61)
at com.cloudera.hive.hivecommon.core.HiveJDBCConnection.connect(HiveJDBCConnection.java:597)
at com.cloudera.hive.jdbc.common.BaseConnectionFactory.doConnect(BaseConnectionFactory.java:219)
at com.cloudera.hive.jdbc.common.AbstractDriver.connect(AbstractDriver.java:216)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)

Port 10000 is open. I can connect to the HiveServer2 instance using beeline with the following commands:

beeline
!connect jdbc:hive2://localhost:10000 cloudera cloudera org.apache.hive.jdbc.HiveDriver

I have no Kerberos or LDAP enabled. What is the correct way to establish a connection to Hive with JDBC?
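For comparison, the working beeline login can be reproduced from plain Java with the Apache Hive JDBC driver. A minimal sketch, assuming hive-jdbc and its dependencies are on the classpath and using the same host and credentials as above:

// Mirrors the working beeline connection using the Apache Hive JDBC driver.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcSmokeTest {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // Same endpoint and credentials that work in beeline
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hive2://master:10000/default", "cloudera", "cloudera");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}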
Labels:
- Apache Hive
- Apache Impala
- Cloudera Manager
02-07-2018
02:54 AM
Solved, I was looking in the wrong section.
02-07-2018
02:07 AM
I need to submit a large Oozie workflow to import many tables from Teradata, but I got the following error:

E0736: Workflow definition length [325,117] exceeded maximum allowed length [100,000]

I searched for this error, and it seems the maximum allowed workflow size can be increased with this property in oozie-site.xml:

<property>
<name>oozie.service.WorkflowAppService.WorkflowDefinitionMaxLength</name>
<value>100000000</value>
<description>
The maximum length of the workflow definition in bytes
An error will be reported if the length exceeds the given maximum
</description>
</property>

I've tried to add that property to the Oozie configuration through Cloudera Manager (tried both the Oozie Service Environment Advanced Configuration Snippet (Safety Valve) and the Oozie Server Environment Advanced Configuration Snippet (Safety Valve)) like so:

oozie.service.WorkflowAppService.WorkflowDefinitionMaxLength=10000000

In both cases Cloudera Manager gives an error complaining that it is an invalid variable:

Could not parse: Oozie Service Environment Advanced Configuration Snippet (Safety Valve): Could not parse parameter 'oozie_env_safety_valve'. Was expecting: valid variable name. Input: oozie.service.WorkflowAppService.WorkflowDefinitionMaxLength=10000000

Could not parse: Oozie Server Environment Advanced Configuration Snippet (Safety Valve): Could not parse parameter 'OOZIE_SERVER_role_env_safety_valve'. Was expecting: valid variable name. Input: oozie.service.WorkflowAppService.WorkflowDefinitionMaxLength=10000000

Does anyone know how to solve this?
Labels:
- Apache Oozie
- Cloudera Manager