Member since: 07-07-2020
Posts: 97
Kudos Received: 5
Solutions: 5
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 315 | 05-31-2024 10:34 AM |
 | 1161 | 06-13-2022 10:24 PM |
 | 5040 | 06-09-2022 09:56 PM |
 | 1650 | 10-12-2021 07:13 AM |
 | 2726 | 09-22-2021 10:54 PM |
06-09-2022
09:56 PM
Hello Team,

We have tested the Java code internally and it worked fine for us.

Cloudera JDBC version: 2.6.27.1032

Java code:

```java
import java.sql.*;

public class Test {
    public static void main(String[] args) {
        try {
            Class.forName("com.cloudera.impala.jdbc41.Driver");
            Connection con = DriverManager.getConnection(
                "jdbc:impala://<<hostname>>:21050;UseNativeQuery=1");
            String sql = "upsert into user_info(id, name, address, email, insert_time) values (?,?,?,?,?)";
            PreparedStatement statement = con.prepareStatement(sql);
            statement.setInt(1, 102);
            statement.setString(2, "Peter");
            statement.setString(3, "New York");
            statement.setString(4, "John@xyz.com");
            statement.setTimestamp(5, java.sql.Timestamp.valueOf(java.time.LocalDateTime.now()));
            statement.addBatch();
            statement.executeBatch();
            statement.close();
            con.close();
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}
```

Please let us know if it helps.
06-09-2022
12:29 AM
Please elaborate a little more on the issue. Also, please share the steps you are performing and the table DDL for the same.
11-25-2021
01:23 AM
From where is the query submitted (JDBC/ODBC, impala-shell, Hue)? If it is from JDBC/ODBC, then the query is generated to figure out column names. You can disable the feature that generates it with the PreparedMetadataLimitZero flag; see page 90 of https://www.cloudera.com/documentation/other/connectors/impala-jdbc/latest/Cloudera-JDBC-Driver-for-...
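As a sketch of where that flag goes (the host and port below are placeholders, not taken from the original thread), it is appended to the JDBC connection string like any other driver property:

```shell
# Hypothetical connection string with the flag set; replace the host and
# port with your own Impala coordinator before using it.
echo "jdbc:impala://impala-host.example.com:21050;PreparedMetadataLimitZero=1"
```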
10-14-2021
03:41 AM
Hello,

Please try the below command and let us know:

```shell
$ sqoop import --connect jdbc:mysql://localhost/employees --username hive --password hive \
    --table departments --hcatalog-database default --hcatalog-table my_table_orc \
    --create-hcatalog-table \
    --hcatalog-storage-stanza "stored as orc tblproperties (\"transactional\"=\"false\")"
```

If it doesn't work, the workaround is a two-step process:
1. Create the ORC table in Hive with the EXTERNAL keyword and set transactional to false.
2. Then use the sqoop command to load the data into the ORC table.
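The two-step workaround can be sketched as below. This is a minimal sketch, assuming the same example database and table names as the command above; the column list in the DDL is hypothetical and must match your source table:

```shell
# Step 1: create the non-transactional external ORC table in Hive.
# The columns (id, name) are placeholders for your real schema.
hive -e "CREATE EXTERNAL TABLE default.my_table_orc (id INT, name STRING)
         STORED AS ORC
         TBLPROPERTIES ('transactional'='false');"

# Step 2: load the data into the existing table with sqoop.
# Note: no --create-hcatalog-table here, since the table already exists.
sqoop import --connect jdbc:mysql://localhost/employees \
    --username hive --password hive \
    --table departments \
    --hcatalog-database default \
    --hcatalog-table my_table_orc
```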
10-14-2021
02:51 AM
Hello,

Please try to run the sqoop command as below and let us know how it goes:

```shell
$ sqoop import --connect jdbc:mysql://localhost/employees --username hive --password hive \
    --table departments --hcatalog-database default --hcatalog-table my_table_orc \
    --create-hcatalog-table --hcatalog-storage-stanza "stored as orcfile"
```
10-14-2021
12:22 AM
1 Kudo
Hello,

The partition clause in DROP PARTITION expects a CONSTANT VALUE on the right-hand side, and functions inside the DROP PARTITION clause are not supported. The correct syntax would be:

```sql
ALTER TABLE audit_logs DROP PARTITION (evt_date < 'some constant value');
```
10-13-2021
02:25 AM
Hello,

Please try with the below connection URL and let us know how it goes:

```
jdbc:impala://nightly57-3.gce.cloudera.com:21050/default;AuthMech=1;SSL=1;KrbRealm=GCE.CLOUDERA.COM;KrbHostFQDN=nightly57-4.gce.cloudera.com;KrbServiceName=impala;SSLTrustStore=/etc/cdep-ssl-conf/CA_STANDARD/truststore.jks
```
10-12-2021
07:13 AM
1 Kudo
Hello,

There is no way to kill the query in one go. You need to do it by one of the following methods:

- Kill the query from the web UI of the Impala Daemon coordinating the query.
- Close the session from the browser: https://<query_coordinator_server_name>:25000/close_session?session_id=<session_id>
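The same close_session endpoint can also be hit from a shell instead of the browser. This is a sketch, assuming a reachable coordinator; the host and session id below are placeholders you must replace with your own values:

```shell
# Close an Impala session via the coordinator's debug web server.
# coordinator.example.com and abc123 are hypothetical placeholders.
# -k skips TLS certificate verification, e.g. with a self-signed cert.
curl -k "https://coordinator.example.com:25000/close_session?session_id=abc123"
```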
10-11-2021
10:29 PM
Hello,

You can kill the query from the web UI of the Impala Daemon coordinating the query, or you can try killing it from the browser:

https://<query_coordinator_server_name>:25000/close_session?session_id=<session_id>

Please let us know if it helps.
10-04-2021
01:58 AM
Hello,

Are you trying to connect to Impala from Spark via JDBC? If yes, we don't support this feature yet. Please refer to the document below:

https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_621_unsupported_features.html#spark