Member since: 07-07-2020
Posts: 95
Kudos Received: 4
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 820 | 06-13-2022 10:24 PM
 | 3705 | 06-09-2022 09:56 PM
 | 1255 | 10-12-2021 07:13 AM
 | 2223 | 09-22-2021 10:54 PM
11-25-2021
01:23 AM
It depends on where the query is submitted from (JDBC/ODBC, impala-shell, Hue). If it is submitted via JDBC/ODBC, the driver generates this query to figure out column names. You can disable the feature that generates it with the PreparedMetadataLimitZero flag - see page 90 of https://www.cloudera.com/documentation/other/connectors/impala-jdbc/latest/Cloudera-JDBC-Driver-for-...
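As a sketch, the flag is appended to the JDBC connection URL as another semicolon-separated property. The host, port, and auth settings below are placeholders, not values from the original thread; only the final PreparedMetadataLimitZero=1 property is the point of the example:

```shell
# Hypothetical Impala JDBC URL with the metadata-probe query disabled.
JDBC_URL="jdbc:impala://impala-host.example.com:21050/default"
JDBC_URL="${JDBC_URL};AuthMech=3;UID=myuser;PWD=mypassword"
JDBC_URL="${JDBC_URL};PreparedMetadataLimitZero=1"
echo "${JDBC_URL}"
```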
10-14-2021
03:41 AM
Hello, Please try the command below and let us know:

$ sqoop import --connect jdbc:mysql://localhost/employees --username hive --password hive --table departments --hcatalog-database default --hcatalog-table my_table_orc --create-hcatalog-table --hcatalog-storage-stanza "stored as orc tblproperties (\"transactional\"=\"false\")"

If it doesn't work, the workaround is a two-step process:
1. Create the ORC table in Hive with the EXTERNAL keyword and transactional set to false.
2. Then use the sqoop command to load the data into the ORC table.
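The two-step workaround can be sketched as follows. The column list is an assumption for illustration (the original thread does not show the table schema); the database, table, and connection details are taken from the command above:

```shell
# Step 1 (hypothetical columns): create an external, non-transactional
# ORC table in Hive, so Sqoop does not need to create one itself.
hive -e "CREATE EXTERNAL TABLE default.my_table_orc (
           dept_no INT,
           dept_name STRING)
         STORED AS ORC
         TBLPROPERTIES ('transactional'='false');"

# Step 2: load the data into the existing ORC table with Sqoop
# (note: no --create-hcatalog-table, since the table already exists).
sqoop import \
  --connect jdbc:mysql://localhost/employees \
  --username hive --password hive \
  --table departments \
  --hcatalog-database default \
  --hcatalog-table my_table_orc
```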
10-14-2021
02:51 AM
Hello, Please try running the sqoop command as below and let us know how it goes:

$ sqoop import --connect jdbc:mysql://localhost/employees --username hive --password hive --table departments --hcatalog-database default --hcatalog-table my_table_orc --create-hcatalog-table --hcatalog-storage-stanza "stored as orcfile"
10-14-2021
12:22 AM
1 Kudo
Hello, The partition clause in DROP PARTITION expects a CONSTANT VALUE on the right-hand side; functions inside the DROP PARTITION clause are not supported. The correct syntax would be:

ALTER TABLE audit_logs DROP PARTITION (evt_date < 'some constant value')
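For example, with a concrete date literal in place of the constant (the date value below is illustrative only, not from the original thread):

```shell
# The comparison value must be a literal constant; an expression such as
# date_sub(now(), interval 30 days) on the right-hand side is rejected.
impala-shell -q "ALTER TABLE audit_logs DROP PARTITION (evt_date < '2021-01-01')"
```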
10-13-2021
02:25 AM
Hello, Please try the connection URL below and let us know how it goes:

jdbc:impala://nightly57-3.gce.cloudera.com:21050/default;AuthMech=1;SSL=1;KrbRealm=GCE.CLOUDERA.COM;KrbHostFQDN=nightly57-4.gce.cloudera.com;KrbServiceName=impala;SSLTrustStore=/etc/cdep-ssl-conf/CA_STANDARD/truststore.jks
10-12-2021
07:13 AM
1 Kudo
Hello, There is no way to kill the query in one go. You need to do it by one of the following methods: you can kill the query from the web UI of the Impala Daemon coordinating the query, or you can close the session from the browser:

https://<query_coordinator_server_name>:25000/close_session?session_id=<session_id>
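From a terminal, the same close_session endpoint can be hit with curl. The hostname and session id below are placeholders to be replaced with your coordinator and the session id shown in the web UI:

```shell
# Close the session that owns the query via the coordinator's web UI port.
# -k skips certificate verification; drop it if your truststore is configured.
curl -k "https://coordinator.example.com:25000/close_session?session_id=1234abcd5678ef90"
```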
10-11-2021
10:29 PM
Hello, You can kill the query from the web UI of the Impala Daemon coordinating the query, or you can close the session from the browser:

https://<query_coordinator_server_name>:25000/close_session?session_id=<session_id>

Please let us know if it helps.
10-04-2021
01:58 AM
Hello, Are you trying to connect to Impala from Spark via JDBC? If yes, we don't support this feature yet. Please refer to the document below: https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_621_unsupported_features.html#spark
10-04-2021
01:49 AM
Hello, Please use either of the methods below and let us know if it helps:

Method 1: Pass the --hs2-url correctly

sqoop import --connect <<connection string>> --username <<username>> --password <<password>> --table <<table_name>> --target-dir <<directory path>> --delete-target-dir --hive-import --hs2-url "jdbc:hive2://hs2host:10000/default;principal=hive/hs2host@DOMAIN.COM;ssl=true;sslTrustStore=/etc/cdep-ssl-conf/CA_STANDARD/truststore.jks;trustStorePassword=password" --hs2-user username --hs2-keytab "/path/to/sqooptestkeytab"

If you are launching the sqoop job from the CLI, you don't need to pass --hs2-user and --hs2-keytab. You can remove those parameters and run the sqoop command after doing a kinit from the CLI, or you can pass them in the sqoop command itself as above.

Method 2: If you don't want to add the hs2-url, add the following two properties in the sqoop configuration and restart sqoop:

1. Sqoop Client Advanced Configuration Snippet (Safety Valve) for sqoop-conf/sqoop-site.xml:
   Name: sqoop.beeline.env.preserve
   Value: HADOOP_CLIENT_OPTS
2. Sqoop Client Advanced Configuration Snippet (Safety Valve) for sqoop-conf/sqoop-env.sh:
   export HADOOP_CLIENT_OPTS="-Djline.terminal=jline.UnsupportedTerminal"
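The CLI variant of the first method (kinit first, then drop the keytab options) can be sketched as below. The connection string, database, table, and principal are placeholder values for illustration:

```shell
# Authenticate first, then run Sqoop without --hs2-user/--hs2-keytab;
# the job picks up the Kerberos ticket from the kinit cache.
kinit myuser@DOMAIN.COM

sqoop import \
  --connect jdbc:mysql://dbhost/mydb \
  --username myuser --password mypassword \
  --table my_table \
  --target-dir /tmp/my_table_import \
  --delete-target-dir \
  --hive-import \
  --hs2-url "jdbc:hive2://hs2host:10000/default;principal=hive/hs2host@DOMAIN.COM;ssl=true;sslTrustStore=/etc/cdep-ssl-conf/CA_STANDARD/truststore.jks;trustStorePassword=password"
```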
10-04-2021
01:28 AM
Hello, Please find the sample Impala Oozie action and check if it helps.

1. Create a file impala_invalidate.sh:

   export PYTHON_EGG_CACHE=/tmp/impala_eggs
   impalacmd="impala-shell -k -s impala --quiet --impalad=host-10-17-100-110.coe.cloudera.com:21000"
   ${impalacmd} --query="show databases;" > /tmp/impala_databases

2. Create the workflow for the shell action:

   <workflow-app name="Impala_redirect" xmlns="uri:oozie:workflow:0.5">
       <start to="shell-11a3"/>
       <kill name="Kill">
           <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
       </kill>
       <action name="shell-11a3">
           <shell xmlns="uri:oozie:shell-action:0.1">
               <job-tracker>${jobTracker}</job-tracker>
               <name-node>${nameNode}</name-node>
               <exec>impala_invalidate.sh</exec>
               <file>/user/systest/lib/impala_invalidate.sh#impala_invalidate.sh</file>
               <file>/user/systest/lib/impala.keytab#impala.keytab</file>
               <capture-output/>
           </shell>
           <ok to="End"/>
           <error to="Kill"/>
       </action>
       <end name="End"/>
   </workflow-app>