Member since
11-04-2015
260
Posts
44
Kudos Received
33
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2509 | 05-16-2024 03:10 AM |
| | 1531 | 01-17-2024 01:07 AM |
| | 1551 | 12-11-2023 02:10 AM |
| | 2289 | 10-11-2023 08:42 AM |
| | 1572 | 09-07-2023 01:08 AM |
01-05-2022
01:11 AM
Hi @syedshakir, can you clarify which query was failing this way? I suppose it was an INSERT query from Impala. Was it a dynamic partitioning insert? Have you checked the Impala query profile? Please look into the Impala daemon (coordinator host) logs and check the full error message, if that's available. You may also check whether there was another INSERT or INSERT OVERWRITE query running against the same target table - from the CM > Impala > Queries page you can search for queries manipulating the same table.
01-04-2022
08:43 AM
2 Kudos
Hi @cardozogp,

With Sqoop import (DB -> HDFS), Sqoop submits only a handful of queries (at most on the order of hundreds) to the database:
- 1 query to get the table schema,
- a couple of queries to determine the split ranges (see "--split-by"),
- and one SELECT statement per mapper, each with its own split key range.

From then on the mappers just call "getNext" on their result sets to fetch the next batch of rows from the DB. With Sqoop export (HDFS -> DB), however, the mappers issue a separate INSERT statement for every batch (~100s of records), so there may be millions of INSERT statements submitted - in that case this indeed has an impact. Since there are really just a few queries in play with Sqoop import, the usage of prepared statements does not bring a significant benefit there. Let us know if your DBA team has other insights which we may not be aware of. Best regards, Miklos
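As a rough, runnable illustration of the export side (using Python's sqlite3 module, not Sqoop itself): submitting one statement per record is far chattier than batching many records through one reused prepared statement, which is essentially what Sqoop's records-per-statement batching does.

```python
import sqlite3

# Illustration only (sqlite3, not Sqoop): contrast one-INSERT-per-row with a
# batched insert that reuses a single prepared statement for the whole list.
rows = [(i, f"name-{i}") for i in range(1000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE t2 (id INTEGER, name TEXT)")

# Row-at-a-time: one INSERT statement submitted per record.
for r in rows:
    conn.execute("INSERT INTO t1 VALUES (?, ?)", r)

# Batched: executemany reuses one prepared statement for all rows,
# analogous to Sqoop export grouping ~100s of records per statement.
conn.executemany("INSERT INTO t2 VALUES (?, ?)", rows)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM t2").fetchone()[0])  # 1000
```

Against a real database over the network the difference is round trips, which is where the millions of individual INSERTs hurt.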
12-30-2021
09:08 AM
1 Kudo
Hi Saurabh,

The "Establishing SSL connection..." message already suggests that the MySQL client (in this case the Cloudera Manager code "DbCommandExecutor") is using a TLS/SSL connection to the MySQL database. The last "caused by" tells the reason for the failure:

Caused by: javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)

This happens when:
- the MySQL client does not allow a specific cipher suite (it has some restrictions), and
- the MySQL server does not support any of the cipher suites which the client allows.

Please check the settings on the MySQL server side:

mysql --ssl-ca=<path_to/truststore.pem> -uroot -p -e "SHOW GLOBAL VARIABLES LIKE '%ssl%';STATUS;"

and check the CM server side JDK configuration settings in /usr/java/latest/jre/lib/security/java.security. Look for the ProtocolsWhiteList, CipherWhiteList and CipherBlackList settings.
11-25-2021
01:19 AM
Hi,

This use case seems similar to queries with a "WHERE 1=0" filter - those also do not return any data. See for example this discussion of why that is needed/used in some places: https://stackoverflow.com/questions/9140606/why-would-you-use-where-1-0-statement-in-sql

My best guess is that the application calling such queries needs the schema of the result set in advance - for example to create and render the layout (columns) or autofilters, or to prepare a subsequent query properly (it may have to decide which filters the next query needs to contain; they may differ for a string column and a date column, etc.).

Best regards, Miklos

Miklos Szurap, Customer Operations Engineer, Cloudera
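The schema-probing idea above can be demonstrated with a small, runnable sketch (using Python's sqlite3 module rather than Hive/Impala): a "WHERE 1=0" query returns zero rows, yet the cursor still exposes the full result-set schema.

```python
import sqlite3

# Illustration (sqlite3, not Hive/Impala): a WHERE 1=0 query returns no data,
# but the client still learns the column layout of the result set.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, created TEXT)")

cur = conn.execute("SELECT * FROM orders WHERE 1=0")
columns = [d[0] for d in cur.description]  # schema is available...
rows = cur.fetchall()                      # ...but no rows come back

print(columns)  # ['id', 'customer', 'created']
print(rows)     # []
```

A JDBC client gets the same information through ResultSetMetaData, which is why such queries are cheap schema probes.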
11-22-2021
07:06 AM
Hi, the complex MAP data type is supported, however it may not be as usable as the simple types. Please see: https://docs.cloudera.com/runtime/7.2.10/impala-sql-reference/topics/impala-map.html and https://docs.cloudera.com/runtime/7.2.10/impala-sql-reference/topics/impala-complex-types.html#complex_types/complex_sample_schema There is also a "GET_JSON_OBJECT" built-in function which may help you, depending on what you need to achieve - please see: https://docs.cloudera.com/runtime/7.2.10/impala-sql-reference/topics/impala-misc-functions.html#misc_functions__get_json_object
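As a rough analogue of what GET_JSON_OBJECT does (extracting a field from a JSON string by path), here is a runnable sketch using Python's sqlite3 module and SQLite's json_extract, not Impala itself; the table and payload are made up for illustration.

```python
import sqlite3

# Not Impala: SQLite's json_extract plays the role of GET_JSON_OBJECT here,
# pulling a nested field out of a JSON string column by a '$.path' selector.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (payload TEXT)")
conn.execute("""INSERT INTO events VALUES ('{"user": {"id": 42, "name": "ada"}}')""")

name = conn.execute(
    "SELECT json_extract(payload, '$.user.name') FROM events"
).fetchone()[0]
print(name)  # ada
```

In Impala the equivalent call would be GET_JSON_OBJECT(payload, '$.user.name') per the documentation linked above.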
11-15-2021
06:21 AM
You've mentioned that even if you intentionally "break" the workflow by changing the class name, it still does not throw an error message. Can you check what's in the workflow.xml after you change the workflow? That would help to decide whether Hue is not saving the workflow properly, or only the logging is broken. Is the workflow successful even with a wrong class name?
11-05-2021
07:24 AM
Hi,

Your approach seems OK, we need to figure out what went wrong. After the "Executing Oozie Launcher with tokens" line and the logging of the token retrievals you should see something like:

Executing Action Main with tokens:
...
Main class : org.apache.oozie.action.hadoop.JavaMain
...
Launcher class: class org.apache.oozie.action.hadoop.JavaMain
...
>>> Invoking Main class now >>>
Hello world!
<<< Invocation of Main class completed <<<

Do you observe such issues only with Java actions? Are the other action types working fine? Can you check the workflow.xml as follows?
1. Open your workflow definition in Hue
2. At the right, click the three-dotted icon > Workspace
3. Click "workflow.xml" to open it

It should have an "<action name=...><java>" definition with your latest edits in it.

Best regards, Mike
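For reference, a minimal Java action definition looks roughly like the sketch below; the workflow name, node names, main class and the use of standard ${jobTracker}/${nameNode} properties are placeholders, not taken from the original post.

```xml
<!-- A minimal sketch of a Java action workflow.xml; names and the
     main class are hypothetical placeholders. -->
<workflow-app name="java-demo-wf" xmlns="uri:oozie:workflow:0.5">
    <start to="java-node"/>
    <action name="java-node">
        <java>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <main-class>com.example.HelloWorld</main-class>
        </java>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Java action failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
```

If an edited class name never shows up in this file, the save path in Hue is the suspect rather than the action itself.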
02-10-2021
01:23 AM
Hi Zara,

You need to check a couple of things, which may give more clues on the why.

1. Are you using the latest Hive JDBC driver? (Is it HDP or CDH/CDP?) Check out https://www.cloudera.com/downloads/connectors/hive/jdbc/2-6-11.html
2. Do you get the same error if you use backticks for the database name and/or the table name? select count(*) from `dbname`.`tablename` where partition_key='xxxxx';
3. Did you look at the HiveServer2 logs? What do they show? What was the real query submitted to HS2 and what was the failure there? What does it look like for a non-partitioned table?
4. The "UseNativeQuery" option of the JDBC driver can sometimes solve such interesting issues - it is a flag indicating whether the Hive query is already translated to a Hive-native format. Check the docs: https://docs.cloudera.com/documentation/other/connectors/hive-jdbc/2-6-11/Cloudera-JDBC-Driver-for-Apache-Hive-Install-Guide.pdf
5. You can also turn on debug-level logging on the JDBC driver side; see the LogLevel and LogPath JDBC connection string properties.
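Point 2 above can be illustrated with a runnable sketch using Python's sqlite3 module (which accepts MySQL/Hive-style backticks), not Hive itself; the reserved-word table name is a made-up example of why quoting matters.

```python
import sqlite3

# Illustration (sqlite3, not Hive): backtick-quoting lets identifiers that
# clash with SQL keywords parse correctly - the same reason backticks are
# worth trying from the Hive JDBC driver.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE `order` (id INTEGER, partition_key TEXT)")
conn.execute("INSERT INTO `order` VALUES (1, 'xxxxx')")

try:
    # Unquoted, "order" collides with the ORDER keyword and fails to parse.
    conn.execute("SELECT count(*) FROM order WHERE partition_key='xxxxx'")
except sqlite3.OperationalError as e:
    print("unquoted failed:", e)

n = conn.execute(
    "SELECT count(*) FROM `order` WHERE partition_key='xxxxx'"
).fetchone()[0]
print(n)  # 1
```

If the backticked form works against HS2 while the plain form fails, the problem is identifier parsing rather than partition pruning.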
10-05-2020
01:03 AM
I would not advise doing this - only if you have no other options and you are sure there are classpath problems (which the above suggests you had - likely the Spark service dependency on Hive was not set up). Always check the HS2 logs first to see what the actual problem is. By including all the Hadoop jars in SPARK_DIST_CLASSPATH you submit and upload lots of jars to the container classpath unnecessarily, which slows down the job submission time.