Member since
12-11-2015
244
Posts
31
Kudos Received
32
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 223 | 07-22-2025 07:58 AM |
| | 823 | 01-02-2025 06:28 AM |
| | 1416 | 08-14-2024 06:24 AM |
| | 2942 | 10-02-2023 06:26 AM |
| | 2249 | 07-28-2023 06:28 AM |
07-07-2023
06:35 AM
@raph Can you please clarify what you mean by the custom parcel? Is it a parcel provided by Cloudera or one of its partners?
06-05-2023
02:57 PM
That's interesting. Let me make sure my understanding is correct. 1. So after configuring the password through jceks, failures occur when "Generate HADOOP_CREDSTORE_PASSWORD" is unchecked. Can I please have the full exception from the latest failure? I want to confirm whether it points to anything else. 2. Did you also remove the properties you had already added, i.e. "<name>fs.s3a.access.key</name>" and "<name>fs.s3a.secret.key</name>", from core-site.xml?
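For reference, the plain-text credential properties mentioned above would look like this in core-site.xml (a hedged sketch; the key values are placeholders, not real credentials):

```xml
<!-- Plain-text S3A credentials in core-site.xml. Remove these when
     switching to a jceks credential provider, so the two sources of
     credentials do not conflict. Values below are placeholders. -->
<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR_SECRET_KEY</value>
</property>
```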
06-02-2023
06:06 AM
Sorry about the typo in the article. The step " Uncomment the property "Generate HADOOP_CREDSTORE_PASSWORD" from Hive service and Hive on Tez service. This is the flag to enable or disable the generation of HADOOP_CREDSTORE_PASSWORD (generate_jceks_password)." should say "uncheck", not "uncomment". So please uncheck "Generate HADOOP_CREDSTORE_PASSWORD" in both Hive and Hive on Tez, save, and restart Hive and Hive on Tez.
06-01-2023
06:41 AM
Hi @ac-ntap This exception is not related; please ignore it and kindly review my other comment: "https://community.cloudera.com/t5/Support-Questions/Hive-query-failed-with-java-io-IOException-Cannot-find/m-p/371879/highlight/true#M241106"
05-31-2023
01:43 PM
I am sorry for the delay. This is the exception I was looking for. Specifically, we can see that the failure was due to Hive trying to pick up the password from its own jceks rather than using the plain-text password. It is still unclear why it falls back to jceks instead of using the password; I will check against existing known issues. For the time being, can you try configuring the password through jceks as described in this article: https://my.cloudera.com/knowledge/How-to-configure-HDFS-and-Hive-to-use-different-JCEKS-and?id=326056 You can ignore steps 9 to 11 of the article. Since sg-cdp is your bucket name, instead of fs.s3a.bucket.scc-803070-bucket-1.security.credential.provider.path please use the property fs.s3a.bucket.sg-cdp.security.credential.provider.path
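As a rough sketch of what the jceks approach in that article involves (the commands are the standard Hadoop credential CLI; the jceks path and key values here are illustrative assumptions, not taken from the article):

```shell
# Store the S3A access and secret keys in a jceks keystore on HDFS.
# The provider path and key values below are placeholders.
hadoop credential create fs.s3a.access.key -value YOUR_ACCESS_KEY \
    -provider jceks://hdfs/user/hive/s3a.jceks
hadoop credential create fs.s3a.secret.key -value YOUR_SECRET_KEY \
    -provider jceks://hdfs/user/hive/s3a.jceks

# Then point the per-bucket property at that provider; since the bucket
# is named sg-cdp, the property is:
#   fs.s3a.bucket.sg-cdp.security.credential.provider.path
#     = jceks://hdfs/user/hive/s3a.jceks
```

With the per-bucket property, only access to the sg-cdp bucket uses this keystore, so other buckets and services are unaffected.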
05-24-2023
08:29 PM
Is this the complete exception stack trace you have in the logs? The stack should contain frames referencing getPasswordFromCredentialProviders, which I do not see, along with further trace below it. Can you attach the entire application log?
05-24-2023
07:06 AM
d. Additionally, please try disabling "Generate HADOOP_CREDSTORE_PASSWORD" under CM > Hive and Hive on Tez > Configuration, just to make sure the jceks generated by Hive is not interfering with the credentials you use.
05-24-2023
06:20 AM
Since the issue is intermittent, it is not yet clear what is triggering the problem. A jceks is not mandatory. a. Do you see any pattern in the failures, for example, does the issue happen only when the failing task runs on a particular node? b. How are you managing the edited core-site.xml, is it through Cloudera Manager? May I know the safety valve used? c. Can you attach the complete application logs for a failed run and a successful run?
05-23-2023
04:35 PM
@ac-ntap Can you check the steps in the article https://my.cloudera.com/knowledge/How-to-configure-HDFS-and-Hive-to-use-different-JCEKS-and?id=326056 and let me know if that helps.
03-16-2023
03:11 PM
@Me Sorry for the confusion; I see what you mean now. Per https://impala.apache.org/docs/build/html/topics/impala_perf_stats.html#perf_stats_incremental on COMPUTE INCREMENTAL STATS: "In Impala 2.1.0 and higher, you can use the COMPUTE INCREMENTAL STATS and DROP INCREMENTAL STATS commands. The INCREMENTAL clauses work with incremental statistics, a specialized feature for partitioned tables. When you compute incremental statistics for a partitioned table, by default Impala only processes those partitions that do not yet have incremental statistics. By processing only newly added partitions, you can keep statistics up to date without incurring the overhead of reprocessing the entire table each time." So the drop statistics behavior is intended for "COMPUTE INCREMENTAL STATS" and not for "COMPUTE INCREMENTAL STATS with partition". May I know which version of CDP you are using, so that I can test on my end and confirm?
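To illustrate the two forms being contrasted above (a minimal sketch; the table name sales and the partition column year are hypothetical, and impala-shell connection options are omitted):

```shell
# Table-level incremental stats: Impala processes only those
# partitions that do not yet have incremental statistics.
impala-shell -q "COMPUTE INCREMENTAL STATS sales"

# Partition-level incremental stats: restrict the computation
# to a single named partition.
impala-shell -q "COMPUTE INCREMENTAL STATS sales PARTITION (year = 2023)"
```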