Member since: 12-11-2015
Posts: 206
Kudos Received: 30
Solutions: 30
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 471 | 08-14-2024 06:24 AM |
 | 1460 | 10-02-2023 06:26 AM |
 | 1301 | 07-28-2023 06:28 AM |
 | 8476 | 06-02-2023 06:06 AM |
 | 653 | 01-09-2023 12:20 PM |
05-24-2023
08:29 PM
Is this the complete exception stack trace you have in the logs? The stack is supposed to contain traces of getPasswordFromCredentialProviders, which I don't find, along with further frames. Can you attach the entire application log?
05-24-2023
07:06 AM
d. Additionally, please try disabling "Generate HADOOP_CREDSTORE_PASSWORD" in CM > Hive and Hive on Tez > Configuration, just to make sure the JCEKS generated by Hive is not interfering with the credentials you use.
05-24-2023
06:20 AM
Since the issue is intermittent, it is unclear yet what is triggering the problem. It is not a must to have a JCEKS.
a. Do you see any pattern in the failures, e.g. does the issue happen only when the failing task runs on a particular node?
b. How are you managing the edited core-site.xml? Is it through Cloudera Manager? May I know which safety valve was used?
c. Can you attach the complete application logs for a failed run and a successful run?
05-23-2023
04:35 PM
@ac-ntap Can you check the steps in the article https://my.cloudera.com/knowledge/How-to-configure-HDFS-and-Hive-to-use-different-JCEKS-and?id=326056 and let me know if that helps.
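The article above describes keeping separate JCEKS stores for HDFS and Hive. As a rough sketch of what that can look like from the command line (the paths, hosts, and key name below are made up for illustration; follow the article for the actual configuration):

```shell
# Create a credential store for HDFS-side use
hadoop credential create fs.s3a.access.key \
  -provider jceks://hdfs@namenode:8020/user/hdfs/hdfs-creds.jceks

# Create a separate store for Hive so the two do not interfere
hadoop credential create fs.s3a.access.key \
  -provider jceks://hdfs@namenode:8020/user/hive/hive-creds.jceks

# List the entries in a store to verify what it holds
hadoop credential list \
  -provider jceks://hdfs@namenode:8020/user/hdfs/hdfs-creds.jceks
```

Each service is then pointed at its own provider path (e.g. via hadoop.security.credential.provider.path) instead of sharing one store.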
03-16-2023
03:11 PM
@Me Sorry for the confusion; I see what you mean now. Per https://impala.apache.org/docs/build/html/topics/impala_perf_stats.html#perf_stats_incremental:

"In Impala 2.1.0 and higher, you can use the COMPUTE INCREMENTAL STATS and DROP INCREMENTAL STATS commands. The INCREMENTAL clauses work with incremental statistics, a specialized feature for partitioned tables. When you compute incremental statistics for a partitioned table, by default Impala only processes those partitions that do not yet have incremental statistics. By processing only newly added partitions, you can keep statistics up to date without incurring the overhead of reprocessing the entire table each time."

So the drop-statistics step is intended for "COMPUTE INCREMENTAL STATS" and not for "COMPUTE INCREMENTAL STATS with partition". May I know which version of CDP you are using, so that I can test on my end and confirm?
03-14-2023
09:14 AM
Hi, this statement in the doc, "In cases where new files are added to an existing partition, issue a REFRESH statement for the table, followed by a DROP INCREMENTAL STATS and COMPUTE INCREMENTAL STATS sequence for the changed partition.", applies specifically to a partition for which stats are already available but to which you have added more data. If you are unsure whether stats exist for a partition, you can run show table stats <table_name>; and check the "Incremental stats" column.

Query: show table stats test_part
+-------+-------+--------+------+--------------+-------------------+--------+-------------------+--------------------------------------------------------------------------+
| b | #Rows | #Files | Size | Bytes Cached | Cache Replication | Format | Incremental stats | Location |
+-------+-------+--------+------+--------------+-------------------+--------+-------------------+--------------------------------------------------------------------------+
| 1 | 0 | 1 | 0B | NOT CACHED | NOT CACHED | TEXT | false | hdfs://xxxx:8020/user/hive/warehouse/test_part/b=1 |
| Total | -1 | 1 | 0B | 0B | | | | |
+-------+-------+--------+------+--------------+-------------------+--------+-------------------+--------------------------------------------------------------------------+
Fetched 2 row(s) in 5.60s

If false, you can run COMPUTE INCREMENTAL STATS with PARTITION directly. If true and you have added more data to that partition, then you have to drop the stats first and then run COMPUTE INCREMENTAL STATS with PARTITION.
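Using the test_part table from the output above, the full sequence for a partition (b=1) that already had incremental stats and then received new files would look roughly like this (partition value is illustrative):

```sql
-- Pick up the newly added files in the partition
REFRESH test_part;

-- Drop the stale incremental stats for just that partition
DROP INCREMENTAL STATS test_part PARTITION (b=1);

-- Recompute incremental stats for just that partition
COMPUTE INCREMENTAL STATS test_part PARTITION (b=1);

-- Verify: the "Incremental stats" column should now show true
SHOW TABLE STATS test_part;
```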
02-22-2023
12:12 PM
Hi. Yeah, it's expected when you have a common path for the TGT cache for multiple users. Can you make the location unique for each user? I haven't tested this, but I see an option in this link https://gpdb.docs.pivotal.io/6-3/admin_guide/kerberos-win-client.html:

"Set up the Kerberos credential cache file. On the Windows system, set the environment variable KRB5CCNAME to specify the file system location of the cache file. The file must be named krb5cache. This location identifies a file, not a directory, and should be unique to each login on the server. When you set KRB5CCNAME, you can specify the value in either a local user environment or within a session. For example, the following command sets KRB5CCNAME in the session: set KRB5CCNAME=%USERPROFILE%\krb5cache"
01-09-2023
12:20 PM
2 Kudos
You can set a quota on /tmp. Once the quota is reached, further writes to the directory will fail. https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/scaling-namespaces/topics/hdfs-set-quotas-cm.html has the steps to enable quotas.
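The linked doc covers setting quotas through Cloudera Manager; the same can be done with the hdfs CLI as a superuser. A rough sketch (the limits below are example values, not recommendations):

```shell
# Cap the raw disk space consumed under /tmp at 1 TB
# (space quota counts replicated bytes, so a 3x-replicated file
#  consumes 3x its size against the quota)
hdfs dfsadmin -setSpaceQuota 1t /tmp

# Or cap the number of files and directories under /tmp
hdfs dfsadmin -setQuota 1000000 /tmp

# Check current quota and usage in human-readable form
hdfs dfs -count -q -h /tmp

# Remove the quotas again if needed
hdfs dfsadmin -clrSpaceQuota /tmp
hdfs dfsadmin -clrQuota /tmp
```

Once either quota is exhausted, new writes or file creations under the directory fail with a quota-exceeded exception.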
12-12-2022
08:10 AM
The jira HADOOP-9640 added the FairCallQueue feature, which gives good users (those not hogging the NameNode RPC queue) fair response times. An explanation of how this works is available in this video: https://www.youtube.com/watch?v=7Axz3bO18l8&ab_channel=DataWorksSummit
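As a sketch of how this is typically wired up (assuming the NameNode RPC port is 8020; the property name embeds the port, per the upstream FairCallQueue docs):

```shell
# core-site.xml on the NameNode would carry something like:
#
#   <property>
#     <name>ipc.8020.callqueue.impl</name>
#     <value>org.apache.hadoop.ipc.FairCallQueue</value>
#   </property>
#
# After pushing the config, the call queue can be swapped at runtime
# without restarting the NameNode:
hdfs dfsadmin -refreshCallQueue
```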
04-26-2020
11:17 PM
Can you share the exact steps/list of configurations you changed to configure Kerberos in Kafka? During this window of failure in the broker, what is the exact error you notice on the ZooKeeper side?

4:29:51.371 AM ERROR ZooKeeperClient [ZooKeeperClient] Auth failed

Did you tweak any configuration on ZooKeeper too?