Member since: 10-28-2020
Posts: 304
Kudos Received: 14
Solutions: 13
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 110 | 10-18-2022 01:07 PM |
| | 261 | 09-05-2022 09:16 AM |
| | 413 | 08-18-2022 06:56 AM |
| | 634 | 08-05-2022 02:12 AM |
| | 679 | 08-03-2022 06:15 AM |
01-10-2023
09:53 PM
@anjel Thank you! I can see that Support has reached out to the ODBC driver team. We'll wait for their response.
01-10-2023
02:47 PM
1 Kudo
Hey! So the problem was that I wasn't handling the open-connection API call for Elasticsearch, which is why I was facing the error.
01-03-2023
11:45 AM
Thanks for the support, I did the same.
01-02-2023
12:01 AM
@saicharan This fails with an authentication error: HTTP Response Code 401. Verify the authentication details you are passing in the Adapter configuration.
12-01-2022
02:18 AM
Hi @d_liu It could be that you restarted HS2 but didn't log in again from Hue, so it may still be using the same session. But an HS2 restart should solve your problem.
11-21-2022
12:41 PM
@lysConsulting Have you ticked the Kudu checkbox under the Hive configuration in the Cloudera Manager UI? Refer to:
https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/kudu_hms.html#concept_enable_hms
https://cwiki.apache.org/confluence/display/Hive/Kudu+Integration
In CDP: https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/kudu-hms-integration/topics/kudu-hms-enabling.html
11-20-2022
08:55 PM
@hanumanth, Have the replies helped resolve your issue? If so, can you please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future?
11-11-2022
03:10 AM
Which version are you on? Is this the Cloudera Hive distribution? The UPDATE command is likely failing because the target is not an ACID (transactional) table. For UPDATE to work, the table has to be an ACID table. @roti
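For illustration, a minimal sketch of a table that UPDATE would work against, assuming a Hive 3 / CDP environment where full ACID tables are ORC-backed (table and column names are hypothetical):

create table employee_acid (
  id int,
  name string
)
stored as ORC
tblproperties ('transactional'='true');

-- UPDATE succeeds only against such an ACID table
update employee_acid set name = 'new_name' where id = 1;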
11-11-2022
02:59 AM
@Shawn Here is a small example of how to find the percentage of NOT NULL values in the column maker. Note that count(maker) would skip NULL rows, so the inner queries use count(*):

select ((b.tot_cnt - a.null_maker) / b.tot_cnt) * 100 as pcnt_not_null_maker
from
(select count(*) as null_maker from used_cars where maker is NULL) a
cross join
(select count(*) as tot_cnt from used_cars) b;

You may try this for all individual columns.
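As a follow-up, a shorter equivalent is possible precisely because count(column) skips NULLs (same table and column as above):

select (count(maker) / count(*)) * 100 as pcnt_not_null_maker from used_cars;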
10-18-2022
01:29 PM
@SwaggyPPPP Is this a partitioned table? In that case you could run the ALTER TABLE command as follows:

alter table my_table add columns(field4 string, field5 string) CASCADE;

Let us know if this issue occurs consistently after adding new columns, and which Cloudera product version you are on.
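For illustration, a minimal sketch of what CASCADE does on a partitioned table (table, columns, and partition value are hypothetical):

create table my_table (field1 string) partitioned by (dt string);
-- without CASCADE only the table-level schema changes;
-- with CASCADE the metadata of every existing partition is updated too
alter table my_table add columns(field4 string, field5 string) CASCADE;
-- pre-existing rows return NULL for the newly added columns
select field4 from my_table where dt = '2022-10-01';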
10-18-2022
01:07 PM
1 Kudo
@KPG1 We only support upgrading an existing cluster using Ambari or Cloudera Manager, rather than importing/updating the jars manually. The latest CDP Private Cloud Base and our Public Cloud are using Hadoop version 3.1.1 at this point.
10-18-2022
12:56 PM
@ditmarh this might not work in scenarios where the table schema.table is created from Hive and we are appending to it from Spark. You may try the following command, replacing saveAsTable with insertInto:

df.write.mode("append").format("parquet").insertInto("schema.table")
09-27-2022
06:25 AM
May I know if the table was created from data that was exported in some other format, such as text ('txt') format? If so: starting from the CDP 7.x versions, the default file format is Parquet. So, when the table is imported, it will be created in Parquet format, but its original files will still be in text format.
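For illustration, a sketch of pinning the file format explicitly at creation time so the table definition matches text-format source files (table, columns, and delimiter are hypothetical assumptions):

create table imported_txt (
  id int,
  payload string
)
row format delimited fields terminated by ','
stored as TEXTFILE;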
09-14-2022
01:07 PM
@Asim- Unless your final table has to be a Hive managed (ACID) table, you could incrementally update the Hive table directly using Sqoop, e.g.:

sqoop import --connect jdbc:oracle:thin:@xx.xx.xx.xx:1521:ORCL --table EMPLOYEE --username user1 --password welcome1 --incremental lastmodified --merge-key employee_id --check-column emp_timestamp --target-dir /usr/hive/warehouse/external/empdata/

Otherwise, the way you are trying is actually the way Cloudera recommends it.
09-08-2022
07:51 AM
Thanks for your quick reply @smruti, much appreciated. I have gone through this approach and will surely consider it for our DR strategy.
08-31-2022
05:05 AM
@mohammad_shamim Did you have Hive HA configured in the CDH cluster? In that case, you need to make sure that an equal number of HS2 instances is created in the CDP cluster, because without that HA cannot be attained. Also, make sure that there is no HiveServer2 instance created under the "Hive" service in CDP. It should only be present under the Hive on Tez service.
08-24-2022
05:10 AM
@ssuja, Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
08-08-2022
01:56 AM
I didn't notice that the property "EXTERNAL" is case sensitive. Step 2 should be:

ALTER TABLE alter_test SET TBLPROPERTIES('EXTERNAL'='false');

Then the location is changed in CDP 7.1.1. In CDP 7.1.7, it does not work even if I set the property "TRANSLATED_TO_EXTERNAL" to true after creating the table. Could you try the steps and share an attachment? Thanks.
08-04-2022
02:11 PM
@Imran_chaush Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks
08-01-2022
12:40 PM
@Caliber The following command should work:

for hql in {a.hql,b.hql}; do beeline -n hive -p password --showheader=false --silent=true -f $hql; done
07-15-2022
07:25 PM
Could you help explain the meaning of the first two statements (Set hive.....)? What would happen if we didn't have these two statements? How would that impact our query? Thanks.
04-03-2022
05:11 AM
Thank you. We contacted Liquibase. Unfortunately, they don't have a built-in Hive/Impala extension, though they promised to look into this requirement in more detail. Hopefully, they come up with a solution soon.
12-16-2021
12:23 PM
@Gcima009 Are you trying to collect the logs with the same user that you submitted the job with? This query completed the map phase and failed in the reducer phase. If you are not able to collect the app logs, check the HS2 log for the query ID hive_20211210173528_ff76c3df-a33b-41d0-b328-460c9b65deda to see if you get more information on what caused the job to fail.
12-02-2021
08:16 PM
There is also a "Supported TLS versions" option in Cloudera Manager under Security; search for SSL and you will find it. Even after selecting TLSv1.2 from that option, our security scans show that a few ports from Impala and some other services are open. A screenshot of Cloudera Manager is attached. Regards, Hxn
11-17-2021
09:18 PM
@HareshAmin, Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
11-16-2021
10:10 AM
Hi @Korez Then, please consider setting the properties that I mentioned earlier:

set hive.server2.tez.sessions.per.default.queue=3; -- number of AM containers per queue
set hive.server2.tez.initialize.default.sessions=true;
set hive.prewarm.enabled=true;
set hive.prewarm.numcontainers=2;
set tez.am.container.reuse.enabled=true;
set tez.am.container.idle.release-timeout-max.millis=20000;
set tez.am.container.idle.release-timeout-min.millis=10000;

This will help keep AM containers up and ready for a Hive query.
11-14-2021
01:59 AM
@mhchethan Yes, you can specify multiple LDAP URLs separated by spaces. Hive will try the URLs in the given order until a connection is successful. Ref: hive.server2.authentication.ldap.url
10-25-2021
01:31 PM
1 Kudo
@hxn Could you enter the password here instead of the path to the keystore password file? > SSLKeyStorePwd=/var/lib/cloudera-scm-agent/agent-cert/cm-auto-host_key.pw
10-16-2021
06:09 AM
Yes, that is the issue. The schema across all the tables is not the same. To handle that, we are trying to fill in NULL in the SELECT statement on each table for the columns which are not present in that table. The column changes are not consistent across tables; that is, only a few tables might update while a few remain the same.
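For illustration, a sketch of that NULL-filling pattern with hypothetical tables, where table_b lacks the city column:

select id, name, city from table_a
union all
-- table_b has no city column, so fill it with a typed NULL
select id, name, cast(null as string) as city from table_b;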