Member since: 10-28-2020
Posts: 554
Kudos Received: 45
Solutions: 39
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3651 | 07-23-2024 11:49 PM |
|  | 511 | 05-28-2024 11:06 AM |
|  | 902 | 05-05-2024 01:27 PM |
|  | 582 | 05-05-2024 01:09 PM |
|  | 616 | 03-28-2024 09:51 AM |
10-18-2022
01:29 PM
@SwaggyPPPP Is this a partitioned table? If so, you could run the ALTER TABLE command as follows: alter table my_table add columns(field4 string, field5 string) CASCADE; Let us know whether this issue occurs consistently after adding new columns, and which Cloudera product version you are on.
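As a minimal sketch of the approach above, assuming a hypothetical partitioned table my_table with a dt partition column (names chosen only for illustration), the CASCADE keyword is what pushes the new columns into the metadata of partitions that already exist:

```sql
-- Hypothetical partitioned table used only to illustrate the CASCADE behaviour.
CREATE TABLE my_table (
  field1 STRING,
  field2 STRING,
  field3 STRING
)
PARTITIONED BY (dt STRING);

-- Create one partition so the effect on existing partitions can be checked.
ALTER TABLE my_table ADD PARTITION (dt='2022-10-18');

-- CASCADE propagates the new columns to the metadata of existing partitions
-- as well as to the table itself (the default, RESTRICT, only changes the
-- table-level schema).
ALTER TABLE my_table ADD COLUMNS (field4 STRING, field5 STRING) CASCADE;

-- The existing partition now also reports field4 and field5.
DESCRIBE my_table PARTITION (dt='2022-10-18');
```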
10-18-2022
01:07 PM
1 Kudo
@KPG1 We only support upgrading an existing cluster using Ambari or Cloudera Manager, rather than importing/updating the jars manually. The latest CDP Private Cloud Base and our Public Cloud use Hadoop version 3.1.1 at this point.
09-27-2022
06:25 AM
May I know if the table was created from data that was exported in some other format, such as 'txt'? If so, note that starting from the CDP 7.x versions the default file format is Parquet. So when the table is imported it will be created in Parquet format, but its original files will be in 'txt' format.
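A minimal sketch of how to check and work around this mismatch, assuming a hypothetical table name imported_table plus made-up columns and location: DESCRIBE FORMATTED shows which InputFormat/SerDe the metastore recorded, and declaring STORED AS TEXTFILE explicitly keeps the table definition aligned with text-format data files instead of relying on the cluster-wide default.

```sql
-- Inspect the format the imported table was created with
-- (see the InputFormat / OutputFormat / SerDe rows in the output).
DESCRIBE FORMATTED imported_table;

-- If the underlying files are plain text, declare the format explicitly.
CREATE EXTERNAL TABLE imported_table_txt (
  id   INT,
  name STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/data/imported_table';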
09-14-2022
01:07 PM
@Asim- Unless your final table has to be a Hive managed (ACID) table, you could incrementally update the Hive table directly using Sqoop, e.g.: sqoop import --connect jdbc:oracle:thin:@xx.xx.xx.xx:1521:ORCL --table EMPLOYEE --username user1 --password welcome1 --incremental lastmodified --merge-key employee_id --check-column emp_timestamp --target-dir /usr/hive/warehouse/external/empdata/ Otherwise, the way you are trying is actually the way Cloudera recommends it.
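If the target does have to be a managed (ACID) table, one common pattern is to land the incremental Sqoop output in a staging table and merge it in. The sketch below only illustrates that pattern; the employee and employee_staging table names and the name column are hypothetical, with the key and timestamp columns borrowed from the Sqoop command above, and it is not necessarily what the original poster was doing.

```sql
-- Merge freshly imported rows (staged by Sqoop) into the managed ACID table.
MERGE INTO employee AS t
USING employee_staging AS s
ON t.employee_id = s.employee_id
WHEN MATCHED THEN UPDATE SET
  name          = s.name,
  emp_timestamp = s.emp_timestamp
WHEN NOT MATCHED THEN INSERT VALUES (s.employee_id, s.name, s.emp_timestamp);
```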
09-08-2022
07:51 AM
Thanks for your quick reply @smruti, much appreciated. I have gone through this approach and will certainly consider it for our DR strategy.
08-31-2022
05:05 AM
@mohammad_shamim Did you have Hive HA configured in the CDH cluster? If so, you need to make sure that an equal number of HS2 instances is created in the CDP cluster, because without that HA cannot be attained. Also, make sure that there is no HiveServer2 instance created under the "Hive" service in CDP; it should only be present under the Hive on Tez service.
08-24-2022
05:10 AM
@ssuja, Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
08-08-2022
01:56 AM
I didn't notice that the property "external" is case sensitive; step 2 should be ALTER TABLE alter_test SET TBLPROPERTIES('EXTERNAL'='false'); and then the location is changed in CDP 7.1.1. However, in CDP 7.1.7 it does not work even if I set the property "TRANSLATED_TO_EXTERNAL" to true after creating the table. Could you try the steps and attach your results? Thanks.
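A minimal sketch of the steps being discussed, assuming the alter_test table name from the post and a hypothetical single column; running DESCRIBE FORMATTED before and after shows whether the table type and location actually change once each property is set.

```sql
-- Step 1: create the table (in CDP, a plain CREATE TABLE may be translated
-- to an external table with TRANSLATED_TO_EXTERNAL='TRUE').
CREATE TABLE alter_test (id INT);

-- Check the table type, location and TBLPROPERTIES before changing anything.
DESCRIBE FORMATTED alter_test;

-- Step 2: the property name is case sensitive, so it must be 'EXTERNAL'.
ALTER TABLE alter_test SET TBLPROPERTIES('EXTERNAL'='false');

-- Property mentioned for CDP 7.1.7 in the discussion above.
ALTER TABLE alter_test SET TBLPROPERTIES('TRANSLATED_TO_EXTERNAL'='true');

-- Verify whether the table type / location changed.
DESCRIBE FORMATTED alter_test;
```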
08-04-2022
02:11 PM
@Imran_chaush Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
07-15-2022
07:25 PM
Could you help explain the meaning of the first two statements ("Set hive...")? What would happen if we did not include those two statements, and how would that impact our query? Thanks.