Member since: 05-15-2019
Posts: 303
Kudos Received: 7
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1871 | 10-22-2024 04:07 AM
 | 3949 | 10-16-2024 12:56 PM
 | 1010 | 06-08-2022 10:49 AM
06-04-2025
11:01 AM
@NadirHamburg Can you elaborate on what you are looking for from Cloudera Support? It seems you are asking a question about XML storage files, which is not related to any Cloudera product. Did you mean to submit this question to Microsoft? Thank you
04-02-2025
08:37 AM
@mikecolux If your job requires hive-site.xml, it is not necessary to copy the file to /etc/spark/conf. Instead, you can export the following environment variable, which allows hive-site.xml to be picked up from /etc/hive/conf whenever it is needed:

export HADOOP_CONF_DIR=$HADOOP_CONF_DIR:/etc/hadoop/conf:/etc/spark/conf/yarn-conf/*:/etc/hive/conf

You can test this approach in a specific job or session first, and once it works, persist it in the Spark configuration: CM > Spark > Configuration > Spark Client Advanced Configuration Snippet (Safety Valve) for spark-conf/spark-env.sh
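For illustration, here is a minimal sketch of that test-then-persist workflow, assuming a YARN client-mode job (my_job.py is a placeholder for your own script, not something from your cluster):

```bash
# Point the session at the Hive and YARN client configs so hive-site.xml is
# found without copying it into /etc/spark/conf.
export HADOOP_CONF_DIR=$HADOOP_CONF_DIR:/etc/hadoop/conf:/etc/spark/conf/yarn-conf/*:/etc/hive/conf

# Run a test job in this same session; my_job.py is a hypothetical script name.
spark-submit --master yarn --deploy-mode client my_job.py

# If the job now resolves Hive tables correctly, persist the export line via
# CM > Spark > Configuration > Spark Client Advanced Configuration Snippet
# (Safety Valve) for spark-conf/spark-env.sh
```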
04-01-2025
08:44 AM
2 Kudos
Hello @mikecolux, if you are using CDP and have Hive on Tez set up on the cluster, then the Hive on Tez service will take care of this for you, and you will not need to configure anything manually.
01-03-2025
07:50 AM
1 Kudo
@yusufu Can you tell me which CDH or CDP version you are running when you hit this issue?
10-22-2024
04:07 AM
@jayes Unfortunately there is no compression setting for the Hive export; that feature dates back to when the Hive CLI was used, in the HDP days of Hortonworks. You will need to create your tables with compression enabled, so in your case do one of the following (see the sketch below):

1. Alter the table to add compression to the table properties, then do an insert overwrite on the table to compress it.
2. Create a new table with compression added to the table properties, then insert the data from the old table into the new one.

I would recommend using Snappy compression.
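As a rough sketch of the first option, run via Beeline (the table name, JDBC URL, and ORC storage format are assumptions here; for a Parquet table, set 'parquet.compression' instead of 'orc.compress'):

```bash
# Hypothetical example: enable Snappy on an existing ORC table, then rewrite
# its data so the files on disk are actually compressed.
beeline -u "jdbc:hive2://hs2-host:10000/default" \
  -e "ALTER TABLE my_table SET TBLPROPERTIES ('orc.compress'='SNAPPY');" \
  -e "INSERT OVERWRITE TABLE my_table SELECT * FROM my_table;"
```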
10-18-2024
05:47 AM
Please see the supported compression formats at the link below:
https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/managing-clusters/topics/cm-choosing-configuring-data-compression.html
When exporting in Hive, the data will be compressed.
10-16-2024
01:44 PM
1 Kudo
@Arathi Can you please open a case on the Cloudera Support Portal? Please attach the application log, the HiveServer2 logs from the time period when the job failed, and the Beeline console output from the failed query.
10-16-2024
01:37 PM
@jayes Unfortunately, Hive import/export is only supported for HDFS. The only method I know of to get the table and data into S3 is as follows; see the example below.

First, create a table that is mapped onto the S3 bucket and directory (on current CDP releases, use the s3a:// connector rather than the legacy s3n://):

CREATE TABLE tests3 (
  id BIGINT,
  time STRING,
  log STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION 's3n://bucket/directory/';

Then insert the data into the S3 table; when the insert completes, the directory will contain a CSV file:

INSERT OVERWRITE TABLE tests3
SELECT id, time, log FROM testcsvimport;
10-16-2024
01:25 PM
@allen_chu This looks like a YARN resource issue. I would recommend opening a case in the Cloudera Support Portal under the YARN component to get further assistance with this.
10-16-2024
12:56 PM
1 Kudo
Hello @Patriciabqc, it seems this was fixed in CDP 7.1.8 according to the TSB. Please see the Knowledge Base link below.
https://my.cloudera.com/knowledge/TSB-2022-600-Renaming-translated-external-partition-table?id=353902