Member since: 10-28-2020
Posts: 578
Kudos Received: 46
Solutions: 40
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 734 | 02-17-2025 06:54 AM |
| | 4981 | 07-23-2024 11:49 PM |
| | 850 | 05-28-2024 11:06 AM |
| | 1414 | 05-05-2024 01:27 PM |
| | 893 | 05-05-2024 01:09 PM |
07-24-2023
10:22 AM
@novice_tester Could you please make this a bit clearer for us? What DDLs (CREATE TABLE statements) did you use to create the managed table and the external table? Creating a managed table in any location outside the 'hive.metastore.warehouse.dir' path should raise the following error: "A managed table's location should be located within managed warehouse root directory or within its database's managedLocationUri." /warehouse/tablespace/managed/hive/ seems like the warehouse directory for the external tables, so I doubt creating the managed table picked this location on its own. Could you also share the outputs of the following commands from beeline:
beeline> set hive.metastore.warehouse.dir;
beeline> set hive.metastore.warehouse.external.dir;
An external table can be created with the LOCATION clause, and we can set any path with it. Refer to this Cloudera Doc.
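For reference, a minimal sketch of the two DDL shapes in question; the table names and the external path below are placeholders for illustration, not values from the original thread:

```sql
-- Managed table: its location is resolved under hive.metastore.warehouse.dir,
-- so no LOCATION clause is needed (or, normally, allowed outside that root).
CREATE TABLE demo_managed (id INT);

-- External table: the LOCATION clause may point at any accessible path.
-- The path here is a placeholder following the usual CDP external warehouse layout.
CREATE EXTERNAL TABLE demo_external (id INT)
LOCATION '/warehouse/tablespace/external/hive/demo_external';
```

Comparing the two DDLs you actually ran against this pattern should show which statement placed a table under the wrong warehouse root.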
07-04-2023
11:55 AM
@Mannoj RDBMS HA for metastore is not officially supported yet. We have a knowledge article on the same. Cloudera's statement on RDBMS HA can be found here.
07-03-2023
01:35 PM
@vaibhavgokhale You could try: --conf spark.sql.hive.conf.list="tez.queue.name=queue1"
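For context, a hedged sketch of where that flag sits on a spark-submit invocation; the class name and jar are placeholders, only the --conf flag is from the suggestion above:

```shell
# Placeholder application class and jar; the --conf line is the relevant part.
spark-submit \
  --conf spark.sql.hive.conf.list="tez.queue.name=queue1" \
  --class com.example.MyApp \
  my-app.jar
```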
06-28-2023
11:31 AM
@Choolake Try this: count1=$(beeline -u "jdbc:hive2://dev-lisa.realm.com:10000/default;principal=hive/dev-lisa.intranet.slt.com.lk@REALM.COM;ssl=true;sslTrustStore=/var/lib/cloudera-scm-agent/agent-cert/cm-auto-global_truststore.jks" --showHeader=false --silent=true --outputformat=tsv2 -e 'SELECT count(*) from table_name;')
These beeline flags strip all the extraneous text from stdout, leaving only the count. Compute count2 the same way.
06-26-2023
06:17 AM
@aafc Could you please share the CDP (Hive) version you are using? Also, have you tried the latest Cloudera JDBC driver for Hive?
06-24-2023
11:53 AM
@rahuledavalath In a 3-node ZK cluster, we can afford to lose only 1 instance/node at a time, since a quorum of 2 must survive. Spreading this cluster across two geo-locations puts us in a difficult position: the site hosting two nodes becomes a single point of failure, because losing it also loses the quorum. So it is a good idea to build the cluster across 3 regions, with one instance in each.
06-21-2023
06:39 AM
@Choolake Try:
entries=$((count2-count1))
This should work provided both variables hold valid integer values.
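To see the arithmetic in isolation, here is a minimal sketch with hypothetical stand-in values in place of the two beeline results:

```shell
# Hypothetical stand-in values for the two beeline row counts
count1=120
count2=125

# POSIX arithmetic expansion; both operands must be plain integers
entries=$((count2 - count1))
echo "$entries"   # prints: 5
```

If either beeline query returned extra text instead of a bare number, the expansion fails, which is why the --silent/--showHeader flags from the earlier reply matter.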
06-14-2023
10:47 PM
@xiamu This error can appear when the DataNodes are not healthy. Does the job fail repeatedly, or does it succeed at times? Have you tried running it as a different user? This is where it is failing:

```java
private void setupPipelineForAppendOrRecovery() throws IOException {
  // Check number of datanodes. Note that if there is no healthy datanode,
  // this must be internal error because we mark external error in striped
  // outputstream only when all the streamers are in the DATA_STREAMING stage
  if (nodes == null || nodes.length == 0) {
    String msg = "Could not get block locations. " + "Source file \""
        + src + "\" - Aborting..." + this;
    LOG.warn(msg);
    lastException.set(new IOException(msg));
    streamerClosed = true;
    return;
  }
  setupPipelineInternal(nodes, storageTypes, storageIDs);
}
```
06-14-2023
10:21 PM
@haihua Do you mean it works from beeline but not from the Hive CLI? If it works with beeline, why don't we run it with that instead?
beeline ... -f query.hql
Also, could you try granting the required privileges to the role WITH GRANT OPTION? Refer to https://docs.cloudera.com/documentation/enterprise/latest/topics/sg_hive_sql.html#grant_privilege_with_grant
06-09-2023
08:47 AM
@snm1523 From beeline:
use sys;
select cd_id, count(cd_id) as column_count from columns_v2 group by cd_id order by cd_id asc; -- this returns the column count for each table
Every individual table has a unique cd_id. To map the table names to cd_id, try the following:
select t.tbl_name, s.cd_id from tbls t join sds s on t.sd_id = s.sd_id order by s.cd_id asc;
You could also merge the two queries to get the output together.
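A sketch of that merged form, joining through the same sys-db tables (tbls, sds, columns_v2) referenced above; treat the exact join as an illustration to verify against your metastore version, not a guaranteed query:

```sql
-- Sketch: combine both queries so table name and column count appear together.
use sys;
select t.tbl_name, c.cd_id, count(c.column_name) as column_count
from tbls t
join sds s on t.sd_id = s.sd_id
join columns_v2 c on s.cd_id = c.cd_id
group by t.tbl_name, c.cd_id
order by c.cd_id asc;
```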