Member since: 10-28-2020
Posts: 553
Kudos Received: 44
Solutions: 39
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 3550 | 07-23-2024 11:49 PM |
|  | 501 | 05-28-2024 11:06 AM |
|  | 880 | 05-05-2024 01:27 PM |
|  | 566 | 05-05-2024 01:09 PM |
|  | 600 | 03-28-2024 09:51 AM |
05-24-2024
03:02 AM
1 Kudo
@hadoopranger Make sure that [1] the Hive namespace "hiveserver2" in your connection string is correct, and [2] you have a valid Kerberos ticket on the client machine.
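To verify point [2], a quick check on the client (the principal below is only a placeholder):
$ klist                    # shows the current ticket cache and expiry times
$ kinit user@EXAMPLE.COM   # obtain a fresh ticket if none is valid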
05-08-2024
06:03 AM
@shofialau Great! Now we can download the MySQL JDBC connector and place it on the HMS node under /usr/share/java. Detailed information can be found here: https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade-hdp/topics/amb-install-mysql-jdbc.html#pnavId2
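For example (a sketch; the connector version and file names are placeholders, so adjust them to the archive you actually download):
$ tar xzf mysql-connector-java-8.0.xx.tar.gz
$ cp mysql-connector-java-8.0.xx/mysql-connector-java-8.0.xx.jar /usr/share/java/
$ chmod 644 /usr/share/java/mysql-connector-java-8.0.xx.jar
Some setups expect the jar renamed to mysql-connector-java.jar under /usr/share/java; the linked doc covers the exact steps.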
05-07-2024
07:55 AM
@shofialau Let's not get confused by the JDBC driver mentioned in the doc; we can ignore that for now. We have two options for the metastore DB situation: [1] recover the password of the Hive metastore DB user, or [2] create a new database and user. I had already shared the command to reset the DB user password (option 1). If you plan to create a new database (option 2), try the following commands:
mysql> CREATE DATABASE hmsdb;
mysql> CREATE USER 'hiveuser'@'metastorehost' IDENTIFIED BY 'mypassword';
mysql> REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'hiveuser'@'metastorehost';
mysql> GRANT ALL PRIVILEGES ON hmsdb.* TO 'hiveuser'@'metastorehost';
mysql> FLUSH PRIVILEGES;
mysql> quit;
Replace "metastorehost" with the Hive metastore node hostname.
05-05-2024
01:27 PM
@shofialau First of all, you could always create a new database and user for Hive and all other services. Refer to step 7 of https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/hive-metastore/topics/hive-mysql_.html If you want to reuse the existing DB and user, you should be able to reset the user's password in MySQL. Connect to MySQL as the root user and run commands like these:
UPDATE mysql.user SET authentication_string = PASSWORD('new_password') WHERE User = 'hive' AND Host = 'metastorehost';
FLUSH PRIVILEGES;
In case you are unsure of the root password, or you do not have root privileges, the root password itself can be reset as well; ref: https://dev.mysql.com/doc/refman/8.0/en/resetting-permissions.html
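Note that the PASSWORD() function was removed in MySQL 8.0, and direct updates to mysql.user are discouraged on 5.7 and later; on those versions the supported equivalent is:
mysql> ALTER USER 'hive'@'metastorehost' IDENTIFIED BY 'new_password';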
05-05-2024
01:09 PM
@jayes Instead of copying the data with the HDFS copy command, could you try EXPORT TABLE? e.g.
EXPORT TABLE test1 TO '/jay123/restore_jobid123243/jayesh/';
Then run your IMPORT command:
USE jayesh;
IMPORT TABLE test1 FROM '/jay123/restore_jobid123243/jayesh/';
The IMPORT TABLE command requires metadata along with the table data, and that metadata is dumped only when you use the EXPORT command.
05-01-2024
05:57 AM
@MorganMcEvoy What is cluster-head.domain.com? Is this a load balancer or an individual HS2 node? Also, what client tool are you using? Is it possible that it is not honoring the sslTrustStore parameter? A workaround would be to import the root CA cert into the default Java truststore on the client machine. ref: https://stackoverflow.com/questions/11700132/how-to-import-a-jks-certificate-in-java-trust-store
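For example, with keytool (a sketch; the alias and PEM path are placeholders, and changeit is only the default truststore password):
$ keytool -importcert -alias corp-root-ca -file /path/to/root-ca.pem \
    -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit
On newer JDKs the default truststore lives at $JAVA_HOME/lib/security/cacerts instead.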
04-30-2024
11:28 AM
1 Kudo
@MorganMcEvoy We have two different issues here:
1. ClassNotFoundException: com.ctc.wstx.io.InputBootstrapper. This seems to be due to an incompatible client version, which is using some Hadoop libraries that are not compatible with your version of Hive.
2. SunCertPathBuilderException: unable to find valid certification path to requested target. This basically means you need to specify the truststore file (the one that contains the root cert of the CA) in the connection string.
In case the application you are using is equivalent to Beeline, I think adding the following to the connection string should work:
;ssl=true;sslTrustStore=/var/lib/cloudera-scm-agent/agent-cert/cm-auto-global_truststore.jks;trustStorePassword=...
SSL=1 might not work because that syntax belongs to the Cloudera JDBC driver, while the Apache JDBC driver expects ssl=true. In case you have tried this already and it still fails, could you share the error message?
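For reference, a Beeline invocation using that string would look like this (a sketch; the hostname, port, and password are placeholders):
$ beeline -u "jdbc:hive2://hs2-host.example.com:10000/default;ssl=true;sslTrustStore=/var/lib/cloudera-scm-agent/agent-cert/cm-auto-global_truststore.jks;trustStorePassword=<password>"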
04-11-2024
05:48 AM
JDBC/ODBC drivers for Hive can be downloaded from the Cloudera website. The first thing we need to collect is the Hive endpoint from the Cloudera Management Console. This can be found at the bottom of the specific Data Hub window, and it will be in the following format:
jdbc:hive2://datahub1-master0.geo-1035.lskx-pvue.a4.cloudera.site/;ssl=true;transportMode=http;httpPath=datahub1/cdp-proxy-api/hive
Mainly, we need to furnish the following information in the appropriate fields:
1. Host(s): datahub1-master0.geo-1035.lskx-pvue.a4.cloudera.site
2. Port: 443
3. Authentication Mechanism: Username/Password
4. Thrift Transport: HTTP
5. Go to HTTP Options > HTTP Path: datahub1/cdp-proxy-api/hive (you will get this info from the Hive endpoint)
6. Go to SSL Options:
6.1. Check Enable SSL.
6.2. Check the Allow Self-signed Server Certificate check box.
6.3. Trusted Certificates: select the path to the PEM file containing the root CA cert of the Knox gateway. Note: you can download the TLS public certificate from Data Hub > Token Integration > TLS Public Certificate > Download the PEM file.
7. Save and Test Connection.
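On a Linux client the same fields map onto an odbc.ini entry. A minimal sketch, assuming the Cloudera Hive ODBC driver and its documented key names (the DSN name, driver path, and PEM path are placeholders):
[CDP-Hive]
Driver=/opt/cloudera/hiveodbc/lib/64/libclouderahiveodbc64.so
Host=datahub1-master0.geo-1035.lskx-pvue.a4.cloudera.site
Port=443
HiveServerType=2
# AuthMech 3 = Username/Password; ThriftTransport 2 = HTTP
AuthMech=3
ThriftTransport=2
HTTPPath=datahub1/cdp-proxy-api/hive
SSL=1
AllowSelfSignedServerCert=1
TrustedCerts=/path/to/knox-gateway-root-ca.pem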
03-28-2024
09:51 AM
1 Kudo
@hegdemahendra You may try the Cloudera Hive JDBC driver. The driver class name would be "com.cloudera.hive.jdbc.HS2Driver".
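For a quick test from Beeline, something like this should work (a sketch; the jar name, host, and port are placeholders, and the Cloudera JDBC jar must be on the client's classpath):
$ export HADOOP_CLASSPATH=/path/to/HiveJDBC42.jar
$ beeline -d com.cloudera.hive.jdbc.HS2Driver -u "jdbc:hive2://hs2-host.example.com:10000"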
03-20-2024
03:54 AM
2 Kudos
@Choolake See if this does the job for you:
...
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.JsonSerde'
WITH SERDEPROPERTIES (
  'separatorChar' = '|',
  'quoteChar' = '"'
)
STORED AS TEXTFILE LOCATION ...
This is a third-party SerDe; you may download it from https://code.google.com/archive/p/hive-json-serde/downloads
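Since this SerDe does not ship with Hive, register the jar in the session before creating or querying the table (the jar path is a placeholder):
hive> ADD JAR /path/to/hive-json-serde.jar;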