Member since: 05-19-2020
Posts: 23
Kudos Received: 10
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3745 | 10-15-2022 01:58 AM
 | 830 | 05-31-2022 01:41 AM
 | 1986 | 03-10-2022 02:17 AM
10-15-2022
01:58 AM
Hello @hanumanth In addition to @mszurap's response on HMS heap size tuning, please consider the document below, and make sure the underlying host has enough memory; otherwise you will end up with a memory overcommit issue. https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/admin_hive_tuning.html
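As a quick sanity check before raising the HMS heap, you can total the heaps you plan to run on the host and compare the sum against physical memory. A minimal shell sketch; every number below is a made-up placeholder, not a sizing recommendation:

```shell
# All values are hypothetical placeholders for illustration only.
HMS_HEAP_MB=8192     # proposed Hive Metastore heap
HS2_HEAP_MB=12288    # HiveServer2 heap on the same host
OTHER_MB=16384       # OS + other services headroom
HOST_MB=32768        # total physical memory on the host

TOTAL_MB=$((HMS_HEAP_MB + HS2_HEAP_MB + OTHER_MB))
if [ "$TOTAL_MB" -gt "$HOST_MB" ]; then
  echo "overcommit: ${TOTAL_MB} MB requested on a ${HOST_MB} MB host"
else
  echo "ok: ${TOTAL_MB} MB fits in ${HOST_MB} MB"
fi
```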
05-31-2022
01:41 AM
1 Kudo
Hello @Amn_468 Yes, CDH 6.3.3 supports Oracle 19: https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_database_requirements.html Cheers! Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
03-22-2022
10:41 AM
7 Kudos
Steps to connect from a legacy cluster to CDP Public Cloud Hive:

1. Build a connection string using the Hive Server endpoint from the DataHub, adding sslTrustStore and trustStorePassword along with the workload username and password for PAM authentication:

jdbc:hive2://master0.repro.cloudera/default;transportMode=http;httpPath=new/cdp-proxy-api/hive;ssl=true;sslTrustStore=gateway-client-trust.jks;trustStorePassword=*****;user=chella;

2. Download the JKS file from CDP Knox (Token Integration) and copy it to the HDP/CDH host.

3. Get the latest JDBC jar "hive-jdbc-3.1.0-SNAPSHOT-standalone.jar" from CDW: CDP Control Plane > Management Console > Data Warehouse > click the three dots on any Hive VW and select "Download JDBC JAR".

4. Copy the jar "hive-jdbc-3.1.0-SNAPSHOT-standalone.jar" to the HDP/CDH host.

5. Start beeline on the HDP/CDH host.

6. Add the jar with the following command:

Beeline version 3.1.0.3.1.5.26-1 by Apache Hive
0: jdbc:hive2://c1544-node2> !addlocaldriverjar /root/hive-jdbc-3.1.0-SNAPSHOT-standalone.jar
scan complete in 22ms

7. Connect using the string built in step 1:

0: jdbc:hive2://c1544-node2> !connect jdbc:hive2://master0.repro.cloudera/default;transportMode=http;httpPath=new/cdp-proxy-api/hive;ssl=true;sslTrustStore=gateway-client-trust.jks;trustStorePassword=*****;user=chella;
Connecting to jdbc:hive2://master0.repro.cloudera/default;transportMode=http;httpPath=new/cdp-proxy-api/hive;ssl=true;sslTrustStore=gateway-client-trust.jks;trustStorePassword=*****;user=chella;
Enter password for jdbc:hive2://master0.repro.cloudera/default: *************
Connected to: Apache Hive (version 3.1.3000.7.2.10.6-1)
Driver: Hive JDBC (version 3.1.0.3.1.5.26-1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
1: jdbc:hive2://master0.repro>
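The connection above can also be scripted. A sketch that rebuilds the same URL from its parts — the hostname, HTTP path, truststore, and user are the placeholder values from this example, and the truststore password stays masked:

```shell
# Values below are the example placeholders from the steps above.
HS2_HOST="master0.repro.cloudera"
HTTP_PATH="new/cdp-proxy-api/hive"
TRUSTSTORE="gateway-client-trust.jks"
WORKLOAD_USER="chella"

URL="jdbc:hive2://${HS2_HOST}/default;transportMode=http;httpPath=${HTTP_PATH};ssl=true;sslTrustStore=${TRUSTSTORE};trustStorePassword=*****;user=${WORKLOAD_USER};"
echo "$URL"
# beeline -u "$URL"   # run on a host that has beeline and the standalone JDBC jar
```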
03-14-2022
09:14 PM
2 Kudos
Perform the following steps to access the Hive Metastore database:

1. SSH to the Data Lake CM node as the 'cloudbreak' user.
2. Run 'sudo -i' to gain root access.
3. Run the following:

source activate_salt_env
export PGPASSWORD=$(salt-call pillar.get postgres:hive:password 2>/dev/null | tail -n 1 | awk '{print $1}')
HIVE_DB=$(salt-call pillar.get postgres:hive:database 2>/dev/null | tail -n 1 | awk '{print $1}')
HIVE_DB_USER=$(salt-call pillar.get postgres:hive:user 2>/dev/null | tail -n 1 | awk '{print $1}')
psql -U ${HIVE_DB_USER} -d ${HIVE_DB}

For example:

[root@xxxx-master0 ~]# source activate_salt_env
(salt_3001.8) [root@xxxx-master0 ~]# export PGPASSWORD=$(salt-call pillar.get postgres:hive:password 2>/dev/null| tail -n 1 | awk '{print $1}')
(salt_3001.8) [root@xxxx-master0 ~]# HIVE_DB=$(salt-call pillar.get postgres:hive:database 2>/dev/null| tail -n 1 | awk '{print $1}')
(salt_3001.8) [root@xxxx-master0 ~]# HIVE_DB_USER=$(salt-call pillar.get postgres:hive:user 2>/dev/null| tail -n 1 | awk '{print $1}')
(salt_3001.8) [root@xxxx-master0 ~]# psql -U ${HIVE_DB_USER} -d ${HIVE_DB} -c "SELECT version();"
version
----------------------------------------------------------------------------------------------------------
PostgreSQL 14.11 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-20), 64-bit
(1 row)
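The value extraction in those commands is just last-line parsing of the salt-call output. A small sketch of that parsing against a fake two-line salt-call response (no Salt needed to try it); the helper name get_pillar_value is hypothetical:

```shell
# Hypothetical helper: extract the value from `salt-call pillar.get` output,
# which prints the value (indented) on its last line.
get_pillar_value() {
  # real use would pipe in: salt-call pillar.get "$1" 2>/dev/null
  tail -n 1 | awk '{print $1}'
}

# Fake salt-call output, for illustration only:
printf 'local:\n    hive_db_user\n' | get_pillar_value
```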
03-10-2022
02:17 AM
Hello @Rajeshhadoop
Do we have any changes regarding external or internal tables? Yes. In Hive 3, all internal/managed tables are ACID by default, and external tables are non-transactional/non-ACID:
https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/using-hiveql/topics/hive_hive_3_tables.html
https://docs.cloudera.com/cdp-private-cloud-base/7.1.3/securing-hive/topics/hive_external_table_access.html
Any limitations on the number of rows/partitions, or code/syntax changes? No. In Hive 3 you will find many additional features:
https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade/topics/hive-apache-hive-3-architectural-overview.html
https://docs.cloudera.com/runtime/7.2.6/hive-introduction/topics/hive_whats_new_in_this_release_hive.html
Cheers! Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
03-01-2022
06:15 AM
Hello Rajesh, We understand that you are planning to migrate to Hive 3 on CDH/CDP Private Cloud Base and are looking for the feature-wise changes from Hive 2 to Hive 3, as well as any potential risk areas. Please review the articles below and let us know if they help:
https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade-cdh/topics/ug_cdh_hive_preupgrade_tasks.html
https://docs.cloudera.com/cdp-private-cloud-base/7.1.3/hive-introduction/topics/hive-apache-hive-3-architectural-overview.html
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.0.1/using-hiveql/content/hive_hive_3_tables.html
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/hive-overview/content/hive_upgrade_changes.html
https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade-cdh/topics/hive-unsupported.html
Cheers! Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
10-06-2021
08:34 AM
@vincentD I hope it helped to resolve your issue. Regarding your query about upgrading CDH 6.3.4 to a newer software version, please explore the CDP path below:
https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/release-guide/topics/cdpdc-release-notes-links.html
https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/release-guide/topics/cm-cdh-runtime-versions.html
https://docs.cloudera.com/cdp-private-cloud/latest/release-summaries/topics/announcement-202108-717.html
https://docs.cloudera.com/cdp-private-cloud/latest/index.html
Cheers! Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
09-24-2021
06:57 AM
@vincentD To suppress this error, update /etc/cloudera-scm-agent/config.ini on each of the affected hosts:
1. Find the line that lists the nodev filesystems:
# The list of non-device (nodev) filesystem types which will be monitored.
monitored_nodev_filesystem_types=nfs,nfs4,tmpfs
2. Change the uncommented line to read:
monitored_nodev_filesystem_types=nfs,nfs4
3. Restart the Cloudera Manager agent:
service cloudera-scm-agent restart
Hope this helps. Cheers! Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
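The config edit can be scripted with sed. A sketch that dry-runs the substitution on the expected line; the in-place form (commented out) should only be run on each affected host after backing up the file:

```shell
CONF=/etc/cloudera-scm-agent/config.ini
# On a real host, back up first, then edit in place:
# cp "$CONF" "$CONF.bak" && \
#   sed -i 's/^\(monitored_nodev_filesystem_types=nfs,nfs4\),tmpfs$/\1/' "$CONF"

# Dry run of the same substitution on the expected line:
echo 'monitored_nodev_filesystem_types=nfs,nfs4,tmpfs' \
  | sed 's/^\(monitored_nodev_filesystem_types=nfs,nfs4\),tmpfs$/\1/'
```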
09-24-2021
12:46 AM
@Chetankumar Have you tried restarting cloudera-scm-agent and cloudera-scm-server on the CM host? If not, it's worth trying to see if CM then picks up the custom Spark2 parcel in the UI. Cheers! Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
09-24-2021
12:28 AM
@golibrary It looks like it is failing at authentication. Please set the "AuthMech" property based on your cluster setup.

Using no authentication: set the AuthMech property to 0. For example:
jdbc:impala://localhost:21050;AuthMech=0

Using Kerberos (for information on operating Kerberos, refer to the documentation for your operating system), configure the Cloudera JDBC Driver for Impala as follows:
1. Set the AuthMech property to 1.
2. If your Kerberos setup does not define a default realm, or if the realm of your Impala server is not the default, set the appropriate realm using the KrbRealm property.
3. Set the KrbHostFQDN property to the fully qualified domain name of the Impala host.
4. Set the KrbServiceName property to the service name of the Impala server.
For example:
jdbc:impala://localhost:21050;AuthMech=1;KrbRealm=EXAMPLE.COM;KrbHostFQDN=impala.example.com;KrbServiceName=impala

Please refer to the document below; the authentication information is on pages 13 and 14:
https://docs.cloudera.com/documentation/other/connectors/impala-jdbc/2-5-24/Cloudera-JDBC-Driver-for-Impala-Install-Guide-2-5-24.pdf
Cheers! Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
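The two URL shapes can be composed from variables so the host and port are set in one place. A sketch using the placeholder host, realm, and service values from the driver guide's examples:

```shell
# Placeholder values from the driver guide's examples, not a real deployment.
IMPALA_HOST="localhost"
IMPALA_PORT=21050

# AuthMech=0: no authentication
NOAUTH_URL="jdbc:impala://${IMPALA_HOST}:${IMPALA_PORT};AuthMech=0"

# AuthMech=1: Kerberos (realm, FQDN, and service name per your krb5 setup)
KRB_URL="jdbc:impala://${IMPALA_HOST}:${IMPALA_PORT};AuthMech=1;KrbRealm=EXAMPLE.COM;KrbHostFQDN=impala.example.com;KrbServiceName=impala"

echo "$NOAUTH_URL"
echo "$KRB_URL"
```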