Member since
11-18-2020
5
Posts
1
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 1427 | 10-09-2022 09:59 AM |
09-06-2023
03:04 AM
1 Kudo
After upgrading from CM 7.6.1 to CM 7.7.1, we were unable to connect to the multiple PQS instances through Knox using ODBC; we always got a 500 error. There were no issues when using JDBC.
The cause of the issue is a topology change introduced by the CM upgrade for services such as HIVE and AVATICA:
--------------------------------------------------------------------------------------
from:
<value>enabled=true;maxFailoverAttempts=3;failoverSleep=1000;maxRetryAttempts=300;retrySleep=1000;</value>
to:
<value>enableStickySession=true;noFallback=true;enableLoadBalancing=true</value>
----------------------------------------------------------------------------------------
This change was made to support the Knox load-balancer feature.
We followed the steps below to revert the change on an existing cluster:
Knox --> Configuration --> Knox Gateway Advanced Configuration Snippet (Safety Valve) for conf/cdp-resources.xml
<name>providerConfigs:pam</name>
<value>role=ha#ha.name=HaProvider#ha.param.HIVE=enabled=true;maxFailoverAttempts=3;failoverSleep=1000;maxRetryAttempts=300;retrySleep=1000;#ha.param.AVATICA=enabled=true;maxFailoverAttempts=3;failoverSleep=1000;maxRetryAttempts=300;retrySleep=1000;</value>
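As a minimal sketch, assuming this safety valve accepts standard Hadoop-style property XML (as CM advanced configuration snippets for XML files generally do), the name/value pair above would be wrapped like this:

```xml
<property>
  <name>providerConfigs:pam</name>
  <value>role=ha#ha.name=HaProvider#ha.param.HIVE=enabled=true;maxFailoverAttempts=3;failoverSleep=1000;maxRetryAttempts=300;retrySleep=1000;#ha.param.AVATICA=enabled=true;maxFailoverAttempts=3;failoverSleep=1000;maxRetryAttempts=300;retrySleep=1000;</value>
</property>
```

After saving the configuration, restart the Knox service so the change takes effect.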
After this change, the services are working again. So far so good.
DISCLAIMER:
This article is contributed by an external user. The steps may not be verified by Cloudera, may not apply to all use cases, and may be very specific to a particular distribution. Please proceed with caution and at your own risk. If needed, raise a support case to get confirmation.
11-05-2022
11:40 AM
You can troubleshoot it this way: 1. Execute the SELECT query and check a few sample rows. 2. Try to create the table using only 10 rows. 3. If that works, check the quality of the rest of the data.
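The steps above can be sketched in SQL (the table and column names here are hypothetical placeholders for your own):

```sql
-- 1. Inspect a few sample rows from the source table
SELECT * FROM source_table LIMIT 10;

-- 2. Try loading only a small subset into a new table
CREATE TABLE test_load AS
SELECT * FROM source_table LIMIT 10;

-- 3. If that succeeds, validate the remaining data, e.g. look for
--    unexpected NULLs or malformed values in key columns
SELECT COUNT(*) FROM source_table WHERE key_column IS NULL;
```

If the small load works but the full load fails, the problem is almost certainly in specific rows of the data rather than in the table definition.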
11-05-2022
11:32 AM
Can you check this solution? https://community.cloudera.com/t5/Support-Questions/zookeeper-server-Unable-to-load-database-on-disk/td-p/283400
10-09-2022
09:59 AM
What is the value of ranger.usersync.deletes.frequency? You can set it to 1 or 2 and check the user status again.
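If you need to set it explicitly, a minimal sketch (assuming the property is placed in ranger-ugsync-site.xml, e.g. via the Ranger Usersync advanced configuration snippet in CM):

```xml
<property>
  <name>ranger.usersync.deletes.frequency</name>
  <value>1</value>
</property>
```

A restart of the Ranger Usersync role is needed for the new value to apply.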
10-08-2022
11:55 PM
We have newly installed CDP-DC 7.1.7 and are trying to load data using a Spark job, but we are getting errors.

Payload:

java -cp /etc/hadoop/conf.cloudera.hdfs/ssl-client.xml:/etc/hbase/conf.cloudera.hbase/hbase-site.xml:/etc/hadoop/conf.cloudera.hdfs/core-site.xml:/etc/hadoop/conf.cloudera.hdfs/hdfs-site.xm:/data/scripts/LeaApp-1.0-SNAPSHOT.jar net.ba.lea.transformation.FileActions "/data/scripts/msc/IN/" "/tmp/nss_processing/" "/data/scripts/msc/reject/" "250" "LEA.DBM_CDR_FILE_HEAD" "NSS" "jdbc:phoenix:gzvlcdpnode01.ba.net:2181:/hbase:phoenix/gzvlcdpnode02@BA.NET:/etc/security/keytab/phoenix.keytab"

Error:

Can't find method newStub in org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService!

But the same code works fine with CDP-DC 7.1.3; we have changed the version only. Please guide us.
Labels:
- Apache Phoenix