Member since
12-17-2020
37
Posts
6
Kudos Received
7
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 381 | 04-07-2024 10:23 AM
 | 393 | 04-03-2024 07:23 AM
 | 906 | 06-21-2023 12:37 AM
 | 1644 | 05-30-2023 11:08 AM
 | 528 | 11-23-2022 01:09 AM
04-07-2024
10:23 AM
2 Kudos
Hello all, I finally found the root cause. In /etc/hosts:

192.168.0.10 master1.customerdomain.com # Hostname Master1

This was the discrepancy: setting the hostname to the same value shown in /etc/hosts solved the issue. Command used:

# hostnamectl set-hostname master1.customerdomain.com

Then I restarted the agent daemon and the services in the cluster. Many thanks to the community for so many clues.
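To double-check for this kind of mismatch before touching anything, here is a minimal sketch (the IP and file are from the post above; the helper name is mine) that extracts the FQDN a hosts file maps to a given IP, so it can be compared against `hostnamectl --static`:

```shell
# Print the FQDN that a hosts file maps to a given IP (first match only).
# Helper name is illustrative, not from the original post.
# Usage: fqdn_for_ip <ip> [hosts_file]   (defaults to /etc/hosts)
fqdn_for_ip() {
  awk -v ip="$1" '$1 == ip {print $2; exit}' "${2:-/etc/hosts}"
}

# On the master from the post, the two values should match:
#   [ "$(hostnamectl --static)" = "$(fqdn_for_ip 192.168.0.10)" ] || echo "hostname mismatch"
```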
04-03-2024
05:51 PM
Hello @Juanes Yes, agreed, we will need to find out why that URL was wrong. Anyway, glad to know that the issue is resolved.
04-02-2024
03:39 AM
1 Kudo
Hello Community, I would like to raise a big inconsistency in the requirements. Starting with the Cloudera Support Matrix, which says: CDP Private Cloud Base 7.1.8 needs Cloudera Manager 7.7.1, and RHEL 8.4 will work as the OS.

Now, regarding NTP, the OS documentation states: "In RHEL 8, the NTP protocol is implemented only by the chronyd daemon, provided by the chrony package. The ntp daemon is no longer available. If you used ntp on your RHEL 7 system, you might need to migrate to chrony."

On the other hand, https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/kudu-management/topics/kudu-server-management-limitations.html says: "Kudu releases have only been tested with NTP. Other time synchronization providers such as Chrony may not work."

Could someone from the Kudu team bring some consistency here? RHEL 8 no longer ships ntpd, yet it is mandatory for Kudu... Please help.
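For anyone hitting this, a quick sketch to see which time-sync daemon a node actually runs (assumes a systemd host such as RHEL 8; the helper name is mine, not from the post):

```shell
# Report which time-sync daemon is active on this host: ntpd, chronyd, or none.
# Helper name is illustrative. Assumes systemd; prints "none" if neither runs
# or systemctl is unavailable.
time_sync_daemon() {
  for svc in ntpd chronyd; do
    if command -v systemctl >/dev/null 2>&1 && systemctl is-active --quiet "$svc"; then
      echo "$svc"
      return 0
    fi
  done
  echo "none"
}

time_sync_daemon
```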
03-19-2024
09:27 AM
Hello @Juanes Yes, you can use the Trial version and then update your license. Also, you should go with an external DB instead of the embedded one. For more details related to licensing, please take a look at the Product Documentation.
06-22-2023
02:32 PM
@Juanes I believe you just need to resolve the missing dependency. Check out this solution:

> pip3 install python-setuptools
> yum install impala-shell

https://community.cloudera.com/t5/Support-Questions/How-to-install-impala-shell-on-RHEL-8-3-to-communicate-with/m-p/313665
06-21-2023
12:37 AM
Hi all, The issue is not related to browsers; the UI simply has no option to modify the queues. After many tests I managed to add the queues and resources through the ResourceManager advanced configuration:

CM > YARN > Configuration > ResourceManager > Fair Scheduler XML Advanced Configuration Snippet (Safety Valve)
CM > YARN > Configuration > ResourceManager > yarn.resourcemanager.scheduler.class: org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler

This way the queues are defined.
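For reference, the content that goes into the Safety Valve field is a standard Fair Scheduler allocation file. A minimal sketch might look like the following (the queue names and resource figures are illustrative, not from the original post; adjust them to your cluster):

```xml
<allocations>
  <!-- Hypothetical queues for illustration only -->
  <queue name="root">
    <queue name="production">
      <weight>3.0</weight>
      <minResources>8192 mb, 4 vcores</minResources>
    </queue>
    <queue name="adhoc">
      <weight>1.0</weight>
    </queue>
  </queue>
  <queuePlacementPolicy>
    <rule name="specified"/>
    <rule name="default" queue="adhoc"/>
  </queuePlacementPolicy>
</allocations>
```

After saving the snippet, the ResourceManager needs a restart for the new queues to appear.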
05-30-2023
11:09 AM
OK! @Juanes 😉 thanks for the clarification.
11-23-2022
01:09 AM
Hello, I got the fix for this case; maybe this could help anyone having the same Kudu master consensus issue as me. Master1 is not voting. The consensus matrix is:

Config source | Replicas | Current term | Config index | Committed?
---------------+--------------+--------------+--------------+------------
Master1 A | A B C | 12026 | -1 | Yes
Master2 B | A B C* | 12026 | -1 | Yes
Master3 C | A B C* | 12026 | -1 | Yes

The workaround is:

A) Stop the problematic master.

B) Run the following command on the problematic master:

sudo -u kudu kudu local_replica delete --fs_wal_dir=/var/kudu/master --fs_data_dirs=/var/kudu/master 00000000000000000000000000000000 -clean_unsafe

C) Check which master is the Kudu leader in the web UI:

a98a1f26d0254293b6e17e9daf8f6ef8 822fcc68eff448269c9200a8c4c2ecc8 LEADER 2022-11-22 07:18:21 GMT rpc_addresses { host: "sdzw-hpas-35" port: 7051 } http_addresses { host: "sdzw-hpas-35" port: 8051 } software_version: "kudu 1.13.0.7.1.6.0-297 (rev 9323384dbd925202032a965e955979d6d2f6acb0)" https_enabled: false

D) Copy the replica from the leader:

sudo -u kudu kudu local_replica copy_from_remote --fs_wal_dir=/wal/kudu/wal --fs_data_dirs=/wal/kudu/data 00000000000000000000000000000000 <active_leader_fqdn>:7051

For example:

# sudo -u kudu /opt/cloudera/parcels/CDH-7.1.6-1.cdh7.1.6.p0.10506313/bin/../lib/kudu/bin/kudu local_replica copy_from_remote --fs_wal_dir=/var/kudu/master --fs_data_dirs=/var/kudu/master 00000000000000000000000000000000 sdzw-hpas-35.nrtsz.local:7051

E) Stop the remaining two masters.

F) Then start all three masters.
11-14-2022
03:44 AM
1 Kudo
@KPG1 I can think of an Oracle database HA feature, Oracle RAC, where we give the SCAN address as the JDBC string to the metastore, so the SCAN IP will take care of load balancing. You would need to check with the MySQL vendor (Oracle) whether they have something similar to Oracle RAC. You can have HMS HA, but all HMS instances must point to the same database; you cannot have two HMS instances pointing to different databases. Please let me know if you have any queries, and please "Accept As Solution" if your queries are answered.
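As a hedged illustration of the RAC approach: the metastore's connection URL (the standard `javax.jdo.option.ConnectionURL` property) would point at the SCAN listener rather than any single node. The SCAN hostname, port, and service name below are hypothetical:

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <!-- Hypothetical SCAN listener address and Oracle service name -->
  <value>jdbc:oracle:thin:@//rac-scan.example.com:1521/hivemeta</value>
</property>
```

Because every HMS instance uses the same SCAN address, all of them resolve to the same underlying database, which satisfies the single-database requirement mentioned above.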
11-11-2022
03:54 AM
Hello, In a CDP 7.1.6 + Cloudera Manager 7.3.1 cluster (3 masters + 3 workers), I'm getting this error all the time:

Corruption: master consensus error: there are master consensus conflicts

This is the cluster ksck:

Master Summary
UUID | Address | Status
----------------------------------+--------------------------+---------
5620e4a103894151b7bdee5e436f37d8 | master-2.local | HEALTHY
9cea3b56cc9b4be4846a02c0d89be753 | master-1.local | HEALTHY
a98a1f26d0254293b6e17e9daf8f6ef8 | master-3.local | HEALTHY

All reported replicas are:
A = 9cea3b56cc9b4be4846a02c0d89be753
B = 5620e4a103894151b7bdee5e436f37d8
C = a98a1f26d0254293b6e17e9daf8f6ef8

The consensus matrix is:
Config source | Replicas | Current term | Config index | Committed?
---------------+--------------+--------------+--------------+------------
A | A B C | 10120 | -1 | Yes
B | A B* C | 10120 | -1 | Yes
C | A B* C | 10120 | -1 | Yes

It seems node A is not voting. This is the log output:

W1111 11:12:00.526211 18688 leader_election.cc:334] T 00000000000000000000000000000000 P 9cea3b56cc9b4be4846a02c0d89be753 [CANDIDATE]: Term 10122 pre-election: RPC error from VoteRequest() call to peer 5620e4a103894151b7bdee5e436f37d8 (master-2:7051): Network error: Client connection negotiation failed: client connection to 10.157.136.55:7051: connect: Connection refused (error 111)
W1111 11:12:22.683107 18688 leader_election.cc:334] T 00000000000000000000000000000000 P 9cea3b56cc9b4be4846a02c0d89be753 [CANDIDATE]: Term 10122 pre-election: RPC error from VoteRequest() call to peer 5620e4a103894151b7bdee5e436f37d8 (master-2:7051): Timed out: RequestConsensusVote RPC to 10.157.136.55:7051 timed out after 7.916s (SENT)

There is connectivity:

# nc -z -v 10.157.136.55 7051
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 10.157.136.55:7051.
Ncat: 0 bytes sent, 0 bytes received in 0.01 seconds.

The masters have been restarted several times, as has the whole cluster... Any idea how to fix this? Thanks!
Labels:
- Apache Kudu