Member since: 12-17-2020
Posts: 29
Kudos Received: 2
Solutions: 5
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 530 | 06-21-2023 12:37 AM
 | 871 | 05-30-2023 11:08 AM
 | 349 | 11-23-2022 01:09 AM
 | 825 | 08-25-2022 05:48 AM
 | 846 | 08-24-2022 12:36 AM
06-21-2023
12:48 AM
Hello, I wanted to install impala-shell for remote querying against my CDP cluster, but without much success. The latest impala-shell client I could find is at archive.cloudera.com/p/cdh7/7.1.2.1/redhat7/yum; there is nothing for a 7.1.8 version, nothing matching my CDP release (7.1.8)... I ended up going with this "old" version, but now I'm facing these issues: I already have Python 2 and 3 installed, but I'm not sure what else to try... there is no information at all in the Cloudera docs, nor on the Apache Impala web page 😞
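A minimal sketch of an alternative route via the standalone impala-shell client published on PyPI, assuming pip is available; the hostname and port below are placeholders, not my actual cluster:

```
# Install the standalone impala-shell client from PyPI (placeholder host/port below)
pip install impala-shell
impala-shell -i impalad-host.example.com:21000
```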
06-21-2023
12:37 AM
Hi all, the issue is not related to browsers; the UI simply does not offer the option to modify the queues... After many tests I managed to add the queues and resources through the ResourceManager advanced configuration:
[CM] > Yarn > Configuration > ResourceManager > Fair Scheduler XML Advanced Configuration Snippet (Safety Valve)
[CM] > Yarn > Configuration > ResourceManager > yarn.resourcemanager.scheduler.class: org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler
This way the queues are defined.
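As an illustration of what that safety valve can hold, a minimal sketch of a Fair Scheduler allocation snippet; the queue name and resource limits are placeholders, not the values actually used here:

```
<allocations>
  <!-- Placeholder queue; adjust the name and limits to your workloads -->
  <queue name="analytics">
    <minResources>4096 mb,2 vcores</minResources>
    <maxResources>16384 mb,8 vcores</maxResources>
    <weight>1.0</weight>
  </queue>
</allocations>
```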
05-30-2023
07:25 AM
Hi again, you should be able to see the tablet size of every table in the Kudu tablet server UI: http://KUDUTABLET1:8050/tablets. Go to "Tablets" in the top menu and search for your desired table in the search box. You will see all of its tablets and some useful information such as Tablet ID, Partition, State, On-disk size, and RaftConfig (master). From there you can check whether the tablets are roughly similar in size.
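The same page can also be pulled from the command line if you prefer; a rough sketch, where the hostname and the grep filter are only illustrative:

```
# Fetch the tablet list from the tablet server web UI and filter for size information
curl -s http://KUDUTABLET1:8050/tablets | grep -i "on-disk"
```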
05-30-2023
02:52 AM
Hi @yagoaparecidoti, the key here is not the table size but the tablet size. A 50 GB table could have 50 tablets of 1 GB each (that's good), or it could have 2 tablets of 25 GB each (that's not so good: the recommended target size for tablets is under 10 GiB). You can take a look in your Kudu master UI at http://Master:8051/tables and check your tables and partitions (tablets). I'm using this query in the Cloudera Manager chart builder to see Kudu table sizing: select total_kudu_on_disk_size_across_kudu_replicas where category=KUDU_TABLE
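The same chart builder (tsquery) statement formatted as a block, with a second, hypothetical variant narrowed to a single table; the RLIKE filter and the table-name pattern are assumptions for illustration, not part of the original query:

```
SELECT total_kudu_on_disk_size_across_kudu_replicas WHERE category = KUDU_TABLE
SELECT total_kudu_on_disk_size_across_kudu_replicas WHERE category = KUDU_TABLE AND entityName RLIKE ".*my_table.*"
```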
05-29-2023
07:24 AM
Good afternoon, I'm about to deploy CDP 7.1.8 and CM 7.7.1, but it seems the licensing is going to take some time. My questions: is the Trial version the same as the licensed one? Can I install the Trial version and then update the cluster with the license later? The key question: I'm about to build an HA environment, which means deploying an external PostgreSQL database; can I install the Trial but avoid using the embedded one? Many thanks in advance.
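On the external-database point, a minimal sketch of pointing Cloudera Manager at an external PostgreSQL instance instead of the embedded one, assuming the standard schema-preparation script shipped with CM; database name, user, and password are placeholders:

```
# Run on the Cloudera Manager host after creating the scm database/user in PostgreSQL
# (database name, user, and password below are placeholders)
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh postgresql scm scm scm_password
```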
11-23-2022
01:09 AM
Hello, I got the fix for this case; maybe it helps anyone hitting the same Kudu master consensus issue as me. Master1 is not voting. The consensus matrix is:

 Config source | Replicas | Current term | Config index | Committed?
---------------+----------+--------------+--------------+------------
 Master1 A     | A B C    | 12026        | -1           | Yes
 Master2 B     | A B C*   | 12026        | -1           | Yes
 Master3 C     | A B C*   | 12026        | -1           | Yes

The workaround is:
A) Stop the problematic master, then run the command below on that host.
B) sudo -u kudu kudu local_replica delete --fs_wal_dir=/var/kudu/master --fs_data_dirs=/var/kudu/master 00000000000000000000000000000000 -clean_unsafe
C) Check which Kudu master is the leader in the web UI, e.g.:
   a98a1f26d0254293b6e17e9daf8f6ef8 822fcc68eff448269c9200a8c4c2ecc8 LEADER 2022-11-22 07:18:21 GMT
   rpc_addresses { host: "sdzw-hpas-35" port: 7051 }
   http_addresses { host: "sdzw-hpas-35" port: 8051 }
   software_version: "kudu 1.13.0.7.1.6.0-297 (rev 9323384dbd925202032a965e955979d6d2f6acb0)"
   https_enabled: false
D) sudo -u kudu kudu local_replica copy_from_remote --fs_wal_dir=/wal/kudu/wal --fs_data_dirs=/wal/kudu/data 00000000000000000000000000000000 <active_leader_fqdn>:7051
   # sudo -u kudu /opt/cloudera/parcels/CDH-7.1.6-1.cdh7.1.6.p0.10506313/bin/../lib/kudu/bin/kudu local_replica copy_from_remote --fs_wal_dir=/var/kudu/master --fs_data_dirs=/var/kudu/master 00000000000000000000000000000000 sdzw-hpas-35.nrtsz.local:7051
E) Stop the remaining two masters.
F) Then start all three masters.
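As a sanity check after step F, ksck can be rerun against all three masters to confirm the consensus matrix is clean again; a sketch with placeholder hostnames:

```
# Verify master consensus after recovery (hostnames are placeholders)
sudo -u kudu kudu cluster ksck Master1:7051,Master2:7051,Master3:7051
```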
11-14-2022
03:33 AM
1 Kudo
Hello, did you try to use a load balancer like HAProxy? I'm using PostgreSQL as the HA internal database, but you can certainly set it up with both connections. Something like this:

frontend hive
    bind *:10000
    mode tcp
    option tcplog
    timeout client 50000
    default_backend hive_backend

backend hive_backend
    mode tcp
    balance source
    timeout connect 5000
    timeout server 50000
    server hiveserver1 Master1:10000
    server hiveserver2 Master2:10000
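Once the balancer is up, clients connect to the HAProxy frontend instead of a specific HiveServer2 host; a sketch, where the balancer hostname and database name are placeholders:

```
# Beeline through the HAProxy frontend (hostname and database are placeholders)
beeline -u "jdbc:hive2://haproxy-host.example.com:10000/default"
```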
11-12-2022
02:25 AM
Hello, in my 3-master cluster, one Kudu master is starting and stopping all the time. This is the log detail from Cloudera Manager (Time / Log Level / Source / Log Message):

10:14:41.417 AM WARN cc:288 Found duplicates in --master_addresses: the unique set of addresses is Master1:7051, Master2:7051, Master3:7051
10:15:11.823 AM WARN cc:254 Call kudu.consensus.ConsensusService.RequestConsensusVote from 10.157.136.55:55402 (request call id 0) took 4542 ms (4.54 s). Client timeout 1775 ms (1.78 s)
10:15:11.823 AM WARN cc:254 Call kudu.consensus.ConsensusService.RequestConsensusVote from 10.157.136.37:59796 (request call id 0) took 30215 ms (30.2 s). Client timeout 9654 ms (9.65 s)
10:15:11.823 AM WARN cc:260 Trace:
  1112 10:15:07.281146 (+ 0us) service_pool.cc:169] Inserting onto call queue
  1112 10:15:07.281169 (+ 23us) service_pool.cc:228] Handling call
  1112 10:15:11.823245 (+4542076us) inbound_call.cc:171] Queueing success response
  Metrics: {"spinlock_wait_cycles":384}
10:15:11.823 AM WARN cc:260 Trace:
  1112 10:14:41.607787 (+ 0us) service_pool.cc:169] Inserting onto call queue
  1112 10:14:41.607839 (+ 52us) service_pool.cc:228] Handling call
  1112 10:15:11.823242 (+30215403us) inbound_call.cc:171] Queueing success response
  Metrics: {}
10:15:11.823 AM WARN cc:254 Call kudu.consensus.ConsensusService.RequestConsensusVote from 10.157.136.55:55402 (request call id 1) took 4536 ms (4.54 s). Client timeout 1955 ms (1.96 s)
10:15:11.823 AM WARN cc:260 Trace:
  1112 10:15:07.286988 (+ 0us) service_pool.cc:169] Inserting onto call queue
  1112 10:15:07.287025 (+ 37us) service_pool.cc:228] Handling call
  1112 10:15:11.823244 (+4536219us) inbound_call.cc:171] Queueing success response
  Metrics: {}

What does it mean? Why is this so inconsistent?
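Given the first warning about duplicates in --master_addresses, one thing worth checking is the flag value each master actually started with; a sketch, assuming the kudu CLI's get_flags subcommand is available in this release and using placeholder hostnames:

```
# Print each master's non-default flags and filter for master_addresses
for m in Master1 Master2 Master3; do
  sudo -u kudu kudu master get_flags $m:7051 | grep master_addresses
done
```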
Labels: Apache Kudu
11-11-2022
03:54 AM
Hello, in a CDP 7.1.6 + Cloudera Manager 7.3.1 cluster with 3 masters + 3 workers, I'm getting this error all the time: Corruption: master consensus error: there are master consensus conflicts.

This is the cluster ksck output:

Master Summary
               UUID               |     Address     |  Status
----------------------------------+-----------------+---------
 5620e4a103894151b7bdee5e436f37d8 | master-2.local  | HEALTHY
 9cea3b56cc9b4be4846a02c0d89be753 | master-1.local  | HEALTHY
 a98a1f26d0254293b6e17e9daf8f6ef8 | master-3.local  | HEALTHY

All reported replicas are:
 A = 9cea3b56cc9b4be4846a02c0d89be753
 B = 5620e4a103894151b7bdee5e436f37d8
 C = a98a1f26d0254293b6e17e9daf8f6ef8

The consensus matrix is:
 Config source |   Replicas   | Current term | Config index | Committed?
---------------+--------------+--------------+--------------+------------
 A             | A B C        | 10120        | -1           | Yes
 B             | A B* C       | 10120        | -1           | Yes
 C             | A B* C       | 10120        | -1           | Yes

It seems the A node is not voting. This is the log output:

W1111 11:12:00.526211 18688 leader_election.cc:334] T 00000000000000000000000000000000 P 9cea3b56cc9b4be4846a02c0d89be753 [CANDIDATE]: Term 10122 pre-election: RPC error from VoteRequest() call to peer 5620e4a103894151b7bdee5e436f37d8 (master-2:7051): Network error: Client connection negotiation failed: client connection to 10.157.136.55:7051: connect: Connection refused (error 111)
W1111 11:12:22.683107 18688 leader_election.cc:334] T 00000000000000000000000000000000 P 9cea3b56cc9b4be4846a02c0d89be753 [CANDIDATE]: Term 10122 pre-election: RPC error from VoteRequest() call to peer 5620e4a103894151b7bdee5e436f37d8 (master-2:7051): Timed out: RequestConsensusVote RPC to 10.157.136.55:7051 timed out after 7.916s (SENT)

There is connectivity:

# nc -z -v 10.157.136.55 7051
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 10.157.136.55:7051.
Ncat: 0 bytes sent, 0 bytes received in 0.01 seconds.

The masters have been restarted several times, and the whole cluster too... Any idea how to fix this? Thanks!
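For reference, a sketch of the ksck invocation that produces the summary above; the master hostnames are the ones from this post, and the kudu binary path may differ per installation:

```
# Run ksck against all three masters to print the master summary and consensus matrix
sudo -u kudu kudu cluster ksck master-1.local,master-2.local,master-3.local
```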
Labels: Apache Kudu