Member since: 07-01-2015
Posts: 460
Kudos Received: 78
Solutions: 43

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1346 | 11-26-2019 11:47 PM |
| | 1304 | 11-25-2019 11:44 AM |
| | 9474 | 08-07-2019 12:48 AM |
| | 2178 | 04-17-2019 03:09 AM |
| | 3487 | 02-18-2019 12:23 AM |
06-19-2018
02:00 AM
1 Kudo
You did not mention the version of CDH. I think the problem is that Spark launches many executors to read, and those executors are not co-located with the Kudu tablet servers. I don't know whether you are just reading/filtering the data or reading it and writing it out to Parquet; it depends on how the Spark job is executed. I also noticed that running multiple Spark jobs against the same table (with different partitions) did not help either.
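If the reads go through the kudu-spark connector, one knob worth checking is its scan-locality option, which lets executors scan the closest replica rather than only the leader. A minimal sketch — the master addresses and table name below are hypothetical placeholders:

```python
# Reader options for the kudu-spark connector. The master addresses and
# table name are hypothetical; "kudu.scanLocality" is the connector option
# that controls which replica an executor scans.
kudu_options = {
    "kudu.master": "kudu-master-1:7051,kudu-master-2:7051,kudu-master-3:7051",
    "kudu.table": "impala::db.my_table",
    "kudu.scanLocality": "closest_replica",  # instead of "leader_only"
}

# With a live SparkSession this would be used as:
# df = (spark.read
#           .format("org.apache.kudu.spark.kudu")
#           .options(**kudu_options)
#           .load())
```

Whether this helps depends on the connector version shipped with the CDH release in question.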
06-04-2018
10:54 AM
1 Kudo
Applying the patch, i.e. upgrading to 5.13.3, solved the problem.
06-04-2018
05:59 AM
We were hitting the soft memory limit during heavy inserts into a Kudu table. The tablet servers hold ~700 tablets each, which I thought was fine, because the official docs recommend keeping it under 2000 per tablet server.
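For context, the relevant memory limits on the tablet servers are controlled by gflags; the values below are illustrative examples, not recommendations:

```
# Tablet server gflagfile (illustrative values)
--memory_limit_hard_bytes=17179869184    # hard memory limit, here 16 GiB
--memory_limit_soft_percentage=80        # soft limit as a percentage of the hard limit
```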
06-04-2018
05:58 AM
I missed this in the known issues; it is related to this bug: https://issues.apache.org/jira/browse/KUDU-2233
06-04-2018
05:03 AM
ksck reports multiple tables as unavailable. Example from the ksck output:

Table impala::work.sales_by_year is HEALTHY (3 tablet(s) checked)
Table impala::work.test_kudu is HEALTHY (2 tablet(s) checked)
Tablet db29eaba152d44ab868facb8002f3cfb of table 'impala::base.tech_kudu' is unavailable: 3 replica(s) not RUNNING
1b1a918b30e84678a85a5dc5a25b9915 (ip-10-197-15-241.eu-west-1.compute.internal:7050): TS unavailable
88df884001914dcb959fa336e4b28917 (ip-10-197-12-156.eu-west-1.compute.internal:7050): TS unavailable [LEADER]
f40241678ffd4ebaa8dcc39c58c1c0a7 (ip-10-197-22-10.eu-west-1.compute.internal:7050): TS unavailable
Tablet 3cf0682e62ed453881c3a089e2ccadc8 of table 'impala::base.tech_kudu' is unavailable: 2 replica(s) not RUNNING
1b1a918b30e84678a85a5dc5a25b9915 (ip-10-197-15-241.eu-west-1.compute.internal:7050): TS unavailable
d8cb801ff85e4164905078cba52764fc (ip-10-197-27-68.eu-west-1.compute.internal:7050): RUNNING [LEADER]
f40241678ffd4ebaa8dcc39c58c1c0a7 (ip-10-197-22-10.eu-west-1.compute.internal:7050): TS unavailable
1 replicas' active configs differ from the master's.
All the peers reported by the master and tablet servers are:
A = 1b1a918b30e84678a85a5dc5a25b9915
B = d8cb801ff85e4164905078cba52764fc
C = f40241678ffd4ebaa8dcc39c58c1c0a7
The consensus matrix is:
Config source | Voters | Current term | Config index | Committed?
---------------+------------------------+--------------+--------------+------------
master | A B* C | | | Yes
A | [config not available] | | |
B | B* C | 11 | 600 | No
C | [config not available] | | |
06-04-2018
04:38 AM
Hi, after restarting the Kudu service, 3 out of 10 tablet servers failed to start - i.e. the tablet server boots up but then reports warnings about failing to connect and "Election decided. Result: candidate lost". One error is found in the logs: Check failed: pv_delete_redo != nullptr. All 7 other servers are running fine, but these three fail to join the cluster. Prior to the restart everything was OK; the tablet servers were under heavy write load, so memory was increased to absorb more writes. When I restart, the 3 failed tablet servers come up, stay green for a minute, and then fail.

I0604 13:04:54.221642 109741 leader_election.cc:248] T 4fae2cca875846c2be4d5263a92d3925 P 1b1a918b30e84678a85a5dc5a25b9915 [CANDIDATE]: Term 7 pre-election: Election decided. Result: candidate lost.
I0604 13:04:54.221681 109975 raft_consensus.cc:2456] T 4fae2cca875846c2be4d5263a92d3925 P 1b1a918b30e84678a85a5dc5a25b9915: Snoozing failure detection for 15.890s (election complete)
I0604 13:04:54.221699 109975 raft_consensus.cc:2216] T 4fae2cca875846c2be4d5263a92d3925 P 1b1a918b30e84678a85a5dc5a25b9915 [term 6 FOLLOWER]: Leader pre-election lost for term 7. Reason: could not achieve majority
I0604 13:04:54.221755 109742 leader_election.cc:367] T edf86853a720499980ba3dca0e595700 P 1b1a918b30e84678a85a5dc5a25b9915 [CANDIDATE]: Term 18 pre-election: Vote denied by peer f443865dd251425691c7721fccf0c9f0. Message: Invalid argument: T edf86853a720499980ba3dca0e595700 P f443865dd251425691c7721fccf0c9f0 [term 17 LEADER]: Leader pre-election vote request: Denying vote to candidate 1b1a918b30e84678a85a5dc5a25b9915 for term 18 because replica is either leader or believes a valid leader to be alive.
I0604 13:04:54.221778 109742 leader_election.cc:248] T edf86853a720499980ba3dca0e595700 P 1b1a918b30e84678a85a5dc5a25b9915 [CANDIDATE]: Term 18 pre-election: Election decided. Result: candidate lost.
I0604 13:04:54.221832 109755 raft_consensus.cc:2456] T edf86853a720499980ba3dca0e595700 P 1b1a918b30e84678a85a5dc5a25b9915: Snoozing failure detection for 15.802s (election complete)
I0604 13:04:54.221848 109755 raft_consensus.cc:2216] T edf86853a720499980ba3dca0e595700 P 1b1a918b30e84678a85a5dc5a25b9915 [term 17 FOLLOWER]: Leader pre-election lost for term 18. Reason: could not achieve majority
I0604 13:04:54.235733 110586 raft_consensus.cc:412] T 4805e6e61c4e4c27b76af4117f45e631 P 1b1a918b30e84678a85a5dc5a25b9915 [term 5 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I0604 13:04:54.235754 110586 raft_consensus.cc:2456] T 4805e6e61c4e4c27b76af4117f45e631 P 1b1a918b30e84678a85a5dc5a25b9915: Snoozing failure detection for 3.291s (starting election)
I0604 13:04:54.235787 110586 raft_consensus.cc:434] T 4805e6e61c4e4c27b76af4117f45e631 P 1b1a918b30e84678a85a5dc5a25b9915 [term 5 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "c70b96156f18461792d6f2c583f42e89" member_type: VOTER last_known_addr { host: "ip-10-197-13-169.eu-west-1.compute.internal" port: 7050 } } peers { permanent_uuid: "1b1a918b30e84678a85a5dc5a25b9915" member_type: VOTER last_known_addr { host: "ip-10-197-15-241.eu-west-1.compute.internal" port: 7050 } } peers { permanent_uuid: "f40241678ffd4ebaa8dcc39c58c1c0a7" member_type: VOTER last_known_addr { host: "ip-10-197-22-10.eu-west-1.compute.internal" port: 7050 } }
I0604 13:04:54.236621 110586 leader_election.cc:221] T 4805e6e61c4e4c27b76af4117f45e631 P 1b1a918b30e84678a85a5dc5a25b9915 [CANDIDATE]: Term 6 pre-election: Requesting pre-vote from peer c70b96156f18461792d6f2c583f42e89
I0604 13:04:54.236646 110586 leader_election.cc:221] T 4805e6e61c4e4c27b76af4117f45e631 P 1b1a918b30e84678a85a5dc5a25b9915 [CANDIDATE]: Term 6 pre-election: Requesting pre-vote from peer f40241678ffd4ebaa8dcc39c58c1c0a7
W0604 13:04:54.236948 109744 leader_election.cc:277] T 4805e6e61c4e4c27b76af4117f45e631 P 1b1a918b30e84678a85a5dc5a25b9915 [CANDIDATE]: Term 6 pre-election: RPC error from VoteRequest() call to peer f40241678ffd4ebaa8dcc39c58c1c0a7: Network error: Client connection negotiation failed: client connection to 10.197.22.10:7050: connect: Connection refused (error 111)
I0604 13:04:54.237129 109742 leader_election.cc:367] T 4805e6e61c4e4c27b76af4117f45e631 P 1b1a918b30e84678a85a5dc5a25b9915 [CANDIDATE]: Term 6 pre-election: Vote denied by peer c70b96156f18461792d6f2c583f42e89. Message: Invalid argument: T 4805e6e61c4e4c27b76af4117f45e631 P c70b96156f18461792d6f2c583f42e89 [term 5 LEADER]: Leader pre-election vote request: Denying vote to candidate 1b1a918b30e84678a85a5dc5a25b9915 for term 6 because replica is either leader or believes a valid leader to be alive.
I0604 13:04:54.237195 109742 leader_election.cc:248] T 4805e6e61c4e4c27b76af4117f45e631 P 1b1a918b30e84678a85a5dc5a25b9915 [CANDIDATE]: Term 6 pre-election: Election decided. Result: candidate lost.
I0604 13:04:54.237241 109996 raft_consensus.cc:2456] T 4805e6e61c4e4c27b76af4117f45e631 P 1b1a918b30e84678a85a5dc5a25b9915: Snoozing failure detection for 11.984s (election complete)
I0604 13:04:54.237262 109996 raft_consensus.cc:2216] T 4805e6e61c4e4c27b76af4117f45e631 P 1b1a918b30e84678a85a5dc5a25b9915 [term 5 FOLLOWER]: Leader pre-election lost for term 6. Reason: could not achieve majority
W0604 13:04:54.238286 109744 consensus_peers.cc:411] T 64bb8b5844d1415ba33870389ddce39d P 1b1a918b30e84678a85a5dc5a25b9915 -> Peer f40241678ffd4ebaa8dcc39c58c1c0a7 (ip-10-197-22-10.eu-west-1.compute.internal:7050): Couldn't send request to peer f40241678ffd4ebaa8dcc39c58c1c0a7 for tablet 64bb8b5844d1415ba33870389ddce39d. Status: Network error: Client connection negotiation failed: client connection to 10.197.22.10:7050: connect: Connection refused (error 111). Retrying in the next heartbeat period. Already tried 47 times.
F0604 13:04:54.254616 104939 compaction.cc:777] Check failed: pv_delete_redo != nullptr
I0604 13:04:54.328428 109996 raft_consensus.cc:412] T d2696e295b7e479f91d9ddd7f236993e P 1b1a918b30e84678a85a5dc5a25b9915 [term 8 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I0604 13:04:54.330035 109996 raft_consensus.cc:2456] T d2696e295b7e479f91d9ddd7f236993e P 1b1a918b30e84678a85a5dc5a25b9915: Snoozing failure detection for 1.877s (starting election)
I0604 13:04:54.332237 109996 raft_consensus.cc:434] T d2696e295b7e479f91d9ddd7f236993e P 1b1a918b30e84678a85a5dc5a25b9915 [term 8 FOLLOWER]: Starting pre-election with config: opid_index: 8 OBSOLETE_local: false peers { permanent_uuid: "f443865dd251425691c7721fccf0c9f0" member_type: VOTER last_known_addr { host: "ip-10-197-18-181.eu-west-1.compute.internal" port: 7050 } } peers { permanent_uuid: "1b1a918b30e84678a85a5dc5a25b9915" member_type: VOTER last_known_addr { host: "ip-10-197-15-241.eu-west-1.compute.internal" port: 7050 } } peers { permanent_uuid: "88df884001914dcb959fa336e4b28917" member_type: VOTER last_known_addr { host: "ip-10-197-12-156.eu-west-1.compute.internal" port: 7050 } }
I0604 13:04:54.335131 109996 leader_election.cc:221] T d2696e295b7e479f91d9ddd7f236993e P 1b1a918b30e84678a85a5dc5a25b9915 [CANDIDATE]: Term 9 pre-election: Requesting pre-vote from peer f443865dd251425691c7721fccf0c9f0
I0604 13:04:54.335207 109996 leader_election.cc:221] T d2696e295b7e479f91d9ddd7f236993e P 1b1a918b30e84678a85a5dc5a25b9915 [CANDIDATE]: Term 9 pre-election: Requesting pre-vote from peer 88df884001914dcb959fa336e4b28917
I0604 13:04:54.639492 110868 raft_consensus.cc:412] T 2919bf85fc5041c7bab2aff27828cf0d P 1b1a918b30e84678a85a5dc5a25b9915 [term 4 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I0604 13:04:54.639973 110868 raft_consensus.cc:2456] T 2919bf85fc5041c7bab2aff27828cf0d P 1b1a918b30e84678a85a5dc5a25b9915: Snoozing failure detection for 3.528s (starting election)
I0604 13:04:54.641409 110868 raft_consensus.cc:434] T 2919bf85fc5041c7bab2aff27828cf0d P 1b1a918b30e84678a85a5dc5a25b9915 [term 4 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f40241678ffd4ebaa8dcc39c58c1c0a7" member_type: VOTER last_known_addr { host: "ip-10-197-22-10.eu-west-1.compute.internal" port: 7050 } } peers { permanent_uuid: "1b1a918b30e84678a85a5dc5a25b9915" member_type: VOTER last_known_addr { host: "ip-10-197-15-241.eu-west-1.compute.internal" port: 7050 } } peers { permanent_uuid: "f443865dd251425691c7721fccf0c9f0" member_type: VOTER last_known_addr { host: "ip-10-197-18-181.eu-west-1.compute.internal" port: 7050 } }
I0604 13:04:54.643842 109755 raft_consensus.cc:412] T 1df121ea1dcb4430ba16b370c232d72f P 1b1a918b30e84678a85a5dc5a25b9915 [term 10 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I0604 13:04:54.643915 109996 raft_consensus.cc:412] T 593b4b0f349f4f08a8d3414936cc8417 P 1b1a918b30e84678a85a5dc5a25b9915 [term 6 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I0604 13:04:54.643993 110868 leader_election.cc:221] T 2919bf85fc5041c7bab2aff27828cf0d P 1b1a918b30e84678a85a5dc5a25b9915 [CANDIDATE]: Term 5 pre-election: Requesting pre-vote from peer f40241678ffd4ebaa8dcc39c58c1c0a7
I0604 13:04:54.644037 109755 raft_consensus.cc:2456] T 1df121ea1dcb4430ba16b370c232d72f P 1b1a918b30e84678a85a5dc5a25b9915: Snoozing failure detection for 9.319s (starting election)
I0604 13:04:54.644079 109996 raft_consensus.cc:2456] T 593b4b0f349f4f08a8d3414936cc8417 P 1b1a918b30e84678a85a5dc5a25b9915: Snoozing failure detection for 1.980s (starting election)
I0604 13:04:54.644256 110868 leader_election.cc:221] T 2919bf85fc5041c7bab2aff27828cf0d P 1b1a918b30e84678a85a5dc5a25b9915 [CANDIDATE]: Term 5 pre-election: Requesting pre-vote from peer f443865dd251425691c7721fccf0c9f0
I0604 13:04:54.644343 109996 raft_consensus.cc:434] T 593b4b0f349f4f08a8d3414936cc8417 P 1b1a918b30e84678a85a5dc5a25b9915 [term 6 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "1316b6fec47540bf9be2cc3997f4e720" member_type: VOTER last_known_addr { host: "ip-10-197-0-164.eu-west-1.compute.internal" port: 7050 } } peers { permanent_uuid: "f40241678ffd4ebaa8dcc39c58c1c0a7" member_type: VOTER last_known_addr { host: "ip-10-197-22-10.eu-west-1.compute.internal" port: 7050 } } peers { permanent_uuid: "1b1a918b30e84678a85a5dc5a25b9915" member_type: VOTER last_known_addr { host: "ip-10-197-15-241.eu-west-1.compute.internal" port: 7050 } }
I0604 13:04:54.644876 109755 raft_consensus.cc:434] T 1df121ea1dcb4430ba16b370c232d72f P 1b1a918b30e84678a85a5dc5a25b9915 [term 10 FOLLOWER]: Starting pre-election with config: opid_index: 514 OBSOLETE_local: false peers { permanent_uuid: "544fc91840724584a159b0cfb6cf7feb" member_type: VOTER last_known_addr { host: "ip-10-197-2-178.eu-west-1.compute.internal" port: 7050 } } peers { permanent_uuid: "1b1a918b30e84678a85a5dc5a25b9915" member_type: VOTER last_known_addr { host: "ip-10-197-15-241.eu-west-1.compute.internal" port: 7050 } } peers { permanent_uuid: "88df884001914dcb959fa336e4b28917" member_type: VOTER last_known_addr { host: "ip-10-197-12-156.eu-west-1.compute.internal" port: 7050 } }
I0604 13:04:54.645130 109975 raft_consensus.cc:412] T 171d67702f904599a38a6970ed5a7189 P 1b1a918b30e84678a85a5dc5a25b9915 [term 5 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I0604 13:04:54.645442 109975 raft_consensus.cc:2456] T 171d67702f904599a38a6970ed5a7189 P 1b1a918b30e84678a85a5dc5a25b9915: Snoozing failure detection for 11.762s (starting election)
I0604 13:04:54.647053 109975 raft_consensus.cc:434] T 171d67702f904599a38a6970ed5a7189 P 1b1a918b30e84678a85a5dc5a25b9915 [term 5 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "162f55947f6f4e7abf3f64a1a77f1b36" member_type: VOTER last_known_addr { host: "ip-10-197-15-82.eu-west-1.compute.internal" port: 7050 } } peers { permanent_uuid: "1b1a918b30e84678a85a5dc5a25b9915" member_type: VOTER last_known_addr { host: "ip-10-197-15-241.eu-west-1.compute.internal" port: 7050 } } peers { permanent_uuid: "f40241678ffd4ebaa8dcc39c58c1c0a7" member_type: VOTER last_known_addr { host: "ip-10-197-22-10.eu-west-1.compute.internal" port: 7050 } }
I0604 13:04:54.647620 110872 raft_consensus.cc:412] T 5496fff48b1542c58a62cfb5a051016b P 1b1a918b30e84678a85a5dc5a25b9915 [term 10 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I0604 13:04:54.647794 109996 leader_election.cc:221] T 593b4b0f349f4f08a8d3414936cc8417 P 1b1a918b30e84678a85a5dc5a25b9915 [CANDIDATE]: Term 7 pre-election: Requesting pre-vote from peer 1316b6fec47540bf9be2cc3997f4e720
I0604 13:04:54.647964 110872 raft_consensus.cc:2456] T 5496fff48b1542c58a62cfb5a051016b P 1b1a918b30e84678a85a5dc5a25b9915: Snoozing failure detection for 7.234s (starting election)
I0604 13:04:54.648689 109996 leader_election.cc:221] T 593b4b0f349f4f08a8d3414936cc8417 P 1b1a918b30e84678a85a5dc5a25b9915 [CANDIDATE]: Term 7 pre-election: Requesting pre-vote from peer f40241678ffd4ebaa8dcc39c58c1c0a7
I0604 13:04:54.648741 109755 leader_election.cc:221] T 1df121ea1dcb4430ba16b370c232d72f P 1b1a918b30e84678a85a5dc5a25b9915 [CANDIDATE]: Term 11 pre-election: Requesting pre-vote from peer 544fc91840724584a159b0cfb6cf7feb
I0604 13:04:54.648896 109755 leader_election.cc:221] T 1df121ea1dcb4430ba16b370c232d72f P 1b1a918b30e84678a85a5dc5a25b9915 [CANDIDATE]: Term 11 pre-election: Requesting pre-vote from peer 88df884001914dcb959fa336e4b28917
I0604 13:04:54.649121 110868 raft_consensus.cc:412] T 46b6526977b4451c8f39482a02e048f3 P 1b1a918b30e84678a85a5dc5a25b9915 [term 6 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I0604 13:04:54.649515 110868 raft_consensus.cc:2456] T 46b6526977b4451c8f39482a02e048f3 P 1b1a918b30e84678a85a5dc5a25b9915: Snoozing failure detection for 5.135s (starting election)
....

The master is reporting:

I0604 13:34:21.479568 10232 catalog_manager.cc:3178] Sending DeleteTablet(TABLET_DATA_TOMBSTONED) for tablet 90b506caf7dc4346a3ddfb5bbbbdc336 on 1b1a918b30e84678a85a5dc5a25b9915 (ip-10-197-15-241.eu-west-1.compute.internal:7050) (TS 1b1a918b30e84678a85a5dc5a25b9915 not found in new config with opid_index 123)
W0604 13:34:21.479780 10232 catalog_manager.cc:2920] TS 1b1a918b30e84678a85a5dc5a25b9915 (ip-10-197-15-241.eu-west-1.compute.internal:7050): Delete Tablet RPC failed for tablet 90b506caf7dc4346a3ddfb5bbbbdc336: Network error: Client connection negotiation failed: client connection to 10.197.15.241:7050: connect: Connection refused (error 111)
I0604 13:34:21.479797 10232 catalog_manager.cc:2956] Scheduling retry of 90b506caf7dc4346a3ddfb5bbbbdc336 Delete Tablet RPC for TS=1b1a918b30e84678a85a5dc5a25b9915 with a delay of 60008 ms (attempt = 23)

It looks to me that there is a problem with the undo/redo logs on those tablet servers. From CM I can see that the other servers probably re-replicated all the replicas, because the total tablet size in TB first decreased and is now back to the original volume, but some of the tables are not available and are in an Unhealthy state. How can I bring those tablet servers back into the cluster? Thanks
Labels:
- Apache Kudu
05-30-2018
08:50 AM
Hi, is it possible to calculate statistics just for a given value in a range partition of a Kudu table? Thanks
Labels:
- Apache Kudu
01-31-2018
06:31 AM
Found out that if multiple Spark tasks are reading the same tablet (partition), the reads are counted multiple times. Therefore the total cells read can be much higher than the number of rows in the tablet: roughly # of tasks × # of rows.
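As a back-of-the-envelope check (the task count below is a hypothetical example), the overcount behaves roughly like this:

```python
# Rough model of the inflated "cells read" metric: each Spark task that
# scans the same tablet contributes its own reads to the counter.
# The task count here is a hypothetical example.
rows_in_tablet = 1_000_000_000
tasks_scanning_tablet = 3

# The counter grows as tasks x rows (per projected column), not just rows.
cells_read = tasks_scanning_tablet * rows_in_tablet
```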
01-31-2018
06:30 AM
Found out that if multiple Spark tasks are reading the same tablet (partition), the reads are counted multiple times. Therefore the total cells read can be much higher than the number of rows in the tablet: roughly # of tasks × # of rows.
01-29-2018
10:47 AM
Hi, I thought that during a particular scan Kudu reports the number of rows read in real time for each column. At least on a small table it was roughly equal to the number of rows in the partition. But now I am scanning a 1 billion (1,000,000,000) row table, partitioned into multiple partitions, and the cells read shows 2.3 billion, 4.6 billion, etc. Can somebody explain why those numbers are so high?
Labels:
- Apache Kudu