
Impala queries take a long time with Kudu DMLs


Basic queries like load and delete take an unusually long time, even with only a few MB of data.

The text below is from the query profile of an insert job:


Averaged Fragment F00:(Total: 34m10s, non-child: 0.000ns, % non-child: 0.00%)
  split sizes:  min: 83.68 MB, max: 128.00 MB, avg: 105.87 MB, stddev: 22.13 MB
   - AverageThreadTokens: 1.96
   - BloomFilterBytes: 0
   - PeakMemoryUsage: 37.13 MB (38932480)
   - PerHostPeakMemUsage: 37.13 MB (38932480)
   - PrepareTime: 71.607ms
   - RowsProduced: 1.81M (1810872)
   - TotalCpuTime: 1h6m
   - TotalNetworkReceiveTime: 0.000ns
   - TotalNetworkSendTime: 0.000ns
   - TotalStorageWaitTime: 22.884ms

KuduTableSink:(Total: 49m56s, non-child: 49m56s, % non-child: 100.00%)
   - KuduFlushTimer: 49m45s
   - RowsWritten: 2.38M (2375759)
   - RowsWrittenRate: 792.00 /sec
   - TotalKuduFlushErrors: 0 (0)
   - TotalKuduFlushOperations: 2.32K (2323)
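Doing some rough arithmetic on the KuduTableSink counters above (my own calculation, not output from Impala or Kudu), practically all of the sink time goes into Kudu flushes, at more than a second per flush operation:

# Back-of-the-envelope check on the profile counters above
# (my own arithmetic; counter names and values are taken from the profile)
rows_written = 2375759           # RowsWritten
flush_time_s = 49 * 60 + 45      # KuduFlushTimer: 49m45s
flush_ops = 2323                 # TotalKuduFlushOperations

print(rows_written / flush_time_s)   # ~796 rows/sec, in line with RowsWrittenRate (792.00 /sec)
print(flush_time_s / flush_ops)      # ~1.29 s per flush operation
print(rows_written / flush_ops)      # ~1023 rows per flush on average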

 

Logs:

Impalad:

E0130 09:55:35.430891 31822 logging.cc:121] stderr will be logged to this file.

E0131 01:53:55.500494 32069 AsyncKuduClient.java:1684] Some clients are left in the client cache and haven't been cleaned up: {hostname4:7050=TabletClient@1114374271(chan=null, uuid=56df15bb49074a5185c57c74ecd68d86, #pending_rpcs=0, #rpcs_inflight=0), hostname3:7050=TabletClient@2028750934(chan=null, uuid=8b98f05f4f75498780d208f65c7770fb, #pending_rpcs=0, #rpcs_inflight=0), hostname5:7050=TabletClient@1619084319(chan=null, uuid=3e934fbaf23045528fecb78ad75ded2b, #pending_rpcs=0, #rpcs_inflight=0)}

 

Kudu tserver:

W0130 08:32:56.348616 31653 leader_election.cc:333] T c204bf0407d64f19b4d76046c702cc13 P 3e934fbaf23045528fecb78ad75ded2b [CANDIDATE]: Term 64 election: Vote denied by peer 56df15bb49074a5185c57c74ecd68d86 with higher term. Message: Invalid argument: T c204bf0407d64f19b4d76046c702cc13 P 56df15bb49074a5185c57c74ecd68d86 [term 65 FOLLOWER]: Leader election vote request: Denying vote to candidate 3e934fbaf23045528fecb78ad75ded2b for earlier term 64. Current term is 65.

W0130 08:32:52.252187 31651 consensus_peers.cc:332] T a908f5ed03ad4a5d9ef943a3e1f6f4fa P 3e934fbaf23045528fecb78ad75ded2b -> Peer 8b98f05f4f75498780d208f65c7770fb (hostname3:7050): Couldn't send request to peer 8b98f05f4f75498780d208f65c7770fb for tablet a908f5ed03ad4a5d9ef943a3e1f6f4fa. Status: Remote error: Service unavailable: UpdateConsensus request on kudu.consensus.ConsensusService from hostname5:49325 dropped due to backpressure. The service queue is full; it has 50 items.. Retrying in the next heartbeat period. Already tried 71 times.

0130 18:28:51.513813 (+ 19us) negotiation.cc:234] Negotiation complete: Network error: Server connection negotiation failed: server connection from hostname3:43690: BlockingRecv error: Recv() got EOF from remote (error 108) Metrics: {"negotiator.queue_time_us":323,"thread_start_us":256,"threads_started":1}

W0130 19:12:42.301179 8620 negotiation.cc:241] Failed RPC negotiation. Trace: 0130 19:12:42.295279 (+ 0us) reactor.cc:361] Submitting negotiation task for server connection from hostname3:52396

 

This environment was migrated from a manual installation to Cloudera Manager with parcels; it performed well before the migration.

 

Environment:

Hardware: 24-core CPU, 180 GB RAM, SSDs

CM - 5.9.1

CDH - 5.9.0

Impala-Kudu - 1.0.0

Kudu - 1.0.1
