
Kudu T-Server Re-balancing

Solved

Kudu T-Server Re-balancing

Explorer

Hello,

 

I would like to know if there is a way to rebalance data in Kudu evenly across all Kudu tablet servers. Our Kudu deployment is as follows:

3 Kudu Masters

9 Tablet Servers

kudu 1.7.0-cdh5.16.2/ CM 5.16.2

 

Data across these 9 tablet servers is not evenly distributed; most of it sits on just 3 of them. I went through some articles and found that there is currently no rebalance tool like the one in HDFS (https://community.cloudera.com/t5/Support-Questions/Kudu-Tablet-Server-Data-Directories-rebalancing/...).

 

However, if I go to Clusters > Kudu > Actions, I see a "Run Kudu Rebalancer Tool" option. I would like to know its purpose: will it distribute data across the overall Kudu cluster, only the Kudu masters, or the tablet servers as well? I would appreciate some advice / assistance on this.

 

Thanks

Amn

4 REPLIES

Re: Kudu T-Server Re-balancing

Explorer

Any help on this one?


Re: Kudu T-Server Re-balancing

Expert Contributor

Hi @Amn_468 ,

 

Kudu masters don't serve or store data, so the rebalance tool will rebalance data across all of the tablet servers where the data are stored. You don't need any downtime to run the Kudu cluster rebalance tool.
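 

For reference, the rebalance can also be kicked off from the command line with the kudu CLI, pointed at the master RPC addresses. This is only a sketch: the hostnames below are placeholders, and exact flag availability depends on your Kudu version.

kudu cluster rebalance master-1.example.com:7051,master-2.example.com:7051,master-3.example.com:7051

Depending on the version, adding --report_only prints the replica distribution report and exits without moving any replicas, which is a safe way to see how skewed the cluster is before rebalancing.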

 

Regards,

Steve


Re: Kudu T-Server Re-balancing

Explorer

Hi @StevenOD 

 

I tried to run the rebalance tool, but I get the error below:

Failed RPC negotiation. Trace:
0325 20:44:12.092074 (+     0us) reactor.cc:577] Submitting negotiation task for server connection from XXX.XX.XXX.XXX:52183
0325 20:44:12.092167 (+    93us) server_negotiation.cc:176] Beginning negotiation
0325 20:44:12.092170 (+     3us) server_negotiation.cc:365] Waiting for connection header
0325 20:44:12.096890 (+  4720us) server_negotiation.cc:373] Connection header received
0325 20:44:12.098104 (+  1214us) server_negotiation.cc:329] Received NEGOTIATE NegotiatePB request
0325 20:44:12.098105 (+     1us) server_negotiation.cc:412] Received NEGOTIATE request from client
0325 20:44:12.098128 (+    23us) server_negotiation.cc:341] Sending NEGOTIATE NegotiatePB response
0325 20:44:12.098177 (+    49us) server_negotiation.cc:197] Negotiated authn=SASL
0325 20:44:12.104531 (+  6354us) server_negotiation.cc:329] Received TLS_HANDSHAKE NegotiatePB request
0325 20:44:12.106114 (+  1583us) server_negotiation.cc:341] Sending TLS_HANDSHAKE NegotiatePB response
0325 20:44:12.115849 (+  9735us) server_negotiation.cc:329] Received TLS_HANDSHAKE NegotiatePB request
0325 20:44:12.116299 (+   450us) server_negotiation.cc:341] Sending TLS_HANDSHAKE NegotiatePB response
0325 20:44:12.116346 (+    47us) server_negotiation.cc:581] Negotiated TLSv1.2 with cipher ECDHE-RSA-AES256-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA384
0325 20:44:12.123359 (+  7013us) negotiation.cc:304] Negotiation complete: Network error: Server connection negotiation failed: server connection from XXX.XX.XXX.XXX:52183: BlockingRecv error: failed to read from TLS socket: Cannot send after transport endpoint shutdown (error 108)
Metrics: {"server-negotiator.queue_time_us":53}

 

Thanks

Amn

1 ACCEPTED SOLUTION

Re: Kudu T-Server Re-balancing

Expert Contributor

Hi @Amn_468 ,

 

I'm not sure what is causing this issue. Do you have an enterprise support agreement with Cloudera and able to open a technical support case with us?

 

Regards,

Steve

