RPC frame had a length of 56685640, but we only support messages up to 52428800 bytes long

New Contributor

Dear community experts, I am using CDH 6.3.1 and exporting data from Kudu, and I am hitting the following error. I need your help:

[root@hadoop001 impalad]# sudo -u impala impala-shell -i hadoop001 -d test --query_option="SPOOL_QUERY_RESULTS=TRUE" -l -u user01
--auth_creds_ok_in_clear -q "select * from test.test_table01 where length(file_data_json)/1024/1024 > 20 limit 1"
Starting Impala Shell using LDAP-based authentication
LDAP password for user01:
Opened TCP connection to hadoop001:21000
Connected to hadoop001:21000
Server version: impalad version 3.2.0-cdh6.3.2 RELEASE (build 1bb9836227301b839a32c6bc230e35439d5984ac)
SPOOL_QUERY_RESULTS is not supported for the impalad being connected to, ignoring.
Query: use `test`
Query: use `test`
Query: select * from test.test_table01 where length(file_data_json)/1024/1024 > 20 limit 1
Query submitted at: 2023-11-03 21:59:04 (Coordinator: http://hadoop001:25000)
Query progress can be monitored at: http://hadoop001:25000/query_plan?query_id=ed46b03a61e6b6a6:558e158800000000
ERROR: Unable to advance iterator for node with id '0' for Kudu table 'test.test_table01': Network error: RPC frame had a length of 53275184, but we only support messages up to 52428800 bytes long.
Could not execute command: select * from test.test_table01 where length(file_data_json)/1024/1024 > 20 limit 1
[root@hadoop001 impalad]#

It should be noted that my Kudu service has the following custom configuration in the "Kudu Service Advanced Configuration Snippet (Safety Valve) for gflagfile":
--num_tablets_to_open_simultaneously=8
--num_tablets_to_delete_simultaneously=8
--rpc_service_queue_length=1000
--raft_heartbeat_interval_ms=1000
--tablet_transaction_memory_limit_mb=128
--unlock_unsafe_flags=true
--max_cell_size_bytes=209715200
--rpc_max_message_size=134217728
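
For reference, here is my own back-of-the-envelope check (assuming the documented default of 50 MiB for --rpc_max_message_size):

# default limit:    50 * 1024 * 1024  = 52428800 bytes
# configured limit: 128 * 1024 * 1024 = 134217728 bytes
# rejected frame:   53275184 bytes    ~ 50.8 MiB

So the rejected frame is only slightly above the 50 MiB default and well below the 128 MiB I configured, which makes it look as if the higher limit is not in effect wherever the size check is happening.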

3 REPLIES

Community Manager

@jizhizhang, Welcome to our community! To help you get the best possible answer, I have tagged in our CDH experts @willx @vaishaakb who may be able to assist you further.

Please feel free to provide any additional information or details about your query, and we hope that you will find a satisfactory solution to your question.



Regards,

Vidya Sargur,
Community Manager



Cloudera Employee

I'm not sure how you ran into this issue, since you mentioned that you have set "--rpc_max_message_size" to 128 MB (the default is 50 MB). Can you check whether you restarted your Kudu tablet servers after changing it to 128 MB? Also try checking the runtime value of this flag on the tablet server web UI flags page.

https://kudu.apache.org/docs/configuration_reference.html#kudu-master_rpc_max_message_size
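
For example, something like the following should show the runtime value (assuming hadoop001 runs a tablet server and that you are using the default web UI port 8050 and RPC port 7050; adjust if you have customized them):

# check the flag on the tablet server web UI flags page
curl -s http://hadoop001:8050/varz | grep rpc_max_message_size

# or, if your kudu CLI build includes the get_flags subcommand
kudu tserver get_flags hadoop001:7050 --flags=rpc_max_message_size

If the value still shows 52428800 there, the safety valve change has not taken effect and a restart of the Kudu tablet servers is needed.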

Community Manager

@jizhizhang, Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.



Regards,

Vidya Sargur,
Community Manager

