Posted 11-04-2023 07:56 AM
Dear community, I am running CDH 6.3.1 and exporting data from Kudu through Impala. The query below fails with a network error, and I need your help:

[root@hadoop001 impalad]# sudo -u impala impala-shell -i hadoop001 -d test --query_option="SPOOL_QUERY_RESULTS=TRUE" -l -u user01 --auth_creds_ok_in_clear -q "select * from test.test_table01 where length(file_data_json)/1024/1024 > 20 limit 1"
Starting Impala Shell using LDAP-based authentication
LDAP password for user01:
Opened TCP connection to hadoop001:21000
Connected to hadoop001:21000
Server version: impalad version 3.2.0-cdh6.3.2 RELEASE (build 1bb9836227301b839a32c6bc230e35439d5984ac)
SPOOL_QUERY_RESULTS is not supported for the impalad being connected to, ignoring.
Query: use `test`
Query: select * from test.test_table01 where length(file_data_json)/1024/1024 > 20 limit 1
Query submitted at: 2023-11-03 21:59:04 (Coordinator: http://hadoop001:25000)
Query progress can be monitored at: http://hadoop001:25000/query_plan?query_id=ed46b03a61e6b6a6:558e158800000000
ERROR: Unable to advance iterator for node with id '0' for Kudu table 'test.test_table01': Network error: RPC frame had a length of 53275184, but we only support messages up to 52428800 bytes long.
Could not execute command: select * from test.test_table01 where length(file_data_json)/1024/1024 > 20 limit 1
[root@hadoop001 impalad]#

Note that my Kudu service already has the following custom flags set in "Kudu Service Advanced Configuration Snippet (Safety Valve) for gflagfile":

--num_tablets_to_open_simultaneously=8
--num_tablets_to_delete_simultaneously=8
--rpc_service_queue_length=1000
--raft_heartbeat_interval_ms=1000
--tablet_transaction_memory_limit_mb=128
--unlock_unsafe_flags=true
--max_cell_size_bytes=209715200
--rpc_max_message_size=134217728
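For reference, here is how the sizes in the error and in the flags work out. This is only a quick arithmetic sketch in Python; the observation that the 52428800-byte limit in the error equals 50 MiB and is therefore a different limit than the 128 MiB --rpc_max_message_size configured on the Kudu service is my own reading of the numbers, not something confirmed by the logs.

# Quick arithmetic on the byte values quoted above (illustrative only).
rpc_frame_bytes    = 53275184    # size of the rejected RPC frame reported in the error
error_limit_bytes  = 52428800    # limit quoted in the error; 52428800 == 50 * 1024 * 1024 (50 MiB)
kudu_flag_bytes    = 134217728   # --rpc_max_message_size set via the safety valve; 128 MiB

print(rpc_frame_bytes / 1024 / 1024)    # ~50.8 MiB, just above the 50 MiB limit being enforced
print(error_limit_bytes / 1024 / 1024)  # 50.0 MiB
print(kudu_flag_bytes / 1024 / 1024)    # 128.0 MiB, so the 50 MiB limit in the error is presumably enforced elsewhere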