04-24-2019
08:51 AM
Could you advise whether there is a solution to the problem of Impala assigning heavy query parts to busy executors? For example, we faced the following on CDH 5.16 with Impala 2.12.0: Impala has several (let's say 5) executors, each with ~100 GB RAM. Impala admission control is used. The mem_limit is left at the default (or about the default of ~80%), e.g. 80 GB. The first relatively long and heavy query (let's call it Query1) arrives, and one of its steps takes ~70 GB RAM on executor1, i.e. there is only ~10 GB of RAM left on this executor for reservation. The other 4 executor servers are nearly idle. At the same time a second query (let's call it Query2) arrives, which requires 40 GB RAM, and it can happen that Query2 is assigned to executor1, which is busy. So Query2 fails because it cannot allocate/reserve the memory. Is there a way to configure Impala to assign fragments/query parts to less busy executors? So far, reducing concurrency or removing reservations (since the reserved memory amount is usually larger than what is really used) might work, but it seems too inefficient to use only 1-2 executors out of 5. Impala on YARN might potentially help, but as far as I can see it requires Llama, which is deprecated and is going to be removed soon.
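For reference, this is the kind of per-query workaround being tested so far (the host, limit and table below are placeholders rather than the real workload): capping MEM_LIMIT so that admission control reserves something closer to the actual usage instead of the pool default:
# Placeholder host/limit/table; MEM_LIMIT caps the per-host memory the query may reserve.
impala-shell -i executor1.mydomain.net -q "set mem_limit=40gb; select count(*) from somedb.sometable;"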
04-05-2019
03:35 AM
Hi, I'm setting up Impala Admission Control. For the user.<username> placement rules there is a "Not recommended" remark: "Use the resource pool that matches the username. (Not Recommended)" (https://www.cloudera.com/documentation/enterprise/5-16-x/topics/cm_mc_resource_pools.html). In my use case, specific limits should be applied to a set (about a dozen) of users. At the same time, group management is relatively hard for us, therefore I would prefer the root.[username] approach. I would like to better understand the drawbacks of this approach: are there any technical limitations from Impala's point of view, or is it just bad practice because it is harder to maintain, support and manage (from an administrator's perspective)?
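In case it helps the discussion, this is how I sanity-check which pool a session lands in (the pool name root.jsmith and the host are placeholders; normally the placement rule assigns the pool, and REQUEST_POOL only overrides it for testing):
# Placeholder pool/host names; the "Request Pool" field in the query profile shows the pool
# admission control actually used.
impala-shell -i node009 -q "set request_pool=root.jsmith; select 1;"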
Labels: Apache Impala
04-05-2019
02:07 AM
Yes, that is for limiting the query in order to reduce accidental impact on other users (i.e. by occupying all available resources). One more point: Impala may have default query memory limits set, so you may wish to override them.
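A minimal sketch of what I mean (the host, value and table are placeholders): print the effective query options, then override MEM_LIMIT for your own statements:
# Placeholder host/value/table; "set" prints the effective query options (including MEM_LIMIT),
# and the override applies only to the statements run in the same invocation.
impala-shell -i node009 -q "set;"
impala-shell -i node009 -q "set mem_limit=8gb; select count(*) from somedb.sometable;"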
11-14-2018
07:17 AM
Yes, 'query1' always returned data, but 'query2' always returned empty output. Did you mean by "now" - after some time, during which some inserts and a timestamp move were done? Unfortunately, we recreated the table after some time, and I can't say whether writing helped. Thanks for the link to ticket KUDU-2463. 1. There were inserts the previous evening - the table was populated. Since then there were unlikely to be any changes. 2. I had a look into the logs: all tservers were restarted a few hours before the select queries. So far there is more evidence pointing to KUDU-2463. If it appears again, I will try to write to the table. I've checked with the developers, so summing up together with your comments: the previous evening there was a deletion (all records, but the table remained) and an insertion of data (via Spark). The next morning the tservers were restarted. ~8h or more after the insert finished, the inconsistency was found.
11-07-2018
12:56 AM
These queries were executed multiple times in alternating order, something like "query1", "query2", "query1", "query2", ... for about half an hour. Kudu cluster ksck was checked as well, and the cluster was healthy.
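For completeness, the health check was along these lines (the master host names are placeholders):
# Placeholder master addresses; ksck reports table, tablet and replica health across the cluster.
kudu cluster ksck master1.mydomain.net:7051,master2.mydomain.net:7051,master3.mydomain.net:7051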
11-06-2018
06:48 AM
Today we faced the following situation: different Impala queries against the same table returned inconsistent results - one showed that there are data, the other showed that there are none. There were no data modifications in between, and the queries were executed several times in shuffled order. The table is stored as Kudu. The results shown below are from impala-shell.
Environment:
CDH 5.15.0
Kudu 1.7.0-cdh5.15.0 (3 masters + 16 tservers)
Impala-shell v2.12.0-cdh5.15.0
Query1.
[node009.mydomain.net:21000] > select * from mydb1.table1 limit 20 ;
Query: select * from mydb1.table1 limit 20
Query submitted at: 2018-11-06 13:30:19 (Coordinator: http://node009:25000)
Query progress can be monitored at: http://node009:25000/query_plan?query_id=d84287045f155a50:251e230100000000
+---------+------------+---------------+----------+-----------+----------+
| myfield1| myfield2 | myfield3 | myfield4 | myfield5 | myfield6 |
+---------+------------+---------------+----------+-----------+----------+
| 19 | 0 | 1279900254208 | z0012 | 22 | M |
...
| 302 | 0 | 1194001234293 | c1236 | 3 | A |
+---------+------------+---------------+----------+-----------+----------+
Fetched 20 row(s) in 21.13s
Inspecting the tablets on Kudu with "kudu fs list" shows multiple rowsets with data (the exact invocation is shown after Query2 below).
Query2.
[node009.mydomain.net:21000] > select count(*) from mydb1.table1 ;
Query: select count(*) from mydb1.table1
Query submitted at: 2018-11-06 13:32:56 (Coordinator: http://node009:25000)
Query progress can be monitored at: http://node009:25000/query_plan?query_id=874eb1365115b065:5527220e00000000
+----------+
| count(*) |
+----------+
| 0 |
+----------+
Fetched 1 row(s) in 34.71s
It might be worth mentioning that the query time is quite long, as it would be if there were data.
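For reference, the tablet inspection mentioned above was done along these lines (the WAL/data directories are placeholders; depending on the Kudu version the tool has to be run locally on the tablet server host and may conflict with a running tserver holding those directories):
# Placeholder directories; lists the blocks/rowsets stored in the local block manager.
kudu fs list --fs_wal_dir=/data0/kudu/wal --fs_data_dirs=/data1/kudu,/data2/kudu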
Labels: Apache Impala, Apache Kudu
09-20-2018
02:31 AM
mpercy, reviewing the cluster size is in progress, but it takes a long time - as usually happens with on-premises hardware 🙂 However, the tablet count will most likely be a bottleneck in the future too, so any related performance improvement is worthwhile. I see there are improvements with the optimized deletion path in 1.7.1, so yes, that might be worth considering too. Raft consensus timeouts - these would probably help to avoid avalanches, but that sounds more like fighting the consequences. Alexey1c, 1. /etc/hosts is in place again since the issue appeared; it is the first lookup source in nsswitch.conf. 2. THP was disabled initially, according to the Cloudera recommendations. 3. Tablets were rebalanced. Thanks for the link - time to replace our custom script. Basically, reducing the number of tablets (though still above the recommendations), rebalancing, and populating /etc/hosts were the first action points, and they reduced the occurrence significantly. But "slow DNS lookup" and "couldn't get scanned data" still appear from time to time.
09-18-2018
03:08 AM
Yes, I understand that. Unfortunately, the use case dictates conditions under which we hit the limits: 1) a small number of large servers -> not that many tablets available; 2) several dozen systems * a hundred tables each * 3-50 tablets per table * replication factor -> quite a large number of tablets required. Could parameter tuning improve the situation with the backpressure that appears? For example, the default is maintenance_manager_num_threads = 1 - does it make sense to change it to 2-4? Any other advice?
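If it matters for the discussion, the change I have in mind is just this tablet server gflag (added via the gflagfile safety valve in Cloudera Manager; the value below is only an assumption to be tested):
# Assumed value; the default is 1, and a tablet server restart is required for it to take effect.
--maintenance_manager_num_threads=4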
09-18-2018
03:01 AM
Great news, thanks! Since it is in the Kudu client, does that mean it will become available once a new version of Impala is released with the fixed Kudu client?
09-03-2018
03:26 AM
We've reduced the number of tablets to 4-4.5k per tserver and added a populated /etc/hosts - the frequency has dropped significantly (previously it sometimes happened for 3 map attempts in a row and failed the job; now it happens rarely and is handled by the second attempt). The application writes asynchronously, but it shouldn't have to wait that long. Though I guess at the OS level it may still be interrupted. I haven't seen the scanner ID in those tservers' logs. However, there were cases some time ago when the "scanner ID wasn't found" error appeared right after scanner creation, and roughly 60 seconds later the scanner timeout message appeared on one of the tservers. Regarding the cluster check: during the backpressure issue the tservers may become unavailable in Kudu (including in "kudu cluster ksck") and consensus is lost. After some time Kudu returns to its normal operational state. Are there any recommendations to reduce backpressure? Is it worth increasing the queue from 50 items to a larger value? Or any recommendations for tuning Kudu for a larger tablet count?
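Regarding increasing the queue from 50 items: if I understand correctly, that corresponds to this tserver gflag (my assumption - please correct me if the overflowing queue is a different one):
# Assumed flag and value; the default service queue length is 50. Applied via the tserver
# gflagfile safety valve; a restart is required.
--rpc_service_queue_length=200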
08-27-2018
08:54 AM
Hi, we're facing instability of Kudu. We run MapReduce jobs where mappers read from Kudu, process data and pass it to reducers, and the reducers write to Kudu. Sometimes mappers fail with "Exception running child : java.io.IOException: Couldn't get scan data", caused by "<tablet_id> pretends to not know KuduScanner" (see mapper.txt in the link below). It happens across multiple attempts as well, which results in job failure.
The environment is: CDH 5.15.0, Kudu 1.7, 3 masters, 15 tservers.
Here is a failure example, which happened at 2018-08-27 10:26:41. This time there was also a restart of one of the tservers. At that time, multiple requests with backpressure and consensus loss were observed on the Kudu tablet servers (see the attached files from the 3 nodes where the replicas were placed). The logs on the other tablets were removed; the attached logs cover some minutes before and after the failure.
Mapper error - https://expirebox.com/download/5cce0d1c712565547c2f382aab99a630.html
node07 - https://expirebox.com/download/9be42eeb88a367639e207d0c148e6e09.html
node12 - https://expirebox.com/download/0e021bd7fd929b9bd585e4e995729994.html
node13 - https://expirebox.com/download/db31c5ac0305f18b6ef0e2171e2d034c.html
Kudu leader at that time - https://expirebox.com/download/f24cd185e2bb4889dbc18b87c70fc4c8.html
The known limitations are respected, except for tablets per server - currently a few servers have ~5000 tablets each, others fewer. The servers are powerful enough and have reserved capacity, so looking at the metrics there are no anomalies/peaks in CPU, RAM, disk I/O or network.
Side note: from time to time "slow DNS" messages appear, where "real" may exceed the limit (5s), but "user" and "system" are in good shape. Some time ago there were attempts to resolve DNS locally, but without positive effect. Still, I don't expect this to be the root cause (a quick lookup-latency check is shown after the host list below).
Any suggestions on how to tune the configuration are welcome as well.
IP - Hostname - Role - tserver ID
10.250.250.11 - nodemaster01 - Kudu Master001
10.250.250.12 - nodemaster02 - Kudu Master002
10.250.250.13 - node01 - Kudu Master003 + Kudu Tablet Server
10.250.250.14 - node02 - Kudu Tablet Server
...
10.250.250.19 - node07 - Kudu Tablet Server - e8a1d4cacb7f4f219fd250704004d258
...
10.250.250.24 - node12 - Kudu Tablet Server - 3a5ee46ab1284f4e9d4cdfe5d0b7f7fa
10.250.250.25 - node13 - Kudu Tablet Server - 9c9e4296811a47e4b1709d93772ae20b
...
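Regarding the "slow DNS" side note above, the quick latency spot-check looks like this (the host name is a placeholder; getent goes through the system resolver order from nsswitch.conf, so /etc/hosts entries are included):
# Placeholder host name; "real" noticeably above "user"+"sys" points at slow resolution.
time getent hosts node07.mydomain.net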
Labels: Apache Kudu
08-06-2018
05:49 AM
Unfortunately, the log rotation policies have already removed the logs for the case in the first post. Are there any ideas/suggestions based on the difference in table types - external and not? Both were based on Kudu, but the external ones continued to work properly.
08-03-2018
02:59 AM
Thanks for the fast replies and for noting the version - I forgot to mention it:
kudu 1.6.0-cdh5.14.4
Impala Shell v2.11.0-cdh5.14.4
mpercy,
1. Here is impala-shell (I saved a piece of the failed queries with a timestamp, so I'll try to provide details based on it):
[<coord+executor_dns>>:21000] > select * from enabled_vars where pid="3020";
Query: select * from enabled_vars where pid="3020"
Query submitted at: 2018-08-01 13:47:35 (Coordinator: http://<coord+executor_dns>>:25000)
Query progress can be monitored at: http://<coord+executor_dns>>:25000/query_plan?query_id=2a43f7388ed44c8d:92ff202f00000000
WARNINGS: Unable to open Kudu table: Timed out: GetTableSchema timed out after deadline expired
[<coord+executor_dns>>:21000] > invalidate metadata;
Query: invalidate metadata
Query submitted at: 2018-08-01 13:48:45 (Coordinator: http://<coord+executor_dns>>:25000)
Query progress can be monitored at: http://<coord+executor_dns>>:25000/query_plan?query_id=e34f104534f46bd9:a22d22e200000000
Fetched 0 row(s) in 42.06s
2. Nothing much in the catalog daemon log at that time:
I0801 13:46:19.166558 58677 catalog-server.cc:241] Catalog Version: 82226 Last Catalog Version: 82226
I0801 13:48:42.336992 47009 HdfsTable.java:1197] Incrementally loading table metadata for: <some_table>
Alexey1c,
1. 3 kudu masters
2. Yes, all 3:
show create table enabled_vars;
Query: show create table enabled_vars
+------------------------------------------------------------------------------------------------------------------------------------------------+
| result |
+------------------------------------------------------------------------------------------------------------------------------------------------+
| CREATE TABLE <desired_database>.enabled_vars ( |
| pid STRING NOT NULL ENCODING AUTO_ENCODING COMPRESSION DEFAULT_COMPRESSION, |
| var_code STRING NOT NULL ENCODING AUTO_ENCODING COMPRESSION DEFAULT_COMPRESSION, |
| PRIMARY KEY (pid, var_code) |
| ) |
| PARTITION BY HASH (pid, var_code) PARTITIONS 2 |
| STORED AS KUDU |
| TBLPROPERTIES ('STATS_GENERATED_VIA_STATS_TASK'='true', 'kudu.master_addresses'='<master1>,<master2>,<master3>') |
+------------------------------------------------------------------------------------------------------------------------------------------------+
Fetched 1 row(s) in 0.07s
3. Yes, it is seen on the failing impala daemon, but only more than a day before the issue (see the timestamps):
I0731 09:13:59.293942 4640 client-internal.cc:233] Reconnecting to the cluster for a new authn token
I0731 09:13:59.293975 4637 client-internal.cc:233] Reconnecting to the cluster for a new authn token
I0731 09:13:59.294001 4625 client-internal.cc:233] Reconnecting to the cluster for a new authn token
I0731 09:13:59.294025 4632 client-internal.cc:233] Reconnecting to the cluster for a new authn token
I0731 09:13:59.294055 4622 client-internal.cc:233] Reconnecting to the cluster for a new authn token
I0731 09:13:59.294075 4643 client-internal.cc:233] Reconnecting to the cluster for a new authn token
I0731 09:13:59.294252 4628 client-internal.cc:233] Reconnecting to the cluster for a new authn token
I0731 09:13:59.294342 4648 client-internal.cc:233] Reconnecting to the cluster for a new authn token
I0731 09:13:59.294668 4633 client-internal.cc:233] Reconnecting to the cluster for a new authn token
I0731 09:13:59.296530 4652 client-internal.cc:233] Reconnecting to the cluster for a new authn token
I0731 09:13:59.304276 4661 client-internal.cc:233] Reconnecting to the cluster for a new authn token
I0731 09:13:59.394750 29050 impala-internal-service.cc:44] ExecQueryFInstances(): query_id=464265b6715c2634:cdc2ab0500000000
I0731 09:13:59.395293 29050 query-exec-mgr.cc:46] StartQueryFInstances() query_id=464265b6715c2634:cdc2ab0500000000 coord=<another_coordinator_node>:22000
I0731 09:13:59.395375 29050 query-state.cc:173] Buffer pool limit for 464265b6715c2634:cdc2ab0500000000: 68719476736
I0731 09:13:59.396725 29050 initial-reservations.cc:60] Successfully claimed initial reservations (0) for query 464265b6715c2634:cdc2ab0500000000
I do not see any errors shortly after it.
4. No, I can't find it on the failing impala daemon.
08-01-2018
06:24 AM
The issue appeared again: this time it was on nodes that are both "coordinator only" and "coordinator+executor". The coordinator+executor didn't have LDAP authentication. Recent change before the issue was noticed: one new impala-coordinator-only instance had been added. One more observation:
- Impala external tables (on top of Kudu) continued to work fine;
- Impala tables stored as Kudu were failing with the warning (in the Impala CLI): WARNINGS: Unable to open Kudu table: Timed out: GetTableSchema timed out after deadline expired
Kudu-master log file:
negotiation.cc:320] Unauthorized connection attempt: Server connection negotiation failed: server connection from <impala-coord-executor-IP-address>:48648: authentication token expired
...
Impala coord+executor log file:
I0801 13:46:10.203364 39650 coordinator.cc:370] started execution on 2 backends for query 21448fac89ef4281:3bffcd4800000000
I0801 13:46:10.238082 39926 query-exec-mgr.cc:149] ReleaseQueryState(): query_id=21448fac89ef4281:3bffcd4800000000 refcnt=4
I0801 13:46:10.310528 39928 client-internal.cc:281] Determining the new leader Master and retrying...
I0801 13:46:10.347209 39928 client-internal.cc:283] Unable to determine the new leader Master: Not authorized: Client connection negotiation failed: client connection to <kudu-leader-IP>:7051: FATAL_INVALID_AUTHENTICATION_TOKEN: Not authorized: authentication token expired
I0801 13:46:10.374269 39928 client-internal.cc:283] Unable to determine the new leader Master: Not authorized: Client connection negotiation failed: client connection to <kudu-leader-IP>:7051: FATAL_INVALID_AUTHENTICATION_TOKEN: Not authorized: authentication token expired
Restarting a single Impala coordinator+executor instance (where the issue happened) didn't help. Restarting all impala daemon instances resolved the issue.
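In case it helps others hitting this, a quick check along these lines (the log path is a placeholder for your CDH Impala log directory) shows which impalad logs contain the expired-token error quoted above, i.e. which daemons would need the restart:
# Placeholder log path; lists the impalad log files containing the expired-token error.
grep -l "FATAL_INVALID_AUTHENTICATION_TOKEN" /var/log/impalad/*.INFO*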
07-19-2018
11:56 PM
Oh, so the issue shows up even for functions. Thanks for the fast reply and for raising a ticket.
07-19-2018
08:07 AM
Hi, we're struggling with an issue where Impala does not provide access to the SHOW CREATE VIEW statement for the owner of the view (who is also the owner of the underlying table). Sentry-based authorization is used. The documentation (https://www.cloudera.com/documentation/enterprise/5-14-x/topics/impala_show.html#show_create_view) states that the required privileges are: VIEW_METADATA privilege on the view and SELECT privilege on all underlying views and tables. In our case the user owns both the view and the table, therefore I expect both requirements are fulfilled. As you can see in the log below, the user has created, selected and dropped the view, but he couldn't see the CREATE statement. Invalidate metadata was tried too. Could you kindly help resolve the issue, so that developers can check the CREATE statements - is there a missing bit, or is it a bug?
Environment:
CDH 5.14.2
Impala 2.11.0
LDAP authentication
Sentry file authorization
Here is the log from different aspects:
=== Sentry file ========
[users]
svc.analyticaldata_dq=analytical_data, ...
...
[groups]
analytical_data=analytical_data
...
[roles]
analytical_data=server=server1->db=analytical_data
...
=== Impala CLI =============
[node009:21000] > select version();
Query: select version()
+-------------------------------------------------------------------------------------------+
| version()                                                                                  |
+-------------------------------------------------------------------------------------------+
| impalad version 2.11.0-cdh5.14.2 RELEASE (build ed85dce709da9557aeb28be89e8044947708876c)  |
| Built on Tue Mar 27 13:39:48 PDT 2018                                                      |
+-------------------------------------------------------------------------------------------+
[node009:21000] > select user();
Query: select user()
Query submitted at: 2018-07-19 15:30:16 (Coordinator: http://node009:25000)
Query progress can be monitored at: http://node009:25000/query_plan?query_id=1e4cc7a8258b79ff:e58adb9100000000
+-----------------------+
| user()                |
+-----------------------+
| svc.analyticaldata_dq |
+-----------------------+
Fetched 1 row(s) in 0.08s
[node009:21000] > use analytical_data;
Query: use analytical_data
[node009:21000] > create view t as select count(*) from system9999.cases;
Query: create view t as select count(*) from system9999.cases
Query submitted at: 2018-07-19 15:24:52 (Coordinator: http://node009:25000)
Query progress can be monitored at: http://node009:25000/query_plan?query_id=304454e5a834396a:c1fbf50a00000000
Fetched 0 row(s) in 0.08s
[node009:21000] > select * from t;
Query: select * from t
Query submitted at: 2018-07-19 15:24:55 (Coordinator: http://node009:25000)
Query progress can be monitored at: http://node009:25000/query_plan?query_id=27459f84b4308766:6ed0235200000000
+---------+
| _c0     |
+---------+
| 6609331 |
+---------+
Fetched 1 row(s) in 4.50s
[node009:21000] > show create view t;
Query: show create view t
ERROR: AuthorizationException: User 'svc.analyticaldata_dq' does not have privileges to see the definition of view 'analytical_data.t'.
[node009:21000] > drop view t;
Query: drop view t
=== Metastore =============
[metastore]> select TBL_ID,TBL_NAME,OWNER,TBL_TYPE from TBLS where DB_ID=374406;
+---------+----------+-----------------------+--------------+
| TBL_ID  | TBL_NAME | OWNER                 | TBL_TYPE     |
+---------+----------+-----------------------+--------------+
| 1222804 | t        | svc.analyticaldata_dq | VIRTUAL_VIEW |
Labels: Apache Impala, Apache Sentry
05-11-2018
03:05 AM
mpercy, thanks for the link. Currently the number of tablets per server on some tservers exceeds the recommendation by up to 10 times; the other criteria are fine. Does that mean the default limit on the number of file handles should also be increased 10 times, e.g. from 32k to 320k?
05-11-2018
01:39 AM
"-1" is an option, of course, but it would be good to have better understanding is it normal behaviour, why does it grow and how large might it raise. As well as having some limits (in the case of sudden grow) would be useful to limit the effect on other cluster services. Hash Partitions - majority 50, a couple tables - 128. Range partitioning unbounded. Encoding attribute is not used in table creation (auto_encoding). Block size is not used in table creation (default is taken)
05-09-2018
12:50 AM
The cluster was stopped during the upgrade (from 5.13.0), so Impala (as well as the other services) is expected to have been started with the new configs since then. From the cluster inspection, the only thing still running from 5.13 is the supervisord process. I'll share more details if it appears again.
05-08-2018
12:17 AM
Answering your questions: 1) kudu -version: kudu 1.6.0-cdh5.14.2. However, I'm not sure whether the Kudu client is used in the communication between Impala and Kudu. 2) Impala coordinators use LDAP authentication, Impala executors don't. Kerberos is not used. Moreover, when a new session is initiated, the same issue appears (this continues until Impala is restarted). The sequence was: normally the cluster works, and Impala users connect and execute queries through the coordinator. There was a Kudu leader change from one master to another (possibly also masterA -> masterC -> masterA). A new Impala user runs a query and hits this issue. An Impala restart removed the effect and the cluster returned to its usual operating mode.
05-07-2018
07:20 AM
This property is left at the default, i.e. --block_manager_max_open_files=-1. The properties are left at their defaults, except for 11:
- 4 related to dirs (fs_wal_dirs and fs_data_dirs for master and tablet servers)
- Kudu Tablet Server Hard Memory Limit = 20G
- Kudu Tablet Server Block Cache Capacity = 5G
- Automatically Restart Process
- Process Swap Memory Thresholds
- Maximum Process File Descriptors = 65k
- Cgroup CPU Shares = 5000
- Cgroup I/O Weight = 512
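For reference, my understanding of how those Cloudera Manager names map to the underlying Kudu gflags (the flag names are my assumption - please correct me if they differ in your version):
# Assumed mapping; the values mirror the Cloudera Manager settings listed above.
--memory_limit_hard_bytes=21474836480   # Kudu Tablet Server Hard Memory Limit = 20G
--block_cache_capacity_mb=5120          # Kudu Tablet Server Block Cache Capacity = 5G
# "Maximum Process File Descriptors" is a process ulimit set by Cloudera Manager, not a gflag.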
05-07-2018
06:33 AM
3 Masters
7 Tablet Servers (the plan is to double the number of nodes in the near future)
Kudu comes from CDH: kudu 1.6.0-cdh5.14.2
05-07-2018
02:57 AM
Hi, we're using CDH 5.12.4. The Kudu user's nofile limit was initially set to 32k. The values were normal for some time, but they gradually rise. On table creation the number of file descriptors rises significantly, reaching critical values and causing Kudu instability/failures. The "Maximum Process File Descriptors" property has been raised twice, and for now it seems sufficient under minimal load. However, later on we're planning to introduce much more load, and therefore we are very interested in the recommended values. Could you share what the recommendations for file descriptor limits for Kudu would be, e.g. "magic" formulas depending on the load/size/number of nodes?
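For context, this is the kind of check we use to track the actual descriptor usage of the tablet server process against its limit (it assumes a standard Linux setup where the process name contains "kudu-tserver"):
# Count the open file descriptors of the kudu-tserver process and show its configured limit.
TS_PID=$(pgrep -f kudu-tserver | head -n1)
ls /proc/"$TS_PID"/fd | wc -l
grep "Max open files" /proc/"$TS_PID"/limits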
Labels: Apache Kudu
05-03-2018
01:02 AM
Hi, the issue (see the log below) makes it impossible to launch Impala queries on Kudu tables through the coordinator (launching through the executors works fine). It appeared after Kudu (masters and tservers) was restarted. CDH 5.14.2. LDAP authentication is enabled for the Impala coordinator nodes only. Let me know if any additional information is needed. Any ideas how to avoid the issue? Message in the impalad log on the Impala coordinator daemon:
Tuple(id=0 size=49 slots=[Slot(id=0 type=STRING col_path=[] offset=0 null=(offset=48 mask=1) slot_idx=0 field_idx=-1), Slot(id=1 type=STRING col_path=[] offset=16 null=(offset=48 mask=2) slot_idx=1 field_idx=-1), Slot(id=2 type=STRING col_path=[] offset=32 null=(offset=48 mask=4) slot_idx=2 field_idx=-1)] tuple_path=[])
I0503 09:29:01.486366 41053 coordinator.cc:370] started execution on 1 backends for query 964f6d68c729a5cd:2ad2ba7200000000
I0503 09:29:01.486568 42404 query-state.cc:377] Executing instance. instance_id=964f6d68c729a5cd:2ad2ba7200000000 fragment_idx=0 per_fragment_instance_idx=0 coord_state_idx=0 #in-flight=2
I0503 09:29:01.487128 42403 query-exec-mgr.cc:149] ReleaseQueryState(): query_id=964f6d68c729a5cd:2ad2ba7200000000 refcnt=3
I0503 09:29:01.487524 41053 impala-hs2-server.cc:492] ExecuteStatement(): return_val=TExecuteStatementResp {
01: status (struct) = TStatus {
01: statusCode (i32) = 0,
},
02: operationHandle (struct) = TOperationHandle {
01: operationId (struct) = THandleIdentifier {
01: guid (string) = "<some value>",
02: secret (string) = "<some value>",
},
02: operationType (i32) = 0,
03: hasResultSet (bool) = false,
},
}
I0503 09:29:01.487515 42406 coordinator.cc:789] Coordinator waiting for backends to finish, 1 remaining
I0503 09:29:01.491019 42404 client-internal.cc:281] Determining the new leader Master and retrying...
I0503 09:29:01.507668 42404 client-internal.cc:283] Unable to determine the new leader Master: Not authorized: Client connection negotiation failed: client connection to <Kudu_leader_IP>:7051: FATAL_INVALID_AUTHENTICATION_TOKEN: Not authorized: authentication token expired
I0503 09:29:01.528237 42404 client-internal.cc:283] Unable to determine the new leader Master: Not authorized: Client connection negotiation failed: client connection to <Kudu_leader_IP>:7051: FATAL_INVALID_AUTHENTICATION_TOKEN: Not authorized: authentication token expired
...
Labels: Apache Impala, Apache Kudu