Member since: 04-25-2017
Posts: 17
Kudos Received: 3
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2003 | 04-29-2019 09:42 AM |
| | 2014 | 04-29-2019 08:32 AM |
| | 2047 | 03-29-2019 01:48 PM |
| | 6910 | 10-18-2018 11:05 AM |
05-03-2019
08:32 AM
You can connect to multiple masters by providing a list of master addresses.
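For example, the kudu CLI tools accept a comma-separated list of master RPC addresses wherever they take <master_addresses>; a quick sketch (the hostnames below are hypothetical):

```sh
# Comma-separated list of all master RPC addresses (hostnames are hypothetical).
MASTERS=master-1.example.com:7051,master-2.example.com:7051,master-3.example.com:7051

# Any command that takes <master_addresses> can be given the full list, e.g.:
kudu table list "$MASTERS"
kudu cluster ksck "$MASTERS"
```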
04-29-2019
09:42 AM
For now, I think the two possible methods I outlined might work. Additionally, you could export the data to something like Parquet or Avro using Spark or Impala, and then reload the data in the new cluster.
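As a rough sketch of that export-and-reload approach, assuming Impala access and a hypothetical table db.events (which would need to be recreated as a Kudu table on the new cluster before the reload):

```sh
# On the old cluster: copy the Kudu table into a Parquet table on HDFS (names are hypothetical).
impala-shell -q "CREATE TABLE db.events_backup STORED AS PARQUET AS SELECT * FROM db.events;"

# Copy the Parquet files to the new cluster (e.g. with distcp), then on the new cluster:
impala-shell -q "INSERT INTO db.events SELECT * FROM db.events_backup;"
```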
04-29-2019
08:32 AM
Kudu doesn't support swapping a drive to a new host. Kudu tablet servers store the consensus configuration of their tablets, and the master also stores the consensus configuration for the tablets. By moving all the servers, you changed all the hostnames, and now the cluster is in total disarray. It's possible to rewrite the consensus configuration of the tablets on the tablet servers, but I'm not sure there's currently a way to rewrite the data in the master.

So, by scripting `kudu local_replica cmeta rewrite_raft_config` you could fix the tablet servers. You will need to rewrite the config of each tablet so the hostnames are mapped from the old servers to the new servers. If you do that correctly and the tablet replicas are able to elect a leader, the leader will send updated information to the master, which should cause it to update its record. I don't think many people have ever tried anything like this, so there may be other things that need to be fixed, or it simply might not be possible to recover the cluster.

What you should have done is either set up the new cluster and then transferred the data via Spark or an Impala CTAS statement, or built the new cluster as an expansion of the existing one, decommissioned all the tablet servers of the old cluster, and then moved the master nodes one by one to the new cluster.
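To give a sense of what that scripting might look like, here is a rough, untested sketch for a single tablet; the tablet ID, peer UUIDs, hostnames, ports, and directories are all placeholders, and the tablet server should be stopped while its cmeta is rewritten:

```sh
# Map each peer UUID to its new hostname; all values below are placeholders.
sudo -u kudu kudu local_replica cmeta rewrite_raft_config \
    --fs_wal_dir=/data/kudu/wal \
    --fs_data_dirs=/data01/kudu,/data02/kudu \
    <tablet_id> \
    <peer1_uuid>:new-tserver-1.example.com:7050 \
    <peer2_uuid>:new-tserver-2.example.com:7050 \
    <peer3_uuid>:new-tserver-3.example.com:7050
```

You'd repeat something like that for every tablet replica hosted on each tablet server before starting the servers back up.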
03-29-2019
01:48 PM
1 Kudo
This is a known issue with some code that auto-detects whether replicas of non-replicated tablets can be moved without issues (see KUDU-2443). The code relies on std::regex, and the tool was built with g++/libstdc++ versions < 4.9, where std::regex unexpectedly fails to compile a regular expression containing a bracket and throws a std::regex_error exception (see [1]). Starting from version 4.9.1, libstdc++ has proper support for C++11 regular expressions (see [2]). This makes the kudu CLI crash when running 'kudu cluster rebalance' on the following platforms:

* RHEL/CentOS 7
* Ubuntu 14.04 LTS (Trusty)
* SLES 12

You should be able to work around the problem by setting the flag --move_single_replicas to either 'enabled' or 'disabled', as you require, instead of the default 'auto'. Unfortunately, there's no release in the CDH 5 line in which this issue is fixed (yet).
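For example (the master addresses are hypothetical):

```sh
# Avoid the crash in the default --move_single_replicas=auto detection logic.
kudu cluster rebalance \
    master-1.example.com:7051,master-2.example.com:7051,master-3.example.com:7051 \
    --move_single_replicas=disabled
```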
03-18-2019
10:03 AM
1 Kudo
The memory is being used somewhere other than the places tracked by the memtrackers. The first possibility that comes to mind is that it's being used by the block manager. Try running `kudu fs check --fs_wal_dir=<wal dir> --fs_data_dirs=<data dirs>` and posting the output here. You may need to run the command as the Kudu service user (usually 'kudu'). Another helpful thing would be to gather heap samples; see https://kudu.apache.org/docs/troubleshooting.html#heap_sampling. If you attach the resulting SVG file, it would be very helpful in locating the memory usage.
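For example (the directories below are placeholders for your actual WAL and data directories):

```sh
# Run on the affected tablet server as the Kudu service user.
sudo -u kudu kudu fs check \
    --fs_wal_dir=/data/kudu/wal \
    --fs_data_dirs=/data01/kudu,/data02/kudu
```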
01-28-2019
11:07 AM
Yes, KUDU-1400 is a reasonable explanation here. It's not exactly a bug, more a missing feature. In any case, the next release of Kudu has an enhanced compaction procedure that will handle this case.
01-25-2019
03:18 PM
One thing that is clearly happening here is that Kudu is sending much more data back to Impala than is necessary. You specified LIMIT 7, but Kudu doesn't support server-side limits until CDH 6.1, so the limit isn't pushed down to the tablet server; for such a small query, pushing it down might make things quite a bit faster. Beyond that, I honestly don't see anything suspicious in the numbers from the Kudu side other than that the scan took a long time given the amount of work involved. The trace shows nothing out of the ordinary and the metrics look fine. There weren't even cache misses, so everything came out of the cache already decoded and decompressed. From the Impala profile, the only suspicious-looking thing is that two round trips were required; that shouldn't have been the case with LIMIT 7, since the first batch should have had more than 7 records in it. If you run the scan a couple of times in a row, does it get much faster? How does the time vary with the LIMIT amount?
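If it helps, one quick way to run that experiment from the shell, assuming a hypothetical table db.events:

```sh
# Re-run the same scan a few times and with different LIMITs to compare timings.
for limit in 7 70 700 7000; do
  impala-shell -q "SELECT * FROM db.events LIMIT ${limit};" 2>&1 | grep -i "Fetched"
done
```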
01-23-2019
12:10 PM
You might be able to find a trace of the ScanRequest RPC in the /rpcz endpoint of the web UI. Try running the scan and, if you observe that it's slow, looking at /rpcz for the matching trace. If that's too difficult, you can have all the traces dumped into the INFO log using the flag --rpc_dump_all_traces. Naturally this is very verbose on a busy server, but if you use the `kudu tserver set_flag` command you can turn the flag on and then turn it off a few seconds later; if you coordinate running the scan with turning on tracing, that gives just enough time to capture it. It's safe to change this flag at runtime in this way. If you get a trace, please copy and paste it into a reply here; it will help us understand where the time is being spent.

You could also try gathering a trace of the tablet server while the slow scan runs. If you upload the trace file, I can take a look at it. See https://kudu.apache.org/docs/troubleshooting.html#kudu_tracing for more information on how to gather traces.
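A rough sketch of that toggling with the CLI (the tablet server address is hypothetical):

```sh
TSERVER=tserver-1.example.com:7050   # hypothetical tablet server RPC address

# Turn on RPC trace dumping, run the slow scan, then turn it back off.
kudu tserver set_flag "$TSERVER" rpc_dump_all_traces true
# ... run the slow scan from Impala here ...
sleep 10
kudu tserver set_flag "$TSERVER" rpc_dump_all_traces false
```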
10-18-2018
11:05 AM
1 Kudo
Hi Vincent. Sorry for the delay in responding. You might try running the fs check with the --repair option to see if it can fix the problems.

Additionally, everything we've seen so far is consistent with the explanation that your tablet servers have a very large number of small data blocks, and that this is responsible for the increased memory usage. It will also affect your scan performance: you can see it in the metrics, where 455 cfiles were missed from cache (all of the blocks read) for only about 400KB of data. Since each cfile (roughly one block) involves some fixed cost to read, this slows scans. I think the reason this happened is that your workload slowly streams writes into Kudu; I'm guessing inserts are roughly in order of increasing primary key?

Unfortunately, there's no easy process to fix the state the table is in. Rewriting it (using a CTAS and a rename, say) will make things better. In the future, raising --flush_threshold_secs so that it covers a long enough period for blocks to reach a good size will help avoid this problem; the tradeoff is that the server will use somewhat more disk space and memory for WALs. KUDU-1400 tracks the lack of a compaction policy that would automatically deal with the situation you're in, and it's being worked on right now.
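For example, a sketch of both suggestions; the directories and the threshold value are placeholders to adapt to your environment:

```sh
# With the tablet server stopped, see whether fs check can clean things up.
sudo -u kudu kudu fs check --repair \
    --fs_wal_dir=/data/kudu/wal \
    --fs_data_dirs=/data01/kudu,/data02/kudu

# Then raise the flush threshold (7200s is only an illustrative value) in the
# tablet server's startup flags / safety valve and restart it:
#   --flush_threshold_secs=7200
```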
10-05-2018
01:58 PM
Those are metrics for a write, not a scan. A scan RPC trace looks like the following:

  {
    "method_name": "kudu.tserver.ScanRequestPB",
    "samples": [
      {
        "header": {
          "call_id": 7,
          "remote_method": {
            "service_name": "kudu.tserver.TabletServerService",
            "method_name": "Scan"
          },
          "timeout_millis": 29999
        },
        "trace": "1005 10:27:46.216542 (+ 0us) service_pool.cc:162] Inserting onto call queue\n1005 10:27:46.216573 (+ 31us) service_pool.cc:221] Handling call\n1005 10:27:46.216712 (+ 139us) tablet_service.cc:1796] Created scanner 9c3aaa87517f4832aa81ff0dc0d71284 for tablet 42483058124f48c685943bef52f3b625\n1005 10:27:46.216839 (+ 127us) tablet_service.cc:1872] Creating iterator\n1005 10:27:46.216874 (+ 35us) tablet_service.cc:2209] Waiting safe time to advance\n1005 10:27:46.216894 (+ 20us) tablet_service.cc:2217] Waiting for operations to commit\n1005 10:27:46.216917 (+ 23us) tablet_service.cc:2231] All operations in snapshot committed. Waited for 32 microseconds\n1005 10:27:46.216937 (+ 20us) tablet_service.cc:1902] Iterator created\n1005 10:27:46.217231 (+ 294us) tablet_service.cc:1916] Iterator init: OK\n1005 10:27:46.217250 (+ 19us) tablet_service.cc:1965] has_more: true\n1005 10:27:46.217258 (+ 8us) tablet_service.cc:1980] Continuing scan request\n1005 10:27:46.217291 (+ 33us) tablet_service.cc:2033] Found scanner 9c3aaa87517f4832aa81ff0dc0d71284 for tablet 42483058124f48c685943bef52f3b625\n1005 10:27:46.218143 (+ 852us) inbound_call.cc:162] Queueing success response\n",
        "duration_ms": 1,
        "metrics": [
          { "key": "rowset_iterators", "value": 1 },
          { "key": "threads_started", "value": 1 },
          { "key": "thread_start_us", "value": 64 },
          { "key": "compiler_manager_pool.run_cpu_time_us", "value": 117013 },
          { "key": "compiler_manager_pool.run_wall_time_us", "value": 127378 },
          { "key": "compiler_manager_pool.queue_time_us", "value": 114 }
        ]
      }
    ]
  }

The profile is pointing to this server having a lot of data blocks. What is your workload like? Does it involve a lot of updates and deletes? How many tablet replicas are on this server?

Attaching the output of the following commands will help investigate further. All should be run on the tablet server where you're seeing the memory problem.

  sudo -u kudu kudu fs check --fs_wal_dir=<wal dir> --fs_data_dirs=<data dirs>

This should be fine to run while the server is running. You'll see a benign error message about not being able to acquire a lock and proceeding in read-only mode.

  sudo -u kudu kudu local_replica data_size <tablet id> --fs_wal_dir=<wal dir> --fs_data_dirs=<data dirs>

Try running this for a few tablets of your most active tables.
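If it's easier, a rough sketch of scripting data_size over every replica on the server (the directories are placeholders):

```sh
WAL_DIR=/data/kudu/wal                 # placeholder
DATA_DIRS=/data01/kudu,/data02/kudu    # placeholder

# List every tablet replica on this server, then report the on-disk size of each.
sudo -u kudu kudu local_replica list --fs_wal_dir="$WAL_DIR" --fs_data_dirs="$DATA_DIRS" |
while read -r tablet_id; do
  echo "=== ${tablet_id} ==="
  sudo -u kudu kudu local_replica data_size "$tablet_id" \
      --fs_wal_dir="$WAL_DIR" --fs_data_dirs="$DATA_DIRS"
done
```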