For Kafka, swap space is probably safe to clear (though I wouldn't), but you should avoid Kafka using swap in the first place. If you look at disk IO on a Kafka broker node, it should be almost all writes; reads should be served from the page cache. Kafka was designed to be the only tenant on a node and runs best that way, which is why you will find recommendations that Kafka should not share nodes with ZooKeeper or other Hadoop components. It is not always possible to dedicate machines to Kafka, so look at the disk IO while Kafka is running under normal load: if it is all writes, you can probably shrink the page cache a bit so you do less (or no) swapping. If there are lots of reads, you may need more memory or more nodes (unless you are deliberately and routinely reading topics from the beginning, in which case disk reads are unavoidable).
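If you want a quick way to eyeball that read/write split, here's a minimal Python sketch using psutil (not from the original answer; it assumes psutil is installed and that your Kafka log dirs sit on a single device, here a placeholder "sda"):

```python
import time
import psutil

DEVICE = "sda"          # placeholder: whatever device backs Kafka's log.dirs
INTERVAL_SECONDS = 10   # sampling window

# Sample the per-device IO counters twice and compare.
before = psutil.disk_io_counters(perdisk=True)[DEVICE]
time.sleep(INTERVAL_SECONDS)
after = psutil.disk_io_counters(perdisk=True)[DEVICE]

read_mb = (after.read_bytes - before.read_bytes) / (1024 * 1024)
write_mb = (after.write_bytes - before.write_bytes) / (1024 * 1024)

print(f"reads:  {read_mb:.1f} MiB over {INTERVAL_SECONDS}s")
print(f"writes: {write_mb:.1f} MiB over {INTERVAL_SECONDS}s")

# On a healthy, write-dominated broker the read figure should be near zero,
# since consumer fetches are answered from the page cache rather than disk.
```

iostat or similar tools will give you the same picture; the point is just to confirm that reads are near zero before you take memory away from the page cache.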
Can't help you with ZooKeeper; I've never had reason to dig into its internals, as it has always just worked.