How to read from a Kafka topic using Spark (streaming) in a secure Cluster?

Contributor

Specifications:

  • HDP 2.3.2
  • Kerberos enabled
  • Kafka topic exists and user <username> has read access
  • Kafka topic is readable/writable using the Kafka command line tools with specified user
  • We already have a Spark streaming application that reads from a Kafka topic and works fine in an unsecured cluster (a minimal sketch follows below).

What would be a working example of a Spark streaming job that reads input from Kafka in a secure cluster under the above conditions?

We already got a Spark streaming job working that reads from and writes to secure HBase, and figured it couldn't be that different with Kafka.
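
For reference, the unsecured job is essentially the standard receiver-based example; here is a minimal Scala sketch (the ZooKeeper quorum, consumer group, and topic are placeholders, not our actual values):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object UnsecuredKafkaStream {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(
      new SparkConf().setAppName("UnsecuredKafkaStream"), Seconds(10))

    // Receiver-based stream via the old (ZooKeeper-backed) consumer API.
    val messages = KafkaUtils.createStream(
      ssc,
      "zk-host:2181",        // placeholder ZooKeeper quorum
      "my-consumer-group",   // placeholder consumer group
      Map("my-topic" -> 1))  // placeholder topic -> receiver thread count

    messages.map(_._2).print()  // message values only

    ssc.start()
    ssc.awaitTermination()
  }
}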

1 ACCEPTED SOLUTION


Spark Streaming has not yet enabled security for its Kafka connector.


21 REPLIES

Super Collaborator

What is the issue you are facing after following the above documentation in HDP 2.4.2?

Contributor

If you want to test it, even on HDP 2.3.x (Spark 1.5.2), you can use the following library, which has the necessary changes compiled in:

https://github.com/beto983/Streaming-Kafka-Spark-1.5.2/blob/master/spark-streaming-kafka_2.10-1.5.2....

Contributor

Do you have the source code for the changes you made in that jar?

Expert Contributor

Hi Stefan Kupstaitis-Dunkler,

We are using HDP-2.3.4.0 with Kafka and Spark Streaming (Scala & Python) on a (Kerberos + Ranger) secured cluster.

You need to add a JAAS config location to the spark-submit command. We are using it in yarn-client mode. The kafka_client_jaas.conf file is sent as a resource with the --files option and is made available in the YARN container. We did not get ticket renewal working yet...

spark-submit (all your stuff) \
  --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.conf=kafka_client_jaas.conf" \
  --files "your_other_files,kafa_client_jaas.conf,serviceaccount.headless.keytab" \
  (rest of your stuff)

# --principal and --keytab do not work here; they conflict with passing the keytab via --files.
# The JAAS file will be placed in the YARN containers by Spark.
# The file contains the references to the keytab file and the principals for Kafka and ZooKeeper:

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=false
  useKeyTab=true
  principal="serviceaccount@DOMAIN.COM"
  keyTab="serviceaccount.headless.keytab"
  renewTicket=true
  storeKey=true
  serviceName="kafka";
};
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=false
  useKeyTab=true
  principal="serviceaccount@DOMAIN.COM"
  keyTab="serviceaccount.headless.keytab"
  renewTicket=true
  storeKey=true
  serviceName="zookeeper";
};
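
A hedged side note: spark.executor.extraJavaOptions only reaches the executor JVMs. If your job also talks to Kafka or ZooKeeper from the driver (the direct API does, to fetch offsets), the driver JVM needs the property as well. A minimal sketch, assuming a placeholder local path on the submitting machine:

object JaasSetup {
  // Call on the driver before creating the StreamingContext / Kafka stream.
  def init(): Unit = {
    // Placeholder local path; in yarn-client mode the driver reads the
    // JAAS file from the local filesystem, not from a YARN container.
    System.setProperty("java.security.auth.login.config",
      "/etc/security/kafka_client_jaas.conf")
  }
}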

If you need more info, feel free to ask.

Greetings,

Alexander

Contributor

I have tried this approach and it doesn't work 😞. Spark doesn't throw any error, but it doesn't read anything from the kerberized Kafka.

Contributor

After a hard day, our team solved the issue: we created a little Spark application that can read from a kerberized Kafka topic when launched in yarn-cluster mode. Attached are the pom.xml you need (very important: use the Kafka 0.9 dependency) and the Java code.

It is very important to use the old Kafka API and to set PLAINTEXTSASL (not SASL_PLAINTEXT, a value coming from the new Kafka API) when configuring the Kafka stream.
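
To illustrate, here is a minimal Scala sketch of that stream configuration (the attached code is the Java equivalent; the ZooKeeper quorum, group id, and topic are placeholders):

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object SecureKafkaStream {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(
      new SparkConf().setAppName("SecureKafkaStream"), Seconds(10))

    // Params for the old (receiver-based) consumer API; note the
    // HDP-specific PLAINTEXTSASL value, not the new API's SASL_PLAINTEXT.
    val kafkaParams = Map(
      "zookeeper.connect" -> "zk-host:2181",  // placeholder ZK quorum
      "group.id"          -> "secure-group",  // placeholder consumer group
      "security.protocol" -> "PLAINTEXTSASL")

    val stream = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Map("my-topic" -> 1), StorageLevel.MEMORY_AND_DISK_SER)

    stream.map(_._2).print()

    ssc.start()
    ssc.awaitTermination()
  }
}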

The jaas.conf file is the same one that @Alexander Bij shared. Be careful: the keytab path is not a local path; it is the path where the executor stores the file (by default you only have to indicate the name of the keytab).


Kafka will use this information to retrieve the Kerberos configuration required when connecting to ZooKeeper and to the broker. To do that, the executor needs the keytab file to obtain a Kerberos ticket before connecting to the broker; that is why the keytab is also sent to the container when launching the application. Please notice that extraJavaOptions refers to the container-local path (the file will be placed in the root folder of the container's working directory), while the --files path must be the local path of the jaas file on the machine that launches the application.

spark-submit (all your stuff) \
  --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=kafka_client_jaas.conf" \
  --files "your_other_files,kafka_client_jaas.conf,serviceaccount.headless.keytab" \
  (rest of your stuff)

Also verify that the parameter where you indicate the jaas file is java.security.auth.login.config.

And that's all: following the above instructions and using the attached code, you will be able to read from kerberized Kafka using Spark 1.5.2.

In our kerberized environment we have Ambari 2.2.0.0, HDP 2.3.4.0, and Spark 1.5.2. I hope this information helps you.

New Contributor

Hi javier,

Can you please upload the running code?

The pom.xml gives the compilation error below:

Project build error: Non-resolvable parent POM for com.accenture.ngap:spark-test:[unknown-version]: Failure to transfer com.accenture.ngap:parent:pom:0.0.1 from http://repo.hortonworks.com/content/repositories/releases/ was cached in the local repository, resolution will not be reattempted until the update interval of hortonworks has elapsed or updates are forced. Original error: Could not transfer artifact com.accenture.ngap:parent:pom:0.0.1 from/to hortonworks (http://repo.hortonworks.com/content/repositories/releases/): connect timed out and 'parent.relativePath' points at wrong local POM

New Contributor

Are there examples for HDP 2.3.2 (Spark Streaming 1.4.1 and Kafka 0.8.2) with Java in a Kerberos environment?

Contributor

Did you try the code that I shared at https://community.hortonworks.com/comments/30193/view.html, a few comments above?

New Contributor

Hi team,

I am currently using Spark Streaming with Kafka without any SSL, and it works fine.

But we need to enable SSL for Spark Streaming with Kafka.

Is this supported?

I see Spark 2.0 also supports Kafka 0.8, but SSL support was added in Kafka 0.9.

Is there any way through which HDP provides SSL-enabled Spark Streaming with Kafka?

If yes, can anyone share the document link or any info?

Thanks