Member since 09-24-2016 · 11 Posts · 2 Kudos Received · 1 Solution

My Accepted Solutions

Title | Views | Posted
---|---|---
 | 4586 | 03-09-2021 10:43 PM
03-09-2021 10:43 PM
Summarising all of the above: assuming you need the Spark Atlas hook in your system, the solution is as follows. You need to add the file atlas-application.properties and the jar spark-sql-kafka-0-10_2.11-<version>.jar to the Oozie shared Spark library, which is located on HDFS at <home>/oozie/share/lib/lib_<date>/spark. The application properties file can be found in several places, such as /etc/hive/conf.cloudera.hive/, and the jar file can be found in the parcels directory under /opt/cloudera/parcels/CDH/jars/. When you copy the files, they must be owned by oozie:oozie and be world-readable. If you make any changes to the shared library, you must then restart the Oozie server before jobs can find the new files.
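The copy steps above can be sketched with HDFS commands like the following. The `lib_<date>` suffix, the jar `<version>`, and the use of /user/oozie as the Oozie home are placeholders carried over from the text (not literal values), so substitute the real paths on your cluster:

```shell
# Locate the current sharelib directory (the lib_<date> suffix varies per cluster)
hdfs dfs -ls /user/oozie/share/lib

# Copy the Atlas client config and the Kafka SQL jar into the Spark sharelib
hdfs dfs -put /etc/hive/conf.cloudera.hive/atlas-application.properties \
    /user/oozie/share/lib/lib_<date>/spark/
hdfs dfs -put /opt/cloudera/parcels/CDH/jars/spark-sql-kafka-0-10_2.11-<version>.jar \
    /user/oozie/share/lib/lib_<date>/spark/

# Make both files owned by oozie:oozie and world-readable
hdfs dfs -chown oozie:oozie "/user/oozie/share/lib/lib_<date>/spark/atlas-application.properties" \
    "/user/oozie/share/lib/lib_<date>/spark/spark-sql-kafka-0-10_2.11-<version>.jar"
hdfs dfs -chmod 644 "/user/oozie/share/lib/lib_<date>/spark/atlas-application.properties" \
    "/user/oozie/share/lib/lib_<date>/spark/spark-sql-kafka-0-10_2.11-<version>.jar"

# Finally, restart the Oozie server so jobs see the new sharelib contents
```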
01-24-2021 01:33 AM
2 Kudos
I eventually found the answer in this document: https://docs.cloudera.com/runtime/7.2.6/kafka-securing/topics/kafka-secure-kerberos-enable.html

The steps you need are:

1: Create a jaas.conf file to describe how you will kerberise. Either interactively with kinit:

```
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true;
};
```

or non-interactively with a keytab:

```
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/security/keytabs/mykafkaclient.keytab"
  principal="mykafkaclient/clients.hostname.com@EXAMPLE.COM";
};
```

2: Create a client properties file to describe how you will authenticate. Either with TLS:

```
security.protocol=SASL_SSL
sasl.kerberos.service.name=kafka
ssl.truststore.location=<path to jks file>
ssl.truststore.password=<password for truststore>
```

or without:

```
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
```

3: Create the environment variable KAFKA_OPTS to contain the JVM parameter:

```
export KAFKA_OPTS="-Djava.security.auth.login.config=<path to jaas.conf>"
```

Then you can run the tool by referencing the Kafka brokers and the client config:

```
BOOTSTRAP=<kafka brokers URL>
kafka-topics --bootstrap-server $BOOTSTRAP --command-config client.properties --list
```

You will also need a Ranger policy that covers what you are trying to do.
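As a single end-to-end sketch, the steps above can be scripted roughly as follows. The keytab path, principal, truststore location and password, and broker address are illustrative assumptions, not values from this thread; the final command is guarded so the script still completes on a host without the Kafka CLI:

```shell
# Step 1: JAAS config using the keytab (non-interactive) variant.
# Keytab path and principal below are placeholders for illustration.
cat > jaas.conf <<'EOF'
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/security/keytabs/mykafkaclient.keytab"
  principal="mykafkaclient/clients.hostname.com@EXAMPLE.COM";
};
EOF

# Step 2: client properties using the TLS (SASL_SSL) variant.
# Truststore path and password are placeholders for illustration.
cat > client.properties <<'EOF'
security.protocol=SASL_SSL
sasl.kerberos.service.name=kafka
ssl.truststore.location=/opt/certs/truststore.jks
ssl.truststore.password=changeit
EOF

# Step 3: point the Kafka CLI's JVM at the JAAS file.
export KAFKA_OPTS="-Djava.security.auth.login.config=$PWD/jaas.conf"

# Run the tool if it is on the PATH (assumed broker address).
BOOTSTRAP=broker1.example.com:9093
if command -v kafka-topics >/dev/null 2>&1; then
  kafka-topics --bootstrap-server "$BOOTSTRAP" --command-config client.properties --list
else
  echo "kafka-topics not on PATH; run this on a Kafka gateway host"
fi
```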
09-24-2016 08:14 AM
Thanks. I followed the solution at Docker Installation On Fedora to install Docker's own yum repo. This allowed me to upgrade to 1.12, and the clusterdock solution worked after that.
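For anyone landing here later, adding Docker's own repo on Fedora now looks roughly like this (repo URL per Docker's current Fedora install docs; package names differ from the 1.12-era instructions this post used):

```shell
# Add Docker's official repository and install the engine (requires root)
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf -y install docker-ce docker-ce-cli
sudo systemctl enable --now docker
docker --version
```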