
Hive Atlas Hook - Kafka Producer properties

Rising Star

Hi,

I have set up two clusters: an LLAP cluster and a Data Governance cluster. The Atlas hook is enabled in the Hive service on the LLAP cluster, while Atlas and Kafka run on the Data Governance cluster. Both clusters use Kerberos for authentication.

The Hive Atlas hook bridge fails to create a producer to the Kafka broker on the Data Governance cluster. I suspect the cause is incorrect producer properties at producer creation time. I have provided the needed configuration in the atlas-application.properties file through Ambari, but some of the settings are still not reflected when the Kafka producer is created. For example, the producer's security.protocol is PLAINTEXT, while security.inter.broker.protocol in the Kafka service is set to PLAINTEXTSASL. I have also tried modifying the producer.properties file available under /etc/kafka/2.6.5.0-292/0/, but producer creation still fails. I am getting the error below:

2018-08-24 10:21:10,441 DEBUG [kafka-producer-network-thread | producer-1]: network.Selector (LogContext.java:debug(189)) - [Producer clientId=producer-1] Connection with dgpoc-m2.xyz.local/10.4.0.18 disconnected
java.io.EOFException
	at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:124)
	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)
	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:235)
	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:196)
	at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:538)
	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:482)
	at org.apache.kafka.common.network.Selector.poll(Selector.java:412)
	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
	at java.lang.Thread.run(Thread.java:748)
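For context, this is roughly what I have in atlas-application.properties (a sketch from my setup; atlas.kafka.security.protocol is the key I am trying to get picked up, and the other atlas.kafka.* keys follow the same pass-through convention Atlas uses for Kafka client settings):

```
atlas.kafka.bootstrap.servers=dgpoc-m2.xyz.local:6667
atlas.kafka.zookeeper.connect=dgpoc-m2.xyz.local:2181
atlas.kafka.security.protocol=PLAINTEXTSASL
atlas.kafka.sasl.kerberos.service.name=kafka
```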

Below are the Kafka producer properties used by the Hive Atlas hook:


2018-08-24 10:21:10,090 INFO [HiveServer2-Background-Pool: Thread-144]: producer.ProducerConfig (AbstractConfig.java:logAll(223)) - ProducerConfig values:
	acks = 1
	batch.size = 16384
	bootstrap.servers = [dgpoc-m2.xyz.local:6667]
	buffer.memory = 33554432
	client.id =
	compression.type = none
	connections.max.idle.ms = 540000
	enable.idempotence = false
	interceptor.classes = null
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 0
	retry.backoff.ms = 100
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
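To illustrate the mismatch: the hook builds its ProducerConfig from properties like the ones logged above, and for a Kerberized HDP broker the security-related entries would need to look roughly like the sketch below (the class and method names here are hypothetical; the property keys are standard Kafka client keys, with values mirroring this cluster):

```java
import java.util.Properties;

public class HookProducerProps {
    // Sketch of the security-related producer settings the hook would need
    // for a Kerberized broker; not the actual hook implementation.
    public static Properties securedProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "dgpoc-m2.xyz.local:6667");
        // HDP's Kerberized listener expects PLAINTEXTSASL; the client default
        // of PLAINTEXT is what produces the EOFException in the log above.
        props.setProperty("security.protocol", "PLAINTEXTSASL");
        props.setProperty("sasl.mechanism", "GSSAPI");
        props.setProperty("sasl.kerberos.service.name", "kafka");
        return props;
    }

    public static void main(String[] args) {
        // Print the sketched configuration for inspection.
        securedProps().forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
```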

The Hortonworks doc linked below describes the same difference in security protocol, and the error shown there is similar to the one I am getting.

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_secure-kafka-ambari/content/ch_secure-ka...

Any help will be much appreciated!

Thanks,

Cibi

1 ACCEPTED SOLUTION

Rising Star

@Cibi Chakaravarthi, could you please set atlas.kafka.security.protocol to PLAINTEXTSASL in the Atlas configs and see if it helps?


3 REPLIES

Rising Star

@Cibi Chakaravarthi, could you please set atlas.kafka.security.protocol to PLAINTEXTSASL in the Atlas configs and see if it helps?
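In atlas-application.properties form, that would be (the property name is as above; the value matches Kafka's security.inter.broker.protocol):

```
atlas.kafka.security.protocol=PLAINTEXTSASL
```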

Rising Star

Hi,

@Ronak bansal, thanks for the suggestion. The property atlas.kafka.security.protocol was already set on the Atlas side; the issue turned out to be a typo in the atlas.kafka.security.protocol value configured on the Hive service side.

Thanks,

Cibi

Rising Star

@Cibi Chakaravarthi, thanks for the update.