Member since 08-29-2016 · 30 Posts · 15 Kudos Received · 2 Solutions
08-03-2018
12:19 PM
5 Kudos
In Kafka there are three types of communication:

1. Broker to broker
2. Client to broker
3. Broker to ZooKeeper

In a Kerberos-enabled cluster, each side of these connections needs to authenticate itself. So when a broker tries to communicate with another broker in the cluster, it first needs to authenticate, and the same applies to clients communicating with brokers. Kafka uses JAAS files for this authentication. Let's first understand what's in a JAAS file. Kafka has two JAAS files:

1. kafka_jaas.conf
2. kafka_client_jaas.conf
Let's discuss kafka_jaas.conf first. This file is used for authentication when a broker in a cluster tries to communicate with the other brokers in the cluster. Take a look at its contents to understand its purpose:

~~~
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/security/keytabs/kafka.service.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="kafka"
    principal="kafka/c6401.ambari.apache.org@EXAMPLE.COM";
};
~~~

The KafkaServer section is used by a broker for authentication when it communicates with the other brokers in the cluster. It should always be configured to use a keytab and principal. The value of `serviceName` must be the principal name the Kafka service runs as. `storeKey=true` tells the login module to store the principal's key in the Subject's private credentials, so the broker can re-authenticate itself later without prompting or re-reading the keytab.

The Client section is used for the ZooKeeper connection:

~~~
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/security/keytabs/kafka.service.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="zookeeper"
    principal="kafka/c6401.ambari.apache.org@EXAMPLE.COM";
};
~~~

The Client section in kafka_jaas.conf is used for authentication when the broker wants to communicate with ZooKeeper. It should always be configured to use a keytab and principal. The value of `serviceName` must be the principal name the ZooKeeper service runs as.
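A JAAS file does nothing on its own: the broker JVM has to be pointed at it through the standard `java.security.auth.login.config` system property. On HDP, Ambari normally wires this up for you; below is a minimal sketch of doing it by hand (the path is an assumption based on the HDP layout, adjust to wherever your kafka_jaas.conf actually lives):

```shell
# Point the broker JVM at the broker JAAS file. The path here is an
# assumption; substitute the real location of your kafka_jaas.conf.
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/hdp/current/kafka-broker/config/kafka_jaas.conf"
echo "$KAFKA_OPTS"
```

Kafka's launch scripts (kafka-run-class.sh and the scripts built on it) pick up `KAFKA_OPTS` and pass it to the JVM.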
kafka_client_jaas.conf is used by clients (producers/consumers) to authenticate to the Kafka brokers. It has two sections; take a look:

~~~
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useTicketCache=true
    renewTicket=true
    serviceName="kafka";
};

Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useTicketCache=true
    renewTicket=true
    serviceName="zookeeper";
};
~~~

KafkaClient section: as the name suggests, it is used when a client wants to communicate with a broker.
The value of `serviceName` must be the principal name the Kafka service runs as. You can configure this section to use either the ticket cache or a keytab and principal.
Client section: this part of the JAAS file is used only by clients on the old consumer API, where consumers need to connect to ZooKeeper.
With the new consumer API, clients communicate with the brokers instead of ZooKeeper, so authentication uses the KafkaClient section of kafka_client_jaas.conf.
Producers always use the KafkaClient section of kafka_client_jaas.conf, since they send requests to broker nodes.
For long-running Kafka clients it is recommended to configure the JAAS file to use a keytab and principal. For example:

~~~
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/security/keytabs/storm.service.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="kafka"
    principal="storm@EXAMPLE.COM";
};
~~~

When Kerberos is integrated with Kafka, we see a lot of issues while trying to produce or consume messages. In many cases the client throws a generic error and we don't know what's going wrong. To tackle such situations, work through the following checks:
1. Make sure the Kerberos client is installed on the node.
2. Check that you can obtain a ticket for the principal:
   - To obtain a TGT using a password: `# kinit <principalName>`
   - Using a keytab (also check that the user has permission to read the keytab): `# kinit -kt /Path/to/Keytab <PrincipalName>`
3. Confirm that you have an unexpired ticket in the ticket cache: `# klist`
4. Confirm that the user has read permission on the JAAS file being used.
5. Confirm that the client can reach the Kafka broker on the port the broker is listening on:
   `# ping <broker.hostname>`
   `# telnet <broker.hostname> <port>`
6. Check that you are using the correct security protocol (`--security-protocol`) as configured in server.properties on the broker.
7. Try exporting the JAAS file and running the producer/consumer again:
   `# export KAFKA_CLIENT_KERBEROS_PARAMS="-Djava.security.auth.login.config=/Path/to/Jaas/File"`
   To enable Kerberos debugging as well:
   `# export KAFKA_CLIENT_KERBEROS_PARAMS="-Djava.security.auth.login.config=/Path/to/Jaas/File -Dsun.security.krb5.debug=true"`
8. If you are still facing authentication issues, try enabling debug logging for the console producer/consumer on the Kafka client node. As the root user, edit /usr/hdp/current/kafka-broker/config/tools-log4j.properties and set:
   `log4j.rootLogger=DEBUG, stderr`
   In the debug logs you should see which principal and security protocol are used and to which broker the request is being sent.
9. Once you have confirmed that authentication works, confirm that the user has the required permissions on the topic:
   - If Kafka is configured to use Kafka ACLs, please refer to the link: Authorization commands.
   - If Kafka is configured to use Ranger, make sure a policy is defined for the topic and principal.
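The checks above can be strung together into a small pre-flight script. This is only a sketch: the principal, keytab, broker and port values are placeholders you must substitute, and by default the script just prints the commands it would run (set RUN=1 to actually execute them; `nc -z` stands in for the telnet check):

```shell
#!/bin/sh
# Pre-flight sketch for Kafka/Kerberos client checks.
# Placeholder values -- substitute your own principal, keytab and broker.
KEYTAB="/etc/security/keytabs/kafka.service.keytab"
PRINCIPAL="kafka/broker1.example.com@EXAMPLE.COM"
BROKER="broker1.example.com"
PORT=6667

# With RUN=1 the command is executed; otherwise it is only printed.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "$*"; fi; }

run kinit -kt "$KEYTAB" "$PRINCIPAL"   # check 2: obtain a TGT from the keytab
run klist                              # check 3: inspect the ticket cache
run nc -z "$BROKER" "$PORT"            # check 5: broker port reachable
```

Run it once in print mode to review the commands, then with `RUN=1 ./preflight.sh` on the client node.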
09-13-2017
02:56 PM
1 Kudo
1. You can execute the API below to get a list of the hosts in your cluster into the file hostcluster.txt:

~~~
# curl -s -u admin:admin http://ambari:8080/api/v1/hosts | grep host_name | sed -n 's/.*"host_name" : "\([^\"]*\)".*/\1/p' > hostcluster.txt
~~~

2. Inside a loop you can run the API calls that need to be executed for each node:

~~~
while read line ; do
  j=$line
  mkdir -p $j
done < hostcluster.txt
~~~
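To see what the loop does before pointing it at a real cluster, you can exercise it against a toy hostcluster.txt (everything below happens under /tmp, so nothing real is touched):

```shell
# Dry run of the per-host loop with a hand-written host list.
cd /tmp
printf 'node1.example.com\nnode2.example.com\n' > hostcluster.txt
while read line ; do
  j=$line
  mkdir -p "$j"    # one working directory per host, as in the article
done < hostcluster.txt
ls -d node1.example.com node2.example.com
```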
Where:
- admin:admin : the username and password for the Ambari server.
- The loop above reads each entry from hostcluster.txt and executes the API calls for it.

3. To install the clients you can use the APIs below. These APIs install only the following clients: HDFS_CLIENT, YARN_CLIENT, ZOOKEEPER_CLIENT and MAPREDUCE2_CLIENT on "HOSTNAME", as follows:

~~~
# curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"RequestInfo":{"context":"Install HDFS Client"},"Body":{"host_components":[{"HostRoles":{"component_name":"HDFS_CLIENT"}}]}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts?Hosts/host_name=$j

# curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"RequestInfo":{"context":"Install YARN Client"},"Body":{"host_components":[{"HostRoles":{"component_name":"YARN_CLIENT"}}]}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts?Hosts/host_name=$j

# curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"RequestInfo":{"context":"Install MapReduce2 Client"},"Body":{"host_components":[{"HostRoles":{"component_name":"MAPREDUCE2_CLIENT"}}]}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts?Hosts/host_name=$j

# curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"RequestInfo":{"context":"Install ZooKeeper Client"},"Body":{"host_components":[{"HostRoles":{"component_name":"ZOOKEEPER_CLIENT"}}]}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts?Hosts/host_name=$j
~~~
Where:
- admin:admin : the username and password for the Ambari server.
- ambari-hostname : the hostname of your Ambari server.
- $j : the variable that substitutes each value from hostcluster.txt.

NOTE: If you want to add more clients such as Spark, Oozie, etc., you need to change the following values in the commands above:
- "context":"Install ZooKeeper Client" <-- modify as per the client
- "component_name":"ZOOKEEPER_CLIENT" <-- modify as per the client you want to install

4. The APIs below pull the configurations for all the clients installed in step 3. Initialize the HDFS_CLIENT, YARN_CLIENT, ZOOKEEPER_CLIENT and MAPREDUCE2_CLIENT clients on $j:

~~~
# curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo":{"context":"Install HDFS Client","operation_level":{"level":"HOST_COMPONENT","cluster_name":"rest","host_name":"$j","service_name":"HDFS"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts/$j/host_components/HDFS_CLIENT?HostRoles/state=INIT

# curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo":{"context":"Install YARN Client","operation_level":{"level":"HOST_COMPONENT","cluster_name":"rest","host_name":"$j","service_name":"YARN"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts/$j/host_components/YARN_CLIENT?HostRoles/state=INIT

# curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo":{"context":"Install MapReduce2 Client","operation_level":{"level":"HOST_COMPONENT","cluster_name":"rest","host_name":"$j","service_name":"MAPREDUCE2"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts/$j/host_components/MAPREDUCE2_CLIENT?HostRoles/state=INIT

# curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo":{"context":"Install ZooKeeper Client","operation_level":{"level":"HOST_COMPONENT","cluster_name":"rest","host_name":"$j","service_name":"ZOOKEEPER"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts/$j/host_components/ZOOKEEPER_CLIENT?HostRoles/state=INIT
~~~
Where:
- ambari-hostname : the hostname of your Ambari server.
- $j : the variable that takes each value from hostcluster.txt.

If you added more clients in step 3, you need to add the corresponding commands in step 4 as well.

Below is the script to install HDFS_CLIENT, YARN_CLIENT, ZOOKEEPER_CLIENT and MAPREDUCE2_CLIENT:
1. Create a .sh file and copy the contents below into it:

# vi script.sh

~~~
curl -s -u admin:admin http://ambari:8080/api/v1/hosts | grep host_name | sed -n 's/.*"host_name" : "\([^\"]*\)".*/\1/p' > hostcluster.txt

while read line ; do
  j=$line
  mkdir -p $j

  curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"RequestInfo":{"context":"Install HDFS Client"},"Body":{"host_components":[{"HostRoles":{"component_name":"HDFS_CLIENT"}}]}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts?Hosts/host_name=$j
  curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"RequestInfo":{"context":"Install YARN Client"},"Body":{"host_components":[{"HostRoles":{"component_name":"YARN_CLIENT"}}]}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts?Hosts/host_name=$j
  curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"RequestInfo":{"context":"Install MapReduce2 Client"},"Body":{"host_components":[{"HostRoles":{"component_name":"MAPREDUCE2_CLIENT"}}]}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts?Hosts/host_name=$j
  curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"RequestInfo":{"context":"Install ZooKeeper Client"},"Body":{"host_components":[{"HostRoles":{"component_name":"ZOOKEEPER_CLIENT"}}]}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts?Hosts/host_name=$j

  curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo":{"context":"Install HDFS Client","operation_level":{"level":"HOST_COMPONENT","cluster_name":"rest","host_name":"$j","service_name":"HDFS"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts/$j/host_components/HDFS_CLIENT?HostRoles/state=INIT
  curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo":{"context":"Install YARN Client","operation_level":{"level":"HOST_COMPONENT","cluster_name":"rest","host_name":"$j","service_name":"YARN"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts/$j/host_components/YARN_CLIENT?HostRoles/state=INIT
  curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo":{"context":"Install MapReduce2 Client","operation_level":{"level":"HOST_COMPONENT","cluster_name":"rest","host_name":"$j","service_name":"MAPREDUCE2"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts/$j/host_components/MAPREDUCE2_CLIENT?HostRoles/state=INIT
  curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo":{"context":"Install ZooKeeper Client","operation_level":{"level":"HOST_COMPONENT","cluster_name":"rest","host_name":"$j","service_name":"ZOOKEEPER"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts/$j/host_components/ZOOKEEPER_CLIENT?HostRoles/state=INIT
done < hostcluster.txt
~~~
2. Make it executable:

~~~
# chmod 755 script.sh
~~~

3. Execute it:

~~~
# ./script.sh
~~~

NOTE: If some clients are already installed on a few nodes, you may see a response like the one below; please don't panic:

~~~
HTTP/1.1 409 Conflict
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
Set-Cookie: AMBARISESSIONID=vam8jlo7ys401q0r5nm10bm71;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
User: admin
Content-Type: text/plain
Content-Length: 250
Server: Jetty(8.1.19.v20160209)
~~~
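To sanity-check the grep/sed pipeline in the script without touching a real Ambari server, you can run it against a saved sample response (the JSON below is illustrative, trimmed to the shape `/api/v1/hosts` returns):

```shell
# Exercise the host-name extraction against a sample Ambari response.
cat > /tmp/hosts.json <<'EOF'
{
  "items" : [
    { "Hosts" : { "host_name" : "node1.example.com" } },
    { "Hosts" : { "host_name" : "node2.example.com" } }
  ]
}
EOF
grep host_name /tmp/hosts.json | sed -n 's/.*"host_name" : "\([^\"]*\)".*/\1/p' > /tmp/hostcluster.txt
cat /tmp/hostcluster.txt
```

You should see one bare hostname per line, which is exactly what the `while read` loop consumes.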
04-23-2017
03:53 PM
6 Kudos
- Please follow the steps in the link below to get the list of provenance event IDs: Nifi - how to get provenance event id in nifi?
- To get more information about a specific event:

For NiFi running in standalone mode:

~~~
# curl -i -X GET http://<hostname>:9090/nifi-api/provenance-events/<id>
~~~

For NiFi running in clustered mode:

~~~
# curl -i -X GET http://<hostname>:9090/nifi-api/provenance-events/<id>?clusterNodeId=<NODE UUID>
~~~

There may be multiple events with the same event ID (one on each node), so you need to specify from which node you want that specific event returned.
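Once you have the event JSON back, a quick sed one-liner is enough to pull out a single field. The JSON below is a trimmed, illustrative response (a real provenance-event entity carries many more fields, and exact field names can vary by NiFi version):

```shell
# Extract the eventType field from a saved provenance-event response.
# Sample JSON is illustrative, not a verbatim NiFi response.
cat > /tmp/event.json <<'EOF'
{ "provenanceEvent" : { "id" : "42", "eventType" : "SEND", "componentName" : "PutHDFS" } }
EOF
sed -n 's/.*"eventType" : "\([^"]*\)".*/\1/p' /tmp/event.json
```

For anything more involved than one field, a proper JSON tool (jq, python) beats sed.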
03-07-2017
05:05 PM
@yjiang Creating topics as the kafka user is good practice. But if you want to create a topic as a non-kafka user in a Kerberized environment, you can work around it with the following steps.

If you are not using Ranger:

1. Make sure `auto.create.topics.enable=true` is set.
2. Grant ACLs to the user you want to create the topic as, for example:

~~~
# bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --producer --topic Test-topic
~~~

3. Do a kinit as the user you want to create the topic as.
4. Now try to produce messages to the topic as that user:

~~~
# ./kafka-console-producer.sh --broker-list <hostname-broker>:6667 --topic Test-topic --security-protocol PLAINTEXTSASL
~~~

If you are using Ranger: instead of step 2 above, add a policy for the topic in Ranger that allows that user to produce, create and consume, then restart the Kafka service and follow steps 3 and 4 as above.
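Note that `--security-protocol` belongs to the older HDP console tools; newer Kafka console clients take a client config file instead (for example `--producer.config client.properties`). A minimal illustrative fragment, assuming a SASL_PLAINTEXT listener (PLAINTEXTSASL is the older HDP alias for the same protocol):

~~~
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
~~~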