Member since
08-29-2016
30
Posts
15
Kudos Received
2
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3797 | 03-31-2017 05:31 AM
 | 31507 | 03-31-2017 05:14 AM
08-03-2018
12:19 PM
5 Kudos
In Kafka there are three types of communication:
1. Between brokers.
2. Between clients and brokers.
3. Between brokers and ZooKeeper.
In a Kerberos-enabled cluster, each party needs to authenticate itself before it can communicate. So when a broker tries to communicate with another broker in the cluster, it must authenticate first, and the same holds for clients communicating with brokers. Kafka uses JAAS files for this authentication. Let's first understand what's in a JAAS file. Kafka uses two JAAS files:
1. kafka_jaas.conf
2. kafka_client_jaas.conf
Let's discuss kafka_jaas.conf first. This file is used for authentication when a broker in the cluster tries to communicate with the other brokers. Take a look at its contents: KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/kafka.service.keytab"
storeKey=true
useTicketCache=false
serviceName="kafka"
principal="kafka/c6401.ambari.apache.org@EXAMPLE.COM";
}; The KafkaServer section is used by the broker for authentication when it communicates with the other brokers in the cluster. It should always be configured with a keytab and principal. The value of `serviceName` should be the principal Kafka runs as. `storeKey=true` tells the Krb5LoginModule to store the principal's key in the Subject's private credentials, so the broker can reuse the key later without re-reading the keytab. Client { // used for zookeeper connection
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/kafka.service.keytab"
storeKey=true
useTicketCache=false
serviceName="zookeeper"
principal="kafka/c6401.ambari.apache.org@EXAMPLE.COM";
}; The Client section in kafka_jaas.conf is used for authentication when the broker wants to communicate with ZooKeeper. It should always be configured with a keytab and principal. The value of `serviceName` should be the principal the ZooKeeper service runs as.
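As a side note not covered above: the broker JVM has to be told where kafka_jaas.conf lives. On Ambari-managed HDP clusters this is typically wired up automatically via kafka-env, but a minimal manual sketch looks like this (the path is an assumption, the usual HDP location; adjust for your setup):

```shell
# Point the broker JVM at the JAAS file via a JVM system property.
# Path below is assumed -- the usual HDP location for kafka_jaas.conf.
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/hdp/current/kafka-broker/config/kafka_jaas.conf"
echo "$KAFKA_OPTS"
```

With this in the broker's environment, the JVM reads the KafkaServer and Client sections from that file at startup.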
kafka_client_jaas.conf is used by clients (producers/consumers) to authenticate to the Kafka brokers. It has two sections; take a look: KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=true
renewTicket=true
serviceName="kafka";
}; Client {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=true
renewTicket=true
serviceName="zookeeper";
}; KafkaClient section: as the name suggests, this section is used when a client wants to communicate with a broker.
The value of `serviceName` should be the principal Kafka runs as. You can configure this section to use either the ticket cache or a keytab and principal.
Client: this part of the JAAS file is used only by clients that use the old consumer API, in which consumers need to connect to ZooKeeper.
With the new consumer API, clients communicate with brokers instead of ZooKeeper, so during authentication they use the KafkaClient section of kafka_client_jaas.conf.
Producers always use the KafkaClient section of kafka_client_jaas.conf, since they send requests to broker nodes.
For long-running Kafka clients it is recommended to configure the JAAS file with a keytab and principal. Please refer to the example below: KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/storm.service.keytab"
storeKey=true
useTicketCache=false
serviceName="kafka"
principal="storm@EXAMPLE.COM";
}; When Kerberos is integrated with Kafka, we see a lot of issues while trying to produce or consume messages. There are instances where the client throws a generic error and we don't know what's going wrong. To tackle such situations, here are the checks to run when you face an issue:
1. Make sure the Kerberos client is installed on the node.
2. Check that you can obtain a ticket for the principal:
- To obtain a TGT using a password: # kinit <principalName>
- Using a keytab (also check that the user has permission to read the keytab): # kinit -kt /Path/to/Keytab <PrincipalName>
3. Confirm that you have a non-expired ticket in the ticket cache: # klist
4. Confirm that the user has read permission on the JAAS file being used.
5. Confirm that the client can reach the Kafka broker on the port the broker is listening on: # ping <broker.hostname> and # telnet <broker.hostname> <port>
6. Check that you are using the correct security protocol (`--security-protocol`) as configured in server.properties on the broker.
7. Try exporting the JAAS file and running the producer/consumer again: # export KAFKA_CLIENT_KERBEROS_PARAMS="-Djava.security.auth.login.config=/Path/to/Jaas/File" To also enable Kerberos debug output: # export KAFKA_CLIENT_KERBEROS_PARAMS="-Djava.security.auth.login.config=/Path/to/Jaas/File -Dsun.security.krb5.debug=true"
8. If you are still facing an authentication issue, enable debug for the console producer/consumer on the Kafka client node. As the root user, edit /usr/hdp/current/kafka-broker/config/tools-log4j.properties and set: log4j.rootLogger=DEBUG, stderr In the debug logs you should see which principal and security protocol are used and to which broker the request is being sent.
9. Once you have confirmed that authentication works, confirm that the user has the required permissions on the topic:
- If Kafka is configured to use Kafka ACLs, please refer to this link: Authorization commands
- If Kafka is configured to use Ranger, make sure a policy is defined for the topic and the principal.
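The JAAS-file permission check above can be scripted. Here is a small standalone sketch using a throwaway file under /tmp (in practice the path would be your real client JAAS file):

```shell
# Create a stand-in JAAS file and verify the current user can read it.
JAAS=/tmp/demo_client_jaas.conf
touch "$JAAS"
chmod 600 "$JAAS"   # owner read/write only, as keytabs/JAAS files usually are
if [ -r "$JAAS" ]; then
    echo "readable: $JAAS"
else
    echo "NOT readable: $JAAS"
fi
```

If the file is owned by a service user (e.g. kafka) and your client runs as someone else, this is exactly the case where the check fails.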
09-13-2017
02:56 PM
1 Kudo
1. You can execute the API below to get a list of the hosts in your cluster into the file hostcluster.txt:
# curl -s -u admin:admin http://ambari:8080/api/v1/hosts | grep host_name | sed -n 's/.*"host_name" : "\([^\"]*\)".*/\1/p' > hostcluster.txt
2. In a loop you can run the API calls that need to be executed against each node:
~~~
while read line ; do
  j=$line
  mkdir -p $j
done < hostcluster.txt
~~~
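To make the read loop concrete, here is a standalone sketch run against a throwaway host list (the hostnames are made up; in the real script each iteration would fire the curl calls shown later instead of the echo):

```shell
# Build a sample host list shaped like the one step 1 produces.
cat > /tmp/hostcluster.txt <<'EOF'
node1.example.com
node2.example.com
EOF

# Iterate over the hosts exactly like the loop above.
while read -r j ; do
    echo "would run the install calls for: $j"
done < /tmp/hostcluster.txt
```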
Where,
- admin:admin : the username:password pair.
- The loop above takes each entry from the file hostcluster.txt and executes the commands against it.
3. To install the clients you can use the APIs below. They install only the four clients listed:
####### Installing HDFS_CLIENT, YARN_CLIENT, ZOOKEEPER_CLIENT and MAPREDUCE2_CLIENT on "HOSTNAME" as follows:
+++++++++++
# curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"RequestInfo":{"context":"Install HDFS Client"},"Body":{"host_components":[{"HostRoles":{"component_name":"HDFS_CLIENT"}}]}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts?Hosts/host_name=$j
# curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"RequestInfo":{"context":"Install YARN Client"},"Body":{"host_components":[{"HostRoles":{"component_name":"YARN_CLIENT"}}]}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts?Hosts/host_name=$j
# curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"RequestInfo":{"context":"Install MapReduce2 Client"},"Body":{"host_components":[{"HostRoles":{"component_name":"MAPREDUCE2_CLIENT"}}]}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts?Hosts/host_name=$j
# curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"RequestInfo":{"context":"Install ZooKeeper Client"},"Body":{"host_components":[{"HostRoles":{"component_name":"ZOOKEEPER_CLIENT"}}]}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts?Hosts/host_name=$j
+++++++++++
Where,
- admin:admin : the username and password for the Ambari server.
- ambari-hostname : the hostname of your Ambari server.
- $j : the variable that substitutes each value from hostcluster.txt.
NOTE: If you want to add more clients, such as Spark or Oozie, you need to change these values in the commands above:
- "context":"Install ZooKeeper Client" <-- modify as per the client
- "component_name":"MAPREDUCE2_CLIENT" <-- modify as per the client you want to install
4. The APIs below move the clients added in step 3 to the INSTALLED state, which triggers the actual installation and pulls their configurations (note the quoting around $j inside the JSON body, so the shell expands it):
####### Initialize the HDFS_CLIENT, YARN_CLIENT, ZOOKEEPER_CLIENT and MAPREDUCE2_CLIENT clients on $j
+++++++++++
# curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo":{"context":"Install HDFS Client","operation_level":{"level":"HOST_COMPONENT","cluster_name":"rest","host_name":"'$j'","service_name":"HDFS"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts/$j/host_components/HDFS_CLIENT?HostRoles/state=INIT
# curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo":{"context":"Install YARN Client","operation_level":{"level":"HOST_COMPONENT","cluster_name":"rest","host_name":"'$j'","service_name":"YARN"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts/$j/host_components/YARN_CLIENT?HostRoles/state=INIT
# curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo":{"context":"Install MapReduce2 Client","operation_level":{"level":"HOST_COMPONENT","cluster_name":"rest","host_name":"'$j'","service_name":"MAPREDUCE2"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts/$j/host_components/MAPREDUCE2_CLIENT?HostRoles/state=INIT
# curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo":{"context":"Install ZooKeeper Client","operation_level":{"level":"HOST_COMPONENT","cluster_name":"rest","host_name":"'$j'","service_name":"ZOOKEEPER"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts/$j/host_components/ZOOKEEPER_CLIENT?HostRoles/state=INIT
+++++++++++
Where,
- ambari-hostname : the hostname of your Ambari server.
- $j : the variable that takes each value from hostcluster.txt.
If you added more clients in step 3, add the corresponding commands in step 4 as well.
Below is the script to install HDFS_CLIENT, YARN_CLIENT, ZOOKEEPER_CLIENT and MAPREDUCE2_CLIENT:
1. Create a .sh file and copy the contents below into it:
# vi script.sh
~~~~
curl -s -u admin:admin http://ambari:8080/api/v1/hosts | grep host_name | sed -n 's/.*"host_name" : "\([^\"]*\)".*/\1/p' > hostcluster.txt
while read line ; do
  j=$line
  mkdir -p $j
  curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"RequestInfo":{"context":"Install HDFS Client"},"Body":{"host_components":[{"HostRoles":{"component_name":"HDFS_CLIENT"}}]}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts?Hosts/host_name=$j
  curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"RequestInfo":{"context":"Install YARN Client"},"Body":{"host_components":[{"HostRoles":{"component_name":"YARN_CLIENT"}}]}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts?Hosts/host_name=$j
  curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"RequestInfo":{"context":"Install MapReduce2 Client"},"Body":{"host_components":[{"HostRoles":{"component_name":"MAPREDUCE2_CLIENT"}}]}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts?Hosts/host_name=$j
  curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"RequestInfo":{"context":"Install ZooKeeper Client"},"Body":{"host_components":[{"HostRoles":{"component_name":"ZOOKEEPER_CLIENT"}}]}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts?Hosts/host_name=$j
  curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo":{"context":"Install HDFS Client","operation_level":{"level":"HOST_COMPONENT","cluster_name":"rest","host_name":"'$j'","service_name":"HDFS"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts/$j/host_components/HDFS_CLIENT?HostRoles/state=INIT
  curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo":{"context":"Install YARN Client","operation_level":{"level":"HOST_COMPONENT","cluster_name":"rest","host_name":"'$j'","service_name":"YARN"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts/$j/host_components/YARN_CLIENT?HostRoles/state=INIT
  curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo":{"context":"Install MapReduce2 Client","operation_level":{"level":"HOST_COMPONENT","cluster_name":"rest","host_name":"'$j'","service_name":"MAPREDUCE2"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts/$j/host_components/MAPREDUCE2_CLIENT?HostRoles/state=INIT
  curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo":{"context":"Install ZooKeeper Client","operation_level":{"level":"HOST_COMPONENT","cluster_name":"rest","host_name":"'$j'","service_name":"ZOOKEEPER"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://ambari-hostname:8080/api/v1/clusters/rest/hosts/$j/host_components/ZOOKEEPER_CLIENT?HostRoles/state=INIT
done < hostcluster.txt
~~~~
2. Make it executable:
# chmod 755 script.sh
3. Execute it:
# ./script.sh
NOTE: If any clients are already installed on some of the nodes you may see messages like the one below; please don't panic:
~~
HTTP/1.1 409 Conflict
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
Set-Cookie: AMBARISESSIONID=vam8jlo7ys401q0r5nm10bm71;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
User: admin
Content-Type: text/plain
Content-Length: 250
Server: Jetty(8.1.19.v20160209)
~~
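The `sed` extraction used in step 1 can be sanity-checked offline against a sample of the JSON that `/api/v1/hosts` returns (the hostnames below are made up):

```shell
# A fragment shaped like Ambari's /api/v1/hosts response.
cat > /tmp/hosts_sample.json <<'EOF'
    "host_name" : "node1.example.com",
    "host_name" : "node2.example.com",
EOF

# Same grep|sed pipeline as step 1, pointed at the sample file.
grep host_name /tmp/hosts_sample.json | sed -n 's/.*"host_name" : "\([^\"]*\)".*/\1/p'
```

It prints one bare hostname per line, which is exactly the format the while loop expects.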
08-02-2017
05:25 AM
@Sanjib Behera Is this the same issue as shown in the screenshot? If yes, could you please check the "listeners" section in server.properties? Since you are not using Kerberos while consuming, the value of "listeners" should contain PLAINTEXT. If this is a different error, please provide the complete traceback message.
06-30-2017
10:02 AM
How does the Ambari Kerberization Wizard generate sAMAccountNames?
Labels:
- Apache Ambari
06-30-2017
10:01 AM
I cannot see audit events for the hdfs user in the Ranger audit log. How can I see the audits for this user?
Labels:
- Apache Ranger
06-29-2017
03:13 PM
I am trying to run a word count topology to test Storm, but I am getting the following error in supervisor.log:
2017-06-27 08:29:55 b.s.config [INFO] SET worker-user 6f0c4ad9-a6c2-4c4c-9e48-4f9da485bd2a xbblwv5
2017-06-27 08:29:55 b.s.d.supervisor [INFO] Running as user:storm command:("/usr/hdp/2.2.6.0-2800/storm/bin/worker-launcher" "storm" "worker" "/disk/hadoop/storm/workers/6f0c4ad9-a6c2-4c4c-9e48-4f9da485bd2a" "/disk/hadoop/storm/workers/6f0c4ad9-a6c2-4c4c-9e48-4f9da485bd2a/storm-worker-script.sh")
2017-06-27 08:29:55 b.s.util [WARN] Worker Process 6f0c4ad9-a6c2-4c4c-9e48-4f9da485bd2a:Invalid permissions on worker-launcher binary.
2017-06-27 08:29:55 b.s.util [WARN] Worker Process 6f0c4ad9-a6c2-4c4c-9e48-4f9da485bd2a:The configured nodemanager group 501 is different from the group of the executable 0
2017-06-27 08:29:55 b.s.d.supervisor [INFO] 6f0c4ad9-a6c2-4c4c-9e48-4f9da485bd2a still hasn't started
2017-06-27 08:29:55 b.s.d.supervisor [INFO] Worker Process 6f0c4ad9-a6c2-4c4c-9e48-4f9da485bd2a exited with code: 22
2017-06-27 08:29:55 b.s.d.supervisor [INFO] 6f0c4ad9-a6c2-4c4c-9e48-4f9da485bd2a still hasn't started
2017-06-27 08:29:56 b.s.d.supervisor [INFO] 6f0c4ad9-a6c2-4c4c-9e48-4f9da485bd2a still hasn't started
2017-06-27 08:29:56 b.s.d.supervisor [INFO] 6f0c4ad9-a6c2-4c4c-9e48-4f9da485bd2a still hasn't started
2017-06-27 08:29:57 b.s.d.supervisor [INFO] 6f0c4ad9-a6c2-4c4c-9e48-4f9da485bd2a still hasn't started
2017-06-27 08:29:57 b.s.d.supervisor [INFO] 6f0c4ad9-a6c2-4c4c-9e48-4f9da485bd2a still hasn't started
2017-06-27 08:29:58 b.s.d.supervisor [INFO] 6f0c4ad9-a6c2-4c4c-9e48-4f9da485bd2a still hasn't started
2017-06-27 08:29:58 b.s.d.supervisor [INFO] 6f0c4ad9-a6c2-4c4c-9e48-4f9da485bd2a still hasn't started
Labels:
- Apache Storm
05-07-2017
07:24 AM
1 Kudo
@Connor O'Neal To log in to a ZooKeeper node you can use the command below:
# /usr/hdp/current/zookeeper-server/bin/zkCli.sh -server <hostname>:2181
Although the Kafka delete command may seem to delete the topic and return successfully, behind the scenes it only creates the "/admin/delete_topics/<topic>" node in ZooKeeper and thereby triggers deletion. We can verify this via zkCli.sh as below:
# cd /usr/hdp/current/zookeeper-server/bin/
# ./zkCli.sh -server <hostname>:2181
[zk: <broker-hostname>:2181 (connected)] ls /
[zk: <broker-hostname>:2181 (connected)] ls /admin
[zk: <broker-hostname>:2181 (connected)] ls /admin/delete_topics
As soon as the broker sees this update, the topic no longer accepts any new produce/consume requests and will eventually be deleted.
1. The topic command issues topic deletion by creating a new admin path - "/admin/delete_topics/<topic>".
2. The controller listens for child changes on /admin/delete_topics and starts topic deletion for the respective topics.
3. The controller has a background thread that handles topic deletion. The purpose of having this background thread is to accommodate the TTL feature, when we have it. This thread is signaled whenever deletion for a topic needs to be started or resumed. Currently, a topic's deletion can be started only by the onPartitionDeletion callback on the controller. In the future, it can be triggered based on the configured TTL for the topic.
A topic will be ineligible for deletion in the following scenarios -
a. broker hosting one of the replicas for that topic goes down
b. partition reassignment for partitions of that topic is in progress
c. preferred replica election for partitions of that topic is in progress (though this is not strictly required since it holds the controller lock for the entire duration from start to end)
4. Topic deletion is resumed when -
a. broker hosting one of the replicas for that topic is started
b. preferred replica election for partitions of that topic completes
c. partition reassignment for partitions of that topic completes
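One related caveat, not mentioned above but standard behavior in Kafka releases of this era: the controller only carries out the deletion if topic deletion is enabled on the brokers; otherwise the /admin/delete_topics znode is created but the topic is never actually removed.

```properties
# server.properties on each broker -- without this, the deletion marker
# is created in ZooKeeper but the topic is never deleted
# (the setting defaults to false in Kafka releases before 1.0).
delete.topic.enable=true
```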
04-23-2017
03:58 PM
Please refer to the link below for more information about retrieving a specific provenance event when NiFi is running in clustered mode: How to get information of a specific provenance event when nifi is running in standalone/clustered mode ?
04-23-2017
03:53 PM
6 Kudos
- Please follow the steps in the link below to get the list of provenance event ids: Nifi- how to get provenance event id in nifi?
- To get more information about a specific event:
For NiFi running in standalone mode: # curl -i -X GET http://<hostname>:9090/nifi-api/provenance-events/id
For NiFi running in clustered mode: # curl -i -X GET http://<hostname>:9090/nifi-api/provenance-events/id?clusterNodeId=<NODE UUID>
There may be multiple events with the same event id (one on each node), so you need to specify the node from which you want to return that specific event.