Member since
06-27-2019
147
Posts
9
Kudos Received
11
Solutions
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 2461 | 01-31-2022 08:42 AM |
| | 628 | 11-24-2021 12:11 PM |
| | 1058 | 11-24-2021 12:05 PM |
| | 1989 | 10-08-2019 10:00 AM |
| | 2517 | 10-07-2019 12:08 PM |
10-18-2019
08:06 AM
Hi @Adarsh_ks Can you try adding serviceName="zookeeper" to the Client section?

Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
useTicketCache=false
keyTab="<keytab_path>"
serviceName="zookeeper"
principal="<principal>";
};

Let us know the results. Thanks, Manuel.
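As a side note, the client JVM usually has to be pointed at the file containing this Client section via the java.security.auth.login.config system property; a minimal sketch, assuming the JAAS file was saved under a placeholder path /path/to/client_jaas.conf:

```shell
# Export before starting the Kafka/ZooKeeper client tools so the JVM
# picks up the JAAS Client section above.
# /path/to/client_jaas.conf is a placeholder path; substitute your own file.
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/client_jaas.conf"
```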
10-15-2019
07:18 AM
Hi @Peruvian81 It's difficult to make suggestions without details about the cluster usage, but a good starting point is the article below, which covers Kafka best practices: https://community.cloudera.com/t5/Community-Articles/Kafka-Best-Practices/ta-p/249371 I hope that helps. Regards, Manuel.
10-08-2019
10:00 AM
@Peruvian81 You can try the flow below, which is just for testing purposes: a TailFile processor passes data through SplitText, the resulting messages are sent to PublishKafka_1_0 (use this processor for the test), and finally a consumer reads from the topic configured in PublishKafka_1_0 and stores the data in the file system with PutFile. In PutFile I have set Maximum File Count to 10 to avoid excessive space usage in the file system.
10-07-2019
12:08 PM
Hi @shashank_naresh Did you test connectivity to the sandbox from your host? If not, you can try the commands below:

ping sandbox-hdp.hortonworks.com

You can also test whether the port is reachable:

telnet sandbox-hdp.hortonworks.com 6667

Expected output from telnet:

PC:~ mrodriguez$ telnet 172.25.40.164 6667
Trying 172.25.40.164...
Connected to c489-xx.labs.xxxx.xxxx.com.
Escape character is '^]'.

Cheers.
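If telnet isn't installed, bash's built-in /dev/tcp pseudo-device can do the same reachability check; a small sketch using the sandbox host and port from this thread (substitute your own values):

```shell
# Check whether a TCP port is reachable without telnet, using bash's
# /dev/tcp redirection. Host and port are the sandbox defaults above.
host="sandbox-hdp.hortonworks.com"
port=6667
if timeout 5 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
  echo "${host}:${port} is reachable"
else
  echo "${host}:${port} is NOT reachable"
fi
```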
10-07-2019
11:59 AM
@Peruvian81 You can start by testing a flow like: tailFile --> PublishKafka_1_0 (or _2_0, depending on your Kafka version). In PublishKafka you can use a configuration like the example below. Ensure that the principal has Ranger authorization to publish data to the topic. In Kafka Brokers, provide the brokers' FQDNs; do not use localhost or IPs.
10-03-2019
12:34 PM
Hi @Seaport If you're using Ambari, Enable Atlas Hook should take care of that. In addition, follow the steps below:

1. cp /usr/hdp/current/atlas-server/conf/atlas-application.properties /etc/hbase/conf
2. Get a valid ticket from the atlas user.
3. export HBASE_CONF_DIR=/usr/hdp/current/hbase-client/conf
4. In Ambari > HBase > Advanced hbase-site add: hbase.coprocessor.master.classes=org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor,org.apache.atlas.hbase.hook.HBaseAtlasCoprocessor (restart required).
5. Finally run: /usr/hdp/current/atlas-server/hook-bin/import-hbase.sh
10-03-2019
12:24 PM
Hi @Peruvian81 Kafka can be secured in multiple ways:

| Protocol | SSL | Kerberos |
| --- | --- | --- |
| PLAINTEXT | No | No |
| SSL | Yes | No |
| SASL_PLAINTEXT | No | Yes |
| SASL_SSL | Yes | Yes |

If you are already using Kerberos, you can check the document below: https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/authentication-with-kerberos/content/kerberos_kafka_configuring_kafka_for_kerberos_using_ambari.html

For your clients, you can use the command lines below, depending on the Kafka version.

Consumer example:

bin/kafka-console-consumer.sh --bootstrap-server <kafkaHost>:<kafkaPort> --topic <topicName> --security-protocol SASL_PLAINTEXT

Consumer example for newer versions:

bin/kafka-console-consumer.sh --topic <topicName> --bootstrap-server <brokerHost>:<brokerPort> --consumer-property security.protocol=SASL_PLAINTEXT

* Make sure to get a valid Kerberos ticket before running these commands (kinit -kt keytab principal)
** Ensure the Kerberos principal has permissions to publish/consume data to/from the selected topic
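Instead of passing the security settings on the command line, they can also be kept in a properties file; a minimal sketch, where the file name client.properties and the service name kafka are assumptions to adjust for your cluster:

```properties
# client.properties (hypothetical file name)
security.protocol=SASL_PLAINTEXT
# Kerberos service principal name of the brokers; "kafka" is an assumption.
sasl.kerberos.service.name=kafka
```

It can then be referenced with, for example: bin/kafka-console-consumer.sh --topic <topicName> --bootstrap-server <brokerHost>:<brokerPort> --consumer.config client.properties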
09-26-2019
07:44 AM
@Peruvian81 You can try the command below for the consumer:

./kafka-console-consumer.sh --bootstrap-server w01.s03.hortonweb.com:6667 --topic PruebaNYC --consumer-property security.protocol=SASL_PLAINTEXT --from-beginning

If that solves your issue, kindly mark this thread as solved. Thanks.
09-26-2019
06:10 AM
@Peruvian81 According to the output, the broker is listening on SASL_PLAINTEXT (Kerberos) on host w01.s03.hortonweb.com. We have to specify the connection type our clients use to reach Kafka; by default the connection is PLAINTEXT. Depending on the Kafka version in use, you should try the following:

1. Get a valid Kerberos ticket: kinit -kt <keytab> <principal>
2. Run the producer command for your version.

For Kafka versions up to 1.0.0:

./kafka-console-producer.sh --broker-list w01.s03.hortonweb.com:6667 --topic PruebaKafka --security-protocol SASL_PLAINTEXT

For Kafka 2.0 onwards:

./kafka-console-producer.sh --broker-list w01.s03.hortonweb.com:6667 --topic PruebaKafka --producer-property security.protocol=SASL_PLAINTEXT

Test and let us know.
09-25-2019
11:04 AM
@Peruvian81 Are you using Kerberos? If yes, make sure you have a valid ticket in order to avoid the exception below:

2019-09-25 16:22:54,367 - WARN [main-SendThread(m01.s02.hortonweb.com:2181):ZooKeeperSaslClient$ClientCallbackHandler@496] - Could not login: the client is being asked for a password, but the Zookeeper client code does not currently support obtaining a password from the user. Make sure that the client is configured to use a ticket cache (using the JAAS configuration setting 'useTicketCache=true)' and restart the client. If you still get this message after that, the TGT in the ticket cache has expired and must be manually refreshed. To do so, first determine if you are using a password or a keytab. If the former, run kinit in a Unix shell in the environment of the user who is running this Zookeeper client using the command 'kinit <princ>' (where <princ> is the name of the client's Kerberos principal). If the latter, do 'kinit -k -t <keytab> <princ>' (where <princ> is the name of the Kerberos principal, and <keytab> is the location of the keytab file). After manually refreshing your cache, restart this client. If you continue to see this message after manually refreshing your cache, ensure that your KDC host's clock is in sync with this host's clock.

From the zkCli command line, please query the broker id: get /brokers/ids/<brokerID>

Example:

ZK_HOME/zookeeper-client/bin/zkCli.sh -server host:2181 get /brokers/ids/1001

If you don't know your current ids, you can list them with:

ZK_HOME/zookeeper-client/bin/zkCli.sh -server host:2181 ls /brokers/ids