Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 929 | 06-04-2025 11:36 PM |
| | 1536 | 03-23-2025 05:23 AM |
| | 762 | 03-17-2025 10:18 AM |
| | 2751 | 03-05-2025 01:34 PM |
| | 1813 | 03-03-2025 01:09 PM |
02-08-2019
08:45 AM
@ram sriram If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer. Could you also tag me in your "Can you please send me a document for Ambari installation on Ubuntu" thread so that I can see the information you have already received?
02-08-2019
12:08 AM
@Daniel Nguyen The error being thrown, "The page then shows me the NiFi logo with the message: ProcessingException: java.io.IOException: HTTPS hostname wrong: should be <my_host_name.com>", corresponds to a hostname entry in your configuration that looks like a typo; the hostname you hit must match the one in the server certificate. Can you correct that entry and retry? HTH
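A quick way to confirm which hostname the NiFi certificate actually carries is sketched below; the keystore path, password, and HTTPS port are assumptions that you would need to adjust to match your nifi.properties.

```bash
# Inspect the certificate in the NiFi keystore and check that its CN/SAN
# matches nifi.web.https.host (normally the node's FQDN).
keytool -list -v \
  -keystore /usr/hdf/current/nifi/conf/keystore.jks \
  -storepass changeit | grep -E 'Owner:|DNSName'

# Or look at what the running instance presents on its HTTPS port:
openssl s_client -connect my_host_name.com:9443 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -E 'Subject:|DNS:'
```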
02-07-2019
11:59 PM
1 Kudo
@Howchoy You have two threads open for the same issue: https://community.hortonworks.com/questions/239915/untrusted-proxy-in-kerberized-nifi.html Can you confirm that the solution I gave earlier worked for the password generation? From the new thread it looks like you have successfully generated the password; if so, please accept the answer and close the old thread. If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
02-07-2019
11:26 PM
1 Kudo
@Howchoy Can you try something like the following? It worked when I set it up once I had tweaked it a bit; if I remember correctly it needed 13 characters.
export JAVA_HOME=/usr/jdk64/jdk1.8.0_112
./files/nifi-toolkit-*/bin/tls-toolkit.sh client -c $(hostname -f) -D "CN=hadoopadmin, OU=LAB.HORTONWORKS.NET" -p 10443 -t Welcome2018nifihdf3 -T pkcs12
Please let me know.
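If the toolkit run succeeds, you can sanity-check the keystore it produced. This is only a sketch: the keystore filename below is a placeholder, so check the toolkit's output directory for the actual .p12 file it wrote.

```bash
# List the entries in the generated PKCS12 keystore to confirm the DN.
# <generated-keystore>.p12 is a placeholder for the file the toolkit created.
keytool -list -v \
  -storetype pkcs12 \
  -keystore ./<generated-keystore>.p12 \
  -storepass Welcome2018nifihdf3
```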
02-07-2019
11:06 PM
@hoda moradi Just omit the SASL_SSL entry in server.properties:
listeners=PLAINTEXT://0.0.0.0:9092,SASL_PLAINTEXT://0.0.0.0:9093
advertised.listeners=PLAINTEXT://FQDN_Broker:9092,SASL_PLAINTEXT://FQDN_Broker:9093
HTH
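After restarting the brokers you can verify what is actually configured and bound; the config path and ports below are assumptions, adjust them to your install.

```bash
# Confirm the listener entries the broker is reading:
grep -E '^(listeners|advertised\.listeners)=' /etc/kafka/conf/server.properties

# Confirm the broker process is listening on both ports:
netstat -tlnp | grep -E ':9092|:9093'
```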
02-07-2019
11:02 PM
2 Kudos
@Pavel Orekhov You can change the log level while logged in as hdfs with the steps below; you don't need to restart the NameNode.

Get the current log level:
$ hadoop daemonlog -getlevel {namenode_host}:50070 BlockStateChange

Output:
Connecting to http://{namenode_host}:50070/logLevel?log=BlockStateChange
Submitted Log Name: BlockStateChange
Log Class: org.apache.commons.logging.impl.Log4JLogger
Effective Level: INFO

Change to DEBUG:
$ hadoop daemonlog -setlevel {namenode_host}:50070 BlockStateChange DEBUG

Output:
Connecting to http://{namenode_host}:50070/logLevel?log=BlockStateChange&level=DEBUG
Submitted Log Name: BlockStateChange
Log Class: org.apache.commons.logging.impl.Log4JLogger
Submitted Level: DEBUG
Setting Level to DEBUG ...
Effective Level: DEBUG

Validate DEBUG mode:
$ hadoop daemonlog -getlevel {namenode_host}:50070 BlockStateChange

Output:
Connecting to http://{namenode_host}:50070/logLevel?log=BlockStateChange
Submitted Log Name: BlockStateChange
Log Class: org.apache.commons.logging.impl.Log4JLogger
Effective Level: DEBUG

You should notice that the logging level in namenode.log has been updated without restarting the service. After finishing your diagnostics you can reset the logging level back to INFO.

Reset to INFO:
$ hadoop daemonlog -setlevel {namenode_host}:50070 BlockStateChange INFO

Output:
Connecting to http://{namenode_host}:50070/logLevel?log=BlockStateChange&level=INFO
Submitted Log Name: BlockStateChange
Log Class: org.apache.commons.logging.impl.Log4JLogger
Submitted Level: INFO
Setting Level to INFO ...
Effective Level: INFO

Validate INFO:
$ hadoop daemonlog -getlevel {namenode_host}:50070 BlockStateChange

Output:
Connecting to http://{namenode_host}:50070/logLevel?log=BlockStateChange
Submitted Log Name: BlockStateChange
Log Class: org.apache.commons.logging.impl.Log4JLogger
Effective Level: INFO

There you go. If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
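For repeated use, the same sequence can be wrapped in a small script. This is a sketch only: the NameNode host is a placeholder, and it assumes the default HTTP port 50070 and that you run it as the hdfs user.

```bash
#!/usr/bin/env bash
# Toggle the BlockStateChange logger on the NameNode without a restart.
NN=namenode.example.com:50070      # placeholder host:port -- replace with yours
LOGGER=BlockStateChange

hadoop daemonlog -getlevel "$NN" "$LOGGER"           # show current level
hadoop daemonlog -setlevel "$NN" "$LOGGER" DEBUG     # switch to DEBUG
# ... reproduce the issue and collect namenode.log here ...
hadoop daemonlog -setlevel "$NN" "$LOGGER" INFO      # reset when done
```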
02-07-2019
10:29 PM
@hoda moradi Your kafka_jaas.conf has contradicting entries (four of them). Can you back up the current file and replace it with the configuration below on all brokers (if multi-node)? Below is a working SSL + Kerberos configuration.

#########################################################
# server.properties
#########################################################
listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093,SASL_SSL://0.0.0.0:9094
advertised.listeners=PLAINTEXT://FQDN_Broker:9092,SSL://FQDN_Broker:9093,SASL_SSL://FQDN_Broker:9094
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka

Client
#########################################################
# kafka_client_jaas.conf
#########################################################
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useTicketCache=true
    renewTicket=true
    serviceName="kafka";
};

Server
#########################################################
# kafka_jaas.conf
#########################################################
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka.service.keytab"
    useTicketCache=false
    serviceName="kafka"
    principal="kafka/_host@EXAMPLE.COM";
};

After these steps, restart the Kafka broker(s) and please revert.
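One thing that is easy to miss: the broker JVM has to be pointed at kafka_jaas.conf. On an HDP/Ambari cluster this is normally handled for you through kafka-env, but for a manual setup a sketch would look like this (the config paths are assumptions):

```bash
# Point the broker JVM at the JAAS file, then restart the broker.
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/conf/kafka_jaas.conf"
bin/kafka-server-start.sh config/server.properties
```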
02-07-2019
09:40 PM
@hoda moradi Okay, I am already seeing issues with your kafka_jaas.conf; there are too many entries. Can you go through your server.properties and share these entries: listeners, advertised.listeners, sasl.enabled.mechanisms, sasl.kerberos.service.name? Also, is this an HDP cluster (if so, which version) or a standalone Kafka cluster, and how many nodes?
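A quick way to pull exactly those entries out of server.properties; the path is an assumption, adjust it to your install.

```bash
grep -E '^(listeners|advertised\.listeners|sasl\.enabled\.mechanisms|sasl\.kerberos\.service\.name)=' \
  /etc/kafka/conf/server.properties
```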
02-07-2019
09:11 PM
@hoda moradi Have you secured your Kafka with SSL and Kerberos? Was it working before?
02-07-2019
08:27 PM
@hoda moradi Can you check these two properties in server.properties? Add the following lines to the server.properties file on the brokers:
listeners=PLAINTEXT://host.name:port
advertised.listeners=PLAINTEXT://host.name:port
where host.name is the IP address or hostname of the Kafka broker. Restart the Kafka brokers and test; a quick smoke test is sketched below.
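This is a sketch only; the broker hostname, port, and topic name are placeholders.

```bash
# Produce a few messages against the PLAINTEXT listener...
bin/kafka-console-producer.sh --broker-list broker01.example.com:9092 --topic smoke_test

# ...and read them back from another terminal:
bin/kafka-console-consumer.sh --bootstrap-server broker01.example.com:9092 \
  --topic smoke_test --from-beginning
```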