Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 609 | 06-04-2025 11:36 PM |
| | 1175 | 03-23-2025 05:23 AM |
| | 580 | 03-17-2025 10:18 AM |
| | 2185 | 03-05-2025 01:34 PM |
| | 1373 | 03-03-2025 01:09 PM |
07-14-2020
11:53 AM
@SKL Ambari explicitly configures a series of Kafka settings and creates a JAAS configuration file for the Kafka server. It is not necessary to modify these settings, but check the values below in server.properties.

listeners
The listeners the broker binds to:
    listeners=SASL_PLAINTEXT://kafka01.example.com:6667
An example with multiple listeners:
    listeners=PLAINTEXT://your_host:9092,TRACE://:9091,SASL_PLAINTEXT://0.0.0.0:9093

advertised.listeners
A list of listeners to publish to ZooKeeper for clients to use. If advertised.listeners is not set, the value of listeners will be used:
    advertised.listeners=SASL_PLAINTEXT://kafka01.example.com:6667

security.inter.broker.protocol
In a Kerberized cluster, brokers are required to communicate over SASL:
    security.inter.broker.protocol=SASL_PLAINTEXT

principal.to.local.class
Transforms Kerberos principals to their local Unix usernames:
    principal.to.local.class=kafka.security.auth.KerberosPrincipalToLocal

super.users
Specifies user accounts that acquire all cluster permissions; these super users have all the permissions that would otherwise need to be added through the kafka-acls.sh script:
    super.users=user:developer1;user:analyst1

JAAS Configuration File for the Kafka Server
Enabling Kerberos sets up a JAAS login configuration file that the Kafka broker uses to authenticate against Kerberos, usually at /usr/hdp/current/kafka-broker/config/kafka_server_jaas.conf:

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/home/ec2-user/kafka.service.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="kafka"
    principal="kafka/<public_DNS>@EXAMPLE.COM";
};
Client { // used for the ZooKeeper connection
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/home/ec2-user/kafka.service.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="zookeeper"
    principal="kafka/<public_DNS>@EXAMPLE.COM";
};

Setting for the Kafka Producer
Ambari usually sets the following key-value pair in the server.properties file; if it does not exist, please add it:
    security.protocol=SASL_PLAINTEXT

JAAS Configuration File for the Kafka Client
This file is used by any client (consumer or producer) that connects to a Kerberos-enabled Kafka cluster. The file is stored at /usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf.

Kafka client configuration with a keytab, for producers:
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/home/ec2-user/kafka.service.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="kafka"
    principal="kafka/<public_DNS>@EXAMPLE.COM";
};

Kafka client configuration without a keytab, for producers:
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useTicketCache=true
    renewTicket=true
    serviceName="kafka";
};

Kafka client configuration for consumers:
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useTicketCache=true
    renewTicket=true
    serviceName="kafka";
};

Check and set the Ranger policy permissions for Kafka, and ensure that the Kafka keytab is readable by the kafka service user. Hope that helps
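To verify the client side end to end, the client JAAS file can be passed to the console producer through KAFKA_OPTS. A minimal sketch, assuming the HDP paths above; the broker address and topic name are illustrative:

```shell
# Point the JVM at the client JAAS configuration (path from the post above)
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf"

# Obtain a Kerberos ticket first if the JAAS config uses useTicketCache=true
kinit your_user@EXAMPLE.COM

# Produce over the SASL_PLAINTEXT listener; security.protocol must match
# the protocol of the broker listener you connect to
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
  --broker-list kafka01.example.com:6667 \
  --producer-property security.protocol=SASL_PLAINTEXT \
  --topic test
```

If authentication fails here with a SASL error, it usually points at the principal or keytab entries in the JAAS file rather than at server.properties.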
07-13-2020
12:31 AM
@Kaav, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question.
07-12-2020
04:58 AM
@Anrygzhang After the merger, the licensing models changed. The last free downloadable HDP version is HDP 3.1.4; unfortunately, for any version after that you would need to be a Cloudera customer. You can get HDP 3.1.4 from the HDP 3.1.4 repository link. Hope that helps
06-15-2020
11:44 PM
It depends what you are trying to do. I was after a test system to see what happened at expiry time of the Kerberos ticket. I could never get the values set by the web interface to work. In the end I logged into the Cloudera node, used kadmin.local, and set the maximum lifetime of a Kerberos ticket for the appropriate principal.
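For reference, the kadmin.local approach described above looks roughly like this (the principal names and the 10-minute lifetime are illustrative; note that the effective lifetime is also capped by max_life in the KDC's kdc.conf):

```shell
# On the KDC host, shorten the ticket lifetime for a test principal
kadmin.local -q "modprinc -maxlife 10minutes testuser@EXAMPLE.COM"

# The TGT lifetime is additionally capped by the krbtgt principal
kadmin.local -q "modprinc -maxlife 10minutes krbtgt/EXAMPLE.COM@EXAMPLE.COM"

# Verify with a fresh ticket: klist shows the new expiry time
kinit testuser
klist
```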
06-04-2020
07:16 AM
Hello, a new thread has been created and you have been tagged in it, with the heading "Zeppelin UI returns 503 error everytime".
05-31-2020
11:45 PM
Hi Shelton, please write me the command that recovers snapshot data while retaining its ownership, timestamp, permissions, and ACLs. Looking forward to hearing from you. Thanks
05-31-2020
10:28 PM
Hi @Shelton, thanks for your response. Yes, I have tried re-generating the keytab files, but no luck. The above two servers are master nodes (ZooKeeper, JournalNodes, and other master services are running). Please let me know if you need any other details. Thanks, Vinod
05-31-2020
09:18 AM
1 Kudo
Hi @Shelton, did you get a chance to look into this issue? I need help with it. Thanks
05-14-2020
03:06 PM
1 Kudo
@ansharma1 You can run the following query in the Ambari DB:

    SELECT view_instance_id, resource_id, view_name, cluster_handle, cluster_type FROM viewinstance;

The query will show that the view causing the problem might not be associated with any cluster_handle (cluster_handle is basically the cluster_id, which you can see in the clusters table). If the cluster_handle for a view is not correctly updated, you might see this kind of message:

    org.apache.ambari.server.view.IllegalClusterException: Failed to get cluster information associated with this view instance

If you want the same old view to work again (instead of creating a new instance of that view), make sure the cluster_handle for that view instance is set correctly:
1. Take an Ambari DB dump (a fresh backup), as we are going to change the DB manually.
2. Stop ambari-server.
3. Run the following query in the Ambari DB. NOTE: this is just a dummy query; the values for cluster_handle and view_instance_id may vary.

    UPDATE viewinstance SET cluster_handle = 4 WHERE view_instance_id = 3;
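Assuming a PostgreSQL-backed Ambari (the default embedded database; the database name, user, and IDs below are illustrative and must be taken from your own viewinstance and clusters tables), the steps above sketch out as:

```shell
# 1. Back up the Ambari database before any manual change
pg_dump -U ambari ambari > ambari_backup_$(date +%F).sql

# 2. Stop the Ambari server
ambari-server stop

# 3. Point the view instance at the right cluster (dummy IDs)
psql -U ambari -d ambari \
  -c "UPDATE viewinstance SET cluster_handle = 4 WHERE view_instance_id = 3;"

# 4. Start Ambari again and re-test the view
ambari-server start
```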
05-12-2020
07:03 PM
Hello, I have recently encountered a similar problem. It happens when I use Hive to insert data into my table. My cluster is HDP 2.7.2, a newly built cluster. When I check my NameNode log I find this problem, but my active and standby NameNodes are normal, there is no problem at all:

2020-05-13 09:33:15,484 INFO ipc.Server (Server.java:run(2402)) - IPC Server handler 746 on 8020 caught an exception
java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461)
    at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2910)
    at org.apache.hadoop.ipc.Server.access$2100(Server.java:138)
    at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:1223)
    at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1295)
    at org.apache.hadoop.ipc.Server$Connection.sendResponse(Server.java:2266)
    at org.apache.hadoop.ipc.Server$Connection.access$400(Server.java:1375)
    at org.apache.hadoop.ipc.Server$Call.sendResponse(Server.java:734)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2391)
2020-05-13 09:33:15,484 WARN ipc.Server (Server.java:processResponse(1273)) - IPC Server handler 28 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.complete from 10.100.1.9:48350 Call#3590 Retry#0: output error
2020-05-13 09:33:15,485 INFO ipc.Server (Server.java:run(2402)) - IPC Server handler 28 on 8020 caught an exception
java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461)
    at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2910)
    at org.apache.hadoop.ipc.Server.access$2100(Server.java:138)
    at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:1223)
    at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1295)
    at org.apache.hadoop.ipc.Server$Connection.sendResponse(Server.java:2266)
    at org.apache.hadoop.ipc.Server$Connection.access$400(Server.java:1375)
    at org.apache.hadoop.ipc.Server$Call.sendResponse(Server.java:734)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2391)
2020-05-13 09:33:15,484 WARN ipc.Server (Server.java:processResponse(1273)) - IPC Server handler 219 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.100.1.12:52988 Call#3987 Retry#0: output error
2020-05-13 09:33:15,484 WARN ipc.Server (Server.java:processResponse(1273)) - IPC Server handler 176 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.100.1.9:54838 Call#71 Retry#0: output error
2020-05-13 09:33:15,485 INFO ipc.Server (Server.java:run(2402)) - IPC Server handler 176 on 8020 caught an exception
java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461)
    at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2910)
    at org.apache.hadoop.ipc.Server.access$2100(Server.java:138)
    at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:1223)
    at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1295)
    at org.apache.hadoop.ipc.Server$Connection.sendResponse(Server.java:2266)
    at org.apache.hadoop.ipc.Server$Connection.access$400(Server.java:1375)
    at org.apache.hadoop.ipc.Server$Call.sendResponse(Server.java:734)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2391)
2020-05-13 09:33:15,485 WARN ipc.Server (Server.java:processResponse(1273)) - IPC Server handler 500 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.complete from 10.100.1.12:52988 Call#3988 Retry#0: output error
2020-05-13 09:33:15,485 INFO ipc.Server (Server.java:run(2402)) - IPC Server handler 219 on 8020 caught an exception