Member since
01-19-2017
3676
Posts
632
Kudos Received
372
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 497 | 06-04-2025 11:36 PM |
| | 1038 | 03-23-2025 05:23 AM |
| | 544 | 03-17-2025 10:18 AM |
| | 2040 | 03-05-2025 01:34 PM |
| | 1268 | 03-03-2025 01:09 PM |
07-14-2020
11:53 AM
@SKL Ambari explicitly configures a series of Kafka settings and creates a JAAS configuration file for the Kafka server. It is not necessary to modify these settings, but check the following values in server.properties:

listeners
A comma-separated list of the endpoints the broker binds to.
listeners=SASL_PLAINTEXT://kafka01.example.com:6667
listeners=PLAINTEXT://your_host:9092,TRACE://:9091,SASL_PLAINTEXT://0.0.0.0:9093

advertised.listeners
A list of listeners to publish to ZooKeeper for clients to use. If advertised.listeners is not set, the value of listeners is used.
advertised.listeners=SASL_PLAINTEXT://kafka01.example.com:6667

security.inter.broker.protocol
In a Kerberized cluster, brokers are required to communicate over SASL.
security.inter.broker.protocol=SASL_PLAINTEXT

principal.to.local.class
Transforms Kerberos principals into their local Unix usernames.
principal.to.local.class=kafka.security.auth.KerberosPrincipalToLocal

super.users
Specifies user accounts that acquire all cluster permissions; these super users have all the permissions that would otherwise need to be granted through the kafka-acls.sh script.
super.users=user:developer1;user:analyst1

JAAS Configuration File for the Kafka Server
Enabling Kerberos sets up a JAAS login configuration file that the Kafka broker uses to authenticate against Kerberos, usually at /usr/hdp/current/kafka-broker/config/kafka_server_jaas.conf:

KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/home/ec2-user/kafka.service.keytab"
storeKey=true
useTicketCache=false
serviceName="kafka"
principal="kafka/<public_DNS>@EXAMPLE.COM";
};
Client { // used for zookeeper connection
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/home/ec2-user/kafka.service.keytab"
storeKey=true
useTicketCache=false
serviceName="zookeeper"
principal="kafka/<public_DNS>@EXAMPLE.COM";
};

Settings for the Kafka Producer
Ambari usually sets the following key-value pair in the producer.properties file; if it is missing, please add it:
security.protocol=SASL_PLAINTEXT

JAAS Configuration File for the Kafka Client
This file is used by any client (consumer or producer) that connects to a Kerberos-enabled Kafka cluster. The file is stored at /usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf.

Kafka client configuration with keytab, for producers:

KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/home/ec2-user/kafka.service.keytab"
storeKey=true
useTicketCache=false
serviceName="kafka"
principal="kafka/<public DNS>@EXAMPLE.COM";
};

Kafka client configuration without keytab, for producers:

KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=true
renewTicket=true
serviceName="kafka";
};

Kafka client configuration for consumers:

KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=true
renewTicket=true
serviceName="kafka";
};

Also check and set the Ranger policy permissions for Kafka, and ensure that the Kafka keytab is readable by the kafka user. Hope that helps
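With the client JAAS file in place, a producer is pointed at it through the JVM system property java.security.auth.login.config. A minimal sketch, assuming the HDP paths above; the broker host kafka01.example.com:6667 and topic name are illustrative, so adjust them for your cluster:

```shell
# Point the Kafka CLI tools at the client JAAS configuration
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf"

# Obtain a Kerberos ticket first if your KafkaClient section uses the ticket cache
# kinit -kt /home/ec2-user/kafka.service.keytab kafka/<public_DNS>@EXAMPLE.COM

# Produce to a test topic over SASL_PLAINTEXT on the secured listener port
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
  --broker-list kafka01.example.com:6667 \
  --topic test \
  --producer-property security.protocol=SASL_PLAINTEXT
```

If the JAAS file or ticket is missing, the producer typically fails with a javax.security.auth.login.LoginException, which is a quick way to confirm whether the client side of the Kerberos setup is the problem.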
07-12-2020
04:58 AM
@Anrygzhang After the merger, the licensing models changed. The last free HDP version available for download is HDP 3.1.4; for any version after that you would, unfortunately, need to be a Cloudera customer. Get the HDP 3.1.4 link here: HDP 3.1.4 repository. Hope that helps
07-11-2020
03:52 AM
@SKL Please have a look at this response by Vipin Rathor: Ranger Policy download with HTTP response 401. Hope that helps
07-10-2020
02:56 AM
@ARVINDR The message "safemode get | grep 'Safe mode is OFF' returned 1" means the NameNode is either not started at all or still in safe mode. Could you do the following while logged on as the hdfs user:

$ hdfs dfsadmin -safemode get

If you see something like the below:

safemode: Call From inqchdpmn1.XXX.com/10.10.31.71 to inqchdpmn1.XXX.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

then the NameNode is not running. In that case, check and upload the hadoop-hdfs-secondarynamenode-xx and hadoop-hdfs-datanode-xxx logs in /var/log/hadoop/hdfs/.

Solution 2
Try starting the HDP NameNode service manually.

Share your findings
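The "returned 1" in that Ambari check is simply grep's exit code: grep exits 1 when the pattern is absent, i.e. when the NameNode did not report "Safe mode is OFF". A small sketch reproducing that behavior with simulated dfsadmin output:

```shell
# Simulated outputs of `hdfs dfsadmin -safemode get`
status_off="Safe mode is OFF"
status_on="Safe mode is ON"

# Healthy case: pattern found, grep exits 0, the Ambari check passes
echo "$status_off" | grep -q 'Safe mode is OFF'
echo "exit=$?"

# Failure case: pattern absent, grep exits 1 — the "returned 1" in the error
echo "$status_on" | grep -q 'Safe mode is OFF'
echo "exit=$?"
```

The same exit code of 1 also appears when the NameNode is down entirely, since the connection-refused message does not contain the pattern either, which is why both causes have to be checked.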
07-08-2020
03:04 PM
@chaithanyaam Yes, you can go into production with Ambari 2.7.4 and HDP 3.1.4, but you will be liable for whatever goes wrong on your production cluster, as Cloudera engineers won't be there to rescue you!! Secondly, you won't be able to upgrade to newer versions, as the repositories for HDP 3.1.5 and higher require authentication, which is synonymous with buying Cloudera support; please check the support options. Even if you have strong technical Hortonworks experts, ultimately you will have to move to a paid subscription. For example, HDP 3.1.5 brought in a lot of improvements, especially in Hive, HBase, and HDFS; see the fixed issues. Having said that, I would also encourage you to look at the Cloudera Platform pricing. Without Cloudera support, if anything goes wrong in your cluster you won't get support, patches, or bug fixes from Cloudera unless you buy a subscription, and I don't think you would want your production cluster running in this mode. Taking the above into account, it would be wiser to get a subscription for ONLY your production cluster. CDP has for now been released for AWS and Azure; I am not sure about the on-premises offering or a Sandbox. Hope that helps
07-06-2020
08:27 AM
@chaithanyaam AFAIK there is no free version of CDP. You must be a CDP Data Center customer to access these downloads. The current CDP release runs on AWS and Azure; I am not sure when the on-prem offering will be released. Ambari was dropped in favor of Cloudera Manager in CDP, but if you really want to continue practicing or working with a Cloudera product, the best option is HDP 3.1.4, which is the last free offering; to download HDP 3.1.5 you will need to be a Cloudera customer. The major difference between HDP 3.1.4 and 3.1.5 is the Hive Warehouse Connector (HWC): Spark and Hive share a catalog in the Hive metastore (HMS) instead of using separate catalogs, which wasn't the case in earlier versions. The shared catalog simplifies the use of HWC for reading Hive external tables from Spark; you no longer need to define the table redundantly in the Spark catalog. Also, HDP 3.1.5 introduces HMS table transformations: HMS detects the type of client interacting with it, for example Hive or Spark, and compares the capabilities of the client with the table requirement. Hope that helps
05-29-2020
09:31 AM
@kvinod Sorry I have been away for quite a while. What's the relation between hostname001 and hostname003 or simply what services are running on these 2 servers? Have you tried regenerating the keytabs?
04-07-2020
01:42 PM
@SHADA Please take note that this thread was closed. Can you open a new thread and attach the errors, logs, or a screenshot of the error you are encountering, and remember to be precise about the version of the sandbox, whether it is VMware, Docker, or VirtualBox. Tag me in the new thread
03-26-2020
04:01 AM
@ARVINDR There is something you have to investigate. The correct output should look like the below; I remember answering such a question, I just need to locate the solution.

[zk: localhost:2181(CONNECTED) 0] ls /hiveserver2
[serverUri=hdp2.test.com:10000;version=1.2.1000.2.6.2.0-205;sequence=0000000061]
[zk: localhost:2181(CONNECTED) 1]

Can you get the ACL for that znode?

[zk: localhost:2181(CONNECTED) 0] getAcl /hiveserver2
'world,'anyone
: cdrwa
[zk: localhost:2181(CONNECTED) 1]

The above is for a non-kerberized cluster. Please revert
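The two checks above can be run non-interactively as well. A sketch assuming the usual HDP zkCli.sh location and a ZooKeeper on localhost:2181; adjust the server address for your cluster:

```shell
# Inspect the HiveServer2 znode and its ACL in one non-interactive session.
# The heredoc feeds the same commands shown above into the ZooKeeper CLI.
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181 <<'EOF'
ls /hiveserver2
getAcl /hiveserver2
EOF
```

On a Kerberized cluster you would expect a sasl ACL entry restricted to the hive principal rather than the open 'world,'anyone : cdrwa shown above, so comparing the two outputs is a quick sanity check.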