Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 992 | 06-04-2025 11:36 PM |
|  | 1564 | 03-23-2025 05:23 AM |
|  | 780 | 03-17-2025 10:18 AM |
|  | 2812 | 03-05-2025 01:34 PM |
|  | 1856 | 03-03-2025 01:09 PM |
07-14-2020
09:34 PM
@Sagar1244 Have you copied the Oracle JDBC jar file to /usr/hdp/current/zeppelin-server/interpreter/jdbc/? Then configure the Zeppelin JDBC interpreter as shown below, restart the interpreter, and retest.
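For reference, the Oracle settings in the Zeppelin JDBC interpreter typically look like this minimal sketch (the host, port, service name, and credentials are placeholders to adjust for your database):

default.driver=oracle.jdbc.driver.OracleDriver
default.url=jdbc:oracle:thin:@//dbhost.example.com:1521/ORCLPDB1
default.user=zeppelin_user
default.password=********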
07-14-2020
03:35 PM
@saur Previously I had written a comprehensive note on this issue, but unfortunately I can't locate it. I have just completed fresh documentation, and since attachments were disabled, please download the document from my Adobe share: https://documentcloud.adobe.com/link/track?uri=urn:aaid:scds:US:ee72188c-cfb5-48f8-b1cc-e5eae799910b I am sure it will help you; keep me posted. Happy hadooping
07-14-2020
11:53 AM
@SKL Ambari explicitly configures a series of Kafka settings and creates a JAAS configuration file for the Kafka server. It is not necessary to modify these settings, but check the following values in server.properties.

listeners — the listeners the broker binds to:
listeners=SASL_PLAINTEXT://kafka01.example.com:6667
listeners=PLAINTEXT://your_host:9092,TRACE://:9091,SASL_PLAINTEXT://0.0.0.0:9093

advertised.listeners — a list of listeners to publish to ZooKeeper for clients to use. If advertised.listeners is not set, the value for listeners will be used:
advertised.listeners=SASL_PLAINTEXT://kafka01.example.com:6667

security.inter.broker.protocol — in a Kerberized cluster, brokers are required to communicate over SASL:
security.inter.broker.protocol=SASL_PLAINTEXT

principal.to.local.class — transforms the Kerberos principals to their local Unix usernames:
principal.to.local.class=kafka.security.auth.KerberosPrincipalToLocal

super.users — specifies user accounts that will acquire all cluster permissions; these super users have all permissions that would otherwise need to be added through the kafka-acls.sh script:
super.users=user:developer1;user:analyst1

JAAS Configuration File for the Kafka Server
Enabling Kerberos sets up a JAAS login configuration file for the Kafka server to authenticate the Kafka broker against Kerberos, usually in /usr/hdp/current/kafka-broker/config/kafka_server_jaas.conf:

KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/home/ec2-user/kafka.service.keytab"
storeKey=true
useTicketCache=false
serviceName="kafka"
principal="kafka/<public_DNS@EXAMPLE.COM";
};
Client { // used for the ZooKeeper connection
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/home/ec2-user/kafka.service.keytab"
storeKey=true
useTicketCache=false
serviceName="zookeeper"
principal="kafka/<public_DNS@EXAMPLE.COM";
};

Setting for the Kafka Producer
Ambari usually sets the following key-value pair in the server.properties file; if it is missing, please add it:
security.protocol=SASL_PLAINTEXT

JAAS Configuration File for the Kafka Client
This file will be used by any client (consumer or producer) that connects to a Kerberos-enabled Kafka cluster. The file is stored at /usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf

Kafka client configuration with keytab, for producers:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/home/ec2-user/kafka.service.keytab"
storeKey=true
useTicketCache=false
serviceName="kafka"
principal=""kafka/<public DNS>@EXAMPLE.COM";
};

Kafka client configuration without keytab, for producers:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=true
renewTicket=true
serviceName="kafka";
};

Kafka client configuration for consumers:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=true
renewTicket=true
serviceName="kafka";
};

Check and set the Ranger policy permissions for Kafka, and ensure that the Kafka keytab is readable by the kafka user.
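As a quick sanity check, a minimal sketch using the HDP Kafka console tools (the topic name, broker host, and port are placeholders to adjust for your cluster):

# Point the JVM at the client JAAS file before running any Kafka CLI tool
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf"

# With the ticket-cache (no-keytab) client config, obtain a ticket first
kinit user@EXAMPLE.COM

# Produce a test message over the Kerberized listener (test-topic is a placeholder)
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
  --broker-list kafka01.example.com:6667 --topic test-topic \
  --producer-property security.protocol=SASL_PLAINTEXT

# Consume it back to confirm authentication and authorization work end to end
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
  --bootstrap-server kafka01.example.com:6667 --topic test-topic \
  --consumer-property security.protocol=SASL_PLAINTEXT --from-beginning

Hope that helps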
07-12-2020
04:58 AM
@Anrygzhang After the merger, the licensing models changed. The last free HDP version that is downloadable is HDP 3.1.4; for any version after that, unfortunately, you would need to be a Cloudera customer. Get the HDP 3.1.4 link here: HDP 3.1.4 repository. Hope that helps
07-11-2020
03:52 AM
@SKL Please have a look at this response by Vipin Rathor: Ranger Policy download with HTTP response 401. Hope that helps
07-10-2020
02:56 AM
@ARVINDR The message "safemode get | grep 'Safe mode is OFF' returned 1" means the NameNode is either not started at all or still in safe mode. Could you do the following while logged on as the hdfs user:

$ hdfs dfsadmin -safemode get

If you see something like the below:

safemode: Call From inqchdpmn1.XXX.com/10.10.31.71 to inqchdpmn1.XXX.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

then the NameNode is not running; check and upload the hadoop-hdfs-secondarynamenode-xxx and hadoop-hdfs-datanode-xxx logs in /var/log/hadoop/hdfs/

Solution 2
Try starting the HDP NameNode service manually.
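A minimal sketch of those checks, assuming the usual HDP layout (verify the daemon-script path on your cluster):

# Check safe mode status as the hdfs user; "Safe mode is OFF" means HDFS is writable
su - hdfs -c "hdfs dfsadmin -safemode get"

# If the NameNode is up but stuck in safe mode, you can leave it manually
su - hdfs -c "hdfs dfsadmin -safemode leave"

# If the call is refused, inspect the most recent HDFS logs
ls -lt /var/log/hadoop/hdfs/ | head

# Start the NameNode manually (path per the HDP layout; verify on your cluster)
su - hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode"

Share your findings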
07-08-2020
03:04 PM
@chaithanyaam Yes, you can go into production with Ambari 2.7.4 and HDP 3.1.4, but you will be liable for whatever goes wrong on your production cluster, as Cloudera engineers won't be there to rescue you!! Secondly, you won't be able to upgrade to newer versions, as the HDP 3.1.5 and higher repositories require authentication, which is synonymous with buying Cloudera support; please check the support options. Even if you have strong technical Hortonworks experts, ultimately you will have to move to a paid subscription. For example, HDP 3.1.5 brought in a lot of improvements, especially in Hive, HBase, and HDFS; see the fixed issues. Having said that, I would encourage you also to see the Cloudera Platform pricing. Without Cloudera support, if anything goes wrong in your cluster, you won't get support or any patches or bug fixes from Cloudera unless you buy a subscription; I don't think you would want your production cluster running in this mode. Taking the above situation into account, it would be wiser to get a subscription for ONLY your production cluster. CDP has for now been released for AWS and Azure; I am not sure about the on-premise offering or a Sandbox. Hope that helps
07-06-2020
08:27 AM
@chaithanyaam AFAIK there is no free version of CDP. You must be a CDP Data Center customer to access these downloads; the current CDP release runs on AWS and Azure, and I am not sure when the on-prem offering will be released. Ambari was dropped in favor of Cloudera Manager in CDP, but if you really want to continue practicing or working with a Cloudera product, the best option is using HDP 3.1.4, which is the last free offering; to download HDP 3.1.5 you will need to be a Cloudera customer. The major difference between HDP 3.1.4 and 3.1.5 is the Hive Warehouse Connector (HWC): Spark and Hive share a catalog in the Hive metastore (HMS) instead of using separate catalogs, which wasn't the case with earlier versions. The shared catalog simplifies the use of HWC when reading Hive external tables from Spark; you no longer need to define the table redundantly in the Spark catalog. Also, HDP 3.1.5 introduces HMS table transformations: HMS detects the type of client interacting with it, for example Hive or Spark, and compares the capabilities of the client with the table requirements.
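For illustration, launching Spark with HWC on HDP typically looks like the sketch below; the HiveServer2 JDBC URL and metastore URI are placeholders, and the exact assembly jar name should be checked under /usr/hdp/current/hive_warehouse_connector/:

# Hypothetical spark-shell launch with the Hive Warehouse Connector
spark-shell \
  --jars /usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-*.jar \
  --conf spark.sql.hive.hiveserver2.jdbc.url="jdbc:hive2://hs2host.example.com:10000/" \
  --conf spark.datasource.hive.warehouse.metastoreUri="thrift://metastore.example.com:9083"
# Inside the shell, HiveWarehouseSession.session(spark).build() returns a session whose
# executeQuery()/table() read Hive tables without redefining them in the Spark catalog

Hope that helps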