Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1581 | 06-04-2025 11:36 PM |
| | 2057 | 03-23-2025 05:23 AM |
| | 968 | 03-17-2025 10:18 AM |
| | 3691 | 03-05-2025 01:34 PM |
| | 2546 | 03-03-2025 01:09 PM |
04-23-2019
04:12 AM
@Shilpa Gokul Please have a look at this HCC document by Neeraj Sabharwal on how to set up Kafka/Ranger without Kerberos. It should still be valid with a few tweaks.
04-22-2019
03:55 PM
@Shilpa Gokul Is the Ranger plugin enabled for Kafka?
04-21-2019
09:44 AM
@Shilpa Gokul There is some information you should have provided to help members resolve your problem. Is your cluster Kerberized? What command are you executing? Can you share your Ranger/Kafka policy configuration?
04-18-2019
10:44 AM
1 Kudo
@Afroz Baig Firstly, you really don't need to modify the krb5.conf manually, as it MUST be identical on all the cluster nodes. What you should do is run scp from the Ambari server where you configured the passwordless connection. Assuming your Ambari server's hosts file has entries for all the cluster nodes and edgenode1 is your target:
# scp /etc/krb5.conf root@edgenode1:/etc/
This will copy and overwrite the incorrect krb5.conf on the edge node. Assuming you have a user named analyst01 on the edge node who intends to run a job after the update, and that his keytab is in his home directory, do the following:
# su - analyst01
To determine whether he has a valid ticket (in the example below he didn't have one):
$ klist
klist: No credentials cache found (filename: /tmp/krb5cc_0)
Grab a ticket:
$ kinit -kt /home/analyst01/analyst01.keytab
Now he should have a valid ticket and klist should confirm that:
$ klist
Ticket cache: FILE:/tmp/krb5cc_1013
Default principal: analyst01-xxx@{REALM}
Valid starting       Expires              Service principal
04/13/2019 23:25:32  04/14/2019 23:25:32  krbtgt/_host@{REALM}
04/13/2019 23:25:32  04/14/2019 23:25:32  HTTP/_host@{REALM}
You don't need to restart any services on the edge node!
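As a quick sanity check (edgenode1 is the same example hostname as above), you can confirm the file really is identical on the Ambari server and the edge node by comparing checksums before asking the user to kinit again:
# md5sum /etc/krb5.conf
# ssh root@edgenode1 md5sum /etc/krb5.conf
The two hashes should match.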
04-17-2019
10:40 AM
@Dennis Suhari The command you are running is wrong; it's the wrong variation, you forgot the dash. Note the space before and after the dash! As the root user run:
# su - yarn
That should work. HTH
04-16-2019
01:54 PM
@Naveenraj Devadoss Did you remember this part? "You'll also need to ensure that the machine where NiFi is running has network access to all of the machines in your Hadoop cluster." Please revert
04-16-2019
06:37 AM
@Naveenraj Devadoss You need to copy the core-site.xml and hdfs-site.xml from your HDP cluster to the machine where NiFi is running. Then configure PutHDFS so that the configuration resources are "/path/to/core-site.xml,/path/to/hdfs-site.xml". That is all that is required from the NiFi perspective, those files contain all of the information it needs to connect to the Hadoop cluster. You'll also need to ensure that the machine where NiFi is running has network access to all of the machines in your Hadoop cluster. You can look through those config files and find any hostnames and IP addresses and make sure they can be accessed from the machine where NiFi is running. HTH
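If you want a quick way to verify that network access, one option (the path, hostname and port below are only examples; check fs.defaultFS in your own core-site.xml for the real NameNode address and port) is to run something like this from the NiFi machine:
$ grep -A1 fs.defaultFS /path/to/core-site.xml
$ nc -zv namenode.example.com 8020
If nc cannot reach the NameNode RPC port from the NiFi host, PutHDFS will fail no matter how the configuration resources are set.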
04-15-2019
04:47 AM
@Sandeep R It seems to be an SSL issue, can you validate your LDAP setup? Port 636 is LDAPS and 389 is plain LDAP. To enable LDAPS, you must install a certificate that meets the following requirements:
- The LDAPS certificate is located in the Local Computer's Personal certificate store (programmatically known as the computer's MY certificate store).
- A private key that matches the certificate is present in the Local Computer's store and is correctly associated with the certificate. The private key must not have strong private key protection enabled.
- The Enhanced Key Usage extension includes the Server Authentication (1.3.6.1.5.5.7.3.1) object identifier (also known as OID).
- The Active Directory fully qualified domain name of the domain controller (for example, DC01.DOMAIN.COM) must appear in one of the following places: the Common Name (CN) in the Subject field, or a DNS entry in the Subject Alternative Name extension.
- The certificate was issued by a CA that the domain controller and the LDAPS clients trust. Trust is established by configuring the clients and the server to trust the root CA to which the issuing CA chains.
- You must use the Schannel cryptographic service provider (CSP) to generate the key.
Hope that helps
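A quick way to confirm whether the LDAPS side is healthy at all (the hostname below is only an example, substitute your own domain controller) is to check the certificate handshake on 636 and compare it with plain LDAP on 389:
$ openssl s_client -connect DC01.DOMAIN.COM:636 -showcerts </dev/null
$ ldapsearch -x -H ldap://DC01.DOMAIN.COM:389 -b "" -s base
If the openssl handshake fails or shows an untrusted chain, fix the certificate first.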
04-13-2019
05:17 PM
@Ricardo Ramos I have documented a walkthrough which was successful in reproducing your issue. Please go through it and let me know:
How to start a HDP 3.0 Sandbox_Part1.pdf
How to start a HDP 3.0 Sandbox_Part2.pdf
04-12-2019
03:42 PM
@Vasanth Reddy Spark SQL data sources can read data from other databases using JDBC. JDBC connection properties such as user and password are normally provided as connection properties for logging into the data source. In the example below I am using MySQL, so the MySQL JDBC driver needs to be in place (for Postgres you would use the Postgres driver and URL instead).
Sample Program
import org.apache.spark.sql.SQLContext
val sqlcontext = new org.apache.spark.sql.SQLContext(sc)
val dataframe_mysql = sqlcontext.read.format("jdbc")
  .option("url", "jdbc:mysql://mbarara.com:3306/test")
  .option("driver", "com.mysql.jdbc.Driver")
  .option("dbtable", "emp")
  .option("user", "root")
  .option("password", "welcome1")
  .load()
dataframe_mysql.show()
Hope that helps
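To be explicit about getting the driver in place (jar names and paths below are examples only, use whichever driver version you actually have), you can pass it on the classpath when launching spark-shell:
$ spark-shell --jars /path/to/mysql-connector-java.jar
or, for Postgres:
$ spark-shell --jars /path/to/postgresql-jdbc.jar
Without the driver jar on the classpath, the load will fail with a ClassNotFoundException for the driver class.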