Member since: 04-05-2016
Posts: 14
Kudos Received: 2
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
| 938 | 10-04-2016 03:28 AM
| 220 | 10-01-2016 02:55 AM
| 600 | 06-21-2016 05:32 PM
12-22-2016
03:07 AM
Thanks Eyad, that's what I was looking for. The document does say it needs to run in HTTPS mode for LDAP authentication.
12-22-2016
02:55 AM
1 Kudo
I have performed all the steps mentioned in the document, but it still doesn't prompt for a username and password. It looks like NiFi ignores LDAP authentication when running in HTTP mode. https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#lightweight-directory-access-protocol-ldap The document also says: "NiFi does not perform user authentication over HTTP. Using HTTP all users will be granted all roles." Does this mean I need to generate a certificate and run NiFi in HTTPS mode for LDAP authentication? If not, how do I enable LDAP authentication?
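For reference, a minimal sketch of the nifi.properties changes that put NiFi in HTTPS mode with an LDAP login provider. The hostname, port, keystore paths, and the provider name are assumptions; the ldap-provider itself must be defined in login-identity-providers.xml with your LDAP server details.

```
# nifi.properties (sketch only; host, port, and keystore paths are placeholders)
nifi.web.http.port=
nifi.web.https.host=nifi-host.example.com
nifi.web.https.port=9443
nifi.security.keystore=./conf/keystore.jks
nifi.security.keystoreType=JKS
nifi.security.truststore=./conf/truststore.jks
nifi.security.truststoreType=JKS
nifi.security.user.login.identity.provider=ldap-provider
```

Clearing nifi.web.http.port and populating the https and keystore properties is what takes NiFi off HTTP; the ldap-provider identifier must match the provider id configured in login-identity-providers.xml.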
10-14-2016
05:59 PM
@Mike Krauss Did you try running: spark-shell --master yarn --conf "spark.executor.extraClassPath=/usr/hdp/current/spark/lib/test.jar" --conf "spark.driver.extraClassPath=/usr/hdp/current/spark/lib/test.jar" ? If this works, you can add these properties from Ambari in Custom spark-defaults.
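If the shell test works, the equivalent Custom spark-defaults entries in Ambari would look roughly like this (the jar path /usr/hdp/current/spark/lib/test.jar is an assumption for illustration):

```
spark.executor.extraClassPath /usr/hdp/current/spark/lib/test.jar
spark.driver.extraClassPath /usr/hdp/current/spark/lib/test.jar
```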
10-04-2016
03:39 AM
@Keerthi Mantri After making the changes, you need to restart the MapReduce client from Ambari and recycle the Oozie server.
10-04-2016
03:36 AM
@Keerthi Mantri I had a similar issue, and it turned out to be a wrong classpath.
The path in the classpath is wrong, at least on a secured cluster: it is "/etc/hadoop/conf/secure" when it should be "/etc/hadoop/conf". The relevant property in mapred-site.xml:
<property>
  <name>mapreduce.application.classpath</name>
  <value>$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/</value>
</property>
/etc/hadoop/conf/ should be in the classpath, not /etc/hadoop/conf/secure.
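A quick hedged way to check which config directory ended up in the classpath value. The sketch below greps a self-contained sample file; on a real cluster node you would grep /etc/hadoop/conf/mapred-site.xml instead.

```shell
# Write a sample mapred-site.xml snippet so this check is self-contained;
# on a real node, grep /etc/hadoop/conf/mapred-site.xml directly.
cat > /tmp/mapred-site-sample.xml <<'EOF'
<property>
  <name>mapreduce.application.classpath</name>
  <value>$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:/etc/hadoop/conf/</value>
</property>
EOF
# Print the bad entry if present; otherwise report the classpath as OK.
grep -o '/etc/hadoop/conf/secure' /tmp/mapred-site-sample.xml || echo "classpath OK"
```

For the sample above this prints "classpath OK"; on a misconfigured node it would print the offending /etc/hadoop/conf/secure entry instead.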
10-04-2016
03:28 AM
@Keerthi Mantri What is the Ambari version?
10-01-2016
02:55 AM
Refer to the doc below for the Oozie HA setup, and verify that you have correct values for the following properties.
https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Ambari_Users_Guide/content/_adding_an_oozie_server_component.html
(1) oozie.zookeeper.connection.string = list of ZooKeeper hosts with ports, for example: node1:2181,node2:2181,node3:2181
(2) oozie.services.ext = org.apache.oozie.service.ZKLocksService,org.apache.oozie.service.ZKXLogStreamingService,org.apache.oozie.service.ZKJobsConcurrencyService
(3) oozie.base.url = http://<loadbalancer.hostname>:11000/oozie
(4) oozie.authentication.kerberos.principal = *
(5) In oozie-env, uncomment the OOZIE_BASE_URL property and change its value to point to the load balancer, for example: export OOZIE_BASE_URL="http://<loadbalancer.hostname>:11000/oozie"
For a secured cluster:
(1) Manually create a new AD account for HTTP/<loadbalancer_hostname>@<realm> with the required encryption types:
kadmin.local -q "addprinc -randkey HTTP/<loadbalancer_hostname>@<realm>"
(2) Append the keytab for the AD account into spnego.service.keytab on all hosts running Oozie servers referenced by the load balancer:
kadmin.local -q "ktadd -k spnego.service.keytab HTTP/<loadbalancer_hostname>@<realm>"
(3) Verify that the keytab entries have been appended:
klist -ekt spnego.service.keytab
(4) Copy the newly merged spnego.service.keytab to all hosts running Oozie servers referenced by the load balancer.
After the keytabs are updated, restart the Oozie service from the Ambari UI.
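Before restarting, a quick local sanity check of the ZooKeeper connection string from property (1) above. This is a hypothetical sketch; the hostnames are placeholders for your actual ZooKeeper servers.

```shell
# oozie.zookeeper.connection.string must be a comma-separated list of
# host:port pairs, one entry per ZooKeeper server. Hostnames are placeholders.
zk="node1:2181,node2:2181,node3:2181"
# Count how many entries are well-formed host:port pairs.
echo "$zk" | tr ',' '\n' | grep -Ec '^[A-Za-z0-9.-]+:[0-9]+$'
```

This prints the number of well-formed entries; for the example string it prints 3, matching the number of servers.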
To verify the load balancer is working:
(1) Stop Oozie server1 and run the command below.
oozie admin -oozie http://load-balancer-url:11000/oozie -status
You should get a success response.
(2) Start Oozie server1, stop Oozie server2, and run the command below.
oozie admin -oozie http://load-balancer-url:11000/oozie -status
You should get a success response.
06-21-2016
05:32 PM
@Matt Foley I have followed the same steps, and when I do distcp between two secured HA clusters, YARN throws a failed-to-renew-token error (kind: HDFS_DELEGATION_TOKEN, service: ha-hdfs). I am able to do hadoop fs -ls using the HA nameservice on both clusters. Both clusters have an MIT KDC and the cross-realm setup is done. Both clusters have the same NameNode principal. Is there anything else I need to do? Just for info: when I change the framework from yarn to MR in mapred-site.xml, I am able to do distcp; when I use the yarn framework I get the above error.
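One thing worth checking, offered as an assumption based on the Hadoop DistCp guide's notes on HA and secure clusters rather than something verified here: YARN tries to renew delegation tokens for every nameservice a job references, and it cannot renew tokens issued by the remote cluster's KDC, so the guide suggests excluding the remote nameservice from renewal. In the sketch below, ha-local, ha-remote, and the paths are placeholders.

```shell
# Hypothetical sketch: exclude the remote HA nameservice from token renewal
# so YARN does not fail renewing the remote HDFS_DELEGATION_TOKEN.
# "ha-local", "ha-remote", and the paths are placeholders.
cmd="hadoop distcp -Dmapreduce.job.hdfs-servers.token-renewal.exclude=ha-remote hdfs://ha-local/src hdfs://ha-remote/dst"
echo "$cmd"
```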
05-17-2016
08:59 PM
I am using the below curl command: curl -iv -u: --negotiate http://ec2-52-33-77-118.us-west-2.compute.amazonaws.com:50070/webhdfs/v1/?op=LISTSTATUS. This command works when I log in to any node in the cluster and run curl there, whereas when I do the same from my Mac I get a 401 error: gss_init_sec_context() failed: unknown mech-code 0 for mech unknown. Do we need to do anything else apart from copying krb5.conf to /etc and running kinit to get the ticket?
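One hedged diagnostic worth running on the Mac: the "unknown mech-code 0 for mech unknown" GSS error can appear when the local curl build lacks SPNEGO/Kerberos support, even though kinit succeeds. Checking curl's feature list rules this in or out:

```shell
# Print curl's version and feature list; look for GSS-API / Kerberos / SPNEGO
# in the Features line. If they are absent, --negotiate cannot authenticate.
curl -V
```

If the Features line lacks GSS support, installing a GSS-enabled curl (or using one from the cluster nodes) is the fix, not further krb5.conf changes.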
05-17-2016
08:55 PM
@Alex Miller I am able to ssh to the NameNode. I use the below command: ssh -i sumit.pem ec2-user@public-hostname-dns
05-17-2016
02:24 PM
Yes @Saurabh Jain, klist shows a valid ticket.
05-17-2016
02:24 PM
1. Copied krb5.conf from the cluster to /etc on the Mac.
2. Created a keytab file and FTPed it to the Mac.
3. Gave the keytab file 664 permissions.
4. A ticket is granted when I do kinit from the Mac; klist shows a valid ticket.
5. I run into a 401 error when I curl after kinit; drilling into the error I see: gss_init_sec_context() failed: unknown mech-code 0 for mech unknown
6. I don't see any error logged on the NameNode; it looks like the curl from the Mac is not reaching the cluster.
7. tcpdump on the NameNode port (50070) doesn't show any call being made from the Mac.
Note: I am on HDP 2.3 with NameNode HA. I am able to curl from one of the nodes within the cluster using the same keytab file and curl command.
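For reference, a minimal sketch of the kind of krb5.conf copied in step 1. The realm and KDC hostnames below are placeholders, not the actual cluster values:

```
[libdefaults]
  default_realm = EXAMPLE.COM

[realms]
  EXAMPLE.COM = {
    kdc = kdc.example.com
    admin_server = kdc.example.com
  }

[domain_realm]
  .example.com = EXAMPLE.COM
```

For kinit to succeed from the Mac, the KDC host must be resolvable and reachable from the Mac, and the realm mappings must match the cluster's configuration.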
05-17-2016
02:23 PM
@Alex Miller
1. Copied krb5.conf from the cluster to /etc on the Mac.
2. Created a keytab file and FTPed it to the Mac.
3. Gave the keytab file 664 permissions.
4. A ticket is granted when I do kinit from the Mac; klist shows a valid ticket.
5. I run into a 401 error when I curl after kinit; drilling into the error I see: gss_init_sec_context() failed: unknown mech-code 0 for mech unknown
6. I don't see any error logged on the NameNode; it looks like the curl from the Mac is not reaching the cluster.
7. tcpdump on the NameNode port (50070) doesn't show any call being made from the Mac.
Note: I am on HDP 2.3 with NameNode HA. I am able to curl from one of the nodes within the cluster using the same keytab file and curl command.
05-16-2016
09:37 PM
1 Kudo
(attachment: kerberose.png) I am trying to access the cluster that I created in AWS from my Mac, and I get the below error when I do curl. This is a secured, Kerberized cluster (MIT KDC).