Member since: 04-05-2016
Posts: 14
Kudos Received: 2
Solutions: 3
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 3264 | 10-04-2016 03:28 AM |
 | 1123 | 10-01-2016 02:55 AM |
 | 2255 | 06-21-2016 05:32 PM |
12-22-2016
03:07 AM
Thanks Eyad, that's what I was looking for. As per the document, it does say NiFi needs to run in HTTPS mode for LDAP authentication.
12-22-2016
02:55 AM
1 Kudo
I have performed all the steps mentioned in the document, but it still doesn't prompt me for a username and password. It looks like NiFi ignores LDAP authentication when running in HTTP mode. https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#lightweight-directory-access-protocol-ldap The document also says "NiFi does not perform user authentication over HTTP. Using HTTP all users will be granted all roles." Does this mean I need to generate a certificate and run NiFi in HTTPS mode for LDAP authentication? If not, how do I do LDAP authentication?
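For reference, a minimal sketch of the nifi.properties side of this, assuming NiFi 1.x; all hostnames, ports, keystore paths and passwords below are placeholder values for illustration, and the ldap-provider itself would still need to be defined separately in login-identity-providers.xml:
# nifi.properties (illustrative placeholder values)
nifi.web.http.port=                                 # leave empty to disable plain HTTP
nifi.web.https.host=nifi-host.example.com
nifi.web.https.port=9443
nifi.security.keystore=/etc/nifi/conf/keystore.jks
nifi.security.keystoreType=JKS
nifi.security.keystorePasswd=changeit
nifi.security.truststore=/etc/nifi/conf/truststore.jks
nifi.security.truststoreType=JKS
nifi.security.truststorePasswd=changeit
nifi.security.user.login.identity.provider=ldap-provider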
Labels:
- Apache NiFi
10-04-2016
03:39 AM
@Keerthi Mantri After making the changes, you would need to restart the MR client from Ambari and recycle the Oozie server.
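If you want to do that restart outside the Ambari UI, a sketch of the equivalent REST call; the Ambari host, credentials, cluster name and Oozie host below are placeholders, not values from this thread:
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  -d '{"RequestInfo":{"command":"RESTART","context":"Restart Oozie Server"},"Requests/resource_filters":[{"service_name":"OOZIE","component_name":"OOZIE_SERVER","hosts":"oozie-host.example.com"}]}' \
  http://ambari-host.example.com:8080/api/v1/clusters/MYCLUSTER/requests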
10-04-2016
03:36 AM
@Keerthi Mantri I had a similar issue and it turned out to be a wrong classpath.
The path in the classpath was wrong, at least on a secured cluster: it was "/etc/hadoop/conf/secure" when it should be "/etc/hadoop/conf". The relevant property in mapred-site.xml:
<property>
  <name>mapreduce.application.classpath</name>
  <value>$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/</value>
</property>
/etc/hadoop/conf/ should be in the classpath, not /etc/hadoop/conf/secure.
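One way to check what is actually configured (the grep path is standard; the Ambari configs.sh helper, credentials and cluster name below are placeholders and may differ on your install):
grep -A 1 'mapreduce.application.classpath' /etc/hadoop/conf/mapred-site.xml
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin get ambari-host.example.com MYCLUSTER mapred-site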
10-04-2016
03:28 AM
@Keerthi Mantri What is the Ambari version?
10-01-2016
02:55 AM
Refer to the docs below for the Oozie HA setup. Can you verify that you have the correct values for the properties below?
https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Ambari_Users_Guide/content/_adding_an_oozie_server_component.html
(1) oozie.zookeeper.connection.string = list of ZooKeeper hosts with ports, for example: node1:2181,node2:2181,node3:2181
(2) oozie.services.ext = org.apache.oozie.service.ZKLocksService,org.apache.oozie.service.ZKXLogStreamingService,org.apache.oozie.service.ZKJobsConcurrencyService
(3) oozie.base.url = http://<loadbalancer.hostname>:11000/oozie
(4) oozie.authentication.kerberos.principal = *
(5) In oozie-env, uncomment the OOZIE_BASE_URL property and change its value to point to the load balancer. For example: export OOZIE_BASE_URL="http://<loadbalancer.hostname>:11000/oozie"
For a secured cluster:
(1) Manually create a new AD account for HTTP/<loadbalancer_hostname>@<realm> with the required encryption types.
kadmin.local -q "addprinc -randkey HTTP/<loadbalancer_hostname>@<realm>" (2) Append keytab for AD Account into spnego.service.keytab on all hosts running oozie servers referenced by the loadbalancer.
kadmin.local -q "ktadd -k ambari.server.keytab ambari-server@EXAMPLE.COM" (3) Verify if the keytabs has been appended.
klist -ekt spnego.service.keytab
(4) Copy the newly merged spnego.service.keytab to all hosts running Oozie servers referenced by the load balancer.
After the keytabs are updated, restart the Oozie service from the Ambari UI.
To verify whether the load balancer is working (a small script sketch follows these steps):
(1) Stop Oozie server 1 and run the command below; you should get a success response.
oozie admin -oozie http://<loadbalancer.hostname>:11000/oozie -status
(2) Start Oozie server 1, stop Oozie server 2, and run the same command again; you should again get a success response.
oozie admin -oozie http://<loadbalancer.hostname>:11000/oozie -status
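A minimal shell sketch of that check, assuming hypothetical Oozie server hostnames oozie1/oozie2 and the load balancer URL configured above (replace all three with your own values):
# hit each Oozie server and the load balancer in turn and report status
LB_URL=http://loadbalancer.example.com:11000/oozie
for url in http://oozie1.example.com:11000/oozie http://oozie2.example.com:11000/oozie "$LB_URL"; do
  echo "Checking $url"
  oozie admin -oozie "$url" -status || echo "WARN: $url did not return a healthy status"
done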
06-21-2016
05:32 PM
@Matt Foley I have followed the same steps, but when I do distcp between two secured HA clusters, YARN throws a "failed to renew token" error for kind HDFS_DELEGATION_TOKEN, service: ha-hdfs. I am able to do hadoop fs -ls using the HA nameservice on both clusters. Both clusters have MIT KDC and the cross-realm setup is done, and both clusters have the same NameNode principal. Is there anything else that I need to do? Just for info: when I change the framework from yarn to MR in mapred-client.xml, I am able to do distcp; when I use the yarn framework, I get the above error.
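For context, this is the shape of the command involved; the nameservice names clusterA/clusterB and the paths are hypothetical, and the token-renewal-exclude property is only a commonly suggested knob for cross-cluster token renewal, not something confirmed in this thread:
hadoop distcp -Dmapreduce.job.hdfs-servers.token-renewal.exclude=clusterB hdfs://clusterA/data/src hdfs://clusterB/data/dst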