Member since: 10-09-2015
Posts: 86
Kudos Received: 179
Solutions: 8
My Accepted Solutions
Title | Views | Posted
---|---|---
| 25207 | 12-29-2016 05:19 PM
| 1861 | 12-17-2016 06:05 PM
| 14782 | 08-24-2016 03:08 PM
| 2165 | 07-14-2016 02:35 AM
| 3996 | 07-08-2016 04:29 PM
07-02-2016
01:19 PM
9 Kudos
Introduction
- Here is a small demo of how NiFi can help you monitor and alert on YARN application failures.
- Here you can view the screen recording that demonstrates how it works!

Prerequisites
- Make sure you have your HDP cluster/Sandbox up and running.
- NiFi 0.6.1 / HDF 1.2 is available, up, and running.

Steps:
1. Assuming you have the NiFi UI available, let's drop a GetHTTP processor to pull data from the YARN REST API. Configure the processor with the URL given below, which pulls all applications in the KILLED and FAILED states (node1 is my Resource Manager):
http://node1:8088/ws/v1/cluster/apps?states=KILLED,FAILED
Let's schedule the processor to run only every 10 seconds so that we don't query too often.
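The REST call that GetHTTP performs can be sketched as follows. This is a minimal stand-alone sketch; node1 and port 8088 are the hosts/ports from this demo, so substitute your own:

```python
from urllib.parse import urlencode

# Hypothetical Resource Manager host from this demo; substitute your own.
RM_HOST = "node1"

# YARN ResourceManager REST endpoint listing applications,
# filtered to the KILLED and FAILED states.
query = urlencode({"states": "KILLED,FAILED"}, safe=",")
url = f"http://{RM_HOST}:8088/ws/v1/cluster/apps?{query}"
print(url)  # http://node1:8088/ws/v1/cluster/apps?states=KILLED,FAILED
```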
2. As the REST call outputs the application details in JSON format, let's use a SplitJson processor to separate individual application details. Provide the "JsonPath Expression" value as "$.apps.app" in the configuration.
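To see what "$.apps.app" selects, here is a small stand-alone sketch; the payload is a hypothetical, heavily trimmed YARN response (real responses carry many more fields per app):

```python
import json

# Hypothetical, trimmed-down /ws/v1/cluster/apps response.
response = json.loads("""
{"apps": {"app": [
    {"id": "application_0001", "name": "job-a", "finalStatus": "FAILED"},
    {"id": "application_0002", "name": "job-b", "finalStatus": "KILLED"}
]}}
""")

# "$.apps.app" points at the array nested under apps -> app;
# SplitJson emits one flow file per element of that array.
splits = response["apps"]["app"]
print(len(splits))  # 2
for app in splits:
    print(app["id"])
```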
3. Connect GetHTTP to SplitJson for the success relationship and auto-terminate the rest.
4. Let's add an EvaluateJsonPath processor to extract the required fields and add them to flow-file attributes. Configure it as below:
5. Connect SplitJson to EvaluateJsonPath for the success relationship.
6. Create and start two controller services, DistributedMapCacheClientService and DistributedMapCacheServer, so that we keep track of all the applications and don't send out duplicate alerts for the same application.
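The role the cache plays can be sketched in a few lines. This in-memory set is just a stand-in for the DistributedMapCacheServer/Client pair, which additionally persists across flow executions and is shared between NiFi nodes:

```python
# In-memory stand-in for the distributed map cache: alert only the
# first time a given application id is seen.
seen = set()

def should_alert(app_id: str) -> bool:
    """Return True only for application ids not already cached."""
    if app_id in seen:
        return False
    seen.add(app_id)
    return True

print(should_alert("application_0001"))  # True  (first sighting -> alert)
print(should_alert("application_0001"))  # False (duplicate suppressed)
```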
7. Add a PutDistributedMapCache processor to update the cache with the latest failed/killed apps. Configure it as below, adding the distributed cache service.
8. Let's auto-terminate the failure relationship and connect the success relationship to a PutEmail processor, which will send out an email for any new failed/killed application.
9. Make sure you have formatted the email body and subject to include all the information about the failed job:
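A sketch of the kind of subject/body you might build; in PutEmail itself you would express this with NiFi expression language over the extracted attributes (the attribute names here are hypothetical):

```python
# Hypothetical attributes extracted earlier in the flow.
attrs = {"yarn.app.id": "application_0001",
         "yarn.app.name": "job-a",
         "yarn.app.status": "FAILED"}

# In PutEmail this would be e.g. "YARN alert: ${yarn.app.name} ${yarn.app.status}".
subject = f"YARN alert: {attrs['yarn.app.name']} {attrs['yarn.app.status']}"
body = (f"Application {attrs['yarn.app.id']} ({attrs['yarn.app.name']}) "
        f"finished with status {attrs['yarn.app.status']}.")
print(subject)  # YARN alert: job-a FAILED
```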
10. Auto-terminate the success and failure relationships for the PutEmail processor. Once you start the flow, you will get alerts for each killed/failed YARN application. My alert would look like below:

Note: You can also configure your GetHTTP processor to query YARN to find long-running applications.

Thanks, Jobin George
07-01-2016
07:09 AM
Hi @Saisubramaniam Gopalakrishnan, I haven't tried it, but you can add an input port to the root NiFi canvas and try communicating from Spark (NiFi site-to-site client), and then push the data to other processors as required. Thanks, Jobin
06-28-2016
02:45 PM
1 Kudo
Hi @AnjiReddy Anumolu, the easy way to get hold of your file is from provenance:
- On the NiFi UI, click the provenance button in the top right corner.
- Find the event for your file and click the "view details" button.
- You can view or download the file on the "contents" tab.
If you need to see the file contents on your server, search the content_repository for a file named as the "identifier" from the output claim [i.e. 1467063966583-11 as in the screenshot above] at the "offset" [i.e. 463775 as in the screenshot above]. Hope this helps! Thanks!
06-28-2016
05:51 AM
1 Kudo
Hi Kishore, if it's a cluster, you will be creating your flows in the NCM [NiFi Cluster Manager] UI, and the flow runs on all the nodes in the cluster. Since you have only 2 nodes in the cluster (maybe only one worker node and the NCM), you may not have much to load balance there. Still, you can simulate a load balancer with the NiFi site-to-site protocol. You can get more info on site-to-site protocol load balancing here: https://community.hortonworks.com/questions/509/site-to-site-protocol-load-balancing.html Thanks!
06-27-2016
06:27 PM
1 Kudo
Hi @kishore sanchina, you can save the config files in any directory on the NiFi node and provide that path in the processor configuration.
06-25-2016
06:30 PM
4 Kudos
Hi Kishore, you could drop the below processors, as per your requirement, onto your NiFi UI and configure them properly to pull the data from HDFS. To configure them, add the location of the hdfs and core property files saved on the local file system of the NiFi node(s); you can use Ambari to download these from the HDP cluster. Please find a sample config screenshot below: Thanks!
06-25-2016
04:31 PM
@Kuldeep Kulkarni, thanks. This is good info; just to confirm:
- You mentioned clusterconfigmapping twice; I assume "serviceconfigmapping" is the one missing from that list, as it's mentioned in the query.
- With these entries in the DB, I am deleting only the one config version made by "kuldeep", not every entry he made, correct? Thanks,
06-24-2016
07:55 PM
3 Kudos
Hi, a few questions on Ambari:
1) Can we delete a service configuration version in Ambari? -- Say I made a horrible mistake in my configuration and saved it, then reverted back, but I don't want anyone on my admin team to go back and apply that version some time later. To avoid that, I need to remove it from the config history.
2) How many versions of configs will Ambari save in history? -- Can that be configured?
3) Does Ambari have any views/method to alert on job/application failures? Thanks,
Labels: Apache Ambari
06-23-2016
09:45 PM
1 Kudo
Hi, can you check if you have these properties set properly on both nodes?
nifi.remote.input.socket.host={host}
nifi.remote.input.socket.port={port}
nifi.remote.input.secure={false/true}
Thanks
05-18-2016
10:51 PM
10 Kudos
Prerequisites
1. You have HDF-1.2 installed on your server.
2. Make sure KDC is installed on your server and is started. I will try to describe the steps briefly in this tutorial; below is the link to the detailed steps from the HDP documentation: HDP-Documenation-for-Kerberos

Installing and Configuring KDC:
1. Let's install a new version of the KDC server:
# yum -y install krb5-server krb5-libs krb5-auth-dialog krb5-workstation
2. Using a text editor, open the KDC server configuration file, located by default here:
# vi /etc/krb5.conf
[realms]
EXAMPLE.COM = {
  kdc = node1
  admin_server = node1
}
Add your host name; mine is node1.
3. Use the kdb5_util utility to create the Kerberos database; when asked, let's set the password to 'hadoop':
# kdb5_util create -s
4. Let's start the KDC server and the KDC admin server, and set them to auto-start on boot:
# /etc/rc.d/init.d/krb5kdc start
# /etc/rc.d/init.d/kadmin start
# chkconfig krb5kdc on
# chkconfig kadmin on
5. Let's add a service principal for the server and export the keytab from the KDC:
# kadmin.local
# addprinc -randkey nifi/HDF
# ktadd -k /opt/nifi-HDF.keytab nifi/HDF
# q
6. Make sure "/opt/nifi-HDF.keytab" is generated and is available.
7. Let's create some login identities and set the password as 'hadoop', which we will be using to log in to the UI:
# kadmin.local -q "addprinc jobin/node1"
# kinit jobin/node1@EXAMPLE.COM
# kadmin.local -q "addprinc george/node1"
# kinit george/node1@EXAMPLE.COM
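A Kerberos principal has the form primary/instance@REALM (the instance part is optional); a minimal parsing sketch for the identities created above:

```python
def parse_principal(principal: str):
    """Split primary/instance@REALM into its three parts."""
    name, _, realm = principal.partition("@")
    primary, _, instance = name.partition("/")
    return primary, instance, realm

print(parse_principal("jobin/node1@EXAMPLE.COM"))
# ('jobin', 'node1', 'EXAMPLE.COM')
```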
Configuring NiFi:
1. NiFi will only respond to Kerberos SPNEGO negotiation over an HTTPS connection, as unsecured requests are never authenticated. For that you will need to enable 2-way SSL. I already created Certificate Authorities and client certificates at www.tinycert.org. If you are too lazy to create them, try with mine 🙂 [Attached as certificates.zip]
- Use cert-browser.pfx to load into your browser to be the NiFi administrator 'DEMO'.
- Upload the other two certificates to your server under '/root/scripts/' and execute the below commands; while executing the last command, enter 'hadoop' as the password and 'yes' when asked if it can be trusted.
# cd /root/scripts/
# mv cert.pfx cert.p12
# openssl x509 -outform der -in cacert.pem -out cacert.der
# keytool -import -keystore cacert.jks -file cacert.der
2. My keystore is saved as '/root/scripts/cert.p12', the truststore is saved as '/root/scripts/cacert.jks', and the password for both is set as hadoop.
3. Below are the configuration updates you have to make in the nifi.properties file on node1:
# vi /opt/nifi-1.1.0.0-10/conf/nifi.properties
4. Once opened in the editor, update the below properties to the given values [updating the https port and certificate details]:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.https.host=node1
nifi.web.https.port=9090
nifi.security.keystore=/root/scripts/cert.p12
nifi.security.keystoreType=PKCS12
nifi.security.keystorePasswd=hadoop
nifi.security.keyPasswd=hadoop
nifi.security.truststore=/root/scripts/cacert.jks
nifi.security.truststoreType=JKS
nifi.security.truststorePasswd=hadoop
Now let's put the Kerberos details in nifi.properties, in the "kerberos" section:
# kerberos #
nifi.kerberos.krb5.file=/etc/krb5.conf
nifi.kerberos.service.principal=nifi/HDF@EXAMPLE.COM
nifi.kerberos.keytab.location=/opt/nifi-HDF.keytab
nifi.kerberos.authentication.expiration=12 hours
Also make sure you update these two properties as below:
nifi.security.user.login.identity.provider=kerberos-provider
nifi.login.identity.provider.configuration.file=./conf/login-identity-providers.xml
5. Now configure the authorized users in the 'authorized-users.xml' file; user configuration is based on the certificate. Configure it exactly as below if you use the certificate I attached in step 1.
# vi /opt/nifi-1.1.0.0-10/conf/authorized-users.xml
<user dn="CN=Demo, OU=Demo, O=Hortonworks, L=San Jose, ST=California, C=US">
  <role name="ROLE_ADMIN"/>
</user>
6. The above configuration is to log in as the NiFi administrator; every other user can be pulled from Kerberos after this administrator assigns roles on request.
7. Configure ./conf/login-identity-providers.xml as below with reference to the Kerberos configuration [make sure you have removed the xml comment tags].
<provider>
  <identifier>kerberos-provider</identifier>
  <class>org.apache.nifi.kerberos.KerberosProvider</class>
  <property name="Default Realm">EXAMPLE.COM</property>
  <property name="Kerberos Config File">/etc/krb5.conf</property>
  <property name="Authentication Expiration">12 hours</property>
</provider>
8. Once configured, restart the NiFi server:
# /opt/HDF-1.2.0.0/nifi/bin/nifi.sh restart
9. Now open, say, the 'Chrome' browser, load the client certificate [cert-browser.pfx] associated with the ADMIN user, and log in to the secure https URL of NiFi running on node1: https://node1:9090/nifi
10. When asked, confirm the security exception and proceed. Now you are securely logged in as the Demo user with admin privileges. You can now grant access to any user requesting access.
11. Open another browser, say 'Safari', to establish another session: https://node1:9090/nifi It will pop up the below screen for login; enter any of the credentials for the identities we just created in step 7 of "Configuring KDC":
Username: jobin/node1 Password: hadoop
Username: george/node1 Password: hadoop
12. Enter the password as hadoop and hit login, then enter a justification; the below screen will show that the request is pending with the Admin, who already has access using certificates.
13. Now go back to the Chrome browser, where the 'Demo' user is the NiFi administrator, and assign a role to jobin.
14. Now you can see that the user is active:
15. Go back to the old session in Safari, refresh the browser, and you will be logged in as 'jobin' with the privileges assigned by the NiFi administrator. You can test it for the other user 'george' as well.
Now you have authenticated two users, jobin and george, to access the NiFi User Interface. Hope this will be useful!
Thanks, Jobin George