Member since: 10-09-2015
Posts: 86
Kudos Received: 179
Solutions: 8

My Accepted Solutions

Title | Views | Posted
---|---|---
 | 25088 | 12-29-2016 05:19 PM
 | 1834 | 12-17-2016 06:05 PM
 | 14688 | 08-24-2016 03:08 PM
 | 2142 | 07-14-2016 02:35 AM
 | 3980 | 07-08-2016 04:29 PM
02-15-2017
06:50 AM
7 Kudos
Introduction

Recently I was asked how to monitor and alert on the flowfile count of a connection queue when it exceeds a predefined threshold. This article captures the steps to implement exactly that.

Prerequisites

1) To test this, make sure an HDF-2.x version of NiFi is up and running.

2) You already have a connection with data queued in it (say, more than 20 flowfiles). If not, you can create one like below:

3) Make a note of the name and uuid of the connection to be monitored:

Creating a Flow to Monitor Connection Queue Count

1) Drop a GenerateFlowFile processor onto the canvas and set its "Run Schedule" to 60 sec so we don't execute the flow too often.

2) Drop an UpdateAttribute processor, connect GenerateFlowFile's success relationship to it, and add the properties below (the connection uuid noted above, a threshold of say 20, and your NiFi host and port):

CONNECTION_UUID : dcbee9dd-0159-1000-45a7-8306c28f2786
COUNT : 20
NIFI_HOST : localhost
NIFI_PORT : 8080

3) Drop an InvokeHTTP processor onto the canvas, connect UpdateAttribute's success relationship to it, auto-terminate all other relationships, and update these two properties:

HTTP Method : GET
Remote URL : http://${NIFI_HOST}:${NIFI_PORT}/nifi-api/connections/${CONNECTION_UUID}

4) Drop an EvaluateJsonPath processor to extract values from the JSON response with the properties below, connect InvokeHTTP's Response relationship to it, and auto-terminate its failure and unmatched relationships (you can sanity-check this REST call by hand; see the P.S. at the end of this article):

QUEUE_NAME : $.status.name
QUEUE_SIZE : $.status.aggregateSnapshot.flowFilesQueued

5) Drop a RouteOnAttribute processor onto the canvas with the config below, connect EvaluateJsonPath's matched relationship to it, and auto-terminate its unmatched relationship:

Queue_Size_Exceeded : ${QUEUE_SIZE:gt(${COUNT})}

6) Lastly, add a PutEmail processor, connect RouteOnAttribute's matched relationship to it, and auto-terminate all of its relationships. Below are the properties I set; update them with your SMTP details:

SMTP Hostname : west.xxxx.yourServer.net
SMTP Port : 587
SMTP Username : jgeorge@hortonworks.com
SMTP Password : Its_myPassw0rd_updateY0urs
SMTP TLS : true
From : jgeorge@hortonworks.com
To : jgeorge@hortonworks.com
Subject : Queue Size Exceeded Threshold

The message content should look something like below to grab all the values:

Message : Queue Size Exceeded Threshold for ${CONNECTION_UUID}
Connection Name : ${QUEUE_NAME}
Threshold Set : ${COUNT}
Current FlowFile Count : ${QUEUE_SIZE}

7) Now the flow is complete and should look similar to below:

Starting the Flow and Testing It

1) Let's make sure at least 21 flowfiles are pending in the connection named 'DataToFileSystem' which was created in the Prerequisites.

2) Now let's start the flow; you should receive a mail alert from NiFi stating the count exceeded the threshold set, which is 20 in our case. My sample alert looks like below:

3) This concludes the tutorial for monitoring your connection queue count with NiFi.

4) Too lazy to create the flow? Download my template here.

References

NiFi REST API
NiFi Expression Language

Thanks,
Jobin George
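P.S. If you want to sanity-check the REST endpoint and JSON paths from steps 3 and 4 before building the flow, the same call InvokeHTTP makes can be issued by hand. A minimal sketch, assuming the host, port, and connection uuid from step 2 (piping through python -m json.tool is just one way to pretty-print the response):

# curl -s "http://localhost:8080/nifi-api/connections/dcbee9dd-0159-1000-45a7-8306c28f2786" | python -m json.tool

In the output, $.status.name is the connection name and $.status.aggregateSnapshot.flowFilesQueued is the live queue count; if the latter is above your COUNT threshold, the RouteOnAttribute expression in step 5 will route to Queue_Size_Exceeded and trigger the email.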
02-13-2017
07:25 AM
4 Kudos
Hi @Avijeet Dash, see if this helps: One easy way of loading key-value pairs in NiFi is the NiFi Custom Properties Registry. It is a comma-separated list of file location paths for one or more custom property files. For example, I can load a file named nifi_registry with key-value pairs separated by '=' (say it contains OS=MAC), and can then reference ${OS} to substitute its value MAC using the NiFi Expression Language (after restarting NiFi). It is registered in the nifi.properties file using the nifi.variable.registry.properties property. You can read about it here: https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#custom_properties Thanks, Jobin
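P.S. A minimal sketch of the OS=MAC example above (the file path is only an illustration; any location readable by the NiFi user works):

# echo "OS=MAC" > /opt/nifi/conf/nifi_registry

Then register the file in nifi.properties and restart NiFi:

nifi.variable.registry.properties=/opt/nifi/conf/nifi_registry

After the restart, any processor property that supports expression language resolves ${OS} to MAC.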
02-04-2017
04:52 PM
Hi @Saurabh Verma, - It looks like the 'NiFi Certificate Authority' is still using the Ambari-provided Java rather than what you provide in custom configs as well as in "Template for nifi-env.sh" [with the update above, NiFi should come up if you start it individually rather than starting the whole service, but NiFi-CA will not]. - Do you think updating the Java version for Ambari itself is an option? (hoping it doesn't break anything else). If so, please follow the link below, choose option 3 [Custom JDK] in step 3, and enter your new Java home location: https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.1/bk_ambari_reference_guide/content/ch_changing_the_jdk_version_on_an_existing_cluster.html Let me know if this helps.
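For reference, the same JDK switch can also be done non-interactively from the Ambari server host; a sketch, assuming a JDK 8 under /usr/jdk64 (adjust the path to your own install, and restart Ambari and the affected services afterwards):

# ambari-server setup -j /usr/jdk64/jdk1.8.0_112
# ambari-server restart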
02-03-2017
11:18 PM
2 Kudos
Hi @Saurabh Verma, if I understand you correctly, the below should help. If you have a Java version conflict and already have the latest version installed, please try updating the java property with the full path of your Java 8 binary under Services -> NiFi -> Advanced nifi-bootstrap-env in Ambari. Update it like below, with your full Java 8 path. Once updated, restart the service and see if you still have issues. If not using Ambari, update the below section in the ./conf/bootstrap.conf file:

# Java command to use when running NiFi
java=java
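For example, the edited line in bootstrap.conf would look something like the below (the JDK path is an illustration; point it at your own Java 8 binary, e.g. the output of `which java`):

# Java command to use when running NiFi
java=/usr/jdk64/jdk1.8.0_112/bin/java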
01-31-2017
06:37 PM
5 Kudos
Introduction

When the NiFi bootstrap starts or stops NiFi, or detects that it has died unexpectedly, it is able to notify configured recipients. Currently, the only mechanism supplied is to send an e-mail notification.

Prerequisite

1) Assuming you already have HDF-2.x installed and Ambari and NiFi are up and running. If not, I would recommend the "Ease of Deployment" section of this article to install it [you can also follow this article for automated installation of an HDF cluster, or refer to hortonworks.com for detailed steps].

Configuring NiFi Property Files in Ambari

1) To set up email notifications we have to update only two configuration files: bootstrap.conf and bootstrap-notification-services.xml.

2) We have to update the appropriate properties in Ambari to configure it. First, edit Template for bootstrap.conf and uncomment the lines below in the properties file:

nifi.start.notification.services=email-notification
nifi.stop.notification.services=email-notification
nifi.dead.notification.services=email-notification

3) Edit Template for bootstrap-notification-services.xml and make sure your SMTP settings are updated and uncommented. A sample configuration is given below:

<service>
<id>email-notification</id>
<class>org.apache.nifi.bootstrap.notification.email.EmailNotificationService</class>
<property name="SMTP Hostname">west.xxxx.server.net</property>
<property name="SMTP Port">587</property>
<property name="SMTP Username">jgeorge@hortonworks.com</property>
<property name="SMTP Password">Th1sisn0tmypassw0rd</property>
<property name="SMTP TLS">true</property>
<property name="From">jgeorge@hortonworks</property>
<property name="To">jgeorge@hortonworks.com</property>
</service>

4) Save the config changes in Ambari after uncommenting the <service> element, confirm when asked, and restart the service.

Testing NiFi Notification Services

1) Once restarted, you will see both stopped and started alerts in your inbox with details. Stopped Email Alert: Started Email Alert:

2) Try stopping and killing the NiFi process [make sure you don't kill the bootstrap process, which monitors NiFi and in turn restarts the NiFi process; see the sketch at the end of this article]. Died Email Alert:

References

NiFi notification_services

Thanks,
Jobin George
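P.S. To trigger the "died" notification, kill only the actual NiFi process, not the bootstrap that watches it. A sketch for telling the two apart (the PIDs will be whatever ps reports on your host):

# ps -ef | grep org.apache.nifi | grep -v grep

The bootstrap's command line contains org.apache.nifi.bootstrap.RunNiFi; the process to kill is the one containing org.apache.nifi.NiFi:

# kill -9 <pid-of-the-org.apache.nifi.NiFi-process>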
01-25-2017
09:48 PM
4 Kudos
Introduction

ControllerStatusReportingTask logs the 5-minute stats that are shown in the NiFi Summary Page for Processors and Connections. By default, when configured and started, this output goes directly to nifi-app.log. It can be redirected in the NiFi logging configuration; here I describe the steps to log it to a separate log file with Ambari.

Prerequisite

1) Assuming you already have HDF-2.x installed on your VM/server and Ambari and NiFi are up and running. If not, I would recommend the "Ease of Deployment" section of this article to install it [you can also follow this article for automated installation of an HDF cluster, or refer to hortonworks.com for detailed steps].

Configuring "Advanced nifi-node-logback-env" in Ambari

1. Navigate to your browser window, type in the URL for Ambari as below, and log in to the Ambari UI [the UI is accessible at port 8080]: http://<YOUR_IP>:8080/

2. Once logged in, click on the NiFi service option on the left side, click on "Configs", expand the "Advanced nifi-node-logback-env" section, edit the "logback.xml" template, and add the lines below just before the last line </configuration> (the class attributes follow the RollingFileAppender/TimeBasedRollingPolicy pattern of NiFi's stock logback.xml):

<appender name="5MINUTES_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${org.apache.nifi.bootstrap.config.log.dir}/5minutesStatistics.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/5minutesStatistics_%d.log</fileNamePattern>
<maxHistory>5</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<logger name="org.apache.nifi.controller.ControllerStatusReportingTask" level="INFO" additivity="false">
<appender-ref ref="5MINUTES_FILE" />
</logger>

3. The lines above create a new file named 5minutesStatistics.log to hold all the 5-minute statistics the reporting task produces. Click save and enter details of what configuration was changed.

4. Once saved, Ambari will suggest a restart of the NiFi service; click restart (it might take up to 2 minutes for the restart to complete and the NiFi UI to come online).

Configuring ControllerStatusReportingTask in NiFi

1. Once NiFi is restarted, navigate to the NiFi user interface on any node and click on the 'Controller Settings' tab in the top right corner. A window like below will pop up. Select the "Reporting Tasks" tab, click the '+' in the right corner, and when a selection is requested click on 'ControllerStatusReportingTask'.

2. Once selected, click add. The reporting task will be in a stopped state; now you may click start:

3. Once started, the 5-minute statistics will begin logging to 5minutesStatistics.log.

Verifying the Log Created on the Server

1. ControllerStatusReportingTask will log the 5-minute stats that are shown in the NiFi Summary Page for Processors and Connections to 5minutesStatistics.log. Let's verify the same:

# tail -f /var/log/nifi/5minutesStatistics.log

2. ControllerStatusReportingTask has started logging the 5-minute processor status to the log we specified, and it will be rolled as per the configuration we provided (see the listing below).
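To also confirm the rollover policy is working, list the log directory after the task has run across a date boundary; the rolled file names follow the fileNamePattern configured above:

# ls -l /var/log/nifi/5minutesStatistics*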
3. Once verified, stop the controller service.

Hope this helps.

Thanks,
Jobin George
01-25-2017
06:49 PM
@Yumin Dong Which version of HDF/NiFi are you using? If it is the latest one, I hope you also downloaded the latest version of the dependencies. Let me know. Jobin
01-25-2017
04:28 AM
6 Kudos
Introduction

By integrating with LDAP, username/password authentication can be enabled in NiFi. This tutorial provides step-by-step instructions to set up NiFi LDAP authentication via Ambari (using the Knox demo LDAP server).

Prerequisite

1) Assuming you already have HDF-2.x installed on your VM/server and Ambari and NiFi are up and running without security. If not, I would recommend the "Ease of Deployment" section of this article to install it [you can also follow this article for automated installation of an HDF cluster, or refer to hortonworks.com for detailed steps].

Setting up the Demo LDAP Server

1) As HDF and HDP cannot co-exist on a single node, let's download the Knox zip file from Apache to easily set up an LDAP server for this tutorial. Execute the steps below after establishing ssh connectivity to the VM/server (the name of my host is node1):

# ssh node1
# mkdir /opt/knox/
# cd /opt/knox/
# wget http://mirror.cogentco.com/pub/apache/knox/0.11.0/knox-0.11.0.zip
# unzip knox-0.11.0.zip
# /opt/knox/knox-0.11.0/bin/ldap.sh start

2) Make sure the LDAP server is started and running on port 33389 on your server:

# lsof -i:33389
OR
# netstat -anp | grep 33389

3) The credentials below are part of the Knox demo LDAP we just started. We can use any of these users to log in to NiFi after the integration:

tom/tom-password
admin/admin-password
sam/sam-password
guest/guest-password

Configuring NiFi for LDAP Authentication via Ambari

1. Log in to the Ambari UI at the server URL, click on the NiFi service, then click on the Config tab, expand the "Advanced nifi-ambari-ssl-config" section, and update the configuration as below:

Initial Admin Identity : uid=admin,ou=people,dc=hadoop,dc=apache,dc=org
Enable SSL? : {click check box}
Key password : hadoop
Keystore password : hadoop
Keystore type : JKS

2. Enter the below as the truststore and DN configurations:

Truststore password : hadoop
Truststore type : JKS
NiFi CA DN prefix : CN=
NiFi CA DN suffix : , OU=NIFI

3. Provide the configuration below for the node identities and keystore details:

NiFi CA Force Regenerate? : {click check box}
NiFi CA Token : hadoop
Node Identities :
<property name="Node Identity 1">CN=node1, OU=NIFI</property>
Tip: if I have a 3-node cluster with node1, node2, and node3 as part of it, the above configuration looks like below:

<property name="Node Identity 1">CN=node1, OU=NIFI</property>
<property name="Node Identity 2">CN=node2, OU=NIFI</property>
<property name="Node Identity 3">CN=node3, OU=NIFI</property> 4. In the Ambari UI, choose NiFi service and select config tab. We have to update two set of properties, in the “Advanced nifi-properties ” section update nifi.security.user.login.identity.provider as ldap-provider nifi.security.user.login.identity.provider=ldap-provider
5. Now, in the "Advanced nifi-login-identity-providers-env" section, update the "Template for login-identity-providers.xml" property with the configuration below, just above </loginIdentityProviders>:

<provider>
<identifier>ldap-provider</identifier>
<class>org.apache.nifi.ldap.LdapProvider</class>
<property name="Authentication Strategy">SIMPLE</property>
<property name="Manager DN">uid=admin,ou=people,dc=hadoop,dc=apache,dc=org</property>
<property name="Manager Password">admin-password</property>
<property name="TLS - Keystore">/usr/hdf/current/nifi/conf/keystore.jks</property>
<property name="TLS - Keystore Password">hadoop</property>
<property name="TLS - Keystore Type">JKS</property>
<property name="TLS - Truststore">/usr/hdf/current/nifi/conf/truststore.jks</property>
<property name="TLS - Truststore Password">hadoop</property>
<property name="TLS - Truststore Type">JKS</property>
<property name="TLS - Client Auth"></property>
<property name="TLS - Protocol">TLS</property>
<property name="TLS - Shutdown Gracefully"></property>
<property name="Referral Strategy">FOLLOW</property>
<property name="Connect Timeout">10 secs</property>
<property name="Read Timeout">10 secs</property>
<property name="Url">ldap://node1:33389</property>
<property name="User Search Base">ou=people,dc=hadoop,dc=apache,dc=org</property>
<property name="User Search Filter">uid={0}</property>
<property name="Authentication Expiration">12 hours</property>
</provider>

6. Once all properties are updated, click save and, when prompted, click restart.

7. Once restarted, try connecting to the NiFi URL; you should see the login screen. Enter the credentials of the configured Initial Admin Identity for the admin user and click LOG IN:

https://node1:9091/nifi/ --> in my case the host is node1
admin/admin-password

8. You should be able to log in as the admin user for NiFi and should see the below UI:

Adding a User and Providing Access to the UI

1) Let us go ahead and create a user jobin in LDAP so that we can give him access to the NiFi UI.

2) Edit the users.ldif file in the knox/conf directory with the entry below and restart the server:

# vi /opt/knox/knox-0.11.0/conf/users.ldif

Add the below entry to the end of the file:

# entry for sample user jobin
dn: uid=jobin,ou=people,dc=hadoop,dc=apache,dc=org
objectclass:top
objectclass:person
objectclass:organizationalPerson
objectclass:inetOrgPerson
cn: jobin
sn: jobin
uid: jobin
userPassword:jobin-password

3) Once added, let's stop and start the LDAP server:

# /opt/knox/knox-0.11.0/bin/ldap.sh stop
# /opt/knox/knox-0.11.0/bin/ldap.sh start

4) While logged in as admin on the NiFi UI, let us add a user jobin with the id below by clicking the '+ user' button in the top right 'users' menu:

uid=jobin,ou=people,dc=hadoop,dc=apache,dc=org

Enter the above value and click OK.

5) Now close the users window and open the 'policies' window in the management menu in the top right corner, below 'users'. Click the "+ user" button in the top right corner; in the pop-up, enter jobin, select the user, and click OK.

6) Once the policy is added, it will look like below:

7) Now you may log out as admin and provide the credentials below to log in as the 'jobin' user:

jobin/jobin-password

8) You should be able to log in and view the UI, but you won't have the privilege to add anything to the canvas (as jobin is given only read access). You may log back in as admin and grant any required access.
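If you want to verify the new account without a browser, NiFi's REST API can exchange a username/password for an access token. A minimal sketch against this setup (-k skips verification of the NiFi CA-signed certificate; host and port as configured above):

# curl -k -X POST -d 'username=jobin&password=jobin-password' https://node1:9091/nifi-api/access/token

A long JWT string in the response means the LDAP bind for jobin worked.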
This completes the tutorial. You have successfully:

- Installed and configured HDF 2.0 on your server.
- Downloaded and started the Knox demo LDAP server.
- Configured NiFi to use the Knox LDAP to authenticate users, where the NiFi initial admin is from LDAP.
- Restarted NiFi and verified access for the admin user in the NiFi UI.
- Created a new user jobin in LDAP, added him to the NiFi user list, and gave him read access.
- Verified access for the user jobin.

Thanks,
Jobin George
01-25-2017
12:12 AM
4 Kudos
Introduction
Using NiFi, data can be exposed in such a way that a receiver can pull from it by adding an Output Port to the root process group. For Storm, we will use this same mechanism: the Site-to-Site protocol to pull data from NiFi's Output Ports. In this tutorial we learn to capture the NiFi app log from the Sandbox, parse it using Java regex, and ingest it into Phoenix via Storm, or directly using NiFi's PutSQL processor.
Prerequisites

1) Assuming you already have the latest version of NiFi-1.x/HDF-2.x downloaded as a tarball onto your HW Sandbox version 2.5 (HDF and HDP cannot be managed by Ambari on the same nodes as of now); else execute the below after ssh connectivity to the sandbox is established:

# cd /opt/
# wget http://public-repo-1.hortonworks.com.s3.amazonaws.com/HDF/centos6/2.x/updates/2.0.1.0/HDF-2.0.1.0-centos6-tars-tarball.tar.gz
# tar -xvf HDF-2.0.1.0-centos6-tars-tarball.tar.gz

2) Storm and Zeppelin are installed on your VM and started.

3) HBase is installed with the Phoenix Query Server.

4) Make sure Maven is installed; if not already, execute the steps below:

# curl -o /etc/yum.repos.d/epel-apache-maven.repo https://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo
# yum -y install apache-maven
# mvn -version
Configuring and Creating Tables in HBase via Phoenix

1) Make sure the HBase components as well as the Phoenix Query Server are started.

2) Make sure HBase is up and running and out of maintenance mode, and that the properties below are set (if not, set them and restart the services):

- Enable Phoenix --> Enabled
- Enable Authorization --> Off

3) Connect to the Phoenix shell (or use Zeppelin):

# /usr/hdp/current/phoenix-client/bin/sqlline.py sandbox.hortonworks.com:2181:/hbase-unsecure

4) Execute the below in the Phoenix shell to create the tables in HBase:

CREATE TABLE NIFI_LOG( UUID VARCHAR NOT NULL, EVENT_DATE DATE, BULLETIN_LEVEL VARCHAR, EVENT_TYPE VARCHAR, CONTENT VARCHAR CONSTRAINT pk PRIMARY KEY(UUID));
CREATE TABLE NIFI_DIRECT( UUID VARCHAR NOT NULL, EVENT_DATE VARCHAR, BULLETIN_LEVEL VARCHAR, EVENT_TYPE VARCHAR, CONTENT VARCHAR CONSTRAINT pk PRIMARY KEY(UUID));
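Before wiring up the flows, it's worth a quick smoke test that the table accepts writes. A sketch from the same sqlline session (the row values are throwaway examples):

UPSERT INTO NIFI_DIRECT VALUES ('smoke-test-1','2017-01-24 10:00:00','INFO','LOG_MESSAGE','hello phoenix');
SELECT * FROM NIFI_DIRECT;
DELETE FROM NIFI_DIRECT WHERE UUID='smoke-test-1';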
Configuring and Starting NiFi

1) Open nifi.properties to update the configuration:

# vi /opt/HDF-2.0.1.0/nifi/conf/nifi.properties

2) Change the NiFi http port to run on 9090, as the default 8080 will conflict with the Ambari web UI:

# web properties #
nifi.web.http.port=9090

3) Configure the NiFi instance to run site-to-site by changing the configuration below: add a port, say 8055, and set "nifi.remote.input.secure" to "false":

# Site to Site properties #
nifi.remote.input.socket.port=8055
nifi.remote.input.secure=false

4) Now start [restart if already running, for the configuration change to take effect] NiFi on your Sandbox:

# /opt/HDF-2.0.1.0/nifi/bin/nifi.sh start

5) Make sure NiFi is up and running by connecting to its web UI from your browser: http://your-vm-ip:9090/nifi/
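A couple of optional sanity checks from the sandbox shell once NiFi is back up (ports are the ones configured above; expect an HTTP 200 from the first and a listening socket from the second):

# curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9090/nifi/
# netstat -anp | grep 8055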
Building a Flow in NiFi to Fetch and Parse nifi-app.log

1) Let us build a small flow on the NiFi canvas to read the app log generated by NiFi itself and feed it to Storm.

2) Drop a "TailFile" processor onto the canvas to read lines added to "/opt/HDF-2.0.1.0/nifi/logs/nifi-user.log". Auto-terminate the Failure relationship.

3) Drop a "SplitText" processor onto the canvas to split the log file into separate lines. Auto-terminate the Original and Failure relationships for now. Connect the TailFile processor to the SplitText processor for the Success relationship.

4) Drop an "ExtractText" processor onto the canvas to extract portions of the log content into the attributes below. Connect the SplitText processor to the ExtractText processor for the splits relationship.

- BULLETIN_LEVEL : ([A-Z]{4,5})
- CONTENT : (^.*)
- EVENT_DATE : ([^,]*)
- EVENT_TYPE : (?<=\[)(.*?)(?=\])

5) Drop an OutputPort onto the canvas and name it "OUT". Once added, connect "ExtractText" to the port for the matched relationship. The flow should look similar to below:

6) Start the flow on NiFi and notice that data gets stuck in the connection before the output port "OUT".
Building the Storm Application Jar with Maven

1) To begin with, let's clone the artifacts; feel free to inspect the dependencies and NiFiStormStreaming.java:

# cd /opt/
# git clone https://github.com/jobinthompu/NiFi-Storm-Integration.git

2) Feel free to inspect pom.xml to verify the dependencies:

# cd /opt/NiFi-Storm-Integration
# vi pom.xml

3) Let's build the Storm jar with the artifacts (this might take several minutes):

# mvn package

4) Once the build is SUCCESSFUL, make sure NiFiStormTopology-Uber.jar is generated in the target folder:

# ls -l /opt/NiFi-Storm-Integration/target/NiFiStormTopology-Uber.jar

5) Now let us go ahead and submit the topology to Storm (make sure the NiFi flow created above is running before submitting the topology):

# cd /opt/NiFi-Storm-Integration
# storm jar target/NiFiStormTopology-Uber.jar NiFi.NiFiStormStreaming &

6) Let's go ahead and verify the topology is submitted in the Storm View in Ambari as well as the Storm UI:

Ambari UI: http://your-vm-ip:8080
Storm UI: http://your-vm-ip:8744/index.html

7) Let's go back to the NiFi web UI; if everything worked fine, the data which was pending on the port OUT will be gone, as it was consumed by Storm.

8) Now let's connect to Phoenix and check out the data populated in the tables. You can use either the Phoenix sqlline command line or Zeppelin:

a) via Phoenix sqlline

# /usr/hdp/current/phoenix-client/bin/sqlline.py localhost:2181:/hbase-unsecure
SELECT EVENT_DATE,EVENT_TYPE,BULLETIN_LEVEL FROM NIFI_DIRECT WHERE BULLETIN_LEVEL='ERROR' ORDER BY EVENT_DATE;

b) via Zeppelin, for better visualization

Zeppelin UI: http://your-vm-ip:9995/

9) Now you can change the code as needed, rebuild the jar, and re-submit the topology.

Extending the NiFi Flow to Ingest Data Directly to Phoenix Using the PutSQL Processor

1) Let's go ahead and kill the Storm topology from the command line (or from the Ambari Storm View or Storm UI):

# storm kill NiFi-Storm-Phoenix

2) Log back in to the NiFi UI currently running the flow and stop the entire flow.

3) Drop a RouteOnAttribute processor onto the canvas for the matched relation from the ExtractText processor, configure it with the properties below, and auto-terminate the unmatched relation:

DEBUG : ${BULLETIN_LEVEL:equals('DEBUG')}
ERROR : ${BULLETIN_LEVEL:equals('ERROR')}
INFO : ${BULLETIN_LEVEL:equals('INFO')}
WARN : ${BULLETIN_LEVEL:equals('WARN')}

4) Drop an AttributesToJSON processor onto the canvas with the configuration below and connect RouteOnAttribute's DEBUG, ERROR, INFO, and WARN relations to it:

Attributes List : uuid,EVENT_DATE,BULLETIN_LEVEL,EVENT_TYPE,CONTENT
Destination : flowfile-content

5) Create and enable a DBCPConnectionPool named "Phoenix-Storm" with the configuration below:

Database Connection URL : jdbc:phoenix:sandbox.hortonworks.com:2181:/hbase-unsecure
Database Driver Class Name : org.apache.phoenix.jdbc.PhoenixDriver
Database Driver Location(s) : /usr/hdp/current/phoenix-client/phoenix-client.jar
6) Drop a ConvertJSONToSQL processor onto the canvas with the configuration below, connect AttributesToJSON's success relation to it, and auto-terminate the Failure relation for now after attaching the Phoenix-Storm DB controller service.

7) Drop a ReplaceText processor onto the canvas to rewrite the INSERT statements to UPSERT for Phoenix with the configuration below; connect the sql relation of ConvertJSONToSQL to it and auto-terminate the original and Failure relations.

8) Finally, add a PutSQL processor with the configurations below, connect it to ReplaceText's success relation, and auto-terminate all of its relations.

9) The final flow, including both ingestion via Storm and direct ingestion to Phoenix using PutSQL, is complete; it should look similar to below:

10) Now go ahead and start the flow to ingest data to both tables, via Storm and directly from NiFi.

11) Log back in to Zeppelin to see whether data is populated in the NIFI_DIRECT table:

%jdbc(phoenix)
SELECT EVENT_DATE,EVENT_TYPE,BULLETIN_LEVEL FROM NIFI_DIRECT WHERE BULLETIN_LEVEL='INFO' ORDER BY EVENT_DATE

- Too lazy to create the flow? Download my flow template here.

This completes the tutorial. You have successfully:

- Installed and configured HDF 2.0 on your HDP-2.5 Sandbox.
- Created a data flow to pull logs, parse them, and make them available on a site-to-site enabled NiFi port.
- Created a Storm topology to consume data from NiFi via Site-to-Site and ingest it into HBase via Phoenix.
- Directly ingested data into Phoenix with the PutSQL processor in NiFi without using Storm.
- Viewed the ingested data from the Phoenix command line and Zeppelin.

References:

bbende's nifi-storm GitHub repo

Thanks,
Jobin George
12-29-2016
07:56 PM
1 Kudo
Agreed. But does EvaluateXPath with the destination set to flowfile content give you any option at all to deal with the original incoming file? Something like the below would do both, I guess:

EvaluateXPath (set to attributes) [matched] --> PutFile/DoSomethingElse
EvaluateXPath (set to attributes) [matched] --> AttributesToJSON --> PutFile (attributes as content)

Any suggestions, or a better way to do it?