Member since: 05-30-2018 | Posts: 1322 | Kudos Received: 715 | Solutions: 148
09-07-2018
09:44 PM
6 Kudos
Log Forwarding/Ingestion Patterns

Log forwarding & ingestion is a key starting point for many logging initiatives such as log analytics, cyber security, and anomaly & bot detection. This article covers a few (not comprehensive) patterns for log forwarding/ingestion using NiFi.

Commonly, rsyslog is used to capture and ship log messages. "Rsyslog is an open-source software utility used on UNIX and Unix-like computer systems for forwarding log messages in an IP network. It implements the basic syslog protocol, extends it with content-based filtering, rich filtering capabilities, flexible configuration options and adds features such as using TCP for transport." More on how to configure rsyslog: here. NiFi is able to ingest messages from rsyslog over TCP or UDP via the ListenSyslog processor, which allows for little to no coding.

Patterns

Pattern A
A minimalist design. Rsyslog is configured to simply forward log messages to a NiFi cluster. The /etc/rsyslog.conf file needs to be configured to forward messages to the NiFi port identified in the ListenSyslog processor.

Pattern B
A MiNiFi listen-socket design. MiNiFi is installed on the server(s), leveraging the ListenSyslog processor. This pattern offers end-to-end data lineage along with richer operational capabilities compared to Pattern A. MiNiFi, via ListenSyslog, captures rsyslog messages and ships them to NiFi via S2S (site-to-site). Rsyslog is configured to simply forward log messages to a locally installed MiNiFi instance (localhost:port); the /etc/rsyslog.conf file needs to be configured to forward messages to the local MiNiFi port identified in the ListenSyslog processor. This design provides an at-least-once message delivery guarantee.

Pattern C
A MiNiFi tail-file design. MiNiFi is installed on the server(s), leveraging the TailFile processor rather than ListenSyslog as in Pattern B. Both Pattern B and this pattern offer end-to-end data lineage and rich operational capabilities. MiNiFi captures log messages by tailing a file or a directory of files and ships them to NiFi via S2S (site-to-site). Identify a log file to tail (e.g. /var/log/messages) or a directory of files, start MiNiFi, and the log messages will start to flow from the server(s) to NiFi. This design provides an at-least-once message delivery guarantee.

These are a few common patterns I have developed & implemented in the field with success. Happy log capturing!
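For Patterns A and B, the rsyslog side is a one-line change in /etc/rsyslog.conf. A sketch, where the hostname and ports are placeholders that must match the ListenSyslog processor's settings:

```
# /etc/rsyslog.conf
# Pattern A: forward all facilities/severities straight to the NiFi cluster
# ("@@" = TCP transport, a single "@" = UDP)
*.* @@nifi-host.example.com:6514

# Pattern B: forward to the locally installed MiNiFi instance instead
# *.* @@localhost:10514
```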
09-06-2018
01:56 PM
During launch of HDP or HDF on Azure via Cloudbreak, the following provisioning error may be thrown (check the Cloudbreak logs):

log:55 INFO c.m.a.m.r.Deployments checkExistence - [owner:xxxxx] [type:STACK] [id:2] [name:sparky] [flow:xxx] [tracking:] <-- 404 Not Found https://management.azure.com/subscriptions/xxxxxx/resourcegroups/spark. (104 ms, 92-byte body)
/cbreak_cloudbreak_1 | 2018-09-05 14:25:22,882 [reactorDispatcher-24] launch:136 ERROR c.s.c.c.a.AzureResourceConnector - [owner:xxxxxx] [type:STACK] [id:2] [name:sparky] [flow:xxxxxx] [tracking:] Provisioning error:

This means the instance type selected is not available within the region. Either change to a region where the instance type is available, or change to an instance type that is available within the region.
08-07-2018
03:19 PM
3 Kudos
Using Cloudbreak to launch HDF (specifically NiFi), an error may prevent launching an instance. In the logs (/usr/hdf/current/nifi/logs/nifi-app.log) the following error is produced:
java.util.ServiceConfigurationError: org.apache.nifi.processor.Processor: Provider org.apache.nifi.processors.hive.PutHiveStreaming could not be instantiated
at java.util.ServiceLoader.fail(ServiceLoader.java:232)
at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
....
Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:239)
....
This is generally caused by the snappy library extracting its native library to /tmp (the location defined by java.io.tmpdir) without the nifi user having access. To fix this, do one of the following:

1. Grant the nifi user access to /tmp, or
2. Create a tmp directory where the nifi user has access, for example /usr/hdf/current/nifi/tmp. Then go into Ambari, find the parameter "Template for bootstrap.conf", scroll past all the java.arg.* arguments, and add the following:

java.arg.snappy=-Dorg.xerial.snappy.tempdir=/usr/hdf/current/nifi/tmp

That's it!
07-10-2018
07:29 PM
@Dominika Bialek This was helpful. As a follow-up for others who may see this: you have to use the fully qualified name arn:aws:iam::AccountNumber:role/CredentialRole
03-27-2017
04:01 PM
3 Kudos
There are many ways to validate a JSON file against an Avro schema to verify all is kosher. Sharing a practice I have been using for a few years.

Objective: validate that an Avro schema binds correctly to a JSON file.

First you must have an Avro schema and a JSON file. From there, download the latest avro-tools jar; at the moment, 1.8.1 is the latest avro-tools version available. Store the Avro schema and the JSON file in the same directory, then issue a wget to fetch the avro-tools jar:

wget http://www.us.apache.org/dist/avro/avro-1.8.1/java/avro-tools-1.8.1.jar

Objective details: validate that the schema student.avsc binds to student.json.

How: issue the following:

java -jar ./avro-tools-1.8.1.jar fromjson --schema-file YourSchemaFile.avsc YourJsonFile.json > AnyNameForYourBinaryAvro.avro

Using the student files example:

java -jar ./avro-tools-1.8.1.jar fromjson --schema-file student.avsc student.json > student.avro

Validation passed: an Avro binary was created. Now, as a last step, let's break something. Another Avro schema (student2.avsc) is created which does not conform to student.json. Running the same command with student2.avsc, the Avro binary fails to be created due to validation errors, confirming that avro-tools catches the mismatch.
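The article's actual student files are not reproduced here, so for illustration, here is a hypothetical minimal pair that would pass the fromjson check (the field names are made up):

```shell
# Hypothetical minimal schema + matching data (avro-tools fromjson expects
# one JSON record per line in the data file).
cat > student.avsc <<'EOF'
{"type": "record", "name": "student",
 "fields": [{"name": "id", "type": "int"},
            {"name": "name", "type": "string"}]}
EOF
cat > student.json <<'EOF'
{"id": 1, "name": "Jane"}
EOF
# With the files in place, the validation step from the article is:
#   java -jar ./avro-tools-1.8.1.jar fromjson --schema-file student.avsc student.json > student.avro
```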
03-18-2017
05:27 PM
5 Kudos
HBase along with Phoenix is one of the most powerful NoSQL combinations. HBase/Phoenix capabilities allow users to host OLTP-ish workloads natively on Hadoop, with all the goodness of HA and analytic benefits on a single platform (e.g. the Spark-HBase connector or the Phoenix Hive storage handler). Often a requirement for HA implementations is a DR environment. Here I will describe a few common patterns; this is in no way an exhaustive list of HBase DR patterns. In my opinion, Pattern 5 is the simplest to implement and provides operational ease & efficiency.
Here are some of the high-level replication and availability strategies with HBase/Phoenix:

- HBase provides high availability within a cluster by managing region server failures transparently.
- HBase provides various cross-DC asynchronous replication schemes:
  - Master/Master replication topology: two clusters replicating all edits bi-directionally to each other.
  - Master/Slave replication topology: one cluster replicating all edits to a second cluster.
  - Cyclic replication topology: a ring topology for clusters, replicating all edits in a cyclic manner.
  - Hub-and-spoke replication topology: a central cluster replicating all edits to multiple clusters in a uni-directional manner.

Using the topologies described above, a cross-DC replication scheme can be set up per the desired architecture.
Pattern 1
- Reads & writes served by both clusters
- An implementation of client stickiness for writes/reads, based on a session-ID-like concept, needs to be investigated
- Master/Master replication between clusters
- Bidirectional replication
- Replication post-failover recovery instrumented via cyclic replication
Pattern 2
Reads
served by both
clusters
Writes
served by single cluster
Master/Master
replication between clusters
Bidirectional
replication
Client
will failover to secondary cluster
Replication
post failover - recovery instrumented via Cyclic
Replication
Pattern 3
Reads
& Writes served
by single cluster Master/Master
replication between clusters Bidirectional
replication Client
will failover to secondary cluster Replication
post failover - recovery instrumented
via Cyclic
Replication
Pattern 4
Reads
& Writes served
by single cluster Master/Slave
replication between clusters Unidirectional
replication Client
will failover to secondary cluster Manual
resync required on ”primary” cluster due to unidirectional replication
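For Patterns 1 through 4, the replication link itself is wired up with the standard HBase shell commands. A sketch, where the peer id, ZooKeeper quorum, table, and column family names are placeholders:

```
hbase> add_peer '1', "zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase"
hbase> alter 'my_table', {NAME => 'cf', REPLICATION_SCOPE => '1'}
hbase> list_peers
```

For Master/Master topologies, the same add_peer/alter is also executed on the second cluster, pointed back at the first.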
Pattern 5
- Ingestion via the NiFi REST API; supports handling secure calls and round-trip responses
- Push data to Kafka to democratize the data to all apps interested in the data set
- Secure Kafka topics via Apache Ranger
- NiFi dual ingest into N number of HBase/Phoenix clusters; enables in-sync data stores
- Operational ease: NiFi back pressure will handle any ODS downtime
- UI flow orchestration
- Data governance built in via data provenance (event-level lineage)

Additional HBase Replication Documentation
- Monitor replication status: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_hadoop-ha/content/hbase-replication-monitoring-status.html
- Replication metrics: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_hadoop-ha/content/hbase-cluster-replication-metrics.html
- Replication configuration options: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_hadoop-ha/content/hbase-cluster-repl-repl-config-options.html
- HBase replication internals: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_hadoop-ha/content/hbase-replication-internals.html
- HBase cluster replication details: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_hadoop-ha/content/hbase-cluster-repl-details.html
03-18-2017
03:00 AM
2 Kudos
An often-requested feature has popped up in HDP 2.5: Phoenix namespace mapping, AKA Phoenix schemas. This feature allows creation of schemas and development within a schema workspace. This guide will walk through the steps to enable this feature and how to use it.

Use case: create 2 schemas named "schema1" and "schema2". Within both schemas, create a table customer. Create another schema named "schema3", then drop "schema3". Note: a schema can only be dropped if it is empty (does not host any tables).

1. Through Ambari, go to HBase and select the Advanced tab.
2. Add a parameter to hbase-site.xml. To do this, go to "Custom hbase-site" and click on "Add Property".
3. Add the property
   Key: phoenix.schema.isNamespaceMappingEnabled
   Value: true
   Click on the Add button. A restart of the HBase service will be required.
4. To test the functionality, use sqlline.py. It is located under /usr/hdp/current/phoenix/bin. Note: I am using /usr/hdp/2.5.0.0-1245/phoenix/bin since I want to prove to the audience that I am in fact using HDP 2.5.
5. Create two schemas named "schema1" and "schema2".
6. Set the workspace to use schema1 and create table customer.
7. Set the workspace to use schema2 and create table customer.
8. Run !table to view all schemas and tables.
9. Create a schema "schema3" and drop "schema3".

That's it! Simple and easy to use. Enjoy the new feature.
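Steps 5 through 9 boil down to the following statements in sqlline.py; a sketch, where the customer columns are made up since the article's screenshots are not reproduced here:

```sql
CREATE SCHEMA IF NOT EXISTS schema1;
CREATE SCHEMA IF NOT EXISTS schema2;

USE schema1;
CREATE TABLE customer (id BIGINT NOT NULL PRIMARY KEY, name VARCHAR);

USE schema2;
CREATE TABLE customer (id BIGINT NOT NULL PRIMARY KEY, name VARCHAR);

-- !table lists both schemas, each with its own customer table

CREATE SCHEMA IF NOT EXISTS schema3;
DROP SCHEMA schema3;  -- succeeds only because schema3 hosts no tables
```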
03-06-2017
09:50 PM
6 Kudos
A previous article demonstrated how to use the NiFi REST API to create a template. The objective of this article is to demonstrate how to use the NiFi REST API to change a workflow on a secured cluster in real time, without service disruption or downtime.

Use case: a process group named CICD is an active data flow (data is flowing through it) and a new processor will be added at the end of the flow without application disruption or downtime. Comments are in the picture above.

Steps:
1. Create an access token
2. Create a client ID
3. Fetch process group attributes (the process group which the changes will be applied to)
4. Create a LogAttribute processor
5. Fetch the UUID of the UpdateAttribute processor (data will flow from this existing processor into the newly added LogAttribute processor)
6. Link LogAttribute & UpdateAttribute via the success relationship
7. Update LogAttribute to terminate on the success relationship

Steps 1, 2 & 3: these steps are described here, so no reason to copy and paste.

Step 4 - Create LogAttribute Processor

Payload:
{
"revision": {
"clientId": "556f3dcf-c8d4-1145-0d1b-539ef89a01da",
"version": 0
},
"component": {
"type": "org.apache.nifi.processors.standard.LogAttribute",
"name": "LogAttribute",
"position": {
"x": 1406.2231010674523,
"y": 1243.8565539964334
}
}
}

API: /nifi-api/process-groups/YourProcessGroupUUID/processors

My process group UUID is a522b679-015a-1000-0000-00003f33fd93.

POST:

curl -X POST -H "Accept: application/json, text/javascript, */*; q=0.01" -H "Accept-Encoding: gzip" -H "Accept-Language: en-US,en;q=0.5" -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9" -H "Connection: keep-alive" -H "Content-Length: 229" -H "Content-Type: application/json" -H "Host: nifi.com:9091" -H "Referer: https://nifi.com:9091/nifi/login" -H "X-Requested-With: XMLHttpRequest" -d '{"revision":{"clientId":"556f3dcf-c8d4-1145-0d1b-539ef89a01da","version":0},"component":{"type":"org.apache.nifi.processors.standard.LogAttribute","name":"LogAttribute","position":{"x":1406.2231010674523,"y":1243.8565539964334}}}' https://nifi.com:9091/nifi-api/process-groups/a522b679-015a-1000-0000-00003f33fd93/processors | gunzip

This returns a large JSON payload which includes the UUID of the newly created processor (LogAttribute). The newly added processor will be displayed in the UI.

Step 5 - Fetch the UUID of the UpdateAttribute processor

Fetch the UUID of the UpdateAttribute processor. In the next step, a link (via the success relationship) between the newly created LogAttribute processor and the UpdateAttribute processor will be established through their respective UUIDs.

API: /nifi-api/flow/process-groups/YourProcessGroupUUID

GET:

curl -X GET -H "Accept: */*" -H "Accept-Encoding: gzip" -H "Accept-Language: en-US,en;q=0.5" -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9" -H "Connection: keep-alive" -H "Host: nifi.com:9091" -H "Referer: https://nifi.com:9091/nifi/login" -H "X-Requested-With: XMLHttpRequest" https://nifi.com:9091/nifi-api/flow/process-groups/a522b679-015a-1000-0000-00003f33fd93 | gunzip

This returns a large JSON message:

"processGroupFlow": {
"id": "a522b679-015a-1000-0000-00003f33fd93",
..........
}
},
"flow": {
....
"component": {
"id": "08873906-c9ca-1190-8376-b863cef4e797",
....
"name": "UpdateAttribute",
"type": "org.apache.nifi.processors.attributes.UpdateAttribute",
"state": "RUNNING",
...

The UUID for the UpdateAttribute processor (yours will be different) is 08873906-c9ca-1190-8376-b863cef4e797.

Step 6 - Link LogAttribute & UpdateAttribute via the success relationship

Payload:
{
"revision": {
"clientId": "556f3dca-c8d4-1145-2ac4-3e59aa43f6ec",
"version": 0
},
"component": {
"name": "",
"source": {
"id": "08873906-c9ca-1190-8376-b863cef4e797",
"groupId": "a522b679-015a-1000-0000-00003f33fd93",
"type": "PROCESSOR"
},
"destination": {
"id": "556f687a-c8d4-1145-0000-000032d88e41",
"groupId": "a522b679-015a-1000-0000-00003f33fd93",
"type": "PROCESSOR"
},
"selectedRelationships": [
"success"
],
"flowFileExpiration": "0 sec",
"backPressureDataSizeThreshold": "1 GB",
"backPressureObjectThreshold": "10000",
"bends": [],
"prioritizers": []
}
}

Source is the existing processor (UpdateAttribute) we want the flowfiles from, and the destination is the newly added processor (LogAttribute).

API: /nifi-api/process-groups/ProcessGroupUUID/connections

POST:

curl -X POST -H "Accept: application/json, text/javascript, */*; q=0.01" -H "Accept-Encoding: gzip" -H "Accept-Language: en-US,en;q=0.5" -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9=" -H "Connection: keep-alive" -H "Content-Length: 522" -H "Content-Type: application/json" -H "Host: nifi.com:9091" -H "Referer: https://nifi.com:9091/nifi/login" -H "X-Requested-With: XMLHttpRequest" -d '{"revision":{"clientId":"556f3dcf-c8d4-1145-0d1b-539ef89a01da","version":0},"component":{"name":"","source":{"id":"08873906-c9ca-1190-8376-b863cef4e797","groupId":"a522b679-015a-1000-0000-00003f33fd93","type":"PROCESSOR"},"destination":{"id":"556f687a-c8d4-1145-0000-000032d88e41","groupId":"a522b679-015a-1000-0000-00003f33fd93","type":"PROCESSOR"},"selectedRelationships":["success"],"flowFileExpiration":"0 sec","backPressureDataSizeThreshold":"1 GB","backPressureObjectThreshold":"10000","bends":[],"prioritizers":[]}}' https://nifi.com:9091/nifi-api/process-groups/a522b679-015a-1000-0000-00003f33fd93/connections | gunzip

Through the UI, the newly added processor (LogAttribute) is now linked to the existing UpdateAttribute processor. Notice that LogAttribute has a yellow caution decoration on the top left: the processor requires action on the success relationship. For this example, the success relationship will be set to terminate, meaning no other work beyond LogAttribute is required.

Step 7 - Update LogAttribute to terminate on the success relationship
Set auto-terminate on success for the newly added processor.

API: /nifi-api/processors/ProcessorUUID

PUT:

curl -X PUT -H "Accept: application/json, text/javascript, */*; q=0.01" -H "Accept-Encoding: gzip" -H "Accept-Language: en-US,en;q=0.5" -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9" -H "Connection: keep-alive" -H "Content-Length: 522" -H "Content-Type: application/json" -H "Host: nifi.com:9091" -H "Referer: https://nifi.com:9091/nifi/login" -H "X-Requested-With: XMLHttpRequest" -d '{"component":{"id":"556f687a-c8d4-1145-0000-000032d88e41","name":"LogAttribute","config":{"concurrentlySchedulableTaskCount":"1","schedulingPeriod":"0 sec","executionNode":"ALL","penaltyDuration":"30 sec","yieldDuration":"1 sec","bulletinLevel":"WARN","schedulingStrategy":"TIMER_DRIVEN","comments":"","autoTerminatedRelationships":["success"]},"state":"STOPPED"},"revision":{"clientId":"556f3dcf-c8d4-1145-0d1b-539ef89a01da","version":5}}' https://nifi.com:9091/nifi-api/processors/556f687a-c8d4-1145-0000-000032d88e41 | gunzip

The UI will show LogAttribute in "stopped" status with the "Automatically Terminate Relationships" success box checked.

Step 8 - Start the processor

Payload:
{
"revision": {
"clientId": "a556f3dce-c8d4-1145-ff5c-6c6f302bd804",
"version": 5
},
"component": {
"id": "556f5bc4-c8d4-1145-0000-000059ff8789",
"state": "RUNNING"
}
}
Here, id is the processor UUID.

API: /nifi-api/processors/ProcessorUUID

PUT:

curl -X PUT -H "Accept: application/json, text/javascript, */*; q=0.01" -H "Accept-Encoding: gzip" -H "Accept-Language: en-US,en;q=0.5" -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9" -H "Connection: keep-alive" -H "Content-Length: 522" -H "Content-Type: application/json" -H "Host: nifi.com:9091" -H "Referer: https://nifi.com:9091/nifi/login" -H "X-Requested-With: XMLHttpRequest" -d '{"revision":{"clientId":"556f3dcf-c8d4-1145-0d1b-539ef89a01da","version":5},"component":{"id":"556f687a-c8d4-1145-0000-000032d88e41","state":"RUNNING"}}' https://nifi.com:9091/nifi-api/processors/556f687a-c8d4-1145-0000-000032d88e41 | gunzip

The picture below shows the flow has been updated via the REST API in real time, without any disruption or stoppage of the workflow.

Note - All of the actions described in this article may also be performed via the UI.
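As an aside for Step 5, rather than eyeballing the large JSON response for the UpdateAttribute UUID, the response can be saved to a file and the id pulled out by processor name. A sketch, where flow.json and its contents are illustrative stand-ins for the real /nifi-api/flow/process-groups/<uuid> response:

```shell
# Illustrative excerpt of the flow response (the real one is much larger).
cat > flow.json <<'EOF'
{"processGroupFlow": {"flow": {"processors": [
  {"component": {"id": "08873906-c9ca-1190-8376-b863cef4e797",
                 "name": "UpdateAttribute"}}]}}}
EOF
# Print the UUID of every processor named UpdateAttribute.
python3 -c '
import json
flow = json.load(open("flow.json"))
for p in flow["processGroupFlow"]["flow"]["processors"]:
    if p["component"]["name"] == "UpdateAttribute":
        print(p["component"]["id"])
'
```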
03-06-2017
05:29 PM
8 Kudos
The objective of this article is to clearly demonstrate how to use NiFi's REST API to create a template on a secured cluster. I plan to publish multiple articles on using the NiFi REST API, based on community feedback, around use cases seen in the field; no pie-in-the-sky knowledge share. I will add complexity to the workflows as I publish new articles, showing how each looks via REST API publishing.

Use Case Setup:
Create a template of my process group UVindex via the REST API. UVindex resides within a process group hierarchy:
Root->SunileManjeeProcessGroup->RestAPI Test->UVindex
UVindex is a simple process group which polls Chicago UV index data every 5 minutes and stores it on HDFS. CurrentWeatherData is in the same process group, but for this use case I only want to create a template for the UVindex process group.
Steps
Fetch Access Token
Fetch Client ID
Fetch Process Group attributes
Create Snippet
Create Template
Step 1 - Fetch Access Token
An access token is required for REST API interactions on a secured NiFi cluster. The token will be used in the header for all REST API interactions:
-H "Authorization: Bearer myLongToken"
API
/nifi-api/access/token
Fetch Access Token
curl -X POST -H "Accept: */*" -H "Accept-Encoding: gzip" -H "Accept-Language: en-US,en;q=0.5" -H "Connection: keep-alive" -H "Content-Length: 38" -H "Content-Type: application/x-www-form-urlencoded; charset=UTF-8" -H "Host: https://nifi.com:9091" -H "Referer: https://nifi.com:9091/nifi/login" -H "X-Requested-With: XMLHttpRequest" -d 'username=myUserName&password=YourPasswordHere' https://nifi.com:9091/nifi-api/access/token | gunzip
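In scripts, it is handy to wrap the token fetch in a small helper so the token can be captured into a variable and reused in every later call. A sketch; the host, user, and password are placeholders, and -s/-k assume a quiet call against a self-signed certificate:

```shell
# nifi_token USER PASS HOST -> prints the bearer token to stdout.
nifi_token() {
  local user="$1" pass="$2" host="$3"
  curl -sk -X POST \
    -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' \
    --data-urlencode "username=$user" \
    --data-urlencode "password=$pass" \
    "$host/nifi-api/access/token"
}
# Usage: TOKEN=$(nifi_token myUserName 'YourPasswordHere' https://nifi.com:9091)
# then pass:  -H "Authorization: Bearer $TOKEN"
```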
Step 2 - Fetch Client ID
Fetch your client ID. The client ID will be used for all REST API interactions.

curl -X GET -H "Accept: */*" -H "Accept-Encoding: gzip" -H "Accept-Language: en-US,en;q=0.5" -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJjbj1" -H "Connection: keep-alive" -H "Host: hdf20-0.field.hortonworks.com:9091" -H "Referer: https://nifi.com:9091/nifi/login" -H "X-Requested-With: XMLHttpRequest" https://nifi.com:9091/nifi-api/flow/client-id | gunzip
Step 3 - Fetch Process Group Attributes
Fetch the UUID of the process group. To create a template, the UUID of the object you want to create a template for is required.

API:
/nifi-api/flow/process-groups/parentProcessGroupUUID

Here my parent process group UUID is 8c3d29aa-d1ea-1c85-ffff-ffffe6b3853a.
Fetch Process Group Info
curl -X GET -H "Accept: */*" -H "Accept-Encoding: gzip" -H "Accept-Language: en-US,en;q=0.5" -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJjbj1OaUZpIFVzZXI" -H "Connection: keep-alive" -H "Host: https://nifi.com:9091" -H "Referer: https://nifi.com:9091/nifi/login" -H "X-Requested-With: XMLHttpRequest" https://nifi.com:9091/nifi-api/flow/process-groups/8c3d29aa-d1ea-1c85-ffff-ffffe6b3853a | gunzip
This returns a large JSON object. Parse it for the name of the process group (UVindex) and grab its UUID and version.
Step 4 - Create Snippet
To create a template, first a snippet must be created. The snippet ID will be used to name and create the template; all details of what a template should be composed of are within the snippet.

API:
/nifi-api/snippets

For the payload, populate parentGroupId using the parent process group UUID. Under "processGroups", populate the UUID of the process group you want to create a template of, your client ID, and the version captured in step 3. You may have connections or other objects which require values within the payload; for this template there are none. In follow-up articles I will create more complex workflows which will require such objects to be populated during snippet creation.
{
"snippet": {
"parentGroupId": "8c3d29aa-d1ea-1c85-ffff-ffffe6b3853a",
"processors": {},
"funnels": {},
"inputPorts": {},
"outputPorts": {},
"remoteProcessGroups": {},
"processGroups": {
"d8b73f15-e918-1afc-9db2-e7cdfbf19076": {
"clientId": "980e3bca-b76a-183f-2e3b-5e75dccd3aec",
"version": 1
}
},
"connections": {},
"labels": {}
}
}
Example
curl -X POST -H "Accept: application/json, text/javascript, */*; q=0.01" -H "Accept-Encoding: gzip" -H "Accept-Language: en-US,en;q=0.5" -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJzd" -H "Connection: keep-alive" -H "Content-Length: 305" -H "Content-Type: application/json" -H "Host: nifi.com:9091" -H "Referer: https://hdf20-0.field.hortonworks.com:9091/nifi/login" -H "X-Requested-With: XMLHttpRequest" -d '{"snippet":{"parentGroupId":"8c3d29aa-d1ea-1c85-ffff-ffffe6b3853a","processors":{},"funnels":{},"inputPorts":{},"outputPorts":{},"remoteProcessGroups":{},"processGroups":{"d8b73f15-e918-1afc-9db2-e7cdfbf19076":{"clientId":"980e3bca-b76a-183f-2e3b-5e75dccd3aec","version":1}},"connections":{},"labels":{}}}' https://nifi.com:9091/nifi-api/snippets | gunzip
Output
{
"snippet": {
"id": "980e46c5-b76a-183f-ffff-ffffb4770059",
"uri": "https://nifi.com:9091/nifi-api/process-groups/8c3d29aa-d1ea-1c85-ffff-ffffe6b3853a/snippets/980e46c5-b76a-183f-ffff-ffffb4770059",
"parentGroupId": "8c3d29aa-d1ea-1c85-ffff-ffffe6b3853a",
"processGroups": {
"d8b73f15-e918-1afc-9db2-e7cdfbf19076": {
"clientId": "980e3bca-b76a-183f-2e3b-5e75dccd3aec",
"version": 1
}
},
"remoteProcessGroups": {},
"processors": {},
"inputPorts": {},
"outputPorts": {},
"connections": {},
"labels": {},
"funnels": {}
}
}
Fetch the snippet ID from the output
Step 5 - Create Template
Using the snippet ID from step 4 create a template named "My Template"
Payload
{
"name": "My Template",
"description": "",
"snippetId": "980e46c5-b76a-183f-ffff-ffffb4770059"
}
API
/nifi-api/process-groups/ParentProcessGroupUUID/templates
Example
curl -X POST -H "Accept: application/json, text/javascript, */*; q=0.01" -H "Accept-Encoding: gzip" -H "Accept-Language: en-US,en;q=0.5" -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJz" -H "Connection: keep-alive" -H "Content-Length: 90" -H "Content-Type: application/json" -H "Host: nifi.com:9091" -H "Referer: https://nifi.com:9091/nifi/login" -H "X-Requested-With: XMLHttpRequest" -d '{"name":"My Template","description":"","snippetId":"980e46c5-b76a-183f-ffff-ffffb4770059"}' https://nifi.com:9091/nifi-api/process-groups/8c3d29aa-d1ea-1c85-ffff-ffffe6b3853a/templates | gunzip
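When scripting this step, the payload can be generated from variables rather than inlined, so the snippet ID from step 4 is not hand-pasted into the JSON. A sketch; the file name payload.json is an assumption, and the values are the ones from this walkthrough:

```shell
# Build the create-template payload from shell variables.
SNIPPET_ID="980e46c5-b76a-183f-ffff-ffffb4770059"
TEMPLATE_NAME="My Template"
printf '{"name":"%s","description":"","snippetId":"%s"}\n' \
  "$TEMPLATE_NAME" "$SNIPPET_ID" > payload.json
cat payload.json
# Then post it with: curl ... -d @payload.json \
#   https://nifi.com:9091/nifi-api/process-groups/<parent-uuid>/templates
```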
Which returns
{
"template": {
"uri": "https://nifi.com:9091/nifi-api/templates/e3cad030-733e-3568-b7f5-90b2457e1637",
"id": "e3cad030-733e-3568-b7f5-90b2457e1637",
"groupId": "8c3d29aa-d1ea-1c85-ffff-ffffe6b3853a",
"name": "My Template",
"description": "",
"timestamp": "03/05/2017 05:29:21 UTC",
"encoding-version": "1.0"
}
}
In the NiFi UI you will see the newly created template