Member since 01-11-2016 · 355 Posts · 232 Kudos Received · 74 Solutions
11-02-2022
02:21 AM
I have seen that you save the MapRecord as a string. By mistake, I saved it as a string too, due to a wrong schema. My record looks like this; any idea how I can convert it back to a MapRecord from this string format: "[MapRecord[{name=John Doe, age=21, products=[Ljava.lang.Object;@f8495e3, type=end-user, description=This is an end user}]]"? Thanks!
05-01-2018
08:35 PM
4 Kudos
DataWorks Summit (DWS) is the industry's premier big data community event in Europe and the US. The most recent DWS took place in Berlin, Germany, on April 18th and 19th. This was the sixth edition in Europe, and this year there were over 1,200 attendees from 51 different countries, 77 breakout sessions in 8 tracks, 8 Birds-of-a-Feather sessions and 7 meetups. I had the opportunity to attend as a speaker this year, giving a talk on "Best practices and lessons learnt from running Apache NiFi". It was a joint talk with the Big Data squad team from Renault, the French car manufacturer. The presentation recording will be available on the DWS website. In the meantime, I'll share the three key takeaways from our talk.

NiFi is an accelerator for your Big Data projects

If you have worked on any data project, you already know how hard it is to get data into your platform so "the real work" can start. This is particularly true in Big Data projects, where companies aim to ingest a variety of data sources ranging from databases to files to IoT data. Having NiFi as a single ingestion platform that gives you out-of-the-box tools to ingest several data sources in a secure and governed manner is a real differentiator. NiFi accelerates data availability in the data lake, and hence accelerates your Big Data projects and the extraction of business value. The following numbers from Renault's projects are worth a thousand words.

NiFi enables new use cases

NiFi is not only an ingestion tool; it's a data logistics platform. This means that NiFi enables easy collection, curation, analysis of and action on any data anywhere (edge, cloud, data center), with built-in end-to-end security and provenance. This unique set of features makes NiFi the best choice for implementing new data-centric use cases that require geographically distributed architectures and high SLAs (availability, security and performance). In our talk, two exciting use cases were shared: connected plants and packaging traceability.

NiFi flow design is like software development

When I pitch NiFi to my customers, I can see them get excited quickly. They start brainstorming instantly and ask whether NiFi can do this or that. In this situation, I usually fire up a NiFi instance on my Mac and start dragging and dropping a few processors to simulate their use case. This is a powerful capability that fosters interaction between the team members in the room and leads us to very interesting business and technical discussions. When people see the power of NiFi and everything we can easily achieve in a short timeframe, a new set of questions arises (especially from the few skeptics in the room :)). Can I automate this task? Can I monitor my data flows? Can I integrate NiFi flow design with my development process? Can I "industrialize" my use case? All these questions are legitimate when we see how powerful and easy to use NiFi is. The good news is that the answer to all of them is "yes". However, it's important to put the right process in place to avoid ending up with a POC that becomes production (who has never lived through that situation?).
The way I like to answer these questions is to show how much NiFi flow design is like software development. When developers want to tackle a problem, they start designing a solution by asking: "what's the best way to implement this?" The word "best" here covers aspects like complexity, scalability and maintainability. The same logic applies to NiFi flow design. You have several ways to implement your use case, and they are not equivalent. Once a solution is found, you use the NiFi UI as your IDE to implement it. Your flow is a set of processors, just as your code or your algorithm is a set of instructions. You have "if-then-else" statements with routing processors, you have "for" or "while" loops with UpdateAttribute and self-relationships, you have mathematical and logical operators with processors and the Expression Language, and so on. When you build your flow, you divide it into process groups, much like the functions you use to organize your code. This makes your applications easier to understand, maintain and debug. You use templates for repetitive things, just as you build and use libraries across your projects. From this main consideration you can derive several best practices. Some of them are generic software development practices, and some of them are specific to NiFi as "a programming language". I share some good principles to apply in the following slide:

Final thoughts

NiFi is a powerful tool that gives you business and technical agility. To master its power, it is important to define and enforce best practices. Many of these best practices can be borrowed directly from software engineering; others are specific to NiFi. We have shared some of these ideas in the deck available on the DWS webpage. Some of the ideas explained in the presentation have been discussed by other NiFi enthusiasts, such as the excellent "Monitoring NiFi" series by Pierre [1]. Various Flow Development Lifecycle (FDLC) [2] topics have also been covered by folks like Dan and Tim for NiPyAPI [3][4], Bryan for the flow registry [5] and Pierre for the NiFi CLI [6]. Other topics, like NiFi design patterns, deserve a dedicated post that I'll address in the future. Article initially shared on https://medium.com/@abdelkrim.hadjidj/best-practices-for-using-apache-nifi-in-real-world-projects-3-takeaways-1fe6912101db
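As a concrete illustration of the "can I automate this task?" question above, here is a minimal sketch of what flow automation can look like with NiPyAPI, the Python client mentioned in [3][4]. It is not from the talk: the host URL and the process group name 'ingest-stores' are made-up examples, and the calls should be double-checked against the NiPyAPI version you use.

    import nipyapi

    # Point the client at a (hypothetical) local, unsecured NiFi instance
    nipyapi.config.nifi_config.host = 'http://localhost:8080/nifi-api'

    # Look up a process group by name -- 'ingest-stores' is an example name
    pg = nipyapi.canvas.get_process_group('ingest-stores')

    # Start everything inside it, the same action as right-click > Start in the UI
    nipyapi.canvas.schedule_process_group(pg.id, scheduled=True)

Monitoring and flow versioning can be scripted in a similar way, which is how the FDLC topics referenced in [2] are typically automated.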
06-04-2019
09:52 AM
What if I wanted to put my Parquet files into S3 instead of HDFS?
10-07-2017
08:02 AM
7 Kudos
Introduction

This is part 2 of a series of articles on Data Enrichment with NiFi:
Part 1: Data flow enrichment with LookupRecord and SimpleKV Lookup Service is available here
Part 2: Data flow enrichment with LookupAttribute and SimpleKV Lookup Service is available here
Part 3: Data flow enrichment with LookupRecord and MongoDB Lookup Service is available here

Enrichment is a common use case when working on data ingestion or flow management. Enrichment means getting data from an external source (database, file, API, etc.) to add more detail, context or information to the data being ingested. In Part 1, I showed how to use LookupRecord to enrich the content of a flow file. This is a powerful feature of NiFi based on the record-oriented paradigm. In some scenarios, however, we want to enrich the flow file by adding the result of the lookup as an attribute rather than to the content of the flow file. For this, we can use LookupAttribute with a LookupService.

Scenario

We will be using the same retail scenario as in the previous article. However, we will be adding the city of the store as an attribute to each flow file. This information can then be used inside NiFi, for instance for data routing. Let's see how we can use LookupAttribute to do it.

Implementation

We will be using the same GenerateFlowFile processor to generate data, as well as the same SimpleKeyValueLookupService. In order to add the city of a store as an attribute, we will use a LookupAttribute processor with the following configuration: The LookupAttribute processor will use the value of the id_store attribute as a key to query the lookup service. The returned value will be added as the 'city' attribute. To make this work, flow files must have an 'id_store' attribute before entering the lookup processor. Currently, this information is only in the content. We can use an EvaluateJsonPath processor to promote it from the content to an attribute. The final flow looks like the following:

Results

To verify that our enrichment is working, let's look at the flow file attributes after the EvaluateJsonPath and then after the LookupAttribute:
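To make the lookup step concrete, here is a minimal sketch, outside of NiFi, of the logic that the EvaluateJsonPath and LookupAttribute combination implements: extract id_store from the JSON content, look it up in a key-value table, and attach the result as an attribute. The sample record and the store-to-city mapping are made up for illustration; in the real flow the mapping lives in the SimpleKeyValueLookupService.

    import json

    # Made-up key-value table playing the role of the SimpleKeyValueLookupService
    store_cities = {"1": "Paris", "2": "Lyon"}

    # Made-up flow file: content is JSON, attributes is a string-to-string map
    content = '{"id_store": "2", "amount": 42.5}'
    attributes = {"filename": "sale-0001.json"}

    # EvaluateJsonPath step: promote $.id_store from the content to an attribute
    attributes["id_store"] = str(json.loads(content)["id_store"])

    # LookupAttribute step: use id_store as the key and add the result as 'city'
    city = store_cities.get(attributes["id_store"])
    if city is not None:
        attributes["city"] = city

    print(attributes)  # {'filename': 'sale-0001.json', 'id_store': '2', 'city': 'Lyon'}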
11-02-2017
02:45 PM
Thanks Abdel!
03-13-2017
11:26 AM
3 Kudos
Introduction

NiFi Site-to-Site (S2S) is a communication protocol used to exchange data between NiFi instances or clusters. This protocol is useful for use cases where geographically distributed clusters need to communicate. Examples include:

IoT: collect data from edge nodes (MiNiFi) and send it to NiFi for aggregation/storage/analysis
Connected cars: collect data locally by city or country with a local HDF cluster, and send it back to a global HDF cluster in the core data center
Replication: synchronization between two HDP clusters (on-prem/cloud or primary/DR)

S2S provides several benefits such as scalability, security, load balancing and high availability. More information can be found here.

Context

NiFi can be secured by enabling SSL and requiring users/nodes to authenticate with certificates. However, in some scenarios, customers have secured and unsecured NiFi clusters that need to communicate. The objective of this tutorial is to show two approaches to achieve this. Discussion of having secured and unsecured NiFi clusters in the same application is outside the scope of this tutorial.

Prerequisites

Let's assume that we have already installed an unsecured HDF cluster (Cluster2) that needs to send data to a secured cluster (Cluster1). Cluster1 is a 3-node NiFi cluster with SSL: hdfcluster0, hdfcluster1 and hdfcluster2. We can see HTTPS in the URLs as well as the connected user 'ahadjidj'. Cluster2 is also a 3-node NiFi cluster, but without SSL enabled: hdfcluster20, hdfcluster21 and hdfcluster22.

Option 1: the lazy option

The easiest way to get data from Cluster2 to Cluster1 is to use a pull method. In this approach, Cluster1 uses a Remote Process Group (RPG) to pull data from Cluster2. We will configure the RPG to use HTTP, and no special configuration is required. However, data will go unencrypted over the network. Let's see how to implement this.

Step 1: configure Cluster2 to generate data

The easiest way to generate data in Cluster2 is to use a GenerateFlowFile processor. Set the File Size to something different from 0 and the Run Schedule to 60 sec. Add an output port to the canvas and call it 'fromCluster2'. Connect and start the two processors. At this point, we can see data being generated and queued before the output port.

Step 2: configure Cluster1 to pull data

Add an RPG and configure it with the HTTP addresses of the three Cluster2 nodes. Use HTTP as the Transport Protocol and enable transmission. Add a PutFile processor to grab the data. Connect the RPG to the PutFile and choose the 'fromCluster2' output port when asked. Right-click on the RPG and activate the toggle next to 'fromCluster2'. We should see flow files coming from the RPG and buffering before the PutFile processor.

Option 2: the secure option

The first approach was easy to configure, but data was sent unencrypted over the wire. If we want to leverage SSL and send data encrypted even between the two clusters, we need to generate and use certificates for each node in Cluster2. The only difference from a fully secured cluster is that we don't enable SSL on Cluster2 itself.

Step 1: generate and add Cluster2 certs

I assume that you already know how to generate certificates for the CA and the nodes and add them to the TrustStore/KeyStore; otherwise, there are several HCC articles that explain how to do it. We need to configure Cluster2 with its certificates: Upload each node's certificate to the node and add it to the KeyStore (e.g. keystore.pfx). Set the KeyStore type and password as well.
Upload the CA (Certificate Authority) certificate to each node and add it to the TrustStore (e.g. truststore.jks). Set the TrustStore type and password as well. A sketch of the corresponding nifi.properties entries is given at the end of this post.

Step 2: configure Cluster2 to push data to Cluster1

In Cluster1, add an input port (toCluster1) and connect it to a PutFile processor. Use a GenerateFlowFile processor to generate data in Cluster2 and an RPG to push the data to Cluster1. Here we use HTTPS addresses when configuring the RPG. Cluster2 should now be able to send data to Cluster1 via the toCluster1 input port. However, the RPG shows a Forbidden error.

Step 3: add policies to authorize Cluster2 to use the S2S protocol

The previous error is triggered because the nodes belonging to Cluster2 are not authorized to access Cluster1's resources. To solve the problem, let's do the following configuration:

1) Go to the users menu in Cluster1 and add a user for each node of Cluster2
2) Go to the policies menu in Cluster1 and add each node of Cluster2 to the "retrieve site-to-site details" policy. At this point, the RPG in Cluster2 is working, but the input port is not visible yet.
3) The last step is editing the input port policy in Cluster1 to authorize the nodes from Cluster2 to send data through S2S. Select the toCluster1 input port, click on the key to edit its policies, and add the Cluster2 nodes to the list.
4) Now, go back to Cluster2 and connect the GenerateFlowFile with the RPG. The input port should be visible and data starts flowing "securely" 🙂
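As referenced in Step 1 of Option 2, here is a minimal sketch of the nifi.properties entries that point a Cluster2 node at its KeyStore and TrustStore. The paths, store types and passwords are assumptions for the example; adapt them to whatever you generated. The HTTPS web properties are deliberately left untouched, since Cluster2 stays unsecured and only acts as an S2S client.

    # Hypothetical values -- replace paths, types and passwords with your own
    nifi.security.keystore=/etc/nifi/conf/keystore.pfx
    nifi.security.keystoreType=PKCS12
    nifi.security.keystorePasswd=changeit
    nifi.security.keyPasswd=changeit
    nifi.security.truststore=/etc/nifi/conf/truststore.jks
    nifi.security.truststoreType=JKS
    nifi.security.truststorePasswd=changeit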