Support Questions


Where are NiFi attributes written?

Expert Contributor

Some processors' written attributes are readily available in the FlowFile attributes downstream. For example, 'executesql.row.count' is populated after ExecuteSQL. I'm not seeing the same behavior with many other processors' attributes, such as those of SplitJson. Are we expected to use a Groovy script or some other custom processor to extract these values? A simple example would be appreciated.

1 ACCEPTED SOLUTION

Master Guru

Each processor is responsible for reading and writing whichever attributes it wants to for the purposes of its processing, and those attributes are available in each processor's documentation. SplitJson for example writes the following attributes to each output flow file:

fragment.identifier: All split FlowFiles produced from the same parent FlowFile will have the same randomly generated UUID added for this attribute
fragment.index: A one-up number that indicates the ordering of the split FlowFiles that were created from a single parent FlowFile
fragment.count: The number of split FlowFiles generated from the parent FlowFile
segment.original.filename: The filename of the parent FlowFile

These were added in NiFi 1.0.0 (HDF 2.0) under NIFI-2632, so if you are using a version of NiFi/HDF older than that, that's why you won't see these attributes populated by SplitJson.


6 REPLIES

Guru

You can access any attribute using the Expression Language, typically to use attributes (or values derived from them) as the values of other processor properties.
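For example, a minimal sketch (assuming an UpdateAttribute processor downstream of ExecuteSQL; the property names on the left are made up for illustration):

row.count = ${executesql.row.count}
big.result = ${executesql.row.count:gt(1000)}

Each line is a user-defined property on UpdateAttribute whose value is an Expression Language statement evaluated against the incoming FlowFile's attributes.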

See this for an excellent overview of attributes, how they change with the lifetime of a flow and how they provide programmatic power in your flows: https://docs.hortonworks.com/HDPDocuments/HDF1/HDF-1.1.1/bk_HDF_GettingStarted/content/working-with-...

If this is what you are looking for, let me know by accepting the answer; otherwise, let me know of any gaps or follow-up questions.

Expert Contributor

Thanks for the prompt reply. I've also tried to address this question in this item:

https://community.hortonworks.com/questions/68745/nifi-iteration-of-queue-entries-between-processors...

I'm trying to get behavior similar to what is shown below for 'executesql.row.count' by using a user-defined property (property: iteration, value: ${'fragment.index'}), with and without the single quotes, set either before or after 'SplitJson'.

I'm only able to get 'No value set' or 'Empty value set' no matter what I try.

[screenshot: 10133-capture.png]

The SplitJson processor is very straightforward and successfully builds many JSON array objects using the expression below.

[screenshot: 10134-capture.png]

All I'm attempting to do is keep track of the queue position so I can act on the last row and post the last transaction date. The attribute 'queue position' would also be of interest, but it also contains no data.

[screenshot: 10135-capture.png]

Thanks,

~Sean

Master Guru

Each processor is responsible for reading and writing whichever attributes it wants to for the purposes of its processing, and those attributes are available in each processor's documentation. SplitJson for example writes the following attributes to each output flow file:

fragment.identifier: All split FlowFiles produced from the same parent FlowFile will have the same randomly generated UUID added for this attribute
fragment.index: A one-up number that indicates the ordering of the split FlowFiles that were created from a single parent FlowFile
fragment.count: The number of split FlowFiles generated from the parent FlowFile
segment.original.filename: The filename of the parent FlowFile

These were added in NiFi 1.0.0 (HDF 2.0) under NIFI-2632, so if you are using a version of NiFi/HDF older than that, that's why you won't see these attributes populated by SplitJson.
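Once those fragment attributes are present, one way to act only on the last split (for example, to post the last transaction date as described above) is a user-defined property on RouteOnAttribute. A rough sketch, assuming fragment.index counts up to fragment.count (prepend :plus(1) if it turns out to be zero-based in your version):

last.split = ${fragment.index:equals(${fragment.count})}

With the routing strategy set to route to the property name, only the final split is sent to the 'last.split' relationship.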

Expert Contributor

Ah, I see we're at NiFi version 0.5.1.1.1.2.1-34, so that explains why I am not seeing these attributes.

Expert Contributor

For those who may be on a back-level HDF version as we are, a good workaround is to use SplitContent instead, as it populates many of the attributes Matt has documented above for the SplitJson processor.
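Another possible workaround on a back-level version, since the original question mentioned Groovy, is to do the split in an ExecuteScript processor and write the fragment attributes yourself. This is only a sketch, assuming the incoming content is a top-level JSON array; the attribute names simply mirror the ones Matt listed above:

import groovy.json.JsonOutput
import groovy.json.JsonSlurper
import java.nio.charset.StandardCharsets
import org.apache.nifi.processor.io.InputStreamCallback
import org.apache.nifi.processor.io.OutputStreamCallback

def flowFile = session.get()
if (!flowFile) return

// Read the incoming JSON content
def text = ''
session.read(flowFile, { inputStream ->
    text = inputStream.getText('UTF-8')
} as InputStreamCallback)

def items = new JsonSlurper().parseText(text)   // assumes a top-level JSON array
def fragmentId = UUID.randomUUID().toString()
def count = items.size()

items.eachWithIndex { item, i ->
    def split = session.create(flowFile)
    // Write one array element as the content of each split FlowFile
    split = session.write(split, { outputStream ->
        outputStream.write(JsonOutput.toJson(item).getBytes(StandardCharsets.UTF_8))
    } as OutputStreamCallback)
    // Mirror the fragment attributes that newer SplitJson versions write
    split = session.putAttribute(split, 'fragment.identifier', fragmentId)
    split = session.putAttribute(split, 'fragment.index', String.valueOf(i + 1))
    split = session.putAttribute(split, 'fragment.count', String.valueOf(count))
    split = session.putAttribute(split, 'segment.original.filename', flowFile.getAttribute('filename'))
    session.transfer(split, REL_SUCCESS)
}
// Drop the original now that its splits have been emitted
session.remove(flowFile)

Error handling (for example, routing a malformed document to REL_FAILURE) is omitted, and upgrading to NiFi 1.0.0 / HDF 2.0 is the cleaner fix.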

Expert Contributor

@Matt Burgess You are correct that we are on a back-level version without this support. Can anyone suggest a workaround, such as another processor or attribute that could do something similar?