
Flowfile repository failed to update errors

Rising Star

I'm getting this error in my logs. Does anyone know what causes it or how to prevent it? The disk has plenty of free space.

2017-02-14 17:04:04,687 ERROR [Timer-Driven Process Thread-2] o.a.n.p.s.FetchDistributedMapCache FetchDistributedMapCache[id=e69c1dbb-1011-1157-f7f1-321d05a0a0f7] Failed to process session due to org.apache.nifi.processor.exception.ProcessException: FlowFile Repository failed to update: org.apache.nifi.processor.exception.ProcessException: FlowFile Repository failed to update

2017-02-14 17:04:04,687 ERROR [Timer-Driven Process Thread-2] o.a.n.p.s.FetchDistributedMapCache
org.apache.nifi.processor.exception.ProcessException: FlowFile Repository failed to update
        at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:369) ~[nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:305) ~[nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28) ~[nifi-api-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) ~[nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.1.jar:1.1.1]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_101]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_101]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_101]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_101]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
Caused by: java.io.IOException: All Partitions have been blacklisted due to failures when attempting to update. If the Write-Ahead Log is able to perform a checkpoint, this issue may resolve itself. Otherwise, manual intervention will be required.
        at org.wali.MinimalLockingWriteAheadLog.update(MinimalLockingWriteAheadLog.java:220) ~[nifi-write-ahead-log-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.updateRepository(WriteAheadFlowFileRepository.java:210) ~[nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.updateRepository(WriteAheadFlowFileRepository.java:178) ~[nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:363) ~[nifi-framework-core-1.1.1.jar:1.1.1]
        ... 13 common frames omitted

I'm also seeing this error, which may or may not be related (UTFDataFormatException: encoded string too long: 87941 bytes):

2017-02-14 17:08:44,567 ERROR [Timer-Driven Process Thread-7] o.a.n.p.s.FetchDistributedMapCache
org.apache.nifi.processor.exception.ProcessException: FlowFile Repository failed to update
        at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:369) ~[nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:305) ~[nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28) ~[nifi-api-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) ~[nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.1.jar:1.1.1]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_101]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_101]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_101]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_101]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
Caused by: java.io.IOException: Failed to write field 'Repository Record Update'
        at org.apache.nifi.repository.schema.SchemaRecordWriter.writeRecordFields(SchemaRecordWriter.java:46) ~[nifi-schema-utils-1.1.1.jar:1.1.1]
        at org.apache.nifi.repository.schema.SchemaRecordWriter.writeRecord(SchemaRecordWriter.java:35) ~[nifi-schema-utils-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.repository.SchemaRepositoryRecordSerde.serializeRecord(SchemaRepositoryRecordSerde.java:95) ~[nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.repository.SchemaRepositoryRecordSerde.serializeEdit(SchemaRepositoryRecordSerde.java:67) ~[nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.repository.SchemaRepositoryRecordSerde.serializeEdit(SchemaRepositoryRecordSerde.java:46) ~[nifi-framework-core-1.1.1.jar:1.1.1]
        at org.wali.MinimalLockingWriteAheadLog$Partition.update(MinimalLockingWriteAheadLog.java:957) ~[nifi-write-ahead-log-1.1.1.jar:1.1.1]
        at org.wali.MinimalLockingWriteAheadLog.update(MinimalLockingWriteAheadLog.java:238) ~[nifi-write-ahead-log-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.updateRepository(WriteAheadFlowFileRepository.java:210) ~[nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.updateRepository(WriteAheadFlowFileRepository.java:178) ~[nifi-framework-core-1.1.1.jar:1.1.1]
        at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:363) ~[nifi-framework-core-1.1.1.jar:1.1.1]
        ... 13 common frames omitted
Caused by: java.io.IOException: Failed to write field 'Attributes'
        at org.apache.nifi.repository.schema.SchemaRecordWriter.writeRecordFields(SchemaRecordWriter.java:46) ~[nifi-schema-utils-1.1.1.jar:1.1.1]
        at org.apache.nifi.repository.schema.SchemaRecordWriter.writeFieldValue(SchemaRecordWriter.java:131) ~[nifi-schema-utils-1.1.1.jar:1.1.1]
        at org.apache.nifi.repository.schema.SchemaRecordWriter.writeFieldRepetitionAndValue(SchemaRecordWriter.java:57) ~[nifi-schema-utils-1.1.1.jar:1.1.1]
        at org.apache.nifi.repository.schema.SchemaRecordWriter.writeRecordFields(SchemaRecordWriter.java:44) ~[nifi-schema-utils-1.1.1.jar:1.1.1]
        ... 22 common frames omitted
Caused by: java.io.UTFDataFormatException: encoded string too long: 87941 bytes
        at java.io.DataOutputStream.writeUTF(DataOutputStream.java:364) ~[na:1.8.0_101]
        at java.io.DataOutputStream.writeUTF(DataOutputStream.java:323) ~[na:1.8.0_101]
        at org.apache.nifi.repository.schema.SchemaRecordWriter.writeFieldValue(SchemaRecordWriter.java:108) ~[nifi-schema-utils-1.1.1.jar:1.1.1]
        at org.apache.nifi.repository.schema.SchemaRecordWriter.writeFieldRepetitionAndValue(SchemaRecordWriter.java:57) ~[nifi-schema-utils-1.1.1.jar:1.1.1]
        at org.apache.nifi.repository.schema.SchemaRecordWriter.writeFieldValue(SchemaRecordWriter.java:124) ~[nifi-schema-utils-1.1.1.jar:1.1.1]
        at org.apache.nifi.repository.schema.SchemaRecordWriter.writeFieldRepetitionAndValue(SchemaRecordWriter.java:84) ~[nifi-schema-utils-1.1.1.jar:1.1.1]
        at org.apache.nifi.repository.schema.SchemaRecordWriter.writeRecordFields(SchemaRecordWriter.java:44) ~[nifi-schema-utils-1.1.1.jar:1.1.1]

1 ACCEPTED SOLUTION

Master Guru

The second error looks like it is related to this bug:

https://issues.apache.org/jira/browse/NIFI-3389

Essentially, you probably have an attribute value that exceeds 65535 bytes.

I'm not sure whether that is what is causing the flowfile repository to blacklist its partitions, but it could be related.
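For reference, the 65535-byte ceiling comes from java.io.DataOutputStream.writeUTF, which the stack trace above shows NiFi 1.1.1's SchemaRecordWriter calling to serialize attribute values: writeUTF prefixes the string with a 2-byte length, so the modified-UTF-8 encoding cannot exceed 65535 bytes. A minimal standalone reproduction (plain Java, no NiFi dependencies) of the same failure:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.UTFDataFormatException;

public class WriteUtfLimit {
    public static void main(String[] args) throws Exception {
        // 87941 ASCII chars encode to 87941 bytes, well over the 65535-byte
        // limit imposed by writeUTF's 2-byte length prefix.
        String attributeValue = "x".repeat(87941);
        try (DataOutputStream out = new DataOutputStream(new ByteArrayOutputStream())) {
            out.writeUTF(attributeValue);
        } catch (UTFDataFormatException e) {
            // On JDK 1.8.0_101 (as in the log above) the message reads:
            // "encoded string too long: 87941 bytes"
            System.out.println(e.getMessage());
        }
    }
}
```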


7 REPLIES


Rising Star

Hey @Frank Maritato

Unfortunately, there is currently a bug that prevents individual attribute names and values from being over 65535 bytes long when encoded as UTF-8. [1] A pull request to fix the issue is under review.

Even once the issue is fixed, though, it is generally not advisable to put a lot of data in attributes, because they are kept in memory. Large values should usually be kept in the flowfile content if possible.

Thanks,

Bryan

[1] NIFI-3389

Rising Star

When the latter happens, my NiFi gets into a state where active threads are permanently stuck, and I have to restart the server to recover.

Rising Star

Thanks for the quick replies! This is very helpful.

Yes, I'm storing a fairly large value in an attribute, but maybe you can suggest an alternative approach? What I'm doing right now is processing survey results. Unfortunately, the survey question data and the responses arrive as separate streams. I want to join these two data sets within my NiFi flow so I don't have to kick off a separate ETL job.

So, what I chose to do is store the question data in the distributed map cache, and then, as each response comes in, query the cache by the survey id and, assuming it is found, put the question data into an attribute. Then an ExecuteScript processor runs to join the flowfile content (the response) with the attribute value (the questions).

Is there another, more scalable way to do this?

Master Guru

Right now you have FetchDistributedMapCache -> ExecuteScript... you could replace this with a single custom processor (or perhaps a custom scripted processor) that uses the DistributedMapCache to fetch the questions, joins them with the response, and writes the whole thing to the flowfile content, thus avoiding ever putting the questions in an attribute.
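To illustrate the idea, here is a rough sketch of the join step such a processor would perform, in plain Java with a HashMap standing in for the distributed map cache and hypothetical field names (the real processor would use NiFi's DistributedMapCacheClient and merge the JSON bodies properly):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class SurveyJoin {
    // Stand-in for the distributed map cache: survey id -> question data.
    static final Map<String, String> questionCache = new HashMap<>();

    // Looks up the cached questions for a survey and merges them with the
    // response; returns empty on a cache miss (like FetchDistributedMapCache's
    // "not-found" relationship). The merged result becomes the new flowfile
    // content, so the large question data never lands in an attribute.
    static Optional<String> join(String surveyId, String responseJson) {
        String questions = questionCache.get(surveyId);
        if (questions == null) {
            return Optional.empty();
        }
        // Hypothetical combined layout; a real implementation would parse
        // and merge the two JSON documents instead of concatenating strings.
        return Optional.of("{\"questions\":" + questions
                + ",\"responses\":" + responseJson + "}");
    }

    public static void main(String[] args) {
        questionCache.put("survey-1", "[{\"id\":1,\"text\":\"Q1\"}]");
        System.out.println(join("survey-1", "[{\"id\":1,\"answer\":\"A\"}]").get());
    }
}
```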

Rising Star

Ah, thanks @Bryan Bende. I remember seeing some sample code on how to connect to the DMC from within ExecuteScript. My script is in Python... do you know offhand whether ExecuteScript/Jython picks up libraries installed via pip? I'd like to write a Python library for interacting with the DMC so that my actual join script isn't as complicated.

Rising Star

Never mind, I see the 'Module Directory' property on ExecuteScript.