PutHDFS NiFi processor is failing with a lease exception: lease holder is trying to recreate the HDFS file



We have a NiFi flow where we source social media surveys from an API and write them to HDFS via the PutHDFS processor, with the Conflict Resolution Strategy set to "append". This flow works if surveys arrive one at a time with a delay of a second or two. We want to test roughly 20,000 surveys all arriving at once, and the PutHDFS processor fails in that scenario. The error is given below:

WARN org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.append: failed to create file XXXXXXXXXXXX for DFSClient_NONMAPREDUCE_XXXXXXXXX because current leaseholder is trying to recreate file. PriviledgedActionException as:user@XXXXXXXXX (auth:KERBEROS) cause:org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to create file XXXXXXXXXXX for DFSClient_NONMAPREDUCE_XXXXXXXX for client XXXXXXXX because current leaseholder is trying to recreate file.

INFO org.apache.hadoop.ipc.Server: IPC Server handler 14 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.append from XXXXXXXX Call#XXXXX Retry#0: org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to create file XXXXXXXXX for DFSClient_NONMAPREDUCE_XXXXXXXX because current leaseholder is trying to recreate file.

With these exceptions, all the records are blocked in the NiFi queue feeding PutHDFS and they never get written to HDFS. Is there a way to configure the NiFi PutHDFS processor to accommodate this use case? Right now it is configured with a "Timer Driven" scheduling strategy, Concurrent Tasks set to 1, a Run Schedule of 0 seconds, and a Yield Duration of 1 second.

Please suggest.




Is your NiFi clustered?


@Bryan Bende Hi Bryan. We are testing this in DEV, and it has only 1 NiFi node. However, the HDFS cluster PutHDFS writes to has 4 datanodes. In prod we will have 2 NiFi nodes and 5 datanodes. Thanks

Ok, I was asking because sometimes people end up trying to append to the same file from multiple NiFi nodes, which results in a similar error, but it sounds like that shouldn't be the case here.

You may want to avoid the append scenario altogether and use MergeContent in NiFi to merge a batch of data before writing to HDFS, setting a unique filename using a timestamp, hostname, or some other piece of information.
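As a rough sketch of that idea (the property names come from the standard MergeContent processor; the threshold values here are illustrative assumptions to tune against your own traffic, not recommendations), the merge step ahead of PutHDFS might be configured like:

```
# MergeContent processor properties (values are assumptions for illustration)
Merge Strategy            = Bin-Packing Algorithm
Merge Format              = Binary Concatenation
Minimum Number of Entries = 1
Maximum Number of Entries = 1000      # cap one merged file at 1000 surveys
Max Bin Age               = 30 sec    # flush partial bins so slow periods still get written
Maximum number of Bins    = 5
```

The Max Bin Age property is what bounds the latency: even if only one survey arrives, it would be flushed downstream within that window.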


@Bryan Bende I like the MergeContent option you suggested, but please clarify this. In production, surveys will arrive in real time, and as soon as a customer writes a survey we want to see it in HDFS. My use case is that over each 24-hour period I want to see only 1 file in HDFS, and as soon as a survey is posted I should see it there. If I use the MergeContent processor, will that still be considered real-time? I am guessing it will wait until the data reaches a certain threshold, at which point the merge happens and the result is written to HDFS? During a day there will be times with no surveys at all, times with a bunch of surveys arriving at once, and times with 1 survey per second. Thanks, Srikaran.

Your description is correct... using MergeContent would introduce some amount of latency, depending on whether you configure it to merge based on time or size, and how fast your data is coming in.

Maybe you can still use the "append" option in PutHDFS; since you would run MergeContent first, the appends would happen less frequently, which would probably work given that you said it was working at slower rates.

You'll still need to be careful about multiple nodes appending to the same file in your production scenario. Typically people use an UpdateAttribute processor to modify the filename attribute and add ${hostname()}, so that each node appends to a separate file.
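For example (the filename pattern below is an assumption for illustration, not a prescribed convention), an UpdateAttribute processor can set the filename attribute with NiFi Expression Language so each node writes its own daily file:

```
# UpdateAttribute: add a dynamic property named "filename"
# (pattern is hypothetical; ${hostname()} and ${now():format(...)} are standard NiFi EL)
filename = surveys_${hostname()}_${now():format('yyyy-MM-dd')}.json
```

With this in place, a two-node cluster would append to two distinct files per day, avoiding the single-lease contention on one HDFS file.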


Agreed, thanks for the suggestion. For now I seem to have a workaround: after changing the Run Schedule from 0 seconds to 1 second, I no longer see the lease holder exception. There is a little more latency in writing to HDFS than with 0 seconds, but the error is gone. I will work on your suggestion for production. Thanks for the help! Srikaran
