I would like to know what alternatives I have and how to proceed so that a FlowFile from the ExecuteSQL processor is converted from Avro to text (CSV).
I tried the ConvertRecord processor, but the AvroReader, even when set to use the embedded Avro schema, gives the following error:
ConvertRecord[id=be0735bf-babb-134a-acfa-71df3952b30d] Failed to process StandardFlowFileRecord[uuid=610e5f55-7040-45d5-8c79-d734dbd589d2,claim=StandardContentClaim [resourceClaim=179,sectionidresource12,container=12 container=12 =112], offset=53226, length=1281],offset=0,name=46842655031134935,size=1281]; will route to failure: org.apache.nifi.processor.exception.ProcessException: Cannot write Schema Name As Attribute because the Schema Name is not known
I have HDP version 2.6.5 and the NiFi version is 1.5.0.3.1.2.0-7 (HDF 3.1.2).
First thing to note is the age of the NiFi version in use. HDF 3.1.2 was released back in 2018, and there have been many improvements and advancements in NiFi since then, including to the record-based processors and controller services.
1. Does the Avro produced by your ExecuteSQL processor actually contain the Avro schema embedded in the content?
2. The exception you shared deals with writing the Avro schema, which is a function of the record writer, not of ConvertRecord or the record reader. I am assuming you are using a CSVRecordSetWriter in your use case. What do you have its "Schema Write Strategy" set to? Do you get the same exception if you set it to "Set 'avro.schema' Attribute" instead?
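For reference, a configuration along these lines usually avoids that exception. This is a sketch only, assuming a CSVRecordSetWriter; property names and available options can differ between NiFi versions, so verify them in your controller service dialogs:

```
ConvertRecord
├── Record Reader:  AvroReader
│     Schema Access Strategy: Use Embedded Avro Schema
└── Record Writer:  CSVRecordSetWriter
      Schema Write Strategy: Do Not Write Schema
        (or "Set 'avro.schema' Attribute"; the failing strategy,
         "Set 'schema.name' Attribute", requires a *named* schema,
         which is what the "Schema Name is not known" message points at)
```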
If the above does not help, sharing the exact configuration of your ConvertRecord processor and the controller services it uses may help others in the community offer additional guidance. Of course, a sample source file would also be helpful.
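One way to verify point 1 yourself before sharing a sample: per the Avro specification, an Avro Object Container File (the format with the schema embedded in the header) starts with the 4-byte magic `Obj` followed by `0x01`. The sketch below checks for that magic; the file path is a placeholder for wherever you save the ExecuteSQL output.

```python
# Check whether an Avro file has its schema embedded, by inspecting the
# Object Container File magic bytes (b"Obj" + 0x01 per the Avro spec).
# If the magic is absent, the content is raw/datum Avro and the AvroReader
# cannot use "Use Embedded Avro Schema" -- you would need to supply the
# schema explicitly (e.g. via a schema registry or the avro.schema attribute).

AVRO_MAGIC = b"Obj\x01"

def has_embedded_schema(path: str) -> bool:
    """Return True if the file looks like an Avro container file."""
    with open(path, "rb") as f:
        return f.read(4) == AVRO_MAGIC

if __name__ == "__main__":
    # "output.avro" is a placeholder path for your ExecuteSQL output.
    print(has_embedded_schema("output.avro"))
```

Note that ExecuteSQL normally does write container-file output with the schema embedded, so if this check fails, something upstream has stripped or rewritten the content.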
If you found this response assisted with your query, please take a moment to login and click on "Accept as Solution" below this post.