Member since: 02-07-2019
Posts: 2719
Kudos Received: 237
Solutions: 31

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1016 | 08-21-2025 10:43 PM |
|  | 1773 | 04-15-2025 10:34 PM |
|  | 4705 | 10-28-2024 12:37 AM |
|  | 1852 | 09-04-2024 07:38 AM |
|  | 3676 | 06-10-2024 10:24 PM |
04-02-2023 09:42 PM
@swanifi, Have any of the replies helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
03-31-2023 12:35 PM
Hello, thanks for your help. I tried your proposal using --useSSL=false, but it did not work for me:

Unrecognized argument: --useSSL=false
Unrecognized argument: -useSSL=false

I solved my issue by using:

java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)

instead of:

openjdk version "1.8.0_352"
OpenJDK Runtime Environment (build 1.8.0_352-b08)
OpenJDK 64-Bit Server VM (build 25.352-b08, mixed mode)
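In case it helps anyone hitting the same problem, switching the active JDK on a typical Linux host might look like the sketch below (the install path is hypothetical; use your own Oracle JDK location):

```bash
# List the installed JDKs and select the Oracle build interactively
sudo update-alternatives --config java

# Alternatively, point JAVA_HOME at the Oracle JDK directly (path is illustrative)
export JAVA_HOME=/usr/java/jdk1.8.0_60
export PATH="$JAVA_HOME/bin:$PATH"

# Verify which runtime is now active
java -version
```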
03-31-2023 11:34 AM
Can you isolate any connection issues between your NameNode (NN) and DataNode (DN) pods? Maybe you can try doing an nc or telnet to the NN port from the DN pod, for example:
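A minimal sketch of such a check, assuming a Kubernetes setup (the pod and service names are hypothetical, and 8020 is just a common default for the NameNode RPC port):

```bash
# Open a shell in the DataNode pod (pod name is hypothetical)
kubectl exec -it datanode-0 -- bash

# Test TCP connectivity to the NameNode RPC port
nc -zv namenode-service 8020

# Or, if nc is not available in the image:
telnet namenode-service 8020
```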
03-31-2023 02:32 AM
@ABel-asd as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
03-31-2023 02:27 AM
Hi, thank you for your assistance with this matter. The answers to your questions are as follows:

Is that the complete stack trace from the nifi-app.log?
No, the complete stack trace is the following one:

2023-03-29 10:02:21,002 ERROR [Timer-Driven Process Thread-24] o.a.n.p.standard.PartitionRecord PartitionRecord[id=3be1c42e-5fa9-3144-3365-f568bb616028] Processing halted: yielding [1 sec]
java.lang.IllegalArgumentException: newLimit > capacity: (92 > 83)
at java.base/java.nio.Buffer.createLimitException(Buffer.java:372)
at java.base/java.nio.Buffer.limit(Buffer.java:346)
at java.base/java.nio.ByteBuffer.limit(ByteBuffer.java:1107)
at java.base/java.nio.MappedByteBuffer.limit(MappedByteBuffer.java:235)
at java.base/java.nio.MappedByteBuffer.limit(MappedByteBuffer.java:67)
at org.xerial.snappy.Snappy.compress(Snappy.java:156)
at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:78)
at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.compress(CodecFactory.java:167)
at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:168)
at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:59)
at org.apache.parquet.column.impl.ColumnWriterBase.writePage(ColumnWriterBase.java:387)
at org.apache.parquet.column.impl.ColumnWriteStoreBase.flush(ColumnWriteStoreBase.java:186)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:29)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:185)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:124)
at org.apache.parquet.hadoop.ParquetWriter.close(ParquetWriter.java:319)
at org.apache.nifi.parquet.record.WriteParquetResult.close(WriteParquetResult.java:69)
at java.base/jdk.internal.reflect.GeneratedMethodAccessor983.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:254)
at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.access$100(StandardControllerServiceInvocationHandler.java:38)
at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler$ProxiedReturnObjectInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:240)
at com.sun.proxy.$Proxy316.close(Unknown Source)
at org.apache.nifi.processors.standard.PartitionRecord.onTrigger(PartitionRecord.java:274)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1356)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:246)
at org.apache.nifi.controller.scheduling.AbstractTimeBasedSchedulingAgent.lambda$doScheduleOnce$0(AbstractTimeBasedSchedulingAgent.java:59)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)

What version of Apache NiFi?
Currently running on Apache NiFi open source 1.19.1.

What version of Java?
Currently running on openjdk version "11.0.17" 2022-10-18 LTS.

Have you tried using the ConsumeKafkaRecord processor instead of ConsumeKafka --> MergeContent?
No, I did not, but for a good reason. The files coming out of Kafka require some "data manipulation" before using PartitionRecord, where I have defined the CSVReader and the ParquetRecordSetWriter. If I were to use ConsumeKafkaRecord, I would have to define a CSVReader and the Parquet (or CSV) RecordSetWriter, and the result would be very bad, as the data is not formatted as per the required schema. I will give it a try with ConsumeKafkaRecord using CSVReader and CSVRecordSetWriter to see if I still encounter the same issue.

Do you have the issue only when using the ParquetRecordSetWriter?
Unfortunately, I can only test with Parquet, as this file format is somehow mandatory for the current project. I will try to reproduce the flow with an Avro format to see whether I can reproduce the error or not.

How large are the FlowFiles coming out of the MergeContent processor?
Directly out of Kafka, one FlowFile has around 600-700 rows as text/plain, and the size is 300-600 KB. Using MergeContent, I combine a total of 100-150 files, resulting in a total of 50 MB.

Have you tried reducing the size of the content being output from the MergeContent processor?
Yes, I have played with several combinations of sizes, and most of them resulted either in the same error or in a "too many open files" error.
03-30-2023 06:41 AM
Hi @sat_046, as I mentioned in an earlier comment, unfortunately it is not possible to delay the tasks. You can find the Spark code that handles failed tasks here:
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala#L879C8-L1002
Please accept the solution if you liked my answer.
03-28-2023 09:39 AM
Well, there you have it 🙂 Your problem is not directly related to NiFi; it is caused by the executed SQL statement. I am not very experienced with SQL Server, but you could try selecting all the fields from the table and see whether you still encounter the error message. If the error persists, you could use the CONVERT function in the WHERE clause --> CONVERT(datetime, your_value, 25), or SELECT CONVERT(varchar, your_value, 25).
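To illustrate (the table and column names below are hypothetical stand-ins for your own):

```sql
-- Cast the string value to datetime in the WHERE clause
-- (style 25 is the format code suggested above)
SELECT *
FROM my_table
WHERE CONVERT(datetime, my_varchar_col, 25) >= '2023-03-01';

-- Or return the datetime column as a formatted string instead
SELECT CONVERT(varchar, my_datetime_col, 25) AS formatted_value
FROM my_table;
```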
03-28-2023 04:34 AM
@swanifi, Welcome to our community! To help you get the best possible answer, I have tagged in our NiFi experts @ckumar @MattWho @SAMSAL @cotopaul, who may be able to assist you further. Please feel free to provide any additional information or details about your query, and we hope that you will find a satisfactory solution to your question.
03-27-2023 10:04 AM
@NafeesShaikh93 Interesting use case you have. I am not all that familiar with all the methods Graylog offers for ingesting logs from other servers, but I'd assume Syslog is one of them? If so, NiFi offers a PutSyslog processor.

Looking at the dataflow you have built thus far, I am not sure what you are trying to accomplish. The LogAttribute and LogMessage processors allow you to write a log entry to a NiFi log defined by an appender and logger in the logback.xml NiFi configuration file. By default, these log lines would end up in the nifi-app.log. You could, however, add an additional appender and a custom logger to send the log lines produced by these processors' classes to the new appender, thus isolating them from the other logging in the nifi-app.log (see the sketch at the end of this post). There is no way to set up a specific logger per processor on the canvas, so every LogAttribute and LogMessage processor you use will write to the same destination NiFi appender log. The classes for the LogAttribute and LogMessage processors are:

org.apache.nifi.processors.standard.LogAttribute
org.apache.nifi.processors.standard.LogMessage

NiFi also has a TailFile processor that can tail a log file and create FlowFiles with those log entries as content. You could then use the PutSyslog processor to send those log lines to your Graylog server.

The above design involves extra disk I/O that may not be necessary, since you could possibly design your flow to create FlowFile attributes with all the file information you want to send to Graylog, and then use a ReplaceText processor at the end of the successful dataflow to replace the content of your FlowFile with crafted syslog-formatted content built from those attributes, and send it directly to Graylog via the PutSyslog processor. This removes the need to write to a new logger and consume from that new log before sending to syslog. But again, this is a matter of preference; perhaps in your case you want a local copy of these logs as well.

If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped.

Thank you, Matt
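For reference, a minimal sketch of the custom appender/logger approach described above, for NiFi's conf/logback.xml (the appender name, file name, and rollover settings are hypothetical; adjust them to your environment):

```xml
<!-- Hypothetical appender that isolates LogAttribute/LogMessage output
     from nifi-app.log; the log dir property is the one NiFi's stock
     logback.xml already defines for its other appenders -->
<appender name="PROCESSOR_LOG_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-processor-log.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-processor-log_%d.log</fileNamePattern>
        <maxHistory>5</maxHistory>
    </rollingPolicy>
    <encoder>
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>

<!-- Route both processor classes to the new appender;
     additivity="false" keeps their lines out of nifi-app.log -->
<logger name="org.apache.nifi.processors.standard.LogAttribute" level="INFO" additivity="false">
    <appender-ref ref="PROCESSOR_LOG_FILE"/>
</logger>
<logger name="org.apache.nifi.processors.standard.LogMessage" level="INFO" additivity="false">
    <appender-ref ref="PROCESSOR_LOG_FILE"/>
</logger>
```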
03-27-2023 02:09 AM
@TB_19 as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.