I want to use ExecuteStreamCommand to submit a Spark job via the shell, and I want to use GenerateFlowFile and RouteOnAttribute so that I can detect Spark job failures, as suggested by Matt's answer here.
I think it worked for detecting failure, but I can't get the scheduling right.
If I want the whole flow (the GenerateFlowFile, the ExecuteStreamCommand, and the RouteOnAttribute) to execute every 1 minute, should I schedule the GenerateFlowFile every 1 minute and leave the ExecuteStreamCommand at its default (0 sec) schedule, or should I schedule both?
I tried different combinations, but none worked properly: the GenerateFlowFile keeps generating flow files, but the ExecuteStreamCommand doesn't run multiple times.
Another problem is that when I stop the ExecuteStreamCommand processor, it gets stuck: I can't change its configuration and I can't stop or start it again. It doesn't work again until I restart NiFi.
I don't understand exactly what you mean.
In a previous question the answer was:
"You could schedule a GenerateFlowFile at the same rate your ExecuteProcess was scheduled for, and set Ignore STDIN to true in ExecuteStreamCommand. Then the outgoing flow files will have the execution.status attribute set, which you can use with RouteOnAttribute to handle failures (non-zero exit codes, e.g.)"
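For reference, the RouteOnAttribute side of that answer can be set up with a dynamic property on the processor. The property name `failure` below is my own choice, not something from the answer; it uses the `execution.status` attribute that ExecuteStreamCommand writes:

```
failure : ${execution.status:equals('0'):not()}
```

Flow files whose command exited with a non-zero status are then routed to the `failure` relationship, and everything else goes to `unmatched` (or a matching `success` property, if you add one).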
I want to know how to schedule these 2 processors together so that the result is that the flow is executed every 1 minute.
Run the processors like this. First processor, GenerateFlowFile: every minute of every hour.
Then the next processor should run at the first second of every minute of every hour,
and the last processor at the second second of every minute of every hour.
Do you follow?
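If I'm reading that suggestion correctly, in NiFi terms it means setting each processor's Scheduling Strategy to "CRON driven" with staggered Quartz-style cron expressions (seconds field first). The exact expressions below are my interpretation of the suggested offsets:

```
GenerateFlowFile       0 * * * * ?    (second 0 of every minute)
ExecuteStreamCommand   1 * * * * ?    (second 1 of every minute)
RouteOnAttribute       2 * * * * ?    (second 2 of every minute)
```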
@Wynner OK, the schedule seems to be working: when the submitted job fails, the flow behaves correctly.
But once the job runs without errors, flow files keep being generated every minute while the ExecuteStreamCommand is stuck. I can't even stop or start it; I need to restart NiFi to get it running again.
When I try to stop/start ExecuteStreamCommand it says: "No eligible components are selected. Please select the components to be stopped."
Here's what I'm trying to illustrate:
One successful execution at ExecuteStreamCommand, then it gets stuck (flow files keep being generated, but ExecuteStreamCommand is stuck):
If no successful execution happens at all (all executions fail), the schedule works well, as follows (flow files are generated every minute, and ExecuteStreamCommand executes every minute):
I don't know why it gets stuck in the first case. Please help.
The reason you cannot stop the ExecuteStreamCommand processor is that it still has a running thread. How long does it take to run your script outside of NiFi? It seems like the script is not finishing, so the ExecuteStreamCommand processor is just waiting.
When you say about a minute, does that mean less than a minute or more than a minute? Why don't you try generating a flow file every 2 minutes and see if that works better? Or, if it's possible to run the script in parallel, give the ExecuteStreamCommand processor 2 concurrent tasks instead of one.
In my experience, if you aren't making a call to a system-level command, the processor does sometimes have issues.
Try putting the actual "spark-submit <path to jar>" into a shell script and then call the shell script in the ExecuteStreamCommand processor. I have found that method more reliable.
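A minimal sketch of that wrapper idea. The `submit_job` function below stands in for a script whose body would be the real `spark-submit <path to jar>` call; here it runs whatever command it is given, so only the exit-status handling is shown:

```shell
#!/bin/sh
# Hypothetical wrapper sketch: run the job command and propagate its exit
# status. In the real script the body would be: spark-submit <path to jar>
submit_job() {
  "$@"                     # stand-in for the spark-submit call
  status=$?
  echo "job exit status: $status"
  return "$status"
}
```

Because the script exits with the job's own status, ExecuteStreamCommand sets `execution.status` from it, which is what RouteOnAttribute keys on for failure handling.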