Member since 06-20-2016 · 13 Posts · 1 Kudos Received · 0 Solutions
03-11-2022 04:08 AM
Hi, this isn't working for me as-is on NiFi 1.14; can you give me a hand, please? I used a GenerateFlowFile processor with some random text, connected it to ExecuteScript, and get the following:

ExecuteScript[id=78c5739f-017f-1000-0000-0000016ca301] failed to process due to javax.script.ScriptException: java.lang.NullPointerException: java.lang.NullPointerException in <script> at line number 25; rolling back session: java.lang.NullPointerException
causes: Traceback (most recent call last):
  File "<script>", line 25, in <module>
java.lang.NullPointerException
causes: javax.script.ScriptException: java.lang.NullPointerException: java.lang.NullPointerException in <script> at line number 25
causes: org.apache.nifi.processor.exception.ProcessException: javax.script.ScriptException: java.lang.NullPointerException: java.lang.NullPointerException in <script> at line number 25
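A very common cause of an NPE in an ExecuteScript Jython script is calling methods on the flowfile returned by `session.get()` when it is None (ExecuteScript can be triggered with nothing in the incoming queue). Without seeing the script, this is only a guess about line 25; the sketch below uses a hypothetical `StubSession` stand-in for NiFi's ProcessSession (not the real NiFi API) just to illustrate the guard pattern:

```python
class StubSession:
    """Hypothetical stand-in for NiFi's ProcessSession.

    Like the real session, get() returns None when the incoming
    queue is empty, which is the classic source of the NPE.
    """
    def __init__(self, flowfiles):
        self._queue = list(flowfiles)

    def get(self):
        return self._queue.pop(0) if self._queue else None


def on_trigger(session):
    flow_file = session.get()
    if flow_file is None:
        # Without this guard, any method call on flow_file
        # (e.g. flow_file.getAttribute(...)) raises an NPE in Jython.
        return "no-op"
    return "processed %s" % flow_file


session = StubSession(["ff1"])
print(on_trigger(session))  # processed ff1
print(on_trigger(session))  # no-op (queue empty, guard taken)
```

In the actual Jython script the equivalent fix is to start with `flowFile = session.get()` followed by `if flowFile is None: raise SystemExit` (or simply return) before touching the flowfile.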
10-02-2016 03:48 AM
4 Kudos
@Luis Valdeavellano Those warnings have nothing to do with your issue, but it is good to fix them anyway.

1. If you don't want your job to use all the resources in the cluster, define a separate YARN queue for Spark jobs and submit to that queue (I assume you already submit the Spark job via YARN). Your job will still max out that queue's resources, but resources not assigned to that queue remain available to others. Your job's problem remains, but other users can still run theirs.

2. Look at your job and determine why it is using so many resources; redesign it, tune it, break it into smaller pieces, etc. If the job is already well tuned, then your cluster simply does not have enough resources. Check resource usage during execution of the job to find the bottleneck (RAM, CPU, etc.).
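For point 1, submitting to a dedicated queue is just a flag on spark-submit once the queue exists in the scheduler configuration. A minimal sketch, assuming a hypothetical queue named "spark" has already been defined in capacity-scheduler.xml and the jar path/class are placeholders for your own job:

```shell
# Submit to the dedicated "spark" YARN queue (hypothetical queue name);
# the job can then only consume that queue's share of the cluster.
spark-submit \
  --master yarn \
  --queue spark \
  --num-executors 4 \
  --executor-memory 4g \
  --executor-cores 2 \
  --class com.example.MyJob \
  my-job.jar
```

Capping --num-executors, --executor-memory, and --executor-cores explicitly also helps with point 2, since it makes the job's resource footprint visible and tunable instead of letting it grab whatever the queue allows.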
06-30-2016 06:16 AM
Thanks, that did it!