Created 09-05-2018 09:32 AM
While executing a Spark + Scala job from NiFi, the flow is stopped because of the below error:
"ERROR KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!"
Note: We are not using RANGER KMS.
The above error appears to be related to a known bug. We are using HDFS 2.7.3 and HDP 18.104.22.168.
Could someone help me with this?
Issue: The same .sh file executes fine from the command line even though it prints the ERROR message, but it fails in NiFi. Please see the attached screenshots for more info.
Case 1: Working from the command line (execution of a .sh file that runs a spark-submit action, passing a .scala file)
Case 2: Not working from NiFi (execution of the same .sh file via the ExecuteStreamCommand processor)
Upgrading to HDP 2.6.5 is a tough decision for us. Is there any workaround for this issue?
Is it possible, via NiFi processor settings, to make the workflow ignore the above error?
This error is not really a fatal error, so if there is a workaround or a solution on the NiFi side to ignore it, that would be the better option.
For now, upgrading is a tough decision. Is there any feasible workaround, either from Apache NiFi or from Hadoop?
Also, is there a way in the Apache NiFi processor settings to direct NiFi to ignore the mentioned ERROR?
The NiFi server is running as the root user; we are using the Apache distribution of NiFi, not HDF.
a) NiFi is running as the root user:
[root@XXXXX ~]# ps -ef | grep NiFi
root 29171 29094 0 23:11 pts/3 00:00:00 grep --color=auto NiFi
root 33348 33346 0 Aug15 ? 00:22:17 /usr/bin/java -cp /opt/nifi-1.7.0/nifi-1.7.0/conf:/opt/nifi-1.7.0/nifi-1.7.0/lib/bootstrap/* -Xms12m -Xmx24m -Dorg.apache.nifi.bootstrap.config.log.dir=/opt/nifi-1.7.0/nifi-1.7.0/logs -Dorg.apache.nifi.bootstrap.config.pid.dir=/opt/nifi-1.7.0/nifi-1.7.0/run -Dorg.apache.nifi.bootstrap.config.file=/opt/nifi-1.7.0/nifi-1.7.0/conf/bootstrap.conf org.apache.nifi.bootstrap.RunNiFi start
root 33365 33348 18 Aug15 ? 3-21:01:38 java -classpath /opt/nifi-1.7.0/nifi-1.7.0/./conf:/opt/nifi-1.7.0/nifi-1.7.0/./lib/nifi-nar-utils-1.7.0.jar:/opt/nifi-1.7.0/nifi-1.7.0/./lib/jetty-schemas-3.1.jar:/opt/nifi-1.7.0/nifi-1.7.0/./lib/nifi-api-1.7.0.jar:/opt/nifi-1.7.0/nifi-1.7.0/./lib/javax.servlet-api-3.1.0.jar:/opt/nifi-1.7.0/nifi-1.7.0/./lib/jul-to-slf4j-1.7.25.jar:/opt/nifi-1.7.0/nifi-1.7.0/./lib/jcl-over-slf4j-1.7.25.jar:/opt/nifi-1.7.0/nifi-1.7.0/./lib/nifi-properties-1.7.0.jar:/opt/nifi-1.7.0/nifi-1.7.0/./lib/nifi-framework-api-1.7.0.jar:/opt/nifi-1.7.0/nifi-1.7.0/./lib/logback-core-1.2.3.jar:/opt/nifi-1.7.0/nifi-1.7.0/./lib/log4j-over-slf4j-1.7.25.jar:/opt/nifi-1.7.0/nifi-1.7.0/./lib/slf4j-api-1.7.25.jar:/opt/nifi-1.7.0/nifi-1.7.0/./lib/logback-classic-1.2.3.jar:/opt/nifi-1.7.0/nifi-1.7.0/./lib/nifi-runtime-1.7.0.jar -Dorg.apache.jasper.compiler.disablejsr199=true -Xmx1024m -Xms1024m -Djavax.security.auth.useSubjectCredsOnly=true -Djava.security.egd=file:/dev/urandom -Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -XX:+UseG1GC -Djava.protocol.handler.pkgs=sun.net.www.protocol -Dnifi.properties.file.path=/opt/nifi-1.7.0/nifi-1.7.0/./conf/nifi.properties -Dnifi.bootstrap.listen.port=9001 -Dapp=NiFi -Dorg.apache.nifi.bootstrap.config.log.dir=/opt/nifi-1.7.0/nifi-1.7.0/logs org.apache.nifi.NiFi
b) Version of NiFi:
Apache NiFi - 1.7.0
It appears that the custom *.sh script is failing because of that error: the script exits with a non-zero status, which tells NiFi to route the flowfile to the nonzero status relationship. NiFi doesn't monitor console output to determine error status, so if this condition should not cause a failure, you will need to change the script's exit status to indicate that.
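If the KeyProviderCache message is the only thing driving the script's exit status non-zero, one option is to wrap the spark-submit step so that this specific message does not fail the flow. Below is a minimal sketch, not your actual script: the function name, paths, and the spark-submit line are placeholders.

```shell
#!/usr/bin/env bash
# Hypothetical wrapper: ExecuteStreamCommand routes on the process exit
# status, so capture stderr and downgrade a non-zero exit to 0 when the
# only ERROR line seen is the harmless KeyProviderCache message.

run_ignoring_keyprovider_error() {
  local err_file status
  err_file="$(mktemp)"
  "$@" 2> "$err_file"          # run the real command (e.g. spark-submit)
  status=$?
  cat "$err_file" >&2          # still surface stderr for NiFi's logs
  if [ "$status" -ne 0 ] \
     && grep -q 'KeyProviderCache' "$err_file" \
     && ! grep '^ERROR' "$err_file" | grep -qv 'KeyProviderCache'; then
    status=0                   # only the known benign ERROR appeared
  fi
  rm -f "$err_file"
  return "$status"
}

# Example (replace with your real spark-submit invocation):
# run_ignoring_keyprovider_error spark-submit --class com.example.Job job.jar
# exit $?
```

Any genuine failure (a different ERROR line, or a non-zero exit with no KeyProviderCache message) still propagates to NiFi unchanged.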
The bulletins visible on the ExecuteStreamCommand processor in your screenshot are INFO messages about routing the flowfile to the original relationship. They are not errors.
It is not only that NiFi routes to the nonzero status output in this case; the command is also failing to produce the expected results for that code statement.
Case 1: Analysis of executing the .sh from the command line
Code statement 1: executed and gave expected results
ERROR appears on screen, but execution still continues
Code statement 2: executed and gave expected results
Case 2: Analysis of executing the .sh from NiFi
The code does not execute, hence no expected output
My observation: NiFi is discarding the execution result/status once the ERROR is printed.
We need to make sure "ERROR KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!" is not printed to the screen at all; it is fine if we can set a temporary property at the HDFS/Hive/Spark-Scala/.sh/NiFi level.
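Since the message is emitted by Hadoop's org.apache.hadoop.hdfs.KeyProviderCache logger, another angle is to raise that one logger's threshold so the line never reaches the screen. This is a sketch only, assuming the log4j 1.x setup that ships with Spark on HDP; the file path, class name com.example.Job, and jar path are placeholders:

```shell
# Hypothetical: silence only the KeyProviderCache logger for this job.
cat > /tmp/spark-log4j.properties <<'EOF'
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# raise the noisy logger above ERROR so this one message is suppressed
log4j.logger.org.apache.hadoop.hdfs.KeyProviderCache=FATAL
EOF

spark-submit \
  --driver-java-options "-Dlog4j.configuration=file:/tmp/spark-log4j.properties" \
  --class com.example.Job /path/to/job.jar
```

Note this only hides the driver-side log line; if the script's exit status is what NiFi reacts to, suppressing the message alone may not be enough.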
Without the code of your script (in conjunction with the configuration of your environment), we have no way to determine what is happening. You can look at the code of ExecuteStreamCommand to see how it executes the command -- here is where it runs it, and here is where it extracts the error stream to an attribute. What does examining this attribute tell you about what is happening in the script? I'd also recommend turning on low-level debugging for this processor by modifying the $NIFI_HOME/conf/logback.xml file to contain this line:
<logger name="org.apache.nifi.processors.standard.ExecuteStreamCommand" level="DEBUG"/>