Member since: 08-17-2016
Posts: 45
Kudos Received: 21
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2350 | 09-05-2018 09:20 PM |
| | 1855 | 06-29-2017 06:50 PM |
| | 10764 | 02-28-2017 07:12 PM |
| | 2241 | 11-11-2016 01:57 AM |
10-08-2018 09:45 PM
@Zhen Zeng To create an HDF cluster with Cloudbreak, a KDC must be configured, unless you have registered an LDAP in Cloudbreak and selected it when creating the cluster. During cluster creation, did you use a test KDC or an existing KDC? For configuring Cloudbreak to create a cluster that uses a KDC, please refer to the Enable Kerberos documentation for Cloudbreak 2.7. For complete instructions on creating an HDF cluster with Cloudbreak, please refer to the Cloudbreak 2.7 documentation for Creating HDF Clusters.
09-06-2018 09:03 PM
I apologize that I couldn't think of a workaround, and that for now you'll have to set "Permissions umask" on each processor. Once NIFI-5575 is resolved, the fix will be included in a future HDF release, and you should be able to update your flow to remove the per-processor settings.
09-05-2018 09:20 PM
@Kei Miyauchi With core-site.xml and hdfs-site.xml provided in the "Hadoop Configuration Resources" property, that configuration is passed to the Hadoop client that PutHDFS uses to send data to HDFS. However, in the code it looks like if the "Permissions umask" property is not set, PutHDFS uses a default umask of "18" (the decimal value of octal 022), which is pulled from FsPermission.java in hadoop-common. Unfortunately, I don't think there's a workaround: the "Permissions umask" property doesn't support Expression Language, so for now you would have to set the umask explicitly via the property on each processor. I created bug NIFI-5575 to track the issue.
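To see where that "18" comes from, here is a minimal sketch, assuming hadoop-common 2.7.x on the classpath (UmaskDemo is just an illustrative name), that prints the default umask and shows how it is applied:

import org.apache.hadoop.fs.permission.FsPermission;

public class UmaskDemo {
    public static void main(String[] args) {
        // FsPermission.DEFAULT_UMASK is octal 022, which prints as decimal 18
        System.out.println(FsPermission.DEFAULT_UMASK);      // 18

        // Applying octal 022 to a requested rw-rw-rw- (0666) yields rw-r--r--
        FsPermission requested = new FsPermission((short) 0666);
        FsPermission umask = new FsPermission((short) 022);
        System.out.println(requested.applyUMask(umask));     // rw-r--r--
    }
}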
07-12-2018 06:07 PM
@Bob T Is the /usr/lib/hdinsight-datalake directory itself readable/executable by the user running NiFi? Without a specific FACL for the hdinsight-datalake directory, the user running NiFi needs read/execute permission on each directory in the path, and read permission on the files in that directory, to be able to access the JARs. I see the permissions on the JARs are wide open, but can you confirm read/execute on the directories?
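If it's easier to check programmatically, here is a minimal sketch (PermCheck is an illustrative name; run it as the user that runs NiFi, since access checks depend on who is asking) that walks each directory in the path and reports access:

import java.nio.file.*;

public class PermCheck {
    public static void main(String[] args) {
        // Walk from the JAR directory up to the filesystem root, reporting
        // whether the current user can read and traverse each directory
        Path p = Paths.get("/usr/lib/hdinsight-datalake").toAbsolutePath();
        for (Path dir = p; dir != null; dir = dir.getParent()) {
            System.out.printf("%s readable=%b executable=%b%n",
                    dir, Files.isReadable(dir), Files.isExecutable(dir));
        }
    }
}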
07-12-2018 05:00 PM
@Bob T I think HdlAdiFileSystem was renamed in the version of hadoop-azure-datalake-2.7.3.2.6.5.8-7.jar you are using. Try updating the fs.adl.impl and fs.AbstractFileSystem.adl.impl values in core-site.xml:

<property>
  <name>fs.adl.impl</name>
  <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.adl.impl</name>
  <value>org.apache.hadoop.fs.adl.Adl</value>
</property>
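If you want to verify the rename before re-running the flow, here is a minimal sketch (AdlCheck is just an illustrative name; run it with the datalake JARs and their Hadoop dependencies on the classpath) that simply checks the class resolves:

public class AdlCheck {
    public static void main(String[] args) throws ClassNotFoundException {
        // Throws ClassNotFoundException if the renamed class cannot be found;
        // initialize=false avoids running static initializers
        Class.forName("org.apache.hadoop.fs.adl.AdlFileSystem", false,
                AdlCheck.class.getClassLoader());
        System.out.println("org.apache.hadoop.fs.adl.AdlFileSystem found");
    }
}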
07-11-2018 09:01 PM
@Bob T Could you please put stack traces inside of code blocks to make them a bit easier to read? It looks like you are still having classpath problems. Assuming that NiFi's lib directory is now restored to how it is in a "vanilla" install, I would check that the additional jars you're adding are versions that work with Hadoop 2.7.3, which is the version of hadoop-client used by NiFi 1.5. It might also help if you comment (using code blocks) with a listing of the nifi/lib dir, the /usr/lib/hdinsight-datalake dir, and the contents of (or a link to) the XML files you've listed in "Hadoop Configuration Resources", sanitized of any information you don't want to post publicly. 🙂
07-11-2018 08:02 PM
@Bob T The link you included has instructions to put those jars in /usr/lib/hdinsight-datalake, and then, in the processor configuration for FetchHDFS, set the property "Additional Classpath Resources" to "/usr/lib/hdinsight-datalake". You don't have to use those specific directories, but the jars must be in a directory that NiFi can read, and NiFi needs read permission on each jar. Also, please remove the jars you added to NiFi's lib directory; adding jars directly there can break NiFi itself or some of its components. This is because of how NiFi creates classloaders so that different components can use the dependency versions they need; a jar placed directly in NiFi's lib directory can override a component's dependency and cause it to fail. Could you please perform those steps and try running the flow again?
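If the classloader point is unclear, here is a minimal sketch (IsolationDemo is an illustrative name, the jar path is from this thread, NiFi's real mechanism is more involved than a bare URLClassLoader, and linking still needs the jar's own dependencies such as hadoop-common on the classpath) of how extra jars can be made visible to one component without touching the parent's classpath:

import java.net.URL;
import java.net.URLClassLoader;

public class IsolationDemo {
    public static void main(String[] args) throws Exception {
        // Jar visible only through the child classloader, not the parent
        URL[] extra = {
            new URL("file:///usr/lib/hdinsight-datalake/hadoop-azure-datalake-2.7.3.2.6.5.8-7.jar")
        };
        try (URLClassLoader child = new URLClassLoader(extra,
                IsolationDemo.class.getClassLoader())) {
            // Delegates to the parent first; classes absent there load from the
            // extra jar, so nothing in the parent's lib is overridden
            Class<?> adl = Class.forName("org.apache.hadoop.fs.adl.AdlFileSystem",
                    false, child);
            System.out.println("Loaded " + adl.getName() + " via isolated classloader");
        }
    }
}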
10-18-2017 05:21 PM
@Saikrishna Tarapareddy, you may want to create a process group that contains the instantiation of your template, and then create connections from the areas of your flow to that process group. That way, you have one "instance" of the template, and you'll only need to make your modifications once. You can always save a new template (for other instantiations, exporting, etc.) with your modifications. I do admit that making connections across process groups in order to reuse a specific group may make the flow a bit harder to read, but eventually NiFi will support improvements that make this easier/cleaner to do in a flow.
07-11-2017 06:47 PM
@Ian Neethling Could you provide a bit more detail about how you installed NiFi? Are you running HDF, or just NiFi by itself? Did you install from an RPM, or from a tar/gzip?
06-29-2017 06:50 PM
1 Kudo
@dhieru singh You'll need run-nifi.bat to run as a service, or at least to run when the user is not logged on. This answer from a user on serverfault.com has some in-depth instructions on how to set up running a batch file with the Task Scheduler.