
Is there a way to make Falcon features work with an HDFS Compatible file system


The scenario I have is an HDFS cluster and a separate object store. The object store provides an HDFS-compatible file system jar that I can use from the native HDFS cluster to read from and write to the object store. However, referencing the object store requires a distinct URL scheme.

For example, for the HDFS cluster I use:

hdfs dfs -ls hdfs://<namenode>:8020/...

But for the object store I have to use a custom URL:

hdfs dfs -ls vipers://<namenode>:8020/...
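
The jar wires this scheme up as a Hadoop file system implementation in core-site.xml, along these lines (the property follows the standard fs.<scheme>.impl pattern; the class name below is a made-up stand-in for the vendor's actual class):

<property>
  <name>fs.vipers.impl</name>
  <!-- hypothetical class name; the real one comes from the vendor's jar -->
  <value>com.example.vipers.VipersFileSystem</value>
</property>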

If I define a Falcon mirroring job on the HDFS cluster, but using paths on the object store, then URL exceptions are thrown when the job is submitted. This is because the path URI is appended to the cluster URI.
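
For reference, my Falcon cluster entity's write interface points at the HDFS namenode, so I assume any path is resolved against that endpoint. Roughly like this (abbreviated sketch, placeholder names kept):

<cluster name="primary" colo="dc1" description="" xmlns="uri:falcon:cluster:0.1">
  <interfaces>
    <!-- paths in feeds and mirror jobs get qualified against this endpoint -->
    <interface type="write" endpoint="hdfs://<namenode>:8020" version="2.2.0"/>
    <!-- readonly, execute, workflow and messaging interfaces omitted -->
  </interfaces>
  <locations>
    <location name="staging" path="/apps/falcon/staging"/>
    <location name="working" path="/apps/falcon/working"/>
  </locations>
</cluster>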

What I think is happening is as follows: hdfs://<namenode>:8020/ can only be used to address the HDFS cluster's files and blocks, and while I can make the cluster access a third-party HDFS-compatible object store, there is no way to make a cluster namenode 'proxy' for the object store itself. Is that right?
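
As a sanity check, plain DistCp accepts fully qualified URIs on both sides, so in principle something like this should work from the cluster (assuming the object store's jar is on the Hadoop classpath; the paths here are placeholders):

hadoop distcp hdfs://<namenode>:8020/path/to/src vipers://<namenode>:8020/path/to/dst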


Expert Contributor

If your cluster endpoint is pointing to HDFS, then the feed locations will be resolved against that endpoint unless they are absolute paths; see the sketch below. Can you provide an example of what you are trying to do and the exceptions that you are getting?
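
By absolute paths I mean a feed location that carries its own scheme and authority, which should be used as-is instead of being qualified against the cluster's write endpoint. Something like this (illustrative names and dates, untested against your object store):

<feed name="objectStoreFeed" description="" xmlns="uri:falcon:feed:0.1">
  <frequency>days(1)</frequency>
  <clusters>
    <cluster name="primary" type="source">
      <validity start="2016-01-01T00:00Z" end="2017-01-01T00:00Z"/>
      <retention limit="days(30)" action="delete"/>
    </cluster>
  </clusters>
  <locations>
    <!-- fully qualified path, including the custom scheme -->
    <location type="data" path="vipers://<namenode>:8020/data/${YEAR}-${MONTH}-${DAY}"/>
  </locations>
  <ACL owner="falcon" group="users" permission="0755"/>
  <schema location="/none" provider="none"/>
</feed>

Thanks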



Thanks, Saumitra.