Support Questions

Is there a way to make Falcon features work with an HDFS Compatible file system

Solved

The scenario I have is an HDFS cluster and a separate object store. The object store provides an HDFS-compatible file system jar that I can use from the native HDFS cluster to read/write to the object store. However, referencing the object store requires a distinct URL.

i.e., for the HDFS cluster I use

hdfs dfs -ls hdfs://<namenode>:8020/...

But for the object store I have to use a custom URL:

hdfs dfs -ls vipers://<namenode>:8020/...
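
(For context, a rough sketch of how a custom scheme like this is typically wired up: Hadoop resolves the implementation class for a URI scheme via the `fs.<scheme>.impl` property in core-site.xml. The class name below is just a placeholder for whatever the vendor jar actually provides.)

```xml
<!-- Illustrative only: fs.<scheme>.impl is Hadoop's convention for mapping
     a URI scheme to a FileSystem implementation; the class is a placeholder -->
<property>
  <name>fs.vipers.impl</name>
  <value>com.example.vipers.VipersFileSystem</value>
</property>
```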

If I define a Falcon mirroring job on the HDFS cluster but use paths on the object store, URL exceptions are thrown when the job is submitted. This is because the path URI is appended to the cluster URI.

What I think is happening is as follows: hdfs://<namenode>:8020/ can only be used to address the HDFS cluster's files and blocks, and while I can make the cluster access a third-party HDFS-compatible object store, there is no way to make a cluster name node 'proxy' for the object store itself. Is that right?
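
For reference, a minimal sketch of the Falcon cluster entity I'm working from (hosts, versions, and names are placeholders). As far as I can tell, relative feed/mirror paths get resolved against the write interface endpoint, which is why the object store paths end up appended to the hdfs:// cluster URI:

```xml
<!-- Sketch of a Falcon cluster entity; all endpoints and versions are placeholders -->
<cluster xmlns="uri:falcon:cluster:0.1" name="primary" colo="default" description="">
  <interfaces>
    <!-- Relative paths in feeds are resolved against this write endpoint,
         so they can only address the native HDFS cluster -->
    <interface type="write" endpoint="hdfs://<namenode>:8020" version="2.2.0"/>
    <interface type="readonly" endpoint="hftp://<namenode>:50070" version="2.2.0"/>
    <interface type="execute" endpoint="<resourcemanager>:8050" version="2.2.0"/>
    <interface type="workflow" endpoint="http://<oozie>:11000/oozie/" version="4.0.0"/>
  </interfaces>
  <locations>
    <location name="staging" path="/apps/falcon/staging"/>
    <location name="working" path="/apps/falcon/working"/>
  </locations>
</cluster>
```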

1 ACCEPTED SOLUTION


Re: Is there a way to make Falcon features work with an HDFS Compatible file system

Explorer

Responding from Hortonworks Product Management: currently, we only support native HDFS clusters as the source/destination in Falcon (in addition to S3/Azure). There is no support for Hadoop-compatible file systems (such as EMC ECS), though we are getting requests from various channels. This will be explored as a future item, though we have yet to arrive at a timeline.


3 REPLIES

Re: Is there a way to make Falcon features work with an HDFS Compatible file system

Rising Star

If your cluster endpoint is pointing to HDFS, then the feed locations will be based on that unless they are absolute paths. Can you provide an example of what you are trying to do and the exceptions you are getting?

Thanks



Re: Is there a way to make Falcon features work with an HDFS Compatible file system

Thanks, Saumitra.