<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Is it possible to use S3 for Falcon feeds? in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137967#M39799</link>
    <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/12920/liammurphy2.html" nodeid="12920"&gt;@Liam Murphy&lt;/A&gt;: Can you attach the Feed xml and Falcon and oozie logs? Looks like eviction is failing. Can you see if the replication succeeded? Oozie bundle created will have one for retention and another for replication. Thanks!&lt;/P&gt;</description>
    <pubDate>Fri, 09 Sep 2016 02:31:12 GMT</pubDate>
    <dc:creator>sramesh</dc:creator>
    <dc:date>2016-09-09T02:31:12Z</dc:date>
    <item>
      <title>Is it possible to use S3 for Falcon feeds?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137959#M39791</link>
      <description>&lt;P&gt;I have not seen any example of using s3 in Falcon except for mirroring. Is it possible to use an S3-bucket as location path for a feed? &lt;/P&gt;</description>
      <pubDate>Tue, 06 Sep 2016 17:29:00 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137959#M39791</guid>
      <dc:creator>liam_murphy2</dc:creator>
      <dc:date>2016-09-06T17:29:00Z</dc:date>
    </item>
    <item>
      <title>Re: Is it possible to use S3 for Falcon feeds?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137960#M39792</link>
      <description>&lt;P&gt;Documentation exists for wasb &lt;A href="http://falcon.apache.org/DataReplicationAzure.html" target="_blank"&gt;http://falcon.apache.org/DataReplicationAzure.html&lt;/A&gt;; maybe just use s3a instead.&lt;/P&gt;</description>
      <pubDate>Wed, 07 Sep 2016 06:11:07 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137960#M39792</guid>
      <dc:creator>smishra1</dc:creator>
      <dc:date>2016-09-07T06:11:07Z</dc:date>
    </item>
    <item>
      <title>Re: Is it possible to use S3 for Falcon feeds?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137961#M39793</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/12920/liammurphy2.html" nodeid="12920"&gt;@Liam Murphy&lt;/A&gt;: Please find the details below&lt;/P&gt;&lt;P&gt;1&amp;gt; Ensure that you have an Account with Amazon S3 and a designated bucket for your data&lt;/P&gt;&lt;P&gt;2&amp;gt; You must have an Access Key ID and a Secret Key&lt;/P&gt;&lt;P&gt;3&amp;gt; Configure HDFS for S3 storage by making the following changes to core-site.xml&lt;/P&gt;&lt;PRE&gt;&amp;lt;property&amp;gt; 
&amp;lt;name&amp;gt;fs.default.name&amp;lt;/name&amp;gt; 
&amp;lt;value&amp;gt;s3n://your-bucket-name&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;

&amp;lt;property&amp;gt; 
&amp;lt;name&amp;gt;fs.s3n.awsAccessKeyId&amp;lt;/name&amp;gt; 
&amp;lt;value&amp;gt;YOUR_S3_ACCESS_KEY&amp;lt;/value&amp;gt;&amp;lt;/property&amp;gt;

&amp;lt;property&amp;gt; 
&amp;lt;name&amp;gt;fs.s3n.awsSecretAccessKey&amp;lt;/name&amp;gt;   
&amp;lt;value&amp;gt;YOUR_S3_SECRET_KEY&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;&lt;/PRE&gt;&lt;P&gt;4&amp;gt; In the Falcon feed.xml, specify the Amazon S3 location and schedule the feed&lt;/P&gt;&lt;PRE&gt;&amp;lt;?xml version="1.0" encoding="UTF-8" standalone="yes"?&amp;gt;
&amp;lt;feed name="S3Replication" description="S3-Replication" xmlns="uri:falcon:feed:0.1"&amp;gt;    
&amp;lt;frequency&amp;gt;hours(1)&amp;lt;/frequency&amp;gt;    
&amp;lt;clusters&amp;gt;        
&amp;lt;cluster name="cluster1" type="source"&amp;gt;            
&amp;lt;validity start="2016-09-01T00:00Z" end="2034-12-20T08:00Z"/&amp;gt;            
&amp;lt;retention limit="days(24)" action="delete"/&amp;gt;       
&amp;lt;/cluster&amp;gt;        
&amp;lt;cluster name="cluster2" type="target"&amp;gt;            
&amp;lt;validity start="2016-09-01T00:00Z" end="2034-12-20T08:00Z"/&amp;gt;           
&amp;lt;retention limit="days(90)" action="delete"/&amp;gt;            
&amp;lt;locations&amp;gt;                
&amp;lt;location type="data" path="s3://&amp;lt;bucket-name&amp;gt;/&amp;lt;path-folder&amp;gt;/${YEAR}-${MONTH}-${DAY}-${HOUR}/"/&amp;gt;            
&amp;lt;/locations&amp;gt;        
&amp;lt;/cluster&amp;gt;     
&amp;lt;/clusters&amp;gt;
&amp;lt;/feed&amp;gt;&lt;/PRE&gt;</description>
      <pubDate>Wed, 07 Sep 2016 06:49:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137961#M39793</guid>
      <dc:creator>sramesh</dc:creator>
      <dc:date>2016-09-07T06:49:18Z</dc:date>
    </item>
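The configuration steps above can be smoke-tested from the command line before wiring the bucket into a feed. A minimal sketch; the bucket name is a placeholder, and note that on newer Hadoop releases the s3a connector (fs.s3a.access.key / fs.s3a.secret.key) supersedes the deprecated s3n settings shown above:

```shell
# Placeholder bucket name; verify the cluster can reach S3 with the keys
# configured in core-site.xml before scheduling the Falcon feed.
hdfs dfs -ls s3n://your-bucket-name/

# On newer Hadoop builds, prefer the s3a connector, configured via
# fs.s3a.access.key and fs.s3a.secret.key instead of the s3n properties.
hdfs dfs -ls s3a://your-bucket-name/
```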
    <item>
      <title>Re: Is it possible to use S3 for Falcon feeds?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137962#M39794</link>
      <description>&lt;P&gt;I see that Sowmya already answered. Yes, we can specify S3 as the source/destination cluster(s) with paths (we support Azure as well). Here is a Falcon screenshot.&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.cloudera.com/legacyfs/online/attachments/7397-falcon1.png"&gt;falcon1.png&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 07 Sep 2016 07:00:05 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137962#M39794</guid>
      <dc:creator>sburagohain</dc:creator>
      <dc:date>2016-09-07T07:00:05Z</dc:date>
    </item>
    <item>
      <title>Re: Is it possible to use S3 for Falcon feeds?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137963#M39795</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Thanks for that Sowmya,&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;This is definitely trying to do something! But I now see an exception in the Oozie logs which says &lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;160908110420441-oozie-oozi-W] ACTION[0000034-160908110420441-oozie-oozi-W@eviction] Launcher exception: com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain&lt;/P&gt;&lt;P&gt;org.apache.oozie.action.hadoop.JavaMainException: com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain&lt;/P&gt;&lt;P&gt;at org.apache.oozie.action.hadoop.JavaMain.run(JavaMain.java:59)&lt;/P&gt;&lt;P&gt;at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:47)&lt;/P&gt;&lt;P&gt;..&lt;/P&gt;</description>
      <pubDate>Thu, 08 Sep 2016 18:35:09 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137963#M39795</guid>
      <dc:creator>liam_murphy2</dc:creator>
      <dc:date>2016-09-08T18:35:09Z</dc:date>
    </item>
    <item>
      <title>Re: Is it possible to use S3 for Falcon feeds?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137964#M39796</link>
      <description>&lt;P&gt;Full exception in oozie log is as follows:&lt;/P&gt;&lt;P&gt;org.apache.oozie.action.hadoop.JavaMainException: com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain&lt;/P&gt;&lt;P&gt;at org.apache.oozie.action.hadoop.JavaMain.run(JavaMain.java:59)&lt;/P&gt;&lt;P&gt;at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:47)&lt;/P&gt;&lt;P&gt;at org.apache.oozie.action.hadoop.JavaMain.main(JavaMain.java:35)&lt;/P&gt;&lt;P&gt;at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)&lt;/P&gt;&lt;P&gt;at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)&lt;/P&gt;&lt;P&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;/P&gt;&lt;P&gt;at java.lang.reflect.Method.invoke(Method.java:606)&lt;/P&gt;&lt;P&gt;at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:236)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)&lt;/P&gt;&lt;P&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;/P&gt;&lt;P&gt;at javax.security.auth.Subject.doAs(Subject.java:415)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)&lt;/P&gt;&lt;P&gt;Caused by: com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain&lt;/P&gt;&lt;P&gt;at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)&lt;/P&gt;&lt;P&gt;at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3521)&lt;/P&gt;&lt;P&gt;at 
com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)&lt;/P&gt;&lt;P&gt;at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)&lt;/P&gt;&lt;P&gt;at org.apache.falcon.hadoop.HadoopClientFactory$1.run(HadoopClientFactory.java:200)&lt;/P&gt;&lt;P&gt;at org.apache.falcon.hadoop.HadoopClientFactory$1.run(HadoopClientFactory.java:198)&lt;/P&gt;&lt;P&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;/P&gt;&lt;P&gt;at javax.security.auth.Subject.doAs(Subject.java:415)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)&lt;/P&gt;&lt;P&gt;at org.apache.falcon.hadoop.HadoopClientFactory.createFileSystem(HadoopClientFactory.java:198)&lt;/P&gt;&lt;P&gt;at org.apache.falcon.hadoop.HadoopClientFactory.createProxiedFileSystem(HadoopClientFactory.java:153)&lt;/P&gt;&lt;P&gt;at org.apache.falcon.hadoop.HadoopClientFactory.createProxiedFileSystem(HadoopClientFactory.java:145)&lt;/P&gt;&lt;P&gt;at org.apache.falcon.entity.FileSystemStorage.fileSystemEvictor(FileSystemStorage.java:317)&lt;/P&gt;&lt;P&gt;at org.apache.falcon.entity.FileSystemStorage.evict(FileSystemStorage.java:300)&lt;/P&gt;&lt;P&gt;at org.apache.falcon.retention.FeedEvictor.run(FeedEvictor.java:76)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)&lt;/P&gt;&lt;P&gt;at 
org.apache.falcon.retention.FeedEvictor.main(FeedEvictor.java:52)&lt;/P&gt;&lt;P&gt;at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)&lt;/P&gt;&lt;P&gt;at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)&lt;/P&gt;&lt;P&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;/P&gt;&lt;P&gt;at java.lang.reflect.Method.invoke(Method.java:606)&lt;/P&gt;&lt;P&gt;at org.apache.oozie.action.hadoop.JavaMain.run(JavaMain.java:56)&lt;/P&gt;&lt;P&gt;... 15 more&lt;/P&gt;&lt;P&gt;I have defined fs.s3a.access.key, fs.s3a.secret.key, fs.s3a.endpoint in hdfs-site.xml. I can use hdfs dfs -ls s3a://&amp;lt;my-bucket&amp;gt; from the command line, and it works. I've also set the path in the feed example to be s3a://&amp;lt;my-bucket&amp;gt;...&lt;/P&gt;&lt;P&gt;But this exception would seem to say Oozie can't see the AWS access/secret key for some reason?&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Liam&lt;/P&gt;</description>
      <pubDate>Thu, 08 Sep 2016 19:23:47 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137964#M39796</guid>
      <dc:creator>liam_murphy2</dc:creator>
      <dc:date>2016-09-08T19:23:47Z</dc:date>
    </item>
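The "Unable to load AWS credentials from any provider in the chain" error typically means the keys are not present in the configuration that the launcher task actually loads: s3a settings are normally read from core-site.xml, and properties added only to hdfs-site.xml may not reach the Oozie launcher. One common remedy, sketched here with illustrative paths, is a Hadoop credential provider so any job can resolve the keys without plain-text secrets:

```shell
# Illustrative jceks path; store the s3a keys in a Hadoop credential provider
# so jobs launched by Oozie can resolve them (each command prompts for the value).
hadoop credential create fs.s3a.access.key \
    -provider jceks://hdfs/user/falcon/s3.jceks
hadoop credential create fs.s3a.secret.key \
    -provider jceks://hdfs/user/falcon/s3.jceks

# Then set hadoop.security.credential.provider.path to the jceks URI in
# core-site.xml and restart the affected services.
```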
    <item>
      <title>Re: Is it possible to use S3 for Falcon feeds?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137965#M39797</link>
      <description>&lt;P&gt;If you are using multiple clusters, you need to make sure that the Hadoop configuration that Oozie uses for the target cluster (see the oozie.service.HadoopAccessorService.hadoop.configurations property in oozie-site.xml) is correctly configured. In a single-cluster environment, Oozie points to the local core-site.xml for this by default.&lt;/P&gt;</description>
      <pubDate>Fri, 09 Sep 2016 01:14:00 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137965#M39797</guid>
      <dc:creator>vranganathan</dc:creator>
      <dc:date>2016-09-09T01:14:00Z</dc:date>
    </item>
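A quick way to verify this mapping, sketched with default config paths (adjust for your install):

```shell
# Illustrative checks: the property value maps authority to config dir,
# e.g. "*=/etc/hadoop/conf", so Oozie reads that directory's core-site.xml.
grep -A 1 'oozie.service.HadoopAccessorService.hadoop.configurations' \
    /etc/oozie/conf/oozie-site.xml

# Confirm the s3a keys actually live in the directory Oozie is pointed at.
grep -c 'fs.s3a' /etc/hadoop/conf/core-site.xml
```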
    <item>
      <title>Re: Is it possible to use S3 for Falcon feeds?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137966#M39798</link>
      <description>&lt;P&gt;Hi Venkat,&lt;/P&gt;&lt;P&gt;The property is set to *=/etc/hadoop/conf. This is just a simple single-node cluster (HDP 2.3 sandbox). The s3a properties have been added to both the core-site and hdfs-site files, but it's still the same problem, I'm afraid.&lt;/P&gt;</description>
      <pubDate>Fri, 09 Sep 2016 02:24:44 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137966#M39798</guid>
      <dc:creator>liam_murphy2</dc:creator>
      <dc:date>2016-09-09T02:24:44Z</dc:date>
    </item>
    <item>
      <title>Re: Is it possible to use S3 for Falcon feeds?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137967#M39799</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/12920/liammurphy2.html" nodeid="12920"&gt;@Liam Murphy&lt;/A&gt;: Can you attach the Feed xml and Falcon and oozie logs? Looks like eviction is failing. Can you see if the replication succeeded? Oozie bundle created will have one for retention and another for replication. Thanks!&lt;/P&gt;</description>
      <pubDate>Fri, 09 Sep 2016 02:31:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137967#M39799</guid>
      <dc:creator>sramesh</dc:creator>
      <dc:date>2016-09-09T02:31:12Z</dc:date>
    </item>
    <item>
      <title>Re: Is it possible to use S3 for Falcon feeds?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137968#M39800</link>
      <description>&lt;P&gt;Hi Sowmya,&lt;/P&gt;&lt;P&gt;The attached file contains the feed definition and the Falcon and Oozie logs. I submitted and scheduled the feed around the 14:40 timestamp.&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.cloudera.com/legacyfs/online/attachments/7534-file.tar.gz"&gt;file.tar.gz&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Thanks for your help&lt;/P&gt;&lt;P&gt;Liam&lt;/P&gt;</description>
      <pubDate>Mon, 12 Sep 2016 17:14:19 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137968#M39800</guid>
      <dc:creator>liam_murphy2</dc:creator>
      <dc:date>2016-09-12T17:14:19Z</dc:date>
    </item>
    <item>
      <title>Re: Is it possible to use S3 for Falcon feeds?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137969#M39801</link>
      <description>&lt;P&gt;Hi Sowmya,&lt;/P&gt;&lt;P&gt;Is there any other debug information I can provide to help find the cause of the problem?&lt;/P&gt;&lt;P&gt;Kind Regards,&lt;/P&gt;&lt;P&gt;Liam &lt;/P&gt;</description>
      <pubDate>Thu, 15 Sep 2016 18:49:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137969#M39801</guid>
      <dc:creator>liam_murphy2</dc:creator>
      <dc:date>2016-09-15T18:49:04Z</dc:date>
    </item>
    <item>
      <title>Re: Is it possible to use S3 for Falcon feeds?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137970#M39802</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/12920/liammurphy2.html" nodeid="12920"&gt;@Liam Murphy&lt;/A&gt;: In Oozie log I can see that replication paths don't exist. Can you make sure files exist ?&lt;/P&gt;&lt;P&gt;Eviction fails because of credentials issue. Can you make sure core-site and hdfs-site has the required configs and restart the services and resubmit the feed? Thanks!&lt;/P&gt;&lt;PRE&gt;2016-09-09 14:44:43,680  INFO CoordActionInputCheckXCommand:520 - SERVER[sandbox.hortonworks.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000058-160909120521096-oozie-oozi-C] ACTION[0000058-160909120521096-oozie-oozi-C@10] [0000058-160909120521096-oozie-oozi-C@10]::ActionInputCheck:: File:hftp://192.168.39.108:50070/falcon/2016-09-09-01, Exists? :false
2016-09-09 14:44:43,817  INFO CoordActionInputCheckXCommand:520 - SERVER[sandbox.hortonworks.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000058-160909120521096-oozie-oozi-C] ACTION[0000058-160909120521096-oozie-oozi-C@11] [0000058-160909120521096-oozie-oozi-C@11]::CoordActionInputCheck:: Missing deps:hftp://192.168.39.108:50070/falcon/2016-09-09-01 
2016-09-09 14:44:43,818  INFO CoordActionInputCheckXCommand:520 - SERVER[sandbox.hortonworks.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000058-160909120521096-oozie-oozi-C] ACTION[0000058-160909120521096-oozie-oozi-C@11] [0000058-160909120521096-oozie-oozi-C@11]::ActionInputCheck:: In checkListOfPaths: hftp://192.168.39.108:50070/falcon/2016-09-09-01 is Missing.&lt;/PRE&gt;</description>
      <pubDate>Fri, 16 Sep 2016 01:41:06 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137970#M39802</guid>
      <dc:creator>sramesh</dc:creator>
      <dc:date>2016-09-16T01:41:06Z</dc:date>
    </item>
    <item>
      <title>Re: Is it possible to use S3 for Falcon feeds?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137971#M39803</link>
      <description>&lt;P&gt;I just noticed that when a path does not exist for a given hour, Falcon/Oozie just gets stuck rather than checking for the next hour. My misunderstanding, I guess. I have got it working now. &lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2016 22:48:21 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137971#M39803</guid>
      <dc:creator>liam_murphy2</dc:creator>
      <dc:date>2016-09-16T22:48:21Z</dc:date>
    </item>
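This matches how Oozie coordinators handle feed inputs: each hourly instance waits for its input dependency to appear, so a missing directory stalls that instance rather than being skipped. A hypothetical way to unblock a stuck hour, using the path pattern from the logs above:

```shell
# Illustrative: create the hourly input directory the coordinator is polling
# for (path taken from the "Missing deps" log lines), so the stuck instance
# can proceed on its next input check.
hdfs dfs -mkdir -p /falcon/2016-09-09-01
```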
    <item>
      <title>Re: Is it possible to use S3 for Falcon feeds?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137972#M39804</link>
      <description>&lt;P&gt;Hi Team / &lt;A rel="user" href="https://community.cloudera.com/users/377/sramesh.html" nodeid="377"&gt;@Sowmya Ramesh&lt;/A&gt;, I am trying to use Falcon to replicate HDFS to S3. I have tried the above steps and I see the HDFStoS3 replication job status KILLED after running the workflow. After launching Oozie, I can see the workflow changing status from RUNNING to KILLED. Is there a way to troubleshoot? I can run hadoop fs -ls commands on my S3 bucket so I definitely have access. I suspect it's the S3 URL. I tried downloading the xml, changing the URL to remove the s3.region.amazonaws.com part, and uploading it, with no luck. Any other suggestions? Appreciate all your help/support in advance. Regards&lt;/P&gt;&lt;P&gt;Anil&lt;/P&gt;</description>
      <pubDate>Tue, 28 Nov 2017 17:19:38 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Is-it-possible-to-use-S3-for-Falcon-feeds/m-p/137972#M39804</guid>
      <dc:creator>khanil</dc:creator>
      <dc:date>2017-11-28T17:19:38Z</dc:date>
    </item>
  </channel>
</rss>

