
[FALCON] : force delete feed

Explorer

Hi,

Do you know how to force delete a feed? For some reason the feed's staging directory was deleted, and now the feed cannot be deleted.

<cluster xmlns='uri:falcon:cluster:0.1' name='next-rec-cluster' description='undefined' colo='nextRecColo'>
  <interfaces>
    <interface type='readonly' endpoint='hftp://clusterA002:50070' version='2.2.0'/>
    <interface type='write' endpoint='hdfs://clusterA002:8020' version='2.2.0'/>
    <interface type='execute' endpoint='clusterA002:8050' version='2.2.0'/>
    <interface type='workflow' endpoint='http://clusterA003:11000/oozie/' version='4.0.0'/>
    <interface type='messaging' endpoint='tcp://clusterA003:61616?daemon=true' version='5.1.6'/>
  </interfaces>
  <locations>
    <location name='staging' path='/apps/falcon/next-rec-cluster/staging'/>
    <location name='temp' path='/apps/falcon/tmp'/>
    <location name='working' path='/apps/falcon/next-rec-cluster/working'/>
  </locations>
  <ACL owner='falcon' group='hadoop' permission='0755'/>
  <properties>
    <property name='dfs.namenode.kerberos.principal' value='nn/_HOST@FTI.NET'/>
    <property name='hive.metastore.kerberos.principal' value='hive/_HOST@FTI.NET'/>
    <property name='queueName' value='oozie-launcher'/>
    <property name="hive.metastore.sasl.enabled" value="true"/>
  </properties>
</cluster>

$ falcon entity -delete -type feed -name next-vers-current
log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.authentication.client.KerberosAuthenticator).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
ERROR: Bad Request;default/org.apache.falcon.FalconWebException::org.apache.falcon.FalconException: Wrong FS: hdfs://clusterA001/apps/falcon/current-rec-cluster/staging/falcon/workflows/feed/next-vers-current/051001d36ac9092827ba3f011ccf1c35_1464596058207, expected hdfs://clusterA002

java.lang.IllegalArgumentException: Wrong FS: hdfs://clusterA001/apps/falcon/current-rec-cluster/staging/falcon/workflows/feed/next-vers-current/051001d36ac9092827ba3f011ccf1c35_1464596058207, expected: hdfs://clusterA002:8020
        at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:646)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:194)
        at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1315)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1311)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.j
8 Replies

Re: [FALCON] : force delete feed

Explorer

@mayki wogno

You should be able to delete an entity even if you accidentally delete its staging directory. From the error log, Falcon is somehow trying to delete/access a path on clusterA001 instead of clusterA002, which results in the checkPath error (i.e., the path starting with hdfs://clusterA001 does not belong to the clusterA002 file system). Could you attach your feed definition and the Falcon server log (so I can see the full stack trace)? Thanks.

Re: [FALCON] : force delete feed

Explorer

@yzheng, the Falcon log is already in my post.

The feed is a simple retention feed:

<?xml version="1.0" encoding="UTF-8"?>
<feed description="HDFS directory autocleaning" name="autoclean-next" xmlns="uri:falcon:feed:0.1">

        <frequency>minutes(30)</frequency>

        <clusters>
                <cluster name="next-rec-cluster">
                        <validity start="2016-05-27T12:00Z" end="2016-05-27T23:00Z"/>
                        <retention limit="hours(1)" action="delete"/>
                </cluster>
        </clusters>

        <locations>
                <location type="data" path="/tmp/falcon/next-vers-current/${YEAR}/${MONTH}/${DAY}/${HOUR}/${MINUTE}"/>
        </locations>

        <ACL owner="falcon" group="hadoop" permission="0x644"/>
        <schema location="/none" provider="none" />
        <properties><property name="queueName" value="oozie-launcher"/></properties>
</feed>

The question is why checkPath does not match the declaration in the cluster entity.

Re: [FALCON] : force delete feed

Explorer

The error log is not complete; it ends at "at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.j". There should be more of the stack trace, sometimes containing "Caused by: ...". Could you share the complete error as well as some INFO logs from right before the error? Thank you.

Re: [FALCON] : force delete feed

Explorer

Here is the full log:

2016-06-02 08:43:49,420 INFO  - [1741780505@qtp-794075965-95 - 90a65b37-a201-422b-b5a8-aa3d0733f30e:falcon:DELETE//entities/delete/feed/autoclean-next] ~ Creating FS for the login user falcon, impersonation not required (HadoopClientFactory:191)
2016-06-02 08:43:49,421 ERROR - [1741780505@qtp-794075965-95 - 90a65b37-a201-422b-b5a8-aa3d0733f30e:falcon:DELETE//entities/delete/feed/autoclean-next] ~ Unable to reach workflow engine for deletion or deletion failed (AbstractEntityManager:266)
java.lang.IllegalArgumentException: Wrong FS: hdfs://clusterA001:8020/apps/falcon/next-rec-cluster/staging/falcon/workflows/feed/autoclean-next/086f6f548ea35db4916ce9f97a699edd_1464358369774, expected: hdfs://clusterA002:8020
        at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:646)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:194)
        at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1315)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1311)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1311)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
        at org.apache.falcon.entity.EntityUtil.isStagingPath(EntityUtil.java:639)
        at org.apache.falcon.workflow.engine.OozieWorkflowEngine.findBundles(OozieWorkflowEngine.java:294)
        at org.apache.falcon.workflow.engine.OozieWorkflowEngine.doBundleAction(OozieWorkflowEngine.java:377)
        at org.apache.falcon.workflow.engine.OozieWorkflowEngine.doBundleAction(OozieWorkflowEngine.java:371)
        at org.apache.falcon.workflow.engine.OozieWorkflowEngine.delete(OozieWorkflowEngine.java:355)
        at org.apache.falcon.resource.AbstractEntityManager.delete(AbstractEntityManager.java:253)
        at org.apache.falcon.resource.ConfigSyncService.delete(ConfigSyncService.java:62)
        at sun.reflect.GeneratedMethodAccessor107.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.falcon.resource.channel.IPCChannel.invoke(IPCChannel.java:49)
        at org.apache.falcon.resource.proxy.SchedulableEntityManagerProxy$3.doExecute(SchedulableEntityManagerProxy.java:230)
        at org.apache.falcon.resource.proxy.SchedulableEntityManagerProxy$EntityProxy.execute(SchedulableEntityManagerProxy.java:577)
        at org.apache.falcon.resource.proxy.SchedulableEntityManagerProxy$3.execute(SchedulableEntityManagerProxy.java:219)
        at org.apache.falcon.resource.proxy.SchedulableEntityManagerProxy.delete_aroundBody2(SchedulableEntityManagerProxy.java:232)
        at org.apache.falcon.resource.proxy.SchedulableEntityManagerProxy$AjcClosure3.run(SchedulableEntityManagerProxy.java:1)
        at org.aspectj.runtime.reflect.JoinPointImpl.proceed(JoinPointImpl.java:149)
        at org.apache.falcon.aspect.AbstractFalconAspect.logAroundMonitored(AbstractFalconAspect.java:51)
        at org.apache.falcon.resource.proxy.SchedulableEntityManagerProxy.delete(SchedulableEntityManagerProxy.java:206)
        at sun.reflect.GeneratedMethodAccessor106.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
        at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
        at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
        at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
        at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
        at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
        at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
        at org.apache.falcon.security.FalconAuthorizationFilter.doFilter(FalconAuthorizationFilter.java:108)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
        at org.apache.falcon.security.FalconAuthenticationFilter$2.doFilter(FalconAuthenticationFilter.java:188)
        at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:615)
        at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:574)
        at org.apache.falcon.security.FalconAuthenticationFilter.doFilter(FalconAuthenticationFilter.java:197)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
        at org.apache.falcon.security.FalconAuditFilter.doFilter(FalconAuditFilter.java:64)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
        at org.apache.falcon.security.HostnameFilter.doFilter(HostnameFilter.java:82)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:767)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:326)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
        at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
2016-06-02 08:43:49,422 ERROR - [1741780505@qtp-794075965-95 - 90a65b37-a201-422b-b5a8-aa3d0733f30e:falcon:DELETE//entities/delete/feed/autoclean-next] ~ Action failed: Bad Request
Error: Wrong FS: hdfs://clusterA001:8020/apps/falcon/next-rec-cluster/staging/falcon/workflows/feed/autoclean-next/086f6f548ea35db4916ce9f97a699edd_1464358369774, expected: hdfs://clusterA002:8020 (FalconWebException:83)
2016-06-02 08:43:49,422 ERROR - [1741780505@qtp-794075965-95 - 90a65b37-a201-422b-b5a8-aa3d0733f30e:falcon:DELETE//entities/delete/feed/autoclean-next] ~ Action failed: Bad Request
Error: default/org.apache.falcon.FalconWebException::org.apache.falcon.FalconException: Wrong FS: hdfs://clusterA001:8020/apps/falcon/next-rec-cluster/staging/falcon/workflows/feed/autoclean-next/086f6f548ea35db4916ce9f97a699edd_1464358369774, expected: hdfs://clusterA002:8020
 (FalconWebException:83)
2016-06-02 08:43:49,422 INFO  - [1741780505@qtp-794075965-95 - 90a65b37-a201-422b-b5a8-aa3d0733f30e:falcon:DELETE//entities/delete/feed/autoclean-next] ~ {Action:delete, Dimensions:{colo=NULL, entityType=feed, entityName=autoclean-next}, Status: FAILED, Time-taken:67064808 ns} (METRIC:38)
2016-06-02 08:43:49,423 DEBUG - [1741780505@qtp-794075965-95 - 90a65b37-a201-422b-b5a8-aa3d0733f30e:] ~ Audit: falcon/10.98.138.87 performed request http://clusterA003:15000/api/entities/delete/feed/autoclean-next (10.98.138.87) at time 2016-06-02T06:43Z (FalconAuditFilter:86)

Re: [FALCON] : force delete feed

Explorer

From your log, it looks like you may have previously run this entity with an app path on clusterA001 against the Oozie server http://clusterA003:11000/oozie/. This can happen if you change the cluster endpoint after a job has started running. Could you check:

1. At the Oozie server http://clusterA003:11000/oozie/, find all bundle jobs named FALCON_FEED_autoclean-next and check their app-path (you can find it in the bundle job definition).

2. Could you run the CLI to confirm that the entity definitions for next-rec-cluster and autoclean-next are the same as you posted above?
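The two checks above could be run from the CLI along these lines (a sketch assuming standard Oozie and Falcon CLIs; the server URL is taken from the cluster definition above, and `<bundle-id>` is a placeholder for an id returned by the first command):

```shell
# 1. List bundle jobs named FALCON_FEED_autoclean-next on the Oozie server
oozie jobs -oozie http://clusterA003:11000/oozie -jobtype bundle \
    -filter "name=FALCON_FEED_autoclean-next"

# Inspect one bundle's definition to see which app-path it was submitted with
oozie job -oozie http://clusterA003:11000/oozie -definition <bundle-id>

# 2. Compare the stored Falcon entity definitions with what was posted above
falcon entity -type cluster -name next-rec-cluster -definition
falcon entity -type feed -name autoclean-next -definition
```

If any bundle's app-path starts with hdfs://clusterA001, that would explain the "Wrong FS" error during deletion.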

Re: [FALCON] : force delete feed

Explorer

@yzheng: you are right. Earlier I ran the feed against hdfs://clusterA001 on Oozie, then changed the cluster endpoint to hdfs://clusterA002 and submitted the same feed again.

So now, how do I delete this feed?

Re: [FALCON] : force delete feed

Explorer

One option is to delete the bundle jobs whose app path points to clusterA001 on the Oozie side. Oozie provides a PurgeService: you can set the parameter oozie.service.PurgeService.bundle.older.than to ask Oozie to purge completed bundle jobs older than a given age (in days). You need to restart Oozie after changing this parameter.
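For reference, the purge settings live in oozie-site.xml. A sketch (property names are Oozie's PurgeService settings; the values are illustrative and should be tuned to your retention needs, then reverted after cleanup if desired):

```xml
<!-- oozie-site.xml: illustrative values for an aggressive one-off purge -->
<property>
  <name>oozie.service.PurgeService.bundle.older.than</name>
  <!-- purge completed bundle jobs older than this many days -->
  <value>1</value>
</property>
<property>
  <name>oozie.service.PurgeService.purge.interval</name>
  <!-- how often the purge service runs, in seconds -->
  <value>3600</value>
</property>
```

Restart Oozie after the change so the PurgeService picks up the new thresholds.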

Re: [FALCON] : force delete feed

Explorer

@yzheng: here are all the definitions.

$ falcon entity -type feed -name autoclean-next -definition
log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.authentication.client.KerberosAuthenticator).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<feed name="autoclean-next" description="HDFS directory autocleaning" xmlns="uri:falcon:feed:0.1">
    <frequency>minutes(30)</frequency>
    <timezone>UTC</timezone>
    <clusters>
        <cluster name="next-rec-cluster">
            <validity start="2016-05-27T12:00Z" end="2016-05-27T23:00Z"/>
            <retention limit="hours(1)" action="delete"/>
        </cluster>
    </clusters>
    <locations>
        <location type="data" path="/tmp/falcon/next-vers-current/${YEAR}/${MONTH}/${DAY}/${HOUR}/${MINUTE}"/>
    </locations>
    <ACL owner="falcon" group="hadoop" permission="0x644"/>
    <schema location="/none" provider="none"/>
    <properties>
        <property name="queueName" value="oozie-launcher"/>
    </properties>
</feed>

$ falcon entity -type cluster -name next-rec-cluster -definition
log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.authentication.client.KerberosAuthenticator).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cluster name="next-rec-cluster" description="undefined" colo="nextRecColo" xmlns="uri:falcon:cluster:0.1">
    <interfaces>
        <interface type="readonly" endpoint="hftp://clusterA002:50070" version="2.2.0"/>
        <interface type="write" endpoint="hdfs://clusterA002:8020" version="2.2.0"/>
        <interface type="execute" endpoint="clusterA002:8050" version="2.2.0"/>
        <interface type="workflow" endpoint="http://clusterA003:11000/oozie/" version="4.0.0"/>
        <interface type="messaging" endpoint="tcp://clusterA003:61616?daemon=true" version="5.1.6"/>
    </interfaces>
    <locations>
        <location name="staging" path="/apps/falcon/next-rec-cluster/staging"/>
        <location name="temp" path="/apps/falcon/tmp"/>
        <location name="working" path="/apps/falcon/next-rec-cluster/working"/>
    </locations>
    <ACL owner="falcon" group="hadoop" permission="0755"/>
    <properties>
        <property name="dfs.namenode.kerberos.principal" value="nn/_HOST@FTI.NET"/>
        <property name="hive.metastore.kerberos.principal" value="hive/_HOST@FTI.NET"/>
        <property name="queueName" value="oozie-launcher"/>
        <property name="hive.metastore.sasl.enabled" value="true"/>
    </properties>
</cluster>

$ falcon entity -type feed -name autoclean-next -delete
log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.authentication.client.KerberosAuthenticator).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
ERROR: Bad Request;default/org.apache.falcon.FalconWebException::org.apache.falcon.FalconException: Wrong FS: hdfs://clusterA001:8020/apps/falcon/next-rec-cluster/staging/falcon/workflows/feed/autoclean-next/086f6f548ea35db4916ce9f97a699edd_1464358369774, expected: hdfs://clusterA002:8020