Created 06-27-2016 09:04 AM
Hi all,
What's wrong with these feed and cluster entities?
The Falcon server runs on clusterA003.
The feed replicates data from clusterB001 (source) to clusterA001 (target).
<cluster xmlns='uri:falcon:cluster:0.1' name='nextCur001' description='cluster Next sur Master001' colo='nextCur'>
  <interfaces>
    <interface type='readonly' endpoint='hftp://clusterA001:50070' version='2.2.0'/>
    <interface type='write' endpoint='hdfs://clusterA001:8020' version='2.2.0'/>
    <interface type='execute' endpoint='clusterA001:8050' version='2.2.0'/>
    <interface type='workflow' endpoint='http://clusterB003:11000/oozie/' version='4.0.0'/>
    <interface type='messaging' endpoint='tcp://clusterA003:61616?daemon=true' version='5.1.6'/>
  </interfaces>
  <locations>
    <location name='staging' path='/apps/falcon/bigdata-next-cluster/staging'/>
    <location name='temp' path='/apps/falcon/tmp'/>
    <location name='working' path='/apps/falcon/bigdata-next-cluster/working'/>
  </locations>
  <ACL owner='falcon' group='hadoop' permission='0755'/>
  <properties>
    <property name='dfs.namenode.kerberos.principal' value='nn/_HOST@FTI.NET'/>
    <property name='hive.metastore.kerberos.principal' value='hive/_HOST@FTI.NET'/>
    <property name="hive.metastore.uris" value="thrift://clusterA003:9083"/>
    <property name='queueName' value='oozie-launcher'/>
    <property name="hive.metastore.sasl.enabled" value="true"/>
  </properties>
</cluster>
<cluster xmlns='uri:falcon:cluster:0.1' name='curNext001' description='undefined' colo='nextCur'>
  <interfaces>
    <interface type='readonly' endpoint='hftp://clusterB001:50070' version='2.2.0'/>
    <interface type='write' endpoint='hdfs://clusterB001:8020' version='2.2.0'/>
    <interface type='execute' endpoint='clusterB001:8050' version='2.2.0'/>
    <interface type='workflow' endpoint='http://clusterB003:11000/oozie/' version='4.0.0'/>
    <interface type='messaging' endpoint='tcp://clusterA003:61616?daemon=true' version='5.1.6'/>
  </interfaces>
  <locations>
    <location name='staging' path='/apps/falcon/bigdata-current-cluster/staging'/>
    <location name='temp' path='/apps/falcon/tmp'/>
    <location name='working' path='/apps/falcon/bigdata-current-cluster/working'/>
  </locations>
  <ACL owner='falcon' group='hadoop' permission='0755'/>
  <properties>
    <property name='dfs.namenode.kerberos.principal' value='nn/_HOST@FTI.NET'/>
    <property name='hive.metastore.kerberos.principal' value='hive/_HOST@FTI.NET'/>
    <property name="hive.metastore.uris" value="thrift://clusterB003:9083"/>
    <property name='queueName' value='oozie-launcher'/>
    <property name="hive.metastore.sasl.enabled" value="true"/>
  </properties>
</cluster>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <feed name="curVersNext001" description="next-vers-current master001" xmlns="uri:falcon:feed:0.1"> <frequency>hours(2)</frequency> <timezone>UTC</timezone> <late-arrival cut-off="hours(1)"/> <clusters> <cluster name="curNext001" type="source"> <validity start="2016-06-01T12:00Z" end='2016-06-30T11:00Z'/> <retention limit="days(6)" action="delete"/> <locations> <location type="data" path="/tmp/falcon/next-vers-current/${YEAR}-${MONTH}-${DAY}-${HOUR}"/> </locations> </cluster> <cluster name="nextCur001" type="target"> <validity start="2016-06-01T13:00Z" end="2016-06-30T11:00Z"/> <retention limit="days(6)" action="delete"/> <locations> <location type="data" path="/tmp/falcon/next-vers-current/${YEAR}-${MONTH}-${DAY}-${HOUR}"/> </locations> </cluster> </clusters> <locations> <location type="data" path="/tmp/falcon/next-vers-current/${YEAR}-${MONTH}-${DAY}-${HOUR}"/> <location type="stats" path="/none"/> <location type="meta" path="/none"/> </locations> <ACL owner="falcon" group="hadoop" permission="0755"/> <schema location="/none" provider="none"/> <properties><property name="queueName" value="oozie-launcher"/></properties> </feed>
[falcon@clusterA003 CURNEXT]$ falcon entity -type feed -submit -file next-master001-vers-current-master001.xml
log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.authentication.client.KerberosAuthenticator).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
falcon/default/Submit successful (feed) curVersNext001

[falcon@clusterA003 CURNEXT]$ falcon entity -type feed -schedule -name curVersNext001
log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.authentication.client.KerberosAuthenticator).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
default/curVersNext001(feed) scheduled successfully

[falcon@clusterA003 CURNEXT]$ falcon entity -type feed -name curVersNext001 -status
log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.authentication.client.KerberosAuthenticator).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
ERROR: Bad Request;default/org.apache.falcon.FalconWebException::org.apache.falcon.FalconException: Wrong FS: hdfs://clusterB001:8020/apps/falcon/bigdata-current-cluster/staging/falcon/workflows/feed/curVersNext001/7e307c2292e9b897d6a51f68ed17ac51_1467016396742, expected: hdfs://clusterA001:8020
Falcon error:
2016-06-27 10:55:48,797 INFO - [452378800@qtp-794075965-11 - 1e55add6-9a8f-42ca-9ad0-3885f55a5fe0:] ~ HttpServletRequest RemoteUser is null (Servlets:47)
2016-06-27 10:55:48,798 INFO - [452378800@qtp-794075965-11 - 1e55add6-9a8f-42ca-9ad0-3885f55a5fe0:] ~ HttpServletRequest user.name param value is falcon (Servlets:53)
2016-06-27 10:55:48,798 DEBUG - [452378800@qtp-794075965-11 - 1e55add6-9a8f-42ca-9ad0-3885f55a5fe0:] ~ Audit: falcon/10.98.138.87 performed request http://clusterA003:15000/api/options?user.name=falcon (10.98.138.87) at time 2016-06-27T08:55Z (FalconAuditFilter:86)
2016-06-27 10:55:49,045 INFO - [452378800@qtp-794075965-11 - c89efc19-496b-4281-a9d3-8f2632913218:] ~ HttpServletRequest RemoteUser is null (Servlets:47)
2016-06-27 10:55:49,045 INFO - [452378800@qtp-794075965-11 - c89efc19-496b-4281-a9d3-8f2632913218:] ~ HttpServletRequest user.name param value is falcon (Servlets:53)
2016-06-27 10:55:49,045 DEBUG - [452378800@qtp-794075965-11 - c89efc19-496b-4281-a9d3-8f2632913218:] ~ Audit: falcon/10.98.138.87 performed request http://clusterA003:15000/api/options?user.name=falcon (10.98.138.87) at time 2016-06-27T08:55Z (FalconAuditFilter:86)
2016-06-27 10:55:49,818 INFO - [452378800@qtp-794075965-11 - ea0ed7de-a518-4b54-9db2-51e33fde0176:] ~ HttpServletRequest RemoteUser is falcon (Servlets:47)
2016-06-27 10:55:49,818 INFO - [452378800@qtp-794075965-11 - ea0ed7de-a518-4b54-9db2-51e33fde0176:falcon:DELETE//entities/delete/feed/curVersNext001] ~ Logging in falcon (CurrentUser:65)
2016-06-27 10:55:49,818 INFO - [452378800@qtp-794075965-11 - ea0ed7de-a518-4b54-9db2-51e33fde0176:falcon:DELETE//entities/delete/feed/curVersNext001] ~ Request from authenticated user: falcon, URL=/api/entities/delete/feed/curVersNext001, doAs user: null (FalconAuthenticationFilter:185)
2016-06-27 10:55:49,818 INFO - [452378800@qtp-794075965-11 - ea0ed7de-a518-4b54-9db2-51e33fde0176:falcon:DELETE//entities/delete/feed/curVersNext001] ~ Authorizing user=falcon against request=RequestParts{resource='entities', action='delete', entityName='curVersNext001', entityType='feed'} (FalconAuthorizationFilter:78)
2016-06-27 10:55:49,819 INFO - [452378800@qtp-794075965-11 - ea0ed7de-a518-4b54-9db2-51e33fde0176:falcon:DELETE//entities/delete/feed/curVersNext001] ~ Authenticated user falcon is proxying entity owner falcon/hadoop (CurrentUser:118)
2016-06-27 10:55:49,819 INFO - [452378800@qtp-794075965-11 - ea0ed7de-a518-4b54-9db2-51e33fde0176:falcon:DELETE//entities/delete/feed/curVersNext001] ~ Authenticated user falcon is proxying entity owner falcon/hadoop (AUDIT:120)
2016-06-27 10:55:49,819 INFO - [452378800@qtp-794075965-11 - ea0ed7de-a518-4b54-9db2-51e33fde0176:falcon:DELETE//entities/delete/feed/curVersNext001] ~ Authenticated user falcon is proxying entity owner falcon/hadoop (CurrentUser:118)
2016-06-27 10:55:49,819 INFO - [452378800@qtp-794075965-11 - ea0ed7de-a518-4b54-9db2-51e33fde0176:falcon:DELETE//entities/delete/feed/curVersNext001] ~ Authenticated user falcon is proxying entity owner falcon/hadoop (AUDIT:120)
2016-06-27 10:55:49,819 INFO - [452378800@qtp-794075965-11 - ea0ed7de-a518-4b54-9db2-51e33fde0176:falcon:DELETE//entities/delete/feed/curVersNext001] ~ Authorization succeeded for user=falcon, proxy=falcon (FalconAuthorizationFilter:88)
2016-06-27 10:55:49,820 DEBUG - [452378800@qtp-794075965-11 - ea0ed7de-a518-4b54-9db2-51e33fde0176:falcon:DELETE//entities/delete/feed/curVersNext001] ~ Invoking method delete on service org.apache.falcon.resource.ConfigSyncService (IPCChannel:45)
2016-06-27 10:55:49,820 INFO - [452378800@qtp-794075965-11 - ea0ed7de-a518-4b54-9db2-51e33fde0176:falcon:DELETE//entities/delete/feed/curVersNext001] ~ Creating Oozie client object for http://master003.current.rec.mapreduce.m1.p.fti.net:11000/oozie/ (OozieClientFactory:50)
2016-06-27 10:55:49,919 INFO - [452378800@qtp-794075965-11 - ea0ed7de-a518-4b54-9db2-51e33fde0176:falcon:DELETE//entities/delete/feed/curVersNext001] ~ Creating FS for the login user falcon, impersonation not required (HadoopClientFactory:191)
2016-06-27 10:55:49,919 ERROR - [452378800@qtp-794075965-11 - ea0ed7de-a518-4b54-9db2-51e33fde0176:falcon:DELETE//entities/delete/feed/curVersNext001] ~ Unable to reach workflow engine for deletion or deletion failed (AbstractEntityManager:266)
java.lang.IllegalArgumentException: Wrong FS: hdfs://clusterB001:8020/apps/falcon/bigdata-current-cluster/staging/falcon/workflows/feed/curVersNext001/7e307c2292e9b897d6a51f68ed17ac51_1467016396742, expected: hdfs://clusterA001:8020
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:646)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:194)
    at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1315)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1311)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1311)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
    at org.apache.falcon.entity.EntityUtil.isStagingPath(EntityUtil.java:639)
    at org.apache.falcon.workflow.engine.OozieWorkflowEngine.findBundles(OozieWorkflowEngine.java:294)
    at org.apache.falcon.workflow.engine.OozieWorkflowEngine.doBundleAction(OozieWorkflowEngine.java:377)
    at org.apache.falcon.workflow.engine.OozieWorkflowEngine.doBundleAction(OozieWorkflowEngine.java:371)
    at org.apache.falcon.workflow.engine.OozieWorkflowEngine.delete(OozieWorkflowEngine.java:355)
    at org.apache.falcon.resource.AbstractEntityManager.delete(AbstractEntityManager.java:253)
    at org.apache.falcon.resource.ConfigSyncService.delete(ConfigSyncService.java:62)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.falcon.resource.channel.IPCChannel.invoke(IPCChannel.java:49)
    at org.apache.falcon.resource.proxy.SchedulableEntityManagerProxy$3.doExecute(SchedulableEntityManagerProxy.java:230)
    at org.apache.falcon.resource.proxy.SchedulableEntityManagerProxy$EntityProxy.execute(SchedulableEntityManagerProxy.java:577)
    at org.apache.falcon.resource.proxy.SchedulableEntityManagerProxy$3.execute(SchedulableEntityManagerProxy.java:219)
    at org.apache.falcon.resource.proxy.SchedulableEntityManagerProxy.delete_aroundBody2(SchedulableEntityManagerProxy.java:232)
    at org.apache.falcon.resource.proxy.SchedulableEntityManagerProxy$AjcClosure3.run(SchedulableEntityManagerProxy.java:1)
    at org.aspectj.runtime.reflect.JoinPointImpl.proceed(JoinPointImpl.java:149)
    at org.apache.falcon.aspect.AbstractFalconAspect.logAroundMonitored(AbstractFalconAspect.java:51)
    at org.apache.falcon.resource.proxy.SchedulableEntityManagerProxy.delete(SchedulableEntityManagerProxy.java:206)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
    at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
    at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
    at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
    at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
    at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
    at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
    at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
    at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
    at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
    at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
    at org.apache.falcon.security.FalconAuthorizationFilter.doFilter(FalconAuthorizationFilter.java:108)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.falcon.security.FalconAuthenticationFilter$2.doFilter(FalconAuthenticationFilter.java:188)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:615)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:574)
    at org.apache.falcon.security.FalconAuthenticationFilter.doFilter(FalconAuthenticationFilter.java:197)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.falcon.security.FalconAuditFilter.doFilter(FalconAuditFilter.java:64)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.falcon.security.HostnameFilter.doFilter(HostnameFilter.java:82)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:767)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:326)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
    at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
    at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
2016-06-27 10:55:49,921 ERROR - [452378800@qtp-794075965-11 - ea0ed7de-a518-4b54-9db2-51e33fde0176:falcon:DELETE//entities/delete/feed/curVersNext001] ~ Action failed: Bad Request Error: Wrong FS: hdfs://clusterB001:8020/apps/falcon/bigdata-current-cluster/staging/falcon/workflows/feed/curVersNext001/7e307c2292e9b897d6a51f68ed17ac51_1467016396742, expected: hdfs://clusterA001:8020 (FalconWebException:83)
2016-06-27 10:55:49,922 ERROR - [452378800@qtp-794075965-11 - ea0ed7de-a518-4b54-9db2-51e33fde0176:falcon:DELETE//entities/delete/feed/curVersNext001] ~ Action failed: Bad Request Error: default/org.apache.falcon.FalconWebException::org.apache.falcon.FalconException: Wrong FS: hdfs://clusterB001:8020/apps/falcon/bigdata-current-cluster/staging/falcon/workflows/feed/curVersNext001/7e307c2292e9b897d6a51f68ed17ac51_1467016396742, expected: hdfs://clusterA001:8020 (FalconWebException:83)
2016-06-27 10:55:49,922 INFO - [452378800@qtp-794075965-11 - ea0ed7de-a518-4b54-9db2-51e33fde0176:falcon:DELETE//entities/delete/feed/curVersNext001] ~ {Action:delete, Dimensions:{colo=NULL, entityType=feed, entityName=curVersNext001}, Status: FAILED, Time-taken:102470925 ns} (METRIC:38)
2016-06-27 10:55:49,923 DEBUG - [452378800@qtp-794075965-11 - ea0ed7de-a518-4b54-9db2-51e33fde0176:] ~ Audit: falcon/10.98.138.87 performed request http://clusterA003:15000/api/entities/delete/feed/curVersNext001 (10.98.138.87) at time 2016-06-27T08:55Z (FalconAuditFilter:86)
Created 06-28-2016 08:53 AM
@mayki wogno Thanks for sharing the details. Could you please tell me which Falcon version you are using? I will try to replicate this and get back to you.
Created 06-28-2016 01:10 PM
@peeyush: the version is Falcon 0.6.1.2.3.
Created 07-01-2016 10:58 AM
@peeyush: I am also facing a similar issue on the same version.
Created 07-01-2016 11:23 AM
Have you set dfs.internal.nameservices and the other properties explained here, so that from clusterA003 you can access HDFS on the other cluster, e.g.:
hdfs dfs -ls hdfs://clusterA001:8020/
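For reference, a minimal sketch of the hdfs-site.xml properties involved, on the cluster that needs to resolve both filesystems. The nameservice IDs (clusterA, clusterB) and hostnames below are illustrative, not taken from this thread; in an HA setup you would list the per-NameNode rpc-address properties instead of a single one.

```
<!-- hdfs-site.xml sketch; all IDs and hostnames are illustrative -->
<property>
  <!-- Only the local nameservice(s) that this cluster's DataNodes report to -->
  <name>dfs.internal.nameservices</name>
  <value>clusterA</value>
</property>
<property>
  <!-- All nameservices that clients on this cluster should be able to resolve -->
  <name>dfs.nameservices</name>
  <value>clusterA,clusterB</value>
</property>
<property>
  <!-- RPC address of the remote (non-HA) nameservice -->
  <name>dfs.namenode.rpc-address.clusterB</name>
  <value>clusterB001:8020</value>
</property>
```

With this in place, `hdfs dfs -ls hdfs://clusterB/` should work from either cluster without hardcoding the NameNode host.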
Created 07-11-2016 09:36 AM
@Predrag Minovic: I was able to access the HDFS of the target cluster from the source cluster using the above command, but I still get the same error.
Created 07-01-2016 11:24 AM
@predrag: I tried setting dfs.internal.nameservices, but Ambari then failed to start HDFS.
Created 07-01-2016 11:53 AM
Yes, Ambari doesn't support such settings yet; the next version, due for release this summer, should support them. In the meantime, try starting the NameNode from the command line. Ditto for the other services (MapReduce History Server, YARN) if Ambari fails to start them.
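For anyone trying this, a sketch of starting the NameNode manually on an HDP-style install. The paths and script names assume a standard HDP 2.x layout and are illustrative, not taken from this thread; run them on the NameNode host.

```
# Illustrative commands for an HDP 2.x layout; adjust paths to your install.
# Start the NameNode daemon as the hdfs user, picking up /etc/hadoop/conf
# (which now contains the manually added dfs.internal.nameservices setting).
su - hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh \
    --config /etc/hadoop/conf start namenode"

# Verify the NameNode came up and is serving its own nameservice:
su - hdfs -c "hdfs dfsadmin -report"
```

Note that Ambari will report the service as started by other means; the point is only to get HDFS running with the extra nameservice properties until Ambari supports them natively.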