
Re: [RESOLVED] [FALCON] : failed to schedule a Feed


Only the components of clusterA have been restarted; the Falcon server is on clusterA.

Re: [RESOLVED] [FALCON] : failed to schedule a Feed


I found the issue: when scheduling the feed, the -skipDryRun option must be set.

[falcon@clusterA ~]$ falcon entity -type feed -submit -file replication-next-current.xml
log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.authentication.client.KerberosAuthenticator).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
falcon/default/Submit successful (feed) replication-next-current

[falcon@clusterA ~]$ falcon entity  -type feed -schedule -name replication-next-current
log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.authentication.client.KerberosAuthenticator).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
ERROR: Bad Request;default/org.apache.falcon.FalconWebException::org.apache.falcon.FalconException: Entity schedule failed for feed: replication-next-current

[falcon@clusterA ~]$ falcon entity  -type feed -schedule -name replication-next-current -skipDryRun
log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.authentication.client.KerberosAuthenticator).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
default/replication-next-current(feed) scheduled successfully

But I have another issue:

On the target cluster (clusterB), the workflow failed due to permission denied:

Workflow: FALCON_FEED_RETENTION_replication-next-current
Action:   0009963-160510161955685-oozie-oozi-W@eviction
Status:   START_MANUAL
Error:    JA009: Permission denied: user=falcon, access=WRITE, inode="/user/falcon/oozie-oozi/0009963-160510161955685-oozie-oozi-W/eviction--java.tmp":hdfs:hdfs:drwxr-xr-x
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
	at org.apache.hadoop.hdfs.serv
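The inode field in the JA009 message encodes owner, group, and mode: `hdfs:hdfs:drwxr-xr-x` means that workflow directory on clusterB belongs to hdfs:hdfs and is writable only by its owner, so user falcon cannot create `eviction--java.tmp` inside it. A quick check on the target cluster (a sketch, assuming clusterB's NameNode also listens on 8020) would be:

```shell
# Sketch: list the Oozie staging dir on the target cluster (clusterB)
# to confirm its ownership; per the error above we expect the failing
# workflow directory to show owner hdfs, group hdfs, mode drwxr-xr-x.
hdfs dfs -ls hdfs://clusterB:8020/user/falcon/oozie-oozi
```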

Here are the permissions on /user/falcon (clusterA):

drwxrwxr-x   - falcon hadoop          0 2016-05-24 15:02 /user/falcon/oozie-oozi

drwxrwxr-x   - falcon hadoop          0 2016-05-20 11:56 /user/falcon/oozie-oozi/0000000-160520114312397-oozie-oozi-W
drwxrwxr-x   - falcon hadoop          0 2016-05-24 15:02 /user/falcon/oozie-oozi/0000340-160520114312397-oozie-oozi-W
drwxrwxr-x   - falcon hadoop          0 2016-05-24 14:19 /user/falcon/oozie-oozi/0009847-160510161955685-oozie-oozi-W

The hdfs user can write to /user/falcon on clusterA from clusterB:

[hdfs@clusterB ~]$ hdfs dfs -touchz  hdfs://clusterA:8020/user/falcon/oozie-oozi/0009847-160510161955685-oozie-oozi-W/test
[hdfs@clusterB ~]$ hdfs dfs -ls  hdfs://clusterA:8020/user/falcon/oozie-oozi/*/*

-rw-rw-r--   3 hdfs   hadoop          0 2016-05-24 14:19 hdfs://clusterA:8020/user/falcon/oozie-oozi/0009847-160510161955685-oozie-oozi-W/test
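So the write succeeds because it runs as hdfs, but the failing retention workflow runs as falcon, and the workflow directory on clusterB is owned by hdfs:hdfs with mode drwxr-xr-x. One possible fix (a sketch only, to be run as the hdfs superuser on clusterB; the path is taken from the error above) is to hand the staging tree back to the falcon user:

```shell
# Sketch of a fix, run as the hdfs superuser on clusterB:
# give the Oozie staging tree under /user/falcon back to the falcon user,
# matching the falcon:hadoop ownership seen on clusterA
hdfs dfs -chown -R falcon:hadoop /user/falcon/oozie-oozi

# alternatively, keep the current owner but make the tree group-writable
# (only helps if the falcon user is a member of the owning group)
hdfs dfs -chmod -R g+w /user/falcon/oozie-oozi
```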