Member since: 05-10-2016
Posts: 303
Kudos Received: 35
Solutions: 0
09-22-2016
09:49 AM
I've got another question about running multiple dataflows in the same cluster. I have two projects (Project1, Project2) with two DFMs (team1 and team2). Is it possible to share the same cluster's resources (CPU/memory/disks) while each DFM has their own web UI to manage their own dataflow? Or does each cluster have only one web UI with a single dataflow?
09-22-2016
08:00 AM
Thanks for your explanations.
09-21-2016
05:15 PM
Many thanks for your answers. As I understand it, the term 'cluster' does not imply redundancy or an HA dataflow; the NiFi nodes simply share the same dataflow while each operates on its own data. With GetHTTP, each web server sends its data to one NiFi node at a time: web1 sends data to node1, web2 sends to node2, etc. If node1 goes down, web1 needs to wait until node1 comes back up.
09-21-2016
03:41 PM
The content repository has its own data, so what happens to this local data? You mentioned shared data on the node. What exactly happens? Is each node connected to a specific processor?
09-21-2016
03:26 PM
Hi all,
I've read the documentation about HDF 2.0 concerning dataflows and clustering: http://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.0.0/bk_administration/content/clustering.html

Why Cluster?
NiFi Administrators or Dataflow Managers (DFMs) may find that using one instance of NiFi on a single server is not enough to process the amount of data they have. So, one solution is to run the same dataflow on multiple NiFi servers. However, this creates a management problem, because each time DFMs want to change or update the dataflow, they must make those changes on each server and then monitor each server individually. By clustering the NiFi servers, it's possible to have that increased processing capability along with a single interface through which to make dataflow changes and monitor the dataflow. Clustering allows the DFM to make each change only once, and that change is then replicated to all the nodes of the cluster. Through the single interface, the DFM may also monitor the health and status of all the nodes.
NiFi Clustering is unique and has its own terminology. It's important to understand the following terms before setting up a cluster.
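For context, here is a minimal sketch of how a single node might be configured to join an HDF 2.0 (NiFi 1.0, zero-master) cluster via nifi.properties; the hostnames, port, and ZooKeeper quorum are illustrative, not from my setup:

# nifi.properties (per node) - illustrative values
nifi.cluster.is.node=true
nifi.cluster.node.address=nifi-node1.example.com
nifi.cluster.node.protocol.port=9998
nifi.zookeeper.connect.string=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
nifi.cluster.flow.election.max.wait.time=5 mins

Every node carries the same flow; only the data each node processes differs.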
My questions:
- "Each node in the cluster performs the same tasks on the data, but each operates on a different set of data"
- "To run the same dataflow on multiple NiFi servers"
==> What happens exactly on the NiFi nodes? Is there an example use case?
==> What happens if a node fails?
Labels:
- Apache NiFi
- Cloudera DataFlow (CDF)
06-29-2016
04:51 PM
More detail about the renew-token error from the workflow:
2016-06-29 18:46:00,127 DEBUG HadoopAccessorService:526 - SERVER[xxxx] USER[falcon] GROUP[-] TOKEN[] APP[FALCON_FEED_REPLICATION_replication-feed-hive] JOB[0000101-160629105530892-oozie-oozi-W] ACTION[0000101-160629105530892-oozie-oozi-W@table-export] Checking if filesystem hdfs is supported
2016-06-29 18:46:00,129 DEBUG HiveActionExecutor:526 - SERVER[xxxx] USER[falcon] GROUP[-] TOKEN[] APP[FALCON_FEED_REPLICATION_replication-feed-hive] JOB[0000101-160629105530892-oozie-oozi-W] ACTION[0000101-160629105530892-oozie-oozi-W@table-export] Submitting the job through Job Client for action 0000101-160629105530892-oozie-oozi-W@table-export
2016-06-29 18:46:00,131 DEBUG HiveActionExecutor:526 - SERVER[xxxx] USER[falcon] GROUP[-] TOKEN[] APP[FALCON_FEED_REPLICATION_replication-feed-hive] JOB[0000101-160629105530892-oozie-oozi-W] ACTION[0000101-160629105530892-oozie-oozi-W@table-export] ADDING TOKEN: HIVE_DELEGATION_TOKEN_
2016-06-29 18:46:01,145 WARN ActionStartXCommand:523 - SERVER[xxxx] USER[falcon] GROUP[-] TOKEN[] APP[FALCON_FEED_REPLICATION_replication-feed-hive] JOB[0000101-160629105530892-oozie-oozi-W] ACTION[0000101-160629105530892-oozie-oozi-W@table-export] Error starting action [table-export]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1467203595416_0440 to YARN : Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:bigdata-next, Ident: (HDFS_DELEGATION_TOKEN token 10947 for falcon)]
org.apache.oozie.action.ActionExecutorException: JA009: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1467203595416_0440 to YARN : Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:bigdata-next, Ident: (HDFS_DELEGATION_TOKEN token 10947 for falcon)
at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:456)
at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:440)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1139)
at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1293)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:250)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:64)
at org.apache.oozie.command.XCommand.call(XCommand.java:286)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:321)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:250)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1467203595416_0440 to YARN : Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:bigdata-next, Ident: (HDFS_DELEGATION_TOKEN token 10947 for falcon)
at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:306)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:240)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:575)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:570)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:561)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1124)
... 10 more
Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1467203595416_0440 to YARN : Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:bigdata-next, Ident: (HDFS_DELEGATION_TOKEN token 10947 for falcon)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:271)
at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:291)
at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:290)
... 25 more
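For reference, the step that fails here is YARN trying to renew the HDFS delegation token attached to the job at submit time. Below is a minimal sketch of how such a token is obtained with an explicit renewer (the class name is hypothetical; it assumes the Hadoop 2.7 client libraries and the cluster's core-site.xml/hdfs-site.xml on the classpath). If the renewer recorded in the token is not a principal the ResourceManager can act as, renewal fails with errors like the one above:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

public class FetchHdfsToken {
    public static void main(String[] args) throws IOException {
        // Picks up core-site.xml / hdfs-site.xml from the classpath, so
        // fs.defaultFS can be the HA nameservice (e.g. hdfs://bigdata-next).
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Credentials creds = new Credentials();
        // "yarn" names the renewer recorded inside the token; a renewer the
        // ResourceManager cannot act as is one common cause of
        // "Failed to renew token: Kind: HDFS_DELEGATION_TOKEN" at submit time.
        Token<?>[] tokens = fs.addDelegationTokens("yarn", creds);
        for (Token<?> t : tokens) {
            System.out.println("Got token: kind=" + t.getKind()
                    + " service=" + t.getService());
        }
    }
}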
06-29-2016
02:26 PM
The falcon user can submit a YARN job:
[falcon@master003 hadoop-mapreduce-historyserver]$ yarn jar hadoop-mapreduce-examples-2.7.1.2.3.4.0-3485.jar pi 16 1000
Number of Maps = 16
Samples per Map = 1000
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Wrote input for Map #10
Wrote input for Map #11
Wrote input for Map #12
Wrote input for Map #13
Wrote input for Map #14
Wrote input for Map #15
Starting Job
16/06/29 16:27:07 INFO impl.TimelineClientImpl: Timeline service address: http://xxxx:8188/ws/v1/timeline/
16/06/29 16:27:07 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 10914 for falcon on ha-hdfs:bigdata-next
16/06/29 16:27:07 INFO security.TokenCache: Got dt for hdfs://bigdata-next; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:bigdata-next, Ident: (HDFS_DELEGATION_TOKEN token 10914 for falcon)
16/06/29 16:27:07 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
16/06/29 16:27:08 INFO input.FileInputFormat: Total input paths to process : 16
16/06/29 16:27:08 INFO mapreduce.JobSubmitter: number of splits:16
16/06/29 16:27:08 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1467207300026_0007
16/06/29 16:27:08 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:bigdata-next, Ident: (HDFS_DELEGATION_TOKEN token 10914 for falcon)
16/06/29 16:27:08 INFO impl.YarnClientImpl: Submitted application application_1467207300026_0007
16/06/29 16:27:08 INFO mapreduce.Job: The url to track the job: http://xxxxx:8088/proxy/application_1467207300026_0007/
16/06/29 16:27:08 INFO mapreduce.Job: Running job: job_1467207300026_0007
16/06/29 16:27:17 INFO mapreduce.Job: Job job_1467207300026_0007 running in uber mode : false
06-29-2016
02:08 PM
It probably runs into this bug: https://issues.apache.org/jira/browse/YARN-3021. My YARN version is 2.7.1.2.3.
06-29-2016
01:46 PM
@Rahul Pathak: I've added webhcat.proxyuser.oozie.hosts=* to the Hive service, but now I have a new error: JA009: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1467203595416_0099 to YARN : Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:bigdata-next, Ident: (HDFS_DELEGATION_TOKEN token 10848 for falcon)
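For comparison, impersonation for Hadoop services is normally granted through proxy-user settings in core-site.xml. A minimal sketch for oozie and falcon follows; the wildcard values are only illustrative and should be restricted to specific hosts and groups in production:

<!-- core-site.xml: allow oozie and falcon to impersonate other users -->
<property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.falcon.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.falcon.groups</name>
  <value>*</value>
</property>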