Member since: 05-10-2016
Posts: 303
Kudos Received: 35
Solutions: 0
11-16-2016 03:28 PM
I have a related question: how do I deploy NiFi components to all the HDF cluster nodes? From there, my question is the same as the one asked above. I used the approach from Ali's HDF ambari-bootstrap blog to automatically deploy a blueprint-based HDF 2.0.1 cluster, but it places only one NiFi instance on a single node within the HDF cluster. I am trying to figure out how to deploy NiFi to all the HDF cluster nodes.
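For reference, a minimal Python sketch of the blueprint change I have in mind; the file names are placeholders, and the NIFI_MASTER component name should be verified against your HDF management pack version:

import json

# Add the NiFi component to every host group of an existing Ambari blueprint
# so that NiFi is deployed on all HDF nodes. "blueprint.json" is a placeholder
# for the blueprint the bootstrap scripts generate.
with open("blueprint.json") as f:
    blueprint = json.load(f)

for host_group in blueprint["host_groups"]:
    components = host_group.setdefault("components", [])
    if not any(c.get("name") == "NIFI_MASTER" for c in components):
        components.append({"name": "NIFI_MASTER"})

with open("blueprint-nifi-all-nodes.json", "w") as f:
    json.dump(blueprint, f, indent=2)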
10-05-2016 11:15 AM
@mayki wogno Unfortunately, you can't get job-related information from that entry; however, you can guess the job from the information above:
Username: zazi
Service principal: oozie/master003@fma.com
Host from which the file was accessed: 10.xx.224.9
Please accept the answer if this was helpful.
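For illustration, a hedged Python sketch of pulling those fields out of an hdfs-audit.log entry; the sample line is illustrative only, shaped to match the fields above:

import re

# Illustrative hdfs-audit.log entry (not a real log line from this cluster).
line = ("2016-10-05 11:15:00,000 INFO FSNamesystem.audit: allowed=true "
        "ugi=zazi (auth:PROXY) via oozie/master003@fma.com (auth:KERBEROS) "
        "ip=/10.xx.224.9 cmd=open src=/tmp/data.txt dst=null perm=null")

# Extract the username, the service principal it proxied through, the client
# host, and the HDFS operation that was performed.
m = re.search(r"ugi=(\S+).*?via (\S+).*?ip=/(\S+)\s+cmd=(\S+)", line)
if m:
    user, principal, host, cmd = m.groups()
    print(user, principal, host, cmd)  # zazi oozie/master003@fma.com 10.xx.224.9 open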
09-29-2016 05:37 PM
1 Kudo
For Python: I'd recommend installing Anaconda Python 2.7 on all nodes of your cluster. If your developer would like to add Python files/scripts manually, they can pass them with the --py-files argument of spark-submit. Alternatively, you can reference Python scripts/files from within your PySpark code using addPyFile, such as sc.addPyFile("mymodule.py"). As an FYI, PySpark will run fine with Python 2.6 installed; you just won't be able to use the more recent packages.

For R: As @lgeorge mentioned, you will want to install R (and all required packages) on each node of your cluster. Also make sure your JAVA_HOME environment variable is set; then you should be able to launch SparkR.
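To make the addPyFile approach concrete, here is a minimal sketch; mymodule.py and its function my_func are hypothetical placeholders:

from pyspark import SparkContext

sc = SparkContext(appName="addPyFileExample")

# Ship a local Python file to the driver and every executor. "mymodule.py"
# is a hypothetical module assumed to define a function my_func(x).
sc.addPyFile("mymodule.py")

import mymodule  # importable once addPyFile has registered the file

# The shipped module can now be used inside transformations on the executors.
print(sc.parallelize([1, 2, 3]).map(mymodule.my_func).collect())

The spark-submit equivalent would be along the lines of spark-submit --py-files mymodule.py driver.py, where driver.py is a hypothetical driver script.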
09-29-2016 08:50 AM
There is absolutely nothing wrong with having a node act as both cluster coordinator and primary node. These are two different roles, and both can be held by the same node.
09-22-2016 11:20 AM
@mayki wogno When asking new questions unrelated to the current thread, please start a new Community Connection question. This benefits the community at large, since others may be searching for answers to the same question.
06-29-2016 04:51 PM
More detail about the renew-token error from the workflow:
2016-06-29 18:46:00,127 DEBUG HadoopAccessorService:526 - SERVER[xxxx] USER[falcon] GROUP[-] TOKEN[] APP[FALCON_FEED_REPLICATION_replication-feed-hive] JOB[0000101-160629105530892-oozie-oozi-W] ACTION[0000101-160629105530892-oozie-oozi-W@table-export] Checking if filesystem hdfs is supported
2016-06-29 18:46:00,129 DEBUG HiveActionExecutor:526 - SERVER[xxxx] USER[falcon] GROUP[-] TOKEN[] APP[FALCON_FEED_REPLICATION_replication-feed-hive] JOB[0000101-160629105530892-oozie-oozi-W] ACTION[0000101-160629105530892-oozie-oozi-W@table-export] Submitting the job through Job Client for action 0000101-160629105530892-oozie-oozi-W@table-export
2016-06-29 18:46:00,131 DEBUG HiveActionExecutor:526 - SERVER[xxxx] USER[falcon] GROUP[-] TOKEN[] APP[FALCON_FEED_REPLICATION_replication-feed-hive] JOB[0000101-160629105530892-oozie-oozi-W] ACTION[0000101-160629105530892-oozie-oozi-W@table-export] ADDING TOKEN: HIVE_DELEGATION_TOKEN_
2016-06-29 18:46:01,145 WARN ActionStartXCommand:523 - SERVER[xxxx] USER[falcon] GROUP[-] TOKEN[] APP[FALCON_FEED_REPLICATION_replication-feed-hive] JOB[0000101-160629105530892-oozie-oozi-W] ACTION[0000101-160629105530892-oozie-oozi-W@table-export] Error starting action [table-export]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1467203595416_0440 to YARN : Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:bigdata-next, Ident: (HDFS_DELEGATION_TOKEN token 10947 for falcon)]
org.apache.oozie.action.ActionExecutorException: JA009: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1467203595416_0440 to YARN : Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:bigdata-next, Ident: (HDFS_DELEGATION_TOKEN token 10947 for falcon)
at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:456)
at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:440)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1139)
at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1293)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:250)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:64)
at org.apache.oozie.command.XCommand.call(XCommand.java:286)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:321)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:250)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1467203595416_0440 to YARN : Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:bigdata-next, Ident: (HDFS_DELEGATION_TOKEN token 10947 for falcon)
at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:306)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:240)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:575)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:570)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:561)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1124)
... 10 more
Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1467203595416_0440 to YARN : Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:bigdata-next, Ident: (HDFS_DELEGATION_TOKEN token 10947 for falcon)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:271)
at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:291)
at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:290)
... 25 more
05-30-2016 02:18 PM
Great, thanks!
05-27-2016 12:37 PM
Hi again. There is something weird in the FALCON_FEED_RETENTION workflow; the feedDataPath is wrong:

feedDataPath: DATA=hdfs://clusterA:8020/tmp/falcon/next-vers-current/?{YEAR}/?{MONTH}/?{DAY}/?{HOUR}

For FALCON_FEED_REPLICATION, the paths are correct:

distcpSourcePaths: hftp://clusterA:50070/tmp/falcon/next-vers-current/2016/05/27/12
distcpTargetPaths: hdfs://clusterB/tmp/falcon/next-vers-current/2016/05/27/12/

What's wrong in my feed-replication.xml?
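If it helps to cross-check, a hedged Python sketch of listing the location paths defined in the feed entity; the local file name is a placeholder and the namespace assumes the standard Falcon feed schema (uri:falcon:feed:0.1):

import xml.etree.ElementTree as ET

# Placeholder file name: a local copy of the feed entity definition.
tree = ET.parse("feed-replication.xml")
ns = {"f": "uri:falcon:feed:0.1"}  # standard Falcon feed namespace

# Print each declared location (type and path) to compare against the
# feedDataPath the workflows actually received.
for loc in tree.getroot().findall(".//f:locations/f:location", ns):
    print(loc.get("type"), loc.get("path"))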
05-29-2016 01:43 PM
Hi @mayki wogno, I see that you marked your question as "Resolved". If my answer below helped you, can you please accept it? If you resolved your issue by other means, please publish them and we'll accept your answer. Instead of questions being marked as "resolved", we consider them resolved once an answer is accepted. Thanks!