Member since: 04-11-2016
Posts: 471
Kudos Received: 325
Solutions: 118
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 2070 | 03-09-2018 05:31 PM |
| | 2631 | 03-07-2018 09:45 AM |
| | 2529 | 03-07-2018 09:31 AM |
| | 4388 | 03-03-2018 01:37 PM |
| | 2468 | 10-17-2017 02:15 PM |
10-23-2016
10:39 AM
@Pierre Villard I got it working with -D mapred.job.name=mySqoopTest
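For context, -D sets a Hadoop generic option, and generic options must come directly after the Sqoop tool name, before the tool-specific arguments. Here is a minimal sketch of a full command under that constraint; the JDBC URL, credentials, table, and target directory are all hypothetical placeholders:

```bash
# Name the underlying MapReduce job via the Hadoop generic option -D.
# Generic options must precede Sqoop-specific options such as --connect.
# All connection details below are placeholders.
sqoop import \
  -D mapred.job.name=mySqoopTest \
  --connect jdbc:mysql://db-host:3306/mydb \
  --username myuser -P \
  --table orders \
  --target-dir /user/me/orders
```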
08-16-2016
08:25 AM
@mclark Thanks for the response, much appreciated. Do I need to configure something on the back end as well, i.e. in nifi.properties or any other file on the cluster or node? I am facing the attached error.
08-09-2016
07:35 AM
Hi Pierre, We would need to look at the code. Can you do a persist just before stage 63, and before stage 65 check the Spark UI Storage and Executors tabs for data skew? If there is data skew, you will need to add a salt to your key (a sketch of the salting idea is below). You could also look at creating a DataFrame from the RDD with rdd.toDF() and applying a UDF on it; DataFrames manage memory more efficiently. Best, Amit
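To make the salting suggestion concrete, here is a minimal Spark (Scala) sketch; the skewed RDD, the number of salts, and the sum aggregation are hypothetical stand-ins for the real job:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import scala.util.Random

object SaltingSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("salting-sketch").setMaster("local[*]"))

    // Hypothetical skewed pair RDD: one "hot" key dominates the data.
    val skewed = sc.parallelize(Seq.fill(1000)(("hot", 1)) ++ Seq(("cold", 1)))

    // Step 1: append a random salt so the hot key spreads over N buckets.
    val numSalts = 8
    val salted = skewed.map { case (k, v) =>
      (s"$k#${Random.nextInt(numSalts)}", v)
    }

    // Step 2: aggregate per salted key (the heavy shuffle, now balanced).
    val partial = salted.reduceByKey(_ + _)

    // Step 3: strip the salt and combine the partial results (cheap shuffle,
    // at most numSalts records per original key).
    val result = partial
      .map { case (saltedKey, v) => (saltedKey.takeWhile(_ != '#'), v) }
      .reduceByKey(_ + _)

    result.collect().foreach(println)
    sc.stop()
  }
}
```

The point of the two-phase aggregation is that the expensive first shuffle is spread across numSalts buckets per key, so no single task receives the entire hot key.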
08-29-2016
03:37 PM
@Alvin Ji,
This is correct with NiFi 0.x. Unless you implement your own MapCacheServer service and separate it from NiFi, I am not sure there is a solution. With NiFi 1.x (the first version is to be released in the coming days; the RC vote is in progress), this is solved with a zero-master clustering paradigm.
07-25-2016
07:50 AM
2 Kudos
Hi @Obaid Salikeen, Another option is to use ExecuteProcess or ExecuteStreamCommand to execute a custom script that will SCP the data to your remote Linux instance (a sketch of such a script is below). Otherwise, there is a JIRA for an SCP processor: https://issues.apache.org/jira/browse/NIFI-539 Hope this helps.
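A minimal sketch of such a script as it might be wired to ExecuteStreamCommand, which pipes the FlowFile content to the script's stdin; the user, host, and destination path are hypothetical placeholders:

```bash
#!/bin/sh
# Hypothetical wrapper for NiFi's ExecuteStreamCommand: the FlowFile
# content arrives on stdin; stage it in a temp file and push it via scp.
set -e
TMPFILE=$(mktemp)
cat > "$TMPFILE"
scp -q "$TMPFILE" user@remote-host:/data/incoming/
rm -f "$TMPFILE"
```

This assumes passwordless (key-based) SSH is already set up between the NiFi node and the remote host, since the processor cannot answer an interactive password prompt.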
06-17-2016
03:18 PM
Answering my own question: I was just not using the concat function correctly. The syntax is: ${fs:exists(concat(wf:actionData('hdfs-lookup')['outputPath'], '/2'))}
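For reference, this kind of expression typically sits in a workflow decision node; a sketch follows, with node and transition names as hypothetical placeholders:

```xml
<!-- Hypothetical decision node: branch on whether the directory exists. -->
<decision name="check-output">
  <switch>
    <case to="process-data">
      ${fs:exists(concat(wf:actionData('hdfs-lookup')['outputPath'], '/2'))}
    </case>
    <default to="end"/>
  </switch>
</decision>
```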
06-14-2016
08:57 AM
Thanks @Pierre Villard, this is what I was expecting. Thanks once again!