Member since: 09-25-2015
Posts: 230
Kudos Received: 276
Solutions: 39
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 24820 | 07-05-2016 01:19 PM |
| | 8235 | 04-01-2016 02:16 PM |
| | 2062 | 02-17-2016 11:54 AM |
| | 5541 | 02-17-2016 11:50 AM |
| | 12477 | 02-16-2016 02:08 AM |
08-08-2016
03:14 PM
Hi @Berk Ardıç, You can achieve this type of functionality by modifying a couple of additional pieces of the flow. First, set GetSFTP to search recursively from your mounted directory. This will traverse the entire tree rooted at your target location, so it will pick up files from the Store1 and Store2 directories. You can then limit the pickup by leveraging the regex filter properties for the path and the file. That handles the pickup side of the flow. On the delivery side, you can leverage the path attribute from the flowfile to construct a destination in HDFS that mirrors the structure of the pickup directory: use NiFi Expression Language in the destination property of PutHDFS to build the appropriate path. Hope this helps.
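A minimal sketch of the two processor configurations described above; the paths and regexes are illustrative assumptions, not values from your flow:

```
# GetSFTP - pick up files from all store subdirectories
# (hypothetical paths/regexes; adjust to your environment)
Remote Path          = /mnt/stores
Search Recursively   = true
Path Filter Regex    = Store[0-9]+     # limit which subdirectories are traversed
File Filter Regex    = .*\.csv         # limit which files are picked up

# PutHDFS - mirror the pickup structure using the flowfile's "path"
# attribute via NiFi Expression Language
Directory            = /data/landing/${path}
```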
09-21-2017
06:00 PM
1 Kudo
@Neeraj Sabharwal I am still facing the same issue; can you please help? The Ambari version used here is 2.2.2.0 with PostgreSQL 9.2.18 on RHEL 7. This happens to us most of the time. I have made sure that the agents on all hosts are running and pointing to the Ambari server, that iptables and SELinux are disabled, and that /etc/hosts is updated correctly. I am registering the Ambari blueprint via the API through Ansible, and then submitting the cluster creation template, also via the API through Ansible, in an automated way. What could be the reason behind this?
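For reference, the two API calls my Ansible playbook makes look roughly like this (host names, credentials, and file names are placeholders):

```
# Register the blueprint with the Ambari server
curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
  -d @blueprint.json http://ambari-server:8080/api/v1/blueprints/my-blueprint

# Submit the cluster creation template that references the blueprint
curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
  -d @cluster-template.json http://ambari-server:8080/api/v1/clusters/my-cluster
```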
06-29-2017
02:01 PM
The answer was given by Guilherme Braccialli. In /tmp/<user>/hive.log you would see a message like this:
2017-06-28 10:04:43,717 INFO [main]: parse.BaseSemanticAnalyzer (CalcitePlanner.java:canCBOHandleAst(397)) - Not invoking CBO because the statement has too few joins
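If you want to check or toggle CBO in your own session, a minimal sketch (standard Hive session commands, not part of the original answer):

```sql
-- Print the current value of the CBO switch
set hive.cbo.enable;

-- Enable cost-based optimization for the session
set hive.cbo.enable=true;
```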
08-23-2016
07:35 AM
@kishore sanchina In this example I started the Spark Thrift Server on port 10010 and connected with beeline to the same port. You can use the default port 10015 instead.
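A minimal sketch of the commands involved, assuming a standard Spark installation (paths and host are illustrative):

```
# Start the Spark Thrift Server on a custom port (10010 here; 10015 is the default)
$SPARK_HOME/sbin/start-thriftserver.sh --hiveconf hive.server2.thrift.port=10010

# Connect with beeline to the same port
beeline -u "jdbc:hive2://localhost:10010"
```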
01-10-2017
09:41 PM
@Shihab That worked for me, thanks so much. I also had to delete /system/diskbalancer.id to run it successfully, but for some reason I have to do this before every rebalance I run.
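For anyone hitting the same thing, the workaround looks roughly like this (a sketch, assuming the ID file lives in HDFS at the path above and that you run the standard balancer; the threshold value is illustrative):

```
# Remove the leftover ID file from the previous run
hdfs dfs -rm /system/diskbalancer.id

# Re-run the rebalance
hdfs balancer -threshold 10
```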
04-11-2016
07:19 PM
@Guilherme Braccialli / Neeraj Sabharwal What is the best way to back up the Ranger 0.5.0 database (for the purpose of disaster recovery)? I see that a REST API is available. Can you please share your experience if you have implemented this in your organization? Thank you.
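For context, the two approaches I am considering look roughly like this (host, credentials, and database name are placeholders):

```
# Option A: dump the underlying Ranger database (assuming MySQL; adjust for your DBMS)
mysqldump -u rangeradmin -p ranger > ranger-db-backup.sql

# Option B: export all policies as JSON via the Ranger 0.5 public REST API
curl -u admin:admin "http://ranger-host:6080/service/public/api/policy" \
  -o ranger-policies-backup.json
```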
12-11-2015
02:43 AM
@Andrea D'Orio Thanks for sharing; we indeed need information like this. Keep sharing 🙂
12-31-2015
03:03 AM
You can install the latest HDP 2.3.4 using Ambari 2.2.0.0: it comes with Spark 1.5.2, and it's integrated with ATS.
06-24-2019
05:26 PM
As a complement to Matt Foley's answer, concerning MLOptimizer: I think they meant either generic optimization algorithms such as gradient descent, available in the mllib.optimization package (see https://spark.apache.org/docs/2.3.0/mllib-optimization.html), or ML algorithm hyper-parameter optimization. Hyper-parameter tuning using, e.g., cross-validation and grid search is available in the Spark ML tuning package (see https://spark.apache.org/docs/2.2.0/ml-tuning.html). However, if they meant automatic hyper-parameter optimization using, for example, Bayesian optimization, then I would like to know more about it...
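For the grid-search case, a minimal PySpark sketch of what the ML tuning package provides (the estimator and parameter values are illustrative, not from the linked docs):

```python
# Hyper-parameter tuning with grid search + cross-validation in Spark ML
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

lr = LogisticRegression(featuresCol="features", labelCol="label")

# Candidate hyper-parameter combinations to evaluate
grid = (ParamGridBuilder()
        .addGrid(lr.regParam, [0.01, 0.1, 1.0])
        .addGrid(lr.elasticNetParam, [0.0, 0.5, 1.0])
        .build())

# 3-fold cross-validation over the grid, scored by AUC
cv = CrossValidator(estimator=lr,
                    estimatorParamMaps=grid,
                    evaluator=BinaryClassificationEvaluator(),
                    numFolds=3)

# model = cv.fit(training_df)  # training_df: an assumed DataFrame with
#                              # "features" and "label" columns
```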
11-12-2015
12:11 AM
Got it syncing to the hub! So if I understand this correctly: if I want to sync these notebooks to another Zeppelin, I just put the same "hub_api_token" into that Zeppelin and it will sync to that instance? Or is that a feature that's not developed yet?
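In other words, would configuring the second instance roughly like this be enough? (This is my guess based on the ZeppelinHub storage setup; variable names may differ by version.)

```
# In conf/zeppelin-env.sh of the second Zeppelin instance
export ZEPPELINHUB_API_TOKEN="<same hub_api_token as the first instance>"
export ZEPPELIN_NOTEBOOK_STORAGE="org.apache.zeppelin.notebook.repo.GitNotebookRepo,org.apache.zeppelin.notebook.repo.zeppelinhub.ZeppelinHubRepo"
```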