Member since: 07-30-2019
Posts: 181
Kudos Received: 205
Solutions: 51
My Accepted Solutions
Title | Views | Posted
---|---|---
| 4961 | 10-19-2017 09:11 PM
| 1591 | 12-27-2016 06:46 PM
| 1238 | 09-01-2016 08:08 PM
| 1180 | 08-29-2016 04:40 PM
| 3013 | 08-24-2016 02:26 PM
08-11-2016
01:01 PM
@jovan karamacoski If you want to manually force under-replicated blocks to re-replicate, you can use the method outlined in this article.
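As a rough sketch of the same idea, the replication factor on a path can also be raised through the Hadoop FileSystem API, which prompts the NameNode to schedule additional replicas; the path and target factor below are placeholders, and the linked article remains the authoritative walkthrough.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: bump the replication factor on a path so the NameNode schedules
// additional replicas for its blocks. The path and factor are examples only.
public class ForceReplication {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // picks up core-site.xml/hdfs-site.xml from the classpath
        try (FileSystem fs = FileSystem.get(conf)) {
            Path target = new Path("/data/example"); // hypothetical path
            short replication = 3;                   // desired replication factor
            boolean scheduled = fs.setReplication(target, replication);
            System.out.println("Replication change scheduled: " + scheduled);
        }
    }
}
```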
08-10-2016
09:32 PM
1 Kudo
@Saikrishna Tarapareddy The attribute to correlate on needs to be present in the flowfile for the MergeContent processor to use it. If you are using FetchFile to get the file, you can add an attribute in that processor based on the filename or a substring of the filename. Then it will be present in the flowfile for subsequent processors to use.
07-19-2016
06:28 PM
1 Kudo
@Alvin Jin You should be able to connect to HANA via a JDBC connection using a DBCPConnectionPool controller service, then use the ExecuteSQL processor to submit queries.
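The same driver class and JDBC URL you would configure in the DBCPConnectionPool can be sanity-checked outside NiFi with a small standalone JDBC program. This sketch assumes the SAP HANA driver (ngdbc.jar) is on the classpath; the host, port, credentials, and query are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch: verify HANA connectivity with plain JDBC using the same driver class
// and URL you would put in the DBCPConnectionPool controller service.
public class HanaJdbcCheck {
    public static void main(String[] args) throws Exception {
        Class.forName("com.sap.db.jdbc.Driver");     // SAP HANA JDBC driver (ngdbc.jar)
        String url = "jdbc:sap://hana-host:30015/";  // hypothetical host and port
        try (Connection conn = DriverManager.getConnection(url, "USER", "PASSWORD");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT CURRENT_TIMESTAMP FROM DUMMY")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```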
07-14-2016
07:43 PM
@Kaliyug Antagonist You will typically need to do some configuration on the views to make them work properly. In a secured cluster, you have to specify all of the parameters for connecting to the particular service instead of using the "Local Cluster" configuration drop-down. The Ambari Views Documentation contains instructions for configuring all of the various views.
07-14-2016
07:29 PM
2 Kudos
@Satya KONDAPALLI Fundamentally, Spark is a data processing engine while NiFi is a data movement tool. Spark is intended for doing complex computations on large amounts of data, combining data sets, applying analytical models, etc. Spark Streaming provides micro-batch processing of data to bring this processing closer to real time. NiFi is intended to collect data and move it to the place where it will be processed, applying light modifications or computations as the data flows to its final destination.
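To make the micro-batch point concrete, here is a minimal Spark Streaming sketch using the DStream API: it slices a socket stream into 5-second batches and counts the records in each batch. The source, host, and port are placeholders chosen only to show the model.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

// Sketch: Spark Streaming processes data in small time-sliced batches
// (here, 5-second micro batches). The socket source is a placeholder.
public class MicroBatchExample {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("MicroBatchExample").setMaster("local[2]");
        JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(5));

        JavaReceiverInputDStream<String> lines = ssc.socketTextStream("localhost", 9999);
        JavaDStream<Long> counts = lines.count(); // one count per 5-second batch
        counts.print();

        ssc.start();
        ssc.awaitTermination();
    }
}
```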
07-08-2016
04:02 PM
2 Kudos
@Steen Manniche You are correct that Solr/Ranger is only capable of collection-level security at this time. This Solr Wiki page describes a couple of options for adding document-level security to Solr (e.g. ManifoldCF).
07-05-2016
05:37 PM
2 Kudos
@Kaliyug Antagonist Hue requires Python 2.6 while RHEL/CentOS 7 uses Python 2.7. This is why Hue is not supported on RHEL/CentOS 7. There are alternatives in Ambari for most of the functionality provided by Hue. Ambari includes views for Hive access, HDFS file management, YARN queue management, Pig scripting, and Tez job management. The only Hue functionality that isn't covered by Ambari Views is Oozie workflow creation, and that is coming soon in Ambari. Please see the Ambari Views documentation for more information. If you need to use Hue for Oozie workflow management, you can install Hue on a RHEL/CentOS 6 node and configure it to point to your HDP cluster.
07-01-2016
07:16 PM
@khaja pasha shake The truststore needs to exist on the node where you are running the Falcon commands (e.g. the Falcon server node). You can create the keystore with the keytool command and import the certificate into that node's keystore. Then specify the location of the keystore on the Falcon server node.
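For illustration, here is a rough programmatic equivalent of the keytool import step: it builds a JKS truststore and adds a certificate to it. The file names and password are placeholders, and the usual keytool command works just as well.

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;

// Sketch: build a JKS truststore and import an X.509 certificate into it,
// roughly what `keytool -import` does. File names and the password are
// placeholders; point Falcon at the resulting file on the server node.
public class BuildTruststore {
    public static void main(String[] args) throws Exception {
        char[] password = "changeit".toCharArray(); // placeholder password

        KeyStore trustStore = KeyStore.getInstance("JKS");
        trustStore.load(null, password);            // start with an empty store

        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        try (FileInputStream certIn = new FileInputStream("falcon-server.crt")) {
            Certificate cert = cf.generateCertificate(certIn);
            trustStore.setCertificateEntry("falcon-server", cert);
        }

        try (FileOutputStream out = new FileOutputStream("truststore.jks")) {
            trustStore.store(out, password);
        }
    }
}
```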
06-28-2016
05:18 PM
@Kaliyug Antagonist HDFS has the ability to use ACLs (here's a link). If you don't have Ranger, then you can use ACLs to provide finer-grained authorization than you can with POSIX permissions. However, if you are using Ranger, there is more flexibility and you have a single place to manage authorization for all of the components (not just HDFS). So, if you're using Ranger, you don't really need to use HDFS ACLs.
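If you do go the ACL route, here is a minimal sketch using the HDFS FileSystem ACL API, the rough equivalent of `hdfs dfs -setfacl -m user:analyst:r-x /data/shared`. The path and user name are placeholders.

```java
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

// Sketch: grant a named user read/execute access on a directory through an
// HDFS ACL entry. The directory and user name are placeholders.
public class GrantAcl {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            AclEntry entry = new AclEntry.Builder()
                    .setScope(AclEntryScope.ACCESS)
                    .setType(AclEntryType.USER)
                    .setName("analyst")                 // hypothetical user
                    .setPermission(FsAction.READ_EXECUTE)
                    .build();
            List<AclEntry> entries = Collections.singletonList(entry);
            fs.modifyAclEntries(new Path("/data/shared"), entries);
        }
    }
}
```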
06-28-2016
03:45 PM
@khaja pasha shake In the configuration for the Hive view, you can add the SSL parameters to the authorization section. Here is a screenshot that should help: