Member since: 07-30-2019
Posts: 181
Kudos Received: 205
Solutions: 51

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4991 | 10-19-2017 09:11 PM
 | 1602 | 12-27-2016 06:46 PM
 | 1244 | 09-01-2016 08:08 PM
 | 1183 | 08-29-2016 04:40 PM
 | 3029 | 08-24-2016 02:26 PM
06-28-2016
03:05 PM
4 Kudos
@Timothy Spann The best way to accomplish this is with the GetTwitter processor and the MergeContent processor. GetTwitter connects to Twitter with a Twitter developer account and pulls tweets (you can even filter them). You can then use MergeContent to collect the tweets into manageable pieces based on the number of records, the size of the file, or a timeout value.
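A rough sketch of the relevant processor settings; the property names come from the GetTwitter and MergeContent processors, and the filter terms and thresholds are only illustrative:

```
GetTwitter
  Twitter Endpoint          : Filter Endpoint        (needed if you want to filter tweets)
  Consumer Key / Secret     : <from your Twitter dev account>
  Access Token / Secret     : <from your Twitter dev account>
  Terms to Filter On        : hadoop,nifi            (illustrative)

MergeContent
  Merge Strategy            : Bin-Packing Algorithm
  Minimum Number of Entries : 1000                   (merge by number of records)
  Maximum Group Size        : 128 MB                 (or merge by file size)
  Max Bin Age               : 5 min                  (timeout so bins eventually flush)
```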
06-28-2016
03:02 PM
1 Kudo
@khaja pasha shake To use SSL for HiveServer2, you first need to enable SSL on HiveServer2 itself. Assuming you've already done that, your JDBC connection string will need to look something like: jdbc:hive2://server.name:10000/mydb;ssl=true;sslTrustStore=/path/to/truststore.jks;trustStorePassword=MyBadPass1
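A quick way to test the connection is with Beeline; the hostname, truststore path, and password below are just the placeholders from the connection string above:

```
beeline -u "jdbc:hive2://server.name:10000/mydb;ssl=true;sslTrustStore=/path/to/truststore.jks;trustStorePassword=MyBadPass1"
```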
06-28-2016
02:49 PM
1 Kudo
@Kaliyug Antagonist HDFS authorization is multi-tiered and is evaluated in this order:

1. Ranger authorization policies are checked first
2. HDFS ACLs (managed outside of Ranger)
3. HDFS POSIX permissions (e.g. rwxr-xr-x)

So, for user home directories, set the POSIX permissions to 700 and make sure the ownership is <username>:hdfs. This ensures that only the user has access to his/her home directory, and you don't need a Ranger policy to allow that access. You can do the same for the /tmp directory (set permissions to 777). There are some best practices for securing HDFS with Ranger.
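A minimal sketch of those commands, assuming home directories follow the usual /user/<username> layout and that you run them as the HDFS superuser:

```
# user home directory: owned by the user and the hdfs group, owner-only access
hdfs dfs -chown <username>:hdfs /user/<username>
hdfs dfs -chmod 700 /user/<username>

# shared /tmp directory: world-writable
hdfs dfs -chmod 777 /tmp
```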
06-24-2016
05:35 PM
@Benjamin R The permissions of the keytabs are not all 440. Only some of them are (hdfs, hbase, ambari-qa, accumulo, and spnego); those keytabs are read by services other than their owner for connectivity tests and other functions at startup. All of the other keytabs are readable only by the service account that owns the file. The group that owns the keytabs is the hadoop group, which should be reserved for service accounts. This arrangement keeps your cluster secure.
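You can check this on a node by listing the keytab directory; on an Ambari-managed HDP install the keytabs typically live under /etc/security/keytabs, which is an assumption about a standard layout:

```
# expect 440 (owner + hadoop group) on the hdfs, hbase, ambari-qa, accumulo and spnego keytabs,
# and 400 (owner only) on the rest
ls -l /etc/security/keytabs/
```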
06-24-2016
05:12 PM
@David Whoitmore Yes, you can install an alternative version of Python. You will need to install it in a non-system location (leave 2.6 in place and put 2.7 in a new home). Many of the HDP components rely on Python and require v2.6 in the standard place on RedHat 6 in order to work properly.
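One common way to do that on RedHat/CentOS 6 is to build Python 2.7 from source and use altinstall, so the system Python 2.6 is left untouched; the prefix below is illustrative:

```
# from an unpacked Python 2.7.x source tree
./configure --prefix=/usr/local
make
sudo make altinstall     # installs /usr/local/bin/python2.7, leaves /usr/bin/python as 2.6
```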
06-18-2016
05:28 PM
@Bhanu Pittampally The components that are supported by Ranger have plugins that those components use to verify authorization. For example, if Presto wants to read from HDFS, it will contact the NameNode. The NameNode will use the HDFS Ranger plugin to check whether the presto user is authorized to access the files being read.
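As an illustration, a read issued as the presto service user goes through that check; the file path here is hypothetical:

```
# the NameNode consults the Ranger HDFS plugin (and falls back to
# POSIX permissions/ACLs) before allowing this read
sudo -u presto hdfs dfs -cat /data/example/part-00000
```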
06-17-2016
07:06 PM
@Bhanu Pittampally At present, Presto is not supported with Ranger. You can control the HDFS and Hive services via Ranger so that Presto can reach those resources, but Ranger has no control over Presto's own security mechanisms. Currently, the supported components in Ranger (0.5) are HDFS, YARN, Hive, HBase, Kafka, Storm, Knox, and Solr.
06-15-2016
03:53 PM
@Matjaz Skerjanec An HTTP 403 error indicates that the user does not have permission to access something on the server (403 = "Forbidden"). This would be returned by the Isilon node in response to your fsck request. Isilon uses its Integrity Check mechanism to ensure filesystem integrity. Remember that HDFS is just an interface that Isilon provides to the OneFS filesystem; filesystem integrity checks are handled internally by OneFS, and commands like "hdfs fsck" become unnecessary.
06-13-2016
06:02 PM
2 Kudos
@Teddy Brewski This can be done!

1. Install the Knox server on multiple hosts (Hosts -> hostname -> Add Service -> Knox Gateway).
2. Create a config group for Knox and assign nodes to each config group (Knox -> Configs -> Manage Config Groups).
3. Modify the Advanced Topology for each config group (accessed with the drop-down at the top of the Configs page) to change the AD configuration as appropriate; see the sketch below.
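For step 3, the AD settings typically live in the ShiroProvider section of the topology. A hedged fragment of what that looks like; the LDAP URL and DN template are placeholders you would vary per config group:

```
<provider>
    <role>authentication</role>
    <name>ShiroProvider</name>
    <enabled>true</enabled>
    <param>
        <name>main.ldapRealm.contextFactory.url</name>
        <value>ldap://ad.site-a.example.com:389</value>
    </param>
    <param>
        <name>main.ldapRealm.userDnTemplate</name>
        <value>cn={0},ou=people,dc=example,dc=com</value>
    </param>
</provider>
```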
06-09-2016
10:33 PM
2 Kudos
@Timothy Spann This document details the best practices for Isilon data storage in a Hadoop environment. It's not specific to HDP 2.4 or OneFS 8.0.0.1, but most of the information is still relevant.