Member since: 09-15-2015
Posts: 75
Kudos Received: 33
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1399 | 02-22-2016 09:32 PM
 | 2246 | 12-11-2015 03:27 AM
 | 8336 | 10-26-2015 10:16 PM
 | 7353 | 10-15-2015 06:09 PM
05-05-2022
04:36 PM
As a general statement this is not right by any means. LDAP provides secure, encrypted authentication (encrypted user passwords and SSL/TLS communication), together with user/group management. It's just that the Hadoop stack does not support this: the only two authentication methods implemented across all the CDP components are the dummy simple auth (described above) and Kerberos authentication (used in combination with PAM or LDAP for user/group mappings). As an example, nothing less than Knox (the security gateway to HDP or CDP) implements full authentication using only LDAP (with TLS), and it relies on Kerberos only to authenticate a single service/proxy user that communicates with the rest of the cluster.
03-30-2018
02:41 AM
Could you explain “Oversharding can be used for performance reasons where all machines has shards for specific replica” in more detail? Do you mean implicit shards?
10-29-2018
06:48 PM
This information (like much else) is wrong in the official HDP Security course from Hortonworks. The HDFS Encryption presentations in the course state that to create an HDFS admin user able to manage encryption zones (EZs) it is enough to set the following (copied/pasted here):

dfs.cluster.administrators=hdfs,encrypter
hadoop.kms.blacklist.DECRYPT_EEK=hdfs,encrypter
05-06-2016
09:47 PM
1 Kudo
This looks like a basic ssh connectivity issue. Some things to try: verify that ssh centos@<any-ec2-host> works. If it does not, run ssh -vvv centos@<any-ec2-host> to troubleshoot. If you're running a custom ~/.ssh/config, make sure that you're not specifying a non-default key (the default is id_rsa).
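If a custom ~/.ssh/config turns out to be the culprit, a minimal sketch of an entry that pins the expected user and key is shown below (the host pattern and key path are placeholders for your setup, not values from this thread):

```
# ~/.ssh/config — illustrative entry for the EC2 hosts
Host *.compute.amazonaws.com
    User centos
    IdentityFile ~/.ssh/id_rsa
    IdentitiesOnly yes
```

IdentitiesOnly prevents the ssh agent from offering other loaded keys, which is a common cause of "too many authentication failures" against EC2 instances.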
02-22-2016
09:32 PM
2 Kudos
Scott, there are two layers of memory settings you need to be aware of: the NodeManager and the containers. The NodeManager owns all the memory it can provide to containers, and you generally want more containers, each with a decent amount of memory. A rule of thumb is 2048 MB of memory per container, so if you have 53 GB of available memory per node, you have about 26 containers per node to do the job. 8 GB of memory per container is, IMO, too big.

We don't know how many disks the SAN storage exposes to Hadoop, but you can disregard the disks in the equation, since the formula is typically meant for on-premise clusters. You can run a manual calculation of the memory settings since you have the minimum containers per node and memory per container values (26 and 2048 MB respectively). You can use the formula below; just replace the number of containers per node and RAM per container with your values.

Please note that 53 GB of available RAM per VM is too much given the VM only has 54 GB of RAM. Typically, you would set aside about 8 GB for other processes (OS, HBase, etc.), which means the available memory per node is just 46 GB. Hope this helps.
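As a worked example, the classic per-node YARN memory formula described above can be sketched in a few lines of Python. The numbers (54 GB per node, 8 GB reserved, 2048 MB per container) are the illustrative values from this thread, not recommendations for any particular cluster:

```python
# Sketch of the YARN container-memory calculation discussed above.
total_ram_gb = 54        # RAM on the VM
reserved_gb = 8          # set aside for OS, HBase, other daemons
ram_per_container_mb = 2048

available_mb = (total_ram_gb - reserved_gb) * 1024
containers = available_mb // ram_per_container_mb  # containers per node

settings = {
    "yarn.nodemanager.resource.memory-mb": containers * ram_per_container_mb,
    "yarn.scheduler.minimum-allocation-mb": ram_per_container_mb,
    "yarn.scheduler.maximum-allocation-mb": containers * ram_per_container_mb,
    "mapreduce.map.memory.mb": ram_per_container_mb,
    "mapreduce.reduce.memory.mb": 2 * ram_per_container_mb,
    # Heap is conventionally ~80% of the container size
    "mapreduce.map.java.opts": "-Xmx%dm" % int(0.8 * ram_per_container_mb),
    "mapreduce.reduce.java.opts": "-Xmx%dm" % int(0.8 * 2 * ram_per_container_mb),
}
for key, value in settings.items():
    print(key, "=", value)
```

With these inputs the script yields 23 containers per node, which lines up with the point above that reserving 8 GB shrinks the usable pool from 53 GB to 46 GB.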
02-23-2016
05:26 PM
@Neeraj Sabharwal I just created an updated integration guide using the latest HDP version 2.3.4/Ambari 2.2 and Centrify Server Suite 2016. (all worked great) We will be publishing my updates publicly in the next week or two but I have extensive notes on many of the configurations and common problems @rgarcia detailed here. If anyone needs assistance or has any questions regarding Centrify components, I will now be here to help.
11-20-2015
11:18 PM
@rgarcia Why not pipe the data from HDFS, assuming audit is being written to HDFS as well?
08-08-2018
09:58 PM
1. Add the repo: yum-config-manager --add-repo http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.4.1.0/ambari.repo
2. yum upgrade ambari-metrics-monitor ambari-metrics-hadoop-sink
3. yum install ambari-metrics-hadoop-sink

Ambari Metrics installed.
10-29-2015
03:19 AM
I was finally able to resolve it. Somehow the DN for the LDAP Manager changed.

Was: CN=adadmin,OU=MyUsers,DC=AD-HDP,DC=COM
Now: CN=adadmin,DC=AD-HDP,DC=COM

Appreciate the hint there, Paul.
10-26-2015
11:56 PM
@rgarcia@hortonworks.com Remove "use_fully_qualified_names=True" and it should fix the issue.
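For context, use_fully_qualified_names is an SSSD domain option, so the change would land in the domain section of /etc/sssd/sssd.conf. A hedged sketch, assuming an SSSD/AD setup with a placeholder domain name:

```
# /etc/sssd/sssd.conf (illustrative; the domain name is a placeholder)
[domain/AD-HDP.COM]
# Remove the line below, or set it to False, so users resolve
# as short names (e.g. "adadmin") rather than "adadmin@AD-HDP.COM":
use_fully_qualified_names = False
```

After editing, SSSD typically needs a restart (and possibly a cache wipe) for the change to take effect.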