Member since: 08-29-2016
Posts: 40
Kudos Received: 5
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2427 | 07-04-2018 07:33 AM
 | 4304 | 05-11-2018 09:51 AM
07-22-2018
02:37 PM
Recently I came across this command-line LDAP connection tool, which is very useful when setting up Ranger UserSync. The tool collects minimal input from the admin about the LDAP/AD server and discovers the various user and group properties needed to pull only the targeted users and groups from the LDAP/AD server.

Details
The LDAP connection check tool is a command-line tool and can be run on any machine where Java is installed and the LDAP/AD server is reachable. It can be used to discover not only user sync related properties but also authentication properties if needed. It also generates Ambari configuration properties as well as install properties for manual installation. The user can choose to discover user and group properties together or separately. A template properties file is provided with the tool so the user can update the values specific to the setup.

Tool usage
The tool provides a help option (-h) that prints the following usage:
usage: run.sh
 -a          ignore authentication properties
 -d <arg>    {all|users|groups}
 -h          show help
 -i <arg>    input file name
 -o <arg>    output directory
 -r <arg>    {all|users|groups}
All of the above parameters are optional.
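For example, to read the mandatory values from a prepared properties file, write the results to a custom directory, and limit discovery to user-related properties, an invocation could look like the following (the file and directory paths are placeholders for your setup):

./run.sh -i /path/to/input.properties -o /tmp/ldaptool-output -d users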
If "-i" (input file) is not specified, the tool falls back to the CLI for collecting the values of the mandatory properties.
If "-o" (output directory) is not specified, the tool writes all output files to the <install dir>/ranger-0.5.0-usersync/ldaptool/output directory.
If "-a" (ignore authentication) is not specified, the tool discovers and verifies authentication related properties.
If "-d" (discover usersync properties) is not specified, the tool defaults to discovering all usersync related properties.
If "-r" (retrieve users and/or groups) is not specified, the tool falls back to the "-d" option.

Input properties
In order to discover the usersync and authentication related properties, the tool collects some mandatory information as part of the input properties. The mandatory properties are:
1. ranger.usersync.ldap.url (<ldap or ldaps>://<server ip/fqdn>:<port>)
2. ranger.usersync.ldap.binddn (LDAP user, such as an AD user or LDAP admin user)
3. ranger.usersync.ldap.bindpassword (user password or LDAP admin password)
4. ranger.usersync.ldap.user.searchbase (mandatory only for non-AD environments)
5. ranger.usersync.ldap.user.searchfilter (mandatory only for non-AD environments)
6. ranger.admin.auth.sampleuser (mandatory only for discovering authentication properties)
7. ranger.admin.auth.samplepassword (mandatory only for discovering authentication properties)
The tool provides two options for collecting values for these mandatory properties:
1. Modify the input.properties file provided as part of the tool installation and pass that file (with its complete path) as the command-line argument while running the tool.
2. Use the CLI to enter the values for these mandatory properties. The CLI option is presented when no input file is given via the command-line option (-i <arg>). Once the values are collected from the CLI, they are stored in the input.properties file (in the conf dir of the installation folder) for later use.

The following is the CLI presented by the tool when no input file is specified:
Ldap url [ldap://ldap.example.com:389]:
Bind DN [cn=admin,ou=users,dc=example,dc=com]:
Bind Password:
User Search Base [ou=users,dc=example,dc=com]:
User Search Filter [cn=user1]:
Sample Authentication User [user1]:
Sample Authentication Password:
Note: In order to use secure LDAP, the Java default truststore must be updated with the server's self-signed certificate or the CA certificate used to validate the server connection. The truststore should be updated before running the tool.
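As an illustration of option 1, a filled-in input.properties could look like the sketch below; every value is a placeholder that simply mirrors the CLI defaults above and must be adapted to your environment:

ranger.usersync.ldap.url=ldap://ldap.example.com:389
ranger.usersync.ldap.binddn=cn=admin,ou=users,dc=example,dc=com
ranger.usersync.ldap.bindpassword=BindPassword123
ranger.usersync.ldap.user.searchbase=ou=users,dc=example,dc=com
ranger.usersync.ldap.user.searchfilter=cn=user1
ranger.admin.auth.sampleuser=user1
ranger.admin.auth.samplepassword=SamplePassword123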
07-17-2018
11:31 AM
5 Kudos
Tokens are wire-serializable objects issued by Hadoop services that grant access to those services. Some services issue tokens to callers, which the callers then use to interact directly with other services without involving the KDC at all.
Block Tokens
A BlockToken is the token issued for access to a block; it includes: (userId, (BlockPoolId, BlockId), keyId, expiryDate, access-modes)
Block Keys
A Block Key is the key used for generating and verifying block tokens. Block Keys are managed in the BlockTokenSecretManager, one in the NN and another in every DN to track the block keys to which it has access.

How this works:
1. The client asks the NN for access to a path, identifying itself via Kerberos or a delegation token.
2. The client talks to the DNs holding the block, using the Block Token.
3. The DN authenticates the Block Token using the shared secret with the NameNode.
4. If authenticated, the DN compares the permissions in the Block Token with the requested operation, then grants or rejects the request.

The client does not have its identity checked by the DNs; that is done by the NN. This means that the client can in theory pass a Block Token on to another process for delegated access to a single block. These HDFS Block Tokens do not contain any specific knowledge of the principal running the DataNodes; instead they declare that the caller has the stated access rights to the specific block, up until the token expires.

public class BlockTokenIdentifier extends TokenIdentifier {
static final Text KIND_NAME = new Text("HDFS_BLOCK_TOKEN");
private long expiryDate;
private int keyId;
private String userId;
private String blockPoolId;
private long blockId;
private final EnumSet<AccessMode> modes;
private byte [] cache;
...

To enable NameNode block access tokens, configure the following settings in the hdfs-site.xml file:
dfs.block.access.token.enable=true
dfs.block.access.key.update.interval=600 (in minutes; 600 is the default)
dfs.block.access.token.lifetime=600 (in minutes; 600 is the default)
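In hdfs-site.xml these settings take the standard Hadoop property form, for example:

<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.block.access.key.update.interval</name>
  <value>600</value>
</property>
<property>
  <name>dfs.block.access.token.lifetime</name>
  <value>600</value>
</property>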
A general error seen:
2015-09-22 12:55:48,271 WARN [regionserver60020-smallCompactions-1432895622947] shortcircuit.ShortCircuitCache: ShortCircuitCache(0x1102b41c): could not load 1074240550_BP-607492251-xxx.xxx.xxx.xxx-1427711497172 due to InvalidToken exception.
org.apache.hadoop.security.token.SecretManager$InvalidToken: access control error while attempting to set up short-circuit access to /apps/hbase/data/data/default/blah/b83abaf5631c4ce18c9da7eaf569bb3b/t/bbb2436ed50e471e8645f8bd402902e3Block token with block_token_identifier (expiryDate=1442911790388, keyId=286785309, userId=hbase, blockPoolId=BP-607492251-xx.xx.xx.xx-1427711497172, blockId=1074240550, access modes=[READ]) is expired.
Root Cause: The block access token has expired and become invalid.

Another error that may be seen:
2018-07-15 17:49:25,649 WARN datanode.DataNode (DataXceiver.java:checkAccess(1311)) -
Block token verification failed: op=WRITE_BLOCK, remoteAddress=/10.10.10.100:0000,
message=Can't re-compute password for block_token_identifier (expiryDate=1501487365624,
keyId=127533694, userId=RISHI, blockPoolId=BP-2019616911-10.211.159.22-1464205355083,
blockId=1305095824, access modes=[WRITE]), since the required block key (keyID=127533694)
doesn't exist.
Root Cause: This can be seen when a client connection fails because the client has presented a block access token that references a block key that does not exist on the DataNode. To resolve this, restart the DataNode so that it re-registers with the NameNode and receives the current block keys.
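On an Ambari-managed cluster the restart is usually done from the Ambari UI (Hosts > affected host > DataNode > Restart). From the command line on the DataNode host, a rough sketch assuming an HDP-style install layout (adjust the path to your distribution and version):

su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-datanode/../hadoop/sbin/hadoop-daemon.sh stop datanode"
su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-datanode/../hadoop/sbin/hadoop-daemon.sh start datanode"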
07-04-2018
07:33 AM
@Vinay check the document below: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_security/content/hive_policy.html
URL: is basically for a cloud storage path.
hiveservice: enables a user who has the Service Admin permission in Ranger to run the kill query API.
06-30-2018
11:46 AM
Hadoop relies heavily on DNS and performs many DNS lookups during normal operation. To reduce the load on your DNS infrastructure, it is highly recommended to use the Name Service Caching Daemon (NSCD) on cluster nodes running Linux, or any other DNS caching mechanism (e.g., dnsmasq). The daemon caches host, user, and group lookups, providing better resolution performance and reducing the load on the DNS infrastructure. The default cache size and TTL values are usually enough to reduce the load significantly, but you may need to tune them for your environment.
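A minimal sketch of the relevant host-caching settings in /etc/nscd.conf, followed by enabling the service; the TTL values are illustrative and should be tuned for your environment:

enable-cache            hosts   yes
positive-time-to-live   hosts   3600
negative-time-to-live   hosts   20

systemctl enable nscd
systemctl start nscd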
06-30-2018
09:26 AM
@sk It seems like Kerberos was not disabled properly. Please share the output of the command below:
/var/lib/ambari-server/resources/scripts/configs.py -a get -l <ambari-host> -t <ambari-port>
-n <cluster-name> -u <admin-username> -p <admin-password> -c kerberos-env
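For example, with hypothetical Ambari host, port, and cluster values substituted for the placeholders:

/var/lib/ambari-server/resources/scripts/configs.py -a get -l ambari.example.com -t 8080 -n MyCluster -u admin -p admin -c kerberos-env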
05-30-2018
07:33 PM
1 Kudo
The Key Distribution Center (KDC) is available as part of the domain controller and performs two key functions: the Authentication Service (AS) and the Ticket-Granting Service (TGS). By default the KDC requires all accounts to use pre-authentication. This is a security feature that offers protection against password-guessing attacks.

The AS request identifies the client to the KDC in plain text. If pre-authentication is enabled, a timestamp is encrypted using the user's password hash as the encryption key. If the KDC reads a valid time when it decrypts the timestamp with the user's password hash (which is available in Active Directory), the KDC knows the request isn't a replay of a previous request.

When you do not enforce pre-authentication, a malicious attacker can directly send a dummy request for authentication. The KDC will return an encrypted TGT and the attacker can brute-force it offline. Upon checking the KDC logs, nothing will be seen except a single request for a TGT. When Kerberos timestamp pre-authentication is enforced, the attacker cannot directly ask the KDC for encrypted material to brute-force offline. The attacker has to encrypt a timestamp with a password and offer it to the KDC, and can repeat this over and over; however, the KDC log will record an entry every time the pre-authentication fails. Hence you should never disable pre-authentication in Kerberos.
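The same advice applies to an MIT KDC: leave the requires_preauth flag set on principals. A quick sketch with kadmin (the principal name is a placeholder):

kadmin: getprinc user1@EXAMPLE.COM
    (check that REQUIRES_PRE_AUTH is listed under Attributes)
kadmin: modprinc +requires_preauth user1@EXAMPLE.COM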
05-13-2018
08:46 AM
Cannot run Livy jobs from Zeppelin. The following can be seen in the Livy log file:
INFO: Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: Authentication failed, URL: http://rangerkms.example.com:9292/kms/v1/?op=GETDELEGATIONTOKEN&doAs=w20524%4542test.tt&renewer=rm%2Fsktudv01hdp02.test.tt%40CCTA.DK&user.name=livy, status: 403, message: Forbidden
Root Cause: Missing proxy user configuration for livy in KMS.
The solution is to add the following under Ambari > Custom kms-site:
hadoop.kms.proxyuser.livy.users=*
hadoop.kms.proxyuser.livy.hosts=*
hadoop.kms.proxyuser.livy.groups=*
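These properties end up in kms-site.xml; the equivalent XML form is shown below for reference (the wildcards can be narrowed to specific users, hosts, or groups if your security policy requires it). Restart Ranger KMS after making the change.

<property>
  <name>hadoop.kms.proxyuser.livy.users</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.livy.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.livy.groups</name>
  <value>*</value>
</property>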
05-11-2018
09:51 AM
@Bhushan Kandalkar This has been fixed in HDP 2.5+
05-11-2018
09:22 AM
@Bhushan Kandalkar Check this out: you are using an old version of HDP; Hive Knox HA support will be available with Knox 0.7. For now you have to add the URLs in the service section. https://issues-test.apache.org/jira/browse/KNOX-570
05-11-2018
09:18 AM
@Bhushan Kandalkar I would just like you to add the following and try again:
<service>
<role>HIVE</role>
<url>hive1</url>
<url>hive2</url>
</service>
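For reference, each <url> entry would normally point at a HiveServer2 HTTP endpoint; the hostnames and port below are placeholders for your HiveServer2 instances:

<service>
    <role>HIVE</role>
    <url>http://hiveserver1.example.com:10001/cliservice</url>
    <url>http://hiveserver2.example.com:10001/cliservice</url>
</service>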