Member since: 08-29-2016
Posts: 40
Kudos Received: 5
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2460 | 07-04-2018 07:33 AM
 | 4369 | 05-11-2018 09:51 AM
05-11-2018
08:43 AM
@Bhushan Kandalkar Please share the output of the below command:
# /usr/hdp/current/zookeeper-client/bin/zkCli.sh -server hadmgrndcc03-3.test.org:2181 ls /hiveserver2 | tail
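If HiveServer2 instances have registered for dynamic service discovery, the listed znodes should look roughly like the below (host, port, version, and sequence are illustrative):
serverUri=<hs2_host>:10000;version=<hive_version>;sequence=0000000000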
05-11-2018
08:05 AM
@Bhushan Kandalkar This issue may occur when there are unnecessary whitespace and line feeds in the Knox topology, like below:
<param>
   <name>HIVE</name>
   <value>maxFailoverAttempts=3;failoverSleep=1000;enabled=true;zookeeperEnsemble=zk1:2181,zk2:2181,zk3:2181;
   zookeeperNamespace=hiveserver2</value>
</param>
Change the above to the below, then restart Knox:
<param>
   <name>HIVE</name>
   <value>maxFailoverAttempts=3;failoverSleep=1000;enabled=true;zookeeperEnsemble=zk1:2181,zk2:2181,zk3:2181;zookeeperNamespace=hiveserver2</value>
</param>
If that is not the case, please share your Knox topology.
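For context, this param normally sits inside the HaProvider section of the Knox topology; a minimal sketch, assuming a standard HaProvider setup (ZooKeeper hosts illustrative):
<provider>
   <role>ha</role>
   <name>HaProvider</name>
   <enabled>true</enabled>
   <param>
      <name>HIVE</name>
      <value>maxFailoverAttempts=3;failoverSleep=1000;enabled=true;zookeeperEnsemble=zk1:2181,zk2:2181,zk3:2181;zookeeperNamespace=hiveserver2</value>
   </param>
</provider>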
02-25-2018
11:34 AM
We can configure the Zeppelin UI for access over SSL.
Step 1: Create a keystore
For a self-signed certificate:
### Generate a keystore
# keytool -genkey -alias zeppelin -keyalg RSA -dname "CN=$HOSTNAME,OU=IT,O=HWX,L=Bangalore,S=KA,C=IN" -keystore /etc/zeppelin/conf/zeppelin-keystore.jks -keysize 2048 -validity 365 -keypass hadoop -storepass hadoop
### Export the self-signed certificate to a PEM file
# keytool -exportcert -keystore /etc/zeppelin/conf/zeppelin-keystore.jks -alias zeppelin -file zeppelin.pem -rfc
For a CA-signed certificate:
### Get the certificate from your CA and create a PKCS12 keystore
# openssl pkcs12 -export -inkey zeppelin.key -in zeppelin.pem -certfile /path/to/ca/certificate/ca.pem -out zeppelin.pfx
### Convert the PKCS12 keystore to a JKS keystore
# keytool -v -importkeystore -srckeystore zeppelin.pfx -srcstoretype PKCS12 -destkeystore /etc/zeppelin/conf/zeppelin-keystore.jks -deststoretype JKS -srcalias 1 -destalias $(hostname)
### Validate that the private key and certificate chain are present
# keytool -list -keystore /etc/zeppelin/conf/zeppelin-keystore.jks -v
Step 2: Import the certificate(s) to a truststore
# keytool -import -file zeppelin.pem -alias zeppelin -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit
Note: For CA-signed certificates, import the certificate chain (CA and intermediate CA certificates) to the truststore using the below command:
# keytool -import -file <certificate> -alias <alias> -keystore $JAVA_HOME/jre/lib/security/cacerts
Step 3: In Ambari, go to Zeppelin ---> Config ---> Advanced and make the following changes:
zeppelin.ssl = true
zeppelin.ssl.client.auth = false
zeppelin.ssl.key.manager.password = hadoop
zeppelin.ssl.keystore.password = hadoop
zeppelin.ssl.keystore.path = /etc/zeppelin/conf/zeppelin-keystore.jks
zeppelin.ssl.keystore.type = JKS
zeppelin.ssl.truststore.password = changeit
zeppelin.ssl.truststore.path = /<JAVA-HOME-PATH>/jre/lib/security/cacerts
zeppelin.ssl.truststore.type = JKS
Step 4: Restart the Zeppelin service and access it over https://<zeppelin_host>:9995
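To verify the endpoint after the restart, an openssl check such as this can help confirm the served certificate (hostname illustrative):
### Inspect the certificate presented on the Zeppelin SSL port
# openssl s_client -connect <zeppelin_host>:9995 < /dev/null 2>/dev/null | openssl x509 -noout -subject -dates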
01-18-2018
07:26 PM
1 Kudo
Ranger plugins send their audit events (whether access was granted or denied, and based on which policy) directly to the configured audit sink, which can be HDFS, Solr, or both.
Ranger Audit is a highly customizable event queue system that can be tailored to suit the needs of production environments.
When the plugin is enabled and no specific policy is in place for access to some object, the plugin falls back to enforcing the standard component-level Access Control Lists (ACLs). For HDFS, that would be the user: rwx / group: rwx / other: rwx ACLs on folders and files.
Once this fallback to component ACLs happens, the audit events show a '-' in the 'Policy ID' column instead of a policy number. If a Ranger policy was in control of allowing/denying, the policy number is shown.
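If audits are going to Solr, these fallback events can be queried directly. A sketch, assuming the default ranger_audits collection; the host, port, and the enforcer field name are assumptions, not verified against your schema:
### Events enforced by component ACLs rather than a Ranger policy (field name assumed)
# curl "http://<solr_host>:8886/solr/ranger_audits/select?q=enforcer:hadoop-acl&rows=10&wt=json"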
Key Things to Remember
Access decisions taken by Ranger (to allow/deny a user) are based on a combination of three things:
- resource: what is being accessed
- user/group: who is trying to access it
- operation: what is being performed
The audit decision taken by Ranger (whether to audit or not) is based on a matching resource. That is, if there is a policy that enables auditing for a certain resource, the audit will be performed irrespective of whether that policy governs the access decision.
Now, based on the two points above and depending on the policy configuration, it is entirely possible that the access decision is taken by policy X while the audit decision is taken by policy Y.
Note: It may seem confusing that audit events show policy X in the Policy ID column even though auditing is disabled for X. Remember that the Policy ID column reflects the access decision; the audit decision may come from another policy.
How to troubleshoot Ranger audit issues?
Enable Ranger plugin debug logging and restart the affected service to get to the root cause of the error.
For more granular detail on the policy engine and policy evaluator behavior, enable debug logging as follows:
Example:
The following log4j lines will vary based on the host service and the log4j module used by that service (this example is for the HBase plugin):
log4j.logger.org.apache.ranger.authorization.hbase=DEBUG
log4j.logger.org.apache.ranger.plugin.policyengine=DEBUG
log4j.logger.org.apache.ranger.plugin.policyevaluator=DEBUG
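For example, for the HDFS plugin these lines would go into the NameNode's log4j.properties (in Ambari: HDFS ---> Configs ---> Advanced hdfs-log4j), assuming the plugin's authorizer package is org.apache.ranger.authorization.hadoop:
log4j.logger.org.apache.ranger.authorization.hadoop=DEBUG
log4j.logger.org.apache.ranger.plugin.policyengine=DEBUG
log4j.logger.org.apache.ranger.plugin.policyevaluator=DEBUG
A restart of the NameNode is needed for the change to take effect.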
10-16-2017
07:52 PM
I think multi-user should run fine; however, I suspect a resource allocation issue here. Zeppelin only supports yarn-client mode for the Spark interpreter, which means the driver runs on the same host as the Zeppelin server. And if you run the Spark interpreter in shared mode, all users share the same SparkContext. You should increase the executor memory and executor cores in the interpreter settings.
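A sketch of the relevant interpreter properties (Zeppelin UI ---> Interpreter ---> spark ---> edit); the values below are illustrative and should be tuned to your cluster and user count:
master = yarn-client
spark.executor.memory = 4g
spark.executor.cores = 2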
06-30-2017
11:05 AM
Labels:
- Apache Hadoop
05-03-2017
05:30 PM
Can you check the mapping rules with:
$ hadoop org.apache.hadoop.security.HadoopKerberosName <username>
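For example, to check how a service principal maps to a short name (principal and realm illustrative), the output should look roughly like this:
$ hadoop org.apache.hadoop.security.HadoopKerberosName hdfs/namenode.example.com@EXAMPLE.COM
Name: hdfs/namenode.example.com@EXAMPLE.COM to hdfs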
03-24-2017
02:53 PM
Labels:
- Apache Ranger
01-17-2017
01:28 PM
@Bilal Arshad Can you please confirm that Atlas is running and bound to the correct port, and also share /var/log/atlas/application.log:
# netstat -tulnp | grep -i 21000
# ps -ef | grep -i atlas
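If the process is up and listening, the Atlas REST API should also respond; a quick check (host and credentials illustrative):
# curl -u admin:admin http://<atlas_host>:21000/api/atlas/admin/version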