12-21-2017 12:24 AM
To secure the Spark Thrift Server, first change the transport mode from binary to HTTP, then secure the channel with certificates. Log in to Ambari -> Spark(2) -> Configs -> Custom spark-hive-site-override and set the following parameters:

hive.server2.transport.mode : http
hive.server2.thrift.http.port : 10015 (10016 in case of Spark 2)
hive.server2.http.endpoint : cliservice

# Enabling the SSL mode
hive.server2.use.SSL : true
hive.server2.keystore.path : </path/to/your/keystore/jks>
hive.server2.keystore.password : <keystorepassword>

If server certificates are not available, use the following process to create self-signed certificates (from the Hive wiki page, "Setting up SSL with self-signed certificates"). Use these steps to create and verify self-signed SSL certificates for use with HiveServer2:
1. Create the self-signed certificate and add it to a keystore file:

keytool -genkey -alias example.com -keyalg RSA -keystore keystore.jks -keysize 2048

Ensure the name used in the self-signed certificate matches the hostname where the Thrift Server will run.

2. List the keystore entries to verify that the certificate was added. Note that a keystore can contain multiple such certificates:

keytool -list -keystore keystore.jks

3. Export this certificate from keystore.jks to a certificate file:

keytool -export -alias example.com -file example.com.crt -keystore keystore.jks

4. Add this certificate to the client's truststore to establish trust:

keytool -import -trustcacerts -alias example.com -file example.com.crt -keystore truststore.jks

5. Verify that the certificate exists in truststore.jks:

keytool -list -keystore truststore.jks
Then start the Spark Thrift Server and use spark-sql from the Spark bin directory, or connect with beeline using:

jdbc:hive2://<host>:<port>/<database>;ssl=true;sslTrustStore=<path-to-truststore>;trustStorePassword=<truststore-password>
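As a convenience, the beeline connection string can be assembled from shell variables. The hostname, port, and truststore path below are example values, not ones taken from a real cluster:

```shell
# Example values only; substitute your Thrift Server host, port, and truststore.
HOST=sparkhost.example.com
PORT=10016                                  # 10015 for Spark 1
TRUSTSTORE=/etc/security/truststore.jks
URL="jdbc:hive2://${HOST}:${PORT}/default;ssl=true;sslTrustStore=${TRUSTSTORE};trustStorePassword=changeit"
echo "$URL"
# beeline -u "$URL"    # uncomment to actually connect
```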
09-08-2017 08:56 AM
This article shows how to configure Ranger audit logs for NiFi to be written to a flat file. Some users prefer not to use Solr, to reduce the hardware and software footprint; in such cases flat-file audits make writing and debugging easier, and they can coexist with Solr. In NiFi, log consolidation is handled by Logback, so the following changes go into the Logback configuration.

To enable the Ranger audits, in the Advanced nifi-ranger-audit section set the following parameter values:

xasecure.audit.destination.log4j=true
xasecure.audit.destination.log4j.logger=ranger.audit

To capture the logs generated by this logger, configure Logback (in the same way as the nifi-app module logger). In the Advanced nifi-node-logback-env logback.xml template, add the following content:

<appender name="RANGER_AUDIT" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${org.apache.nifi.bootstrap.config.log.dir}/ranger_nifi_audit.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/ranger_nifi_audit_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
<maxFileSize>100MB</maxFileSize>
<maxHistory>30</maxHistory>
</rollingPolicy>
<immediateFlush>true</immediateFlush>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<logger name="ranger.audit" level="INFO" additivity="false">
<appender-ref ref="RANGER_AUDIT"/>
</logger>
Sample output:
[centos@projecthdfm1 nifi]$ cat ranger_nifi_audit.log
2017-09-08 03:37:47,475 INFO [org.apache.ranger.audit.queue.AuditBatchQueue1] ranger.audit {"repoType":10,"repo":"hdf_clstr_nifi","reqUser":"aaaaaaaa","evtTime":"2017-09-08 03:37:46.699","access":"READ","resource":"/flow","resType":"nifi-resource","action":"READ","result":1,"policy":1,"enforcer":"ranger-acl","cliIP":"999.999.999.999","agentHost":"aaaaaa.bbbbb.example.com","logType":"RangerAudit","id":"0efc4a0d-f634-42c0-9616-5d8298a92892-0","seq_num":1,"event_count":1,"event_dur_ms":0,"tags":[]}
2017-09-08 03:38:41,443 INFO [org.apache.ranger.audit.queue.AuditBatchQueue1] ranger.audit {"repoType":10,"repo":"hdf_clstr_nifi","reqUser":"admin","evtTime":"2017-09-08 03:38:39.121","access":"READ","resource":"/flow","resType":"nifi-resource","action":"READ","result":1,"policy":1,"enforcer":"ranger-acl","cliIP":"999.999.999.999","agentHost":"aaaaa.bbbbb.example.com","logType":"RangerAudit","id":"0efc4a0d-f634-42c0-9616-5d8298a92892-1","seq_num":3,"event_count":1,"event_dur_ms":0,"tags":[]}
2017-09-08 03:49:26,549 INFO [org.apache.ranger.audit.queue.AuditBatchQueue1] ranger.audit {"repoType":10,"repo":"hdf_clstr_nifi","reqUser":"someotheruser","evtTime":"2017-09-08 03:49:25.942","access":"READ","resource":"/flow","resType":"nifi-resource","action":"READ","result":0,"policy":-1,"enforcer":"ranger-acl","cliIP":"999.999.999.999","agentHost":"xxxxx.yyyy.example.com","logType":"RangerAudit","id":"0efc4a0d-f634-42c0-9616-5d8298a92892-2","seq_num":5,"event_count":1,"event_dur_ms":0,"tags":[]}
*Hostnames and IP addresses masked.
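One advantage of the flat file is that standard tools work on it directly. For example, denied requests carry "result":0 in the JSON payload and can be pulled out with grep. The sample log below is created inline for illustration; in a real cluster you would grep the ranger_nifi_audit.log produced by the appender configured above:

```shell
# Create a tiny sample mimicking the audit lines above (normally you would
# point at ${org.apache.nifi.bootstrap.config.log.dir}/ranger_nifi_audit.log).
cat > /tmp/ranger_nifi_audit_sample.log <<'EOF'
2017-09-08 03:38:41,443 INFO [AuditBatchQueue1] ranger.audit {"reqUser":"admin","access":"READ","result":1}
2017-09-08 03:49:26,549 INFO [AuditBatchQueue1] ranger.audit {"reqUser":"someotheruser","access":"READ","result":0}
EOF
# Denied requests are the entries with "result":0.
grep '"result":0' /tmp/ranger_nifi_audit_sample.log
```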