Member since: 10-20-2015
92 Posts
78 Kudos Received
9 Solutions
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2159 | 06-25-2018 04:01 PM |
 | 3714 | 05-09-2018 05:36 PM |
 | 1205 | 03-16-2018 04:11 PM |
 | 4704 | 05-18-2017 12:42 PM |
 | 3755 | 03-28-2017 06:42 PM |
05-17-2019
06:50 PM
1 Kudo
Customers have asked me how to review the Ranger audit archive logs stored on HDFS, since the Ranger UI only shows the last 90 days of data kept in Solr (Ambari Infra). I decided to approach the problem with Zeppelin and Spark for a fun example.

1. Prerequisites: Zeppelin and Spark2 installed on your system, as well as Ranger with its audit logs being stored in HDFS. Create an HDFS policy in Ranger that allows your zeppelin user to read and execute recursively on the /ranger/audit directory.

2. Create your notebook in Zeppelin with some code like the following example:

%spark2.spark
// --Specify service and date if you wish
//val path = "/ranger/audit/hdfs/20190513/*.log"
// --Be brave and map the whole enchilada
val path = "/ranger/audit/*/*/*.log"
// --read in the json and drop any malformed json
val rauditDF = spark.read.option("mode", "DROPMALFORMED").json(path)
// --print the schema to review and show me top 20 lines.
rauditDF.printSchema()
rauditDF.show(20,false)
// --Do some spark sql on the data and look for denials
println("sparksql--------------------")
rauditDF.createOrReplaceTempView(viewName="audit")
var readAccessDF = spark.sql("SELECT reqUser, repo, access, action, evtTime, policy, resource, reason, enforcer, result FROM audit where result='0'").withColumn("new_result", when(col("result") === "1","Allowed").otherwise("Denied"))
readAccessDF.show(20,false)

3. The output should look something like this:

path: String = /ranger/audit/*/*/*.log
rauditDF: org.apache.spark.sql.DataFrame = [access: string, action: string ... 23 more fields]
root
|-- access: string (nullable = true)
|-- action: string (nullable = true)
|-- additional_info: string (nullable = true)
|-- agentHost: string (nullable = true)
|-- cliIP: string (nullable = true)
|-- cliType: string (nullable = true)
|-- cluster_name: string (nullable = true)
|-- enforcer: string (nullable = true)
|-- event_count: long (nullable = true)
|-- event_dur_ms: long (nullable = true)
|-- evtTime: string (nullable = true)
|-- id: string (nullable = true)
|-- logType: string (nullable = true)
|-- policy: long (nullable = true)
|-- reason: string (nullable = true)
|-- repo: string (nullable = true)
|-- repoType: long (nullable = true)
|-- reqData: string (nullable = true)
|-- reqUser: string (nullable = true)
|-- resType: string (nullable = true)
|-- resource: string (nullable = true)
|-- result: long (nullable = true)
|-- seq_num: long (nullable = true)
|-- sess: string (nullable = true)
|-- tags: array (nullable = true)
| |-- element: string (containsNull = true)
sparksql--------------------
readAccessDF: org.apache.spark.sql.DataFrame = [reqUser: string, repo: string ... 9 more fields]
+--------+------------+------------+-------+-----------------------+------+-------------------------------------------------------------------------------------+----------------------------------+----------+------+----------+
|reqUser |repo |access |action |evtTime |policy|resource |reason |enforcer |result|new_result|
+--------+------------+------------+-------+-----------------------+------+-------------------------------------------------------------------------------------+----------------------------------+----------+------+----------+
|dav |c3205_hadoop|READ_EXECUTE|execute|2019-05-13 22:07:23.971|-1 |/ranger/audit/hdfs |/ranger/audit/hdfs |hadoop-acl|0 |Denied |
|zeppelin|c3205_hadoop|READ_EXECUTE|execute|2019-05-13 22:10:47.288|-1 |/ranger/audit/hdfs |/ranger/audit/hdfs |hadoop-acl|0 |Denied |
|dav |c3205_hadoop|EXECUTE |execute|2019-05-13 23:57:49.410|-1 |/ranger/audit/hiveServer2/20190513/hiveServer2_ranger_audit_c3205-node3.hwx.local.log|/ranger/audit/hiveServer2/20190513|hadoop-acl|0 |Denied |
|zeppelin|c3205_hive |USE |_any |2019-05-13 23:42:50.643|-1 |null |null |ranger-acl|0 |Denied |
|zeppelin|c3205_hive |USE |_any |2019-05-13 23:43:08.732|-1 |default |null |ranger-acl|0 |Denied |
|dav |c3205_hive |USE |_any |2019-05-13 23:48:37.603|-1 |null |null |ranger-acl|0 |Denied |
+--------+------------+------------+-------+-----------------------+------+-------------------------------------------------------------------------------------+----------------------------------+----------+------+----------+

4. You can proceed to run further SQL against the audit view if you so desire; a short sketch follows at the end of this post.

5. You may need to fine tune your Spark interpreter settings in Zeppelin to meet your needs, for example SPARK_DRIVER_MEMORY, spark.executor.cores, spark.executor.instances, and spark.executor.memory. It also helped to see what was happening by tailing the Zeppelin log for Spark:

tailf zeppelin-interpreter-spark2-spark-zeppelin-cluster1.hwx.log
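For step 4, here is a minimal sketch of the kind of follow-up query you can run against the same "audit" temp view registered above. The column names come from the schema printed earlier; the aggregation itself is just an illustration to adapt.

%spark2.spark
// --Count denials per user and repository, using the "audit" temp view created above
val deniedByUserDF = spark.sql(
  "SELECT reqUser, repo, count(*) AS denials " +
  "FROM audit WHERE result = 0 " +
  "GROUP BY reqUser, repo " +
  "ORDER BY denials DESC")
deniedByUserDF.show(20, false)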
12-10-2018
04:57 PM
If you are on a newer version of Ambari, I recommend you take advantage of the FreeIPA option (basically AD for Red Hat).
10-15-2018
03:23 PM
My pleasure! @Jasper
08-03-2018
03:42 PM
@V_A n On the unsecured LDAP port 389, tcpdump the traffic when the login fails and post it here so I can look at the error.
08-03-2018
03:34 PM
@David Liu Hi, check that sssd returns the user's groups when you run id username on all nodes. Then check your core-site.xml and make sure to remove any references to LDAP or other non-default configs in that area; it is possible to map multiple group providers there, so it may be a configuration issue with core-site.xml. Make sure you also do a full restart of MapReduce and YARN as well as HDFS.
06-25-2018
04:01 PM
@Pankaj Singh Try this one. https://github.com/emaxwell-hw/Atlas-Ranger-Tag-Security
05-23-2018
02:22 AM
2 Kudos
Example topology for Kerberos authentication and Hive:

[root@groot1 hive]# cat /etc/knox/2.6.0.3-8/0/topologies/kerberos.xml

<topology>
<gateway>
<provider>
<role>authentication</role>
<name>HadoopAuth</name>
<enabled>true</enabled>
<param>
<name>config.prefix</name>
<value>hadoop.auth.config</value>
</param>
<param>
<name>hadoop.auth.config.signature.secret</name>
<value>hadoop12345!</value>
</param>
<param>
<name>hadoop.auth.config.type</name>
<value>kerberos</value>
</param>
<param>
<name>hadoop.auth.config.simple.anonymous.allowed</name>
<value>false</value>
</param>
<param>
<name>hadoop.auth.config.token.validity</name>
<value>1800</value>
</param>
<param>
<name>hadoop.auth.config.cookie.domain</name>
<value>openstacklocal</value>
</param>
<param>
<name>hadoop.auth.config.cookie.path</name>
<value>/gateway/kerberos/hive</value>
</param>
<param>
<name>hadoop.auth.config.kerberos.principal</name>
<value>HTTP/groot1.openstacklocal@SUPPORT.COM</value>
</param>
<param>
<name>hadoop.auth.config.kerberos.keytab</name>
<value>/etc/security/keytabs/spnego.service.keytab</value>
</param>
<param>
<name>hadoop.auth.config.kerberos.name.rules</name>
<value>DEFAULT</value>
</param>
</provider>
<provider>
<role>identity-assertion</role>
<name>Default</name>
<enabled>true</enabled>
</provider>
<provider>
<role>authorization</role>
<name>AclsAuthz</name>
<enabled>false</enabled>
</provider>
</gateway>
<service>
<role>NAMENODE</role>
<url>hdfs://groot1.openstacklocal:8020</url>
</service>
<service>
<role>JOBTRACKER</role>
<url>rpc://master2.openstacklocal:8050</url>
</service>
<service>
<role>WEBHDFS</role>
<url>http://groot1.openstacklocal:50070/webhdfs</url>
</service>
<service>
<role>WEBHCAT</role>
<url>http://master2.openstacklocal:50111/templeton</url>
</service>
<service>
<role>HIVE</role>
<url>http://groot1.openstacklocal:10001/cliservice</url>
</service>
<service>
<role>RESOURCEMANAGER</role>
<url>http://master2.openstacklocal:8088/ws</url>
</service>
</topology>

Example of how to use it (don't forget the Knox proxy settings in core-site.xml, and if you run into trouble restart both Hive and Knox):

[root@groot1 hive]# kinit dvillarreal
Password for dvillarreal@SUPPORT.COM:
[root@groot1 hive]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: dvillarreal@SUPPORT.COM
Valid starting Expires Service principal
05/22/18 22:54:43 05/23/18 08:54:40 krbtgt/SUPPORT.COM@SUPPORT.COM
renew until 05/29/18 22:54:43
[root@groot1 hive]# beeline
Beeline version 1.2.1000.2.6.0.3-8 by Apache Hive
beeline> !connect jdbc:hive2://groot1.openstacklocal:8443/;ssl=true;principal=HTTP/_HOST@SUPPORT.COM;transportMode=http;httpPath=gateway/kerberos/hive
Connecting to jdbc:hive2://groot1.openstacklocal:8443/;ssl=true;principal=HTTP/_HOST@SUPPORT.COM;transportMode=http;httpPath=gateway/kerberos/hive
Enter username for jdbc:hive2://groot1.openstacklocal:8443/;ssl=true;principal=HTTP/_HOST@SUPPORT.COM;transportMode=http;httpPath=gateway/kerberos/hive:
Enter password for jdbc:hive2://groot1.openstacklocal:8443/;ssl=true;principal=HTTP/_HOST@SUPPORT.COM;transportMode=http;httpPath=gateway/kerberos/hive:
Connected to: Apache Hive (version 1.2.1000.2.6.0.3-8)
Driver: Hive JDBC (version 1.2.1000.2.6.0.3-8)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://groot1.openstacklocal:8443/> show databases;
+----------------+--+
| database_name |
+----------------+--+
| default |
+----------------+--+
1 row selected (8.169 seconds)
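If you want to do the same thing programmatically instead of from beeline, here is a minimal sketch of a JDBC client going through Knox. It reuses the JDBC URL shown above and assumes the standard Hive JDBC driver is on the classpath, a valid Kerberos ticket from kinit, and that the Knox gateway SSL certificate is trusted by the JVM; adjust for your environment.

import java.sql.DriverManager

object KnoxHiveJdbcExample {
  def main(args: Array[String]): Unit = {
    // Load the Hive JDBC driver (the hive-jdbc jar must be on the classpath)
    Class.forName("org.apache.hive.jdbc.HiveDriver")

    // Same connection string used with beeline above; relies on an existing
    // Kerberos ticket (kinit) and a trusted Knox gateway certificate.
    val url = "jdbc:hive2://groot1.openstacklocal:8443/;ssl=true;" +
      "principal=HTTP/_HOST@SUPPORT.COM;transportMode=http;httpPath=gateway/kerberos/hive"

    val conn = DriverManager.getConnection(url)
    try {
      val rs = conn.createStatement().executeQuery("show databases")
      while (rs.next()) println(rs.getString(1))
    } finally {
      conn.close()
    }
  }
}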
05-16-2018
08:27 PM
This Ranger Jira actually depends on a Hive Jira being fixed first.
05-09-2018
05:36 PM
1 Kudo
@Bhushan Kandalkar When I looked at your original error from the Knox gateway.log, I see: dispatching request: http://hadmgrndcc03-3.test.org:10001/cliservice?user.name=guest org.apache.http.NoHttpResponseException. The gateway-audit.log should show this as well: at dispatch time, Knox has a problem communicating with Hive. This tells me that you never changed your Knox topology so that the HIVE service URL uses the correct protocol, https instead of http. Make sure the topology tells Knox to use https rather than http when communicating with Hive.
04-24-2018
09:04 PM
Keep in mind, the Taxonomy feature is still in Tech Preview (i.e., not recommended for production use) and is not supported. Taxonomy will be production ready (GA) in HDP 3.0.
03-22-2018
05:23 PM
Hi @Kyunam Kim, I am not sure of your full requirements, but why not use WebHDFS through the Knox gateway? It may make things much easier for you. https://knox.apache.org/books/knox-1-0-0/user-guide.html#WebHDFS Best regards, David
03-19-2018
08:27 PM
Hi @subbiram Padala, Not sure if this fits your use case, but I found this tutorial that may shed light on the process. https://hortonworks.com/hadoop-tutorial/searching-data-solr/
03-19-2018
06:43 PM
Hi @V_A n, I think there is a problem with your configuration for HDP. It looks like it is failing in the code that gets user roles from shiro.ini:

/***
* Get user roles from shiro.ini for Zeppelin LdapRealm
* @param r
* @return
*/
public List<String> getRolesList(LdapRealm r) {
List<String> roleList = new ArrayList<>();
Map<String, String> roles = r.getListRoles();
if (roles != null) {
Iterator it = roles.entrySet().iterator();
while (it.hasNext()) {
Map.Entry pair = (Map.Entry) it.next();
if (LOG.isDebugEnabled()) {
LOG.debug("RoleKeyValue: " + pair.getKey() +
" = " + pair.getValue());
}
roleList.add((String) pair.getKey());
}
}
return roleList;
}

Please check that the following has been done correctly for HDP: https://community.hortonworks.com/articles/105169/hdp-26-configuring-zeppelin-for-active-directory-u.html
03-16-2018
06:08 PM
Hi @ay mu, If you are on a newer release of Ranger, you could just create a new Ranger database in MySQL and import the policies over using the import/export policy feature. https://cwiki.apache.org/confluence/display/RANGER/User+Guide+For+Import-Export
03-16-2018
04:37 PM
Hi @Karl Fredrickson, check this out: https://github.com/rajkrrsingh/HiveServer2JDBCSample/blob/master/src/main/java/HiveJDBCOverHTTP.java Hope it helps.
03-16-2018
04:11 PM
1 Kudo
Hi @Dominique De Vito, Yes, it does. If you then click on the 'Update' action, it will show you what was updated. In the example below, Users was empty and now it has the user HTTP. Hope this is what you were looking for.
03-16-2018
03:56 PM
Hi @datta ningole, Here is an example of how you would do this. https://github.com/HortonworksUniversity/Security_Labs/blob/master/HDP-2.6-AD.md#setup-ados-integration-via-sssd
03-14-2018
05:57 PM
Repo Description: For fun, and related to a case, I did this simple example of how to use the Ranger API. This particular script just shows how to search for a user and confirm the id of the user whose role you would like to change (Admin/User). To learn more about the Ranger API: http://ranger.apache.org/apidocs/index.html http://ranger.apache.org/apidocs/ui/index.html

Repo Info
Github Repo URL: https://github.com/davhortonworks/changerangerrole/blob/master/chrole.py
Github account name: davhortonworks
Repo name: changerangerrole
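As a rough illustration of the same search step outside the Python script, here is a minimal Scala sketch that lists users through the Ranger admin REST API. The Ranger host and the admin:admin credentials are placeholders, and the /service/xusers/users endpoint is an assumption to verify against the API docs linked above for your Ranger version; treat this as a sketch to adapt, not a drop-in tool.

import java.net.{HttpURLConnection, URL}
import java.util.Base64
import scala.io.Source

object RangerUserSearch {
  def main(args: Array[String]): Unit = {
    // Placeholder Ranger admin host and credentials -- replace with your own
    val rangerAdmin = "http://ranger-admin.example.com:6080"
    val auth = Base64.getEncoder.encodeToString("admin:admin".getBytes("UTF-8"))

    // List users; the returned JSON includes each user's id, which a role-change call needs
    val conn = new URL(s"$rangerAdmin/service/xusers/users").openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("GET")
    conn.setRequestProperty("Accept", "application/json")
    conn.setRequestProperty("Authorization", s"Basic $auth")

    // Print the raw JSON response so you can confirm the user and id you want to change
    val body = Source.fromInputStream(conn.getInputStream).mkString
    println(body)
    conn.disconnect()
  }
}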
11-15-2017
10:50 PM
Can you have multiple nested groups? Say you have some nested groups in ou=groups and ou=groups2. If you set the base to ou=groups,dc=test,dc=com;ou=groups2,dc=test,dc=com, will it pick up the hierarchy levels for each OU?
08-29-2017
10:50 PM
This may not work, depending on your version, because of this bug: https://issues.apache.org/jira/browse/RANGER-1554 It should be fixed in HDP 2.6.1.
05-18-2017
12:42 PM
Hi @Qi Wang, Yes, it should be fixed in the next maintenance release. In the meantime, please use the workaround provided. Thanks,
05-18-2017
12:42 PM
Hi @Nicola Marangoni, You may want to take this up on another thread, as the AD implementation does work out of the box. Make sure you use the AD option. Some things to check: 1. Does your server trust the AD SSL certificate being presented, to allow ldaps? 2. To troubleshoot, use port 389 and tcpdump the request and response. 3. ldapsearch is your friend to confirm all parameters.
05-18-2017
12:42 PM
5 Kudos
Hi @Qi Wang There is an issue with the Atlas LDAP settings where the values can get truncated at commas. You can tcpdump port 389 to verify. The solution is basically to escape every comma in the LDAP auth config page; with that in place it works now. Here are the relevant parameters I have set:

atlas.authentication.method.ldap.base.dn=cn=users\,cn=accounts\,dc=field\,dc=hortonworks\,dc=com
atlas.authentication.method.ldap.bind.dn=uid=ldapconnect\,cn=users\,cn=accounts\,dc=field\,dc=hortonworks\,dc=com
atlas.authentication.method.ldap.bind.password=password
atlas.authentication.method.ldap.default.role=ROLE_USER
atlas.authentication.method.ldap.groupRoleAttribute=cn
atlas.authentication.method.ldap.groupSearchBase=cn=groups\,cn=accounts\,dc=field\,dc=hortonworks\,dc=com
atlas.authentication.method.ldap.groupSearchFilter=(member=uid={0}\,cn=users\,cn=accounts\,dc=field\,dc=hortonworks\,dc=com)
atlas.authentication.method.ldap.referral=ignore
atlas.authentication.method.ldap.type=ldap
atlas.authentication.method.ldap.url=ldap://ldapserver.local:389
atlas.authentication.method.ldap.user.searchfilter=(uid={0})
atlas.authentication.method.ldap.userDNpattern=uid={0}\,cn=users\,cn=accounts\,dc=field\,dc=hortonworks\,dc=com
05-12-2017
03:49 PM
Most likely something is wrong with your load balancer configuration. Here is an example: http://knox.apache.org/books/knox-0-12-0/user-guide.html#High+Availability+with+Apache+HTTP+Server+++mod_proxy+++mod_proxy_balancer
05-12-2017
06:23 AM
Here it is working on my server. Maybe this may shed some light. [root@groot1 topologies]# curl -ivk -u dvillarreal 'https://localhost:8443/gateway/default/webhdfs/v1/zone_encr/?op=LISTSTATUS' Enter host password for user 'dvillarreal': * About to connect() to localhost port 8443 (#0) * Trying 127.0.0.1... connected * Connected to localhost (127.0.0.1) port 8443 (#0) * Initializing NSS with certpath: sql:/etc/pki/nssdb * warning: ignoring value of ssl.verifyhost * skipping SSL peer certificate verification * SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 * Server certificate: * subject: CN=groot1.openstacklocal,OU=Test,O=Hadoop,L=Test,ST=Test,C=US * start date: Apr 27 20:08:39 2017 GMT * expire date: Apr 27 20:08:39 2018 GMT * common name: groot1.openstacklocal * issuer: CN=groot1.openstacklocal,OU=Test,O=Hadoop,L=Test,ST=Test,C=US * Server auth using Basic with user 'dvillarreal' > GET /gateway/default/webhdfs/v1/zone_encr/?op=LISTSTATUS HTTP/1.1 > Authorization: Basic ZHZpbGxhcnJlYWw6aGFkb29wMTIzNDUh > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2 > Host: localhost:8443 > Accept: */* > < HTTP/1.1 200 OK HTTP/1.1 200 OK < Date: Thu, 11 May 2017 23:33:08 GMT Date: Thu, 11 May 2017 23:33:08 GMT < Set-Cookie: JSESSIONID=ayrhe6eilreq1egldq6hc6uu4;Path=/gateway/default;Secure;HttpOnly Set-Cookie: JSESSIONID=ayrhe6eilreq1egldq6hc6uu4;Path=/gateway/default;Secure;HttpOnly < Expires: Thu, 01 Jan 1970 00:00:00 GMT Expires: Thu, 01 Jan 1970 00:00:00 GMT < Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; Expires=Wed, 10-May-2017 23:33:09 GMT Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; Expires=Wed, 10-May-2017 23:33:09 GMT < Cache-Control: no-cache Cache-Control: no-cache < Expires: Thu, 11 May 2017 23:33:09 GMT Expires: Thu, 11 May 2017 23:33:09 GMT < Date: Thu, 11 May 2017 23:33:09 GMT Date: Thu, 11 May 2017 23:33:09 GMT < Pragma: no-cache Pragma: no-cache < Expires: Thu, 11 May 2017 23:33:09 GMT Expires: Thu, 11 May 2017 23:33:09 GMT < Date: Thu, 11 May 2017 23:33:09 GMT Date: Thu, 11 May 2017 23:33:09 GMT < Pragma: no-cache Pragma: no-cache < X-FRAME-OPTIONS: SAMEORIGIN X-FRAME-OPTIONS: SAMEORIGIN < Content-Type: application/json; charset=UTF-8 Content-Type: application/json; charset=UTF-8 < Server: Jetty(6.1.26.hwx) Server: Jetty(6.1.26.hwx) < Content-Length: 1419 Content-Length: 1419 < {"FileStatuses":{"FileStatus":[{"accessTime":0,"blockSize":0,"childrenNum":0,"encBit":true,"fileId":37375,"group":"hdfs","length":0,"modificationTime":1494457290326,"owner":"hdfs","pathSuffix":".Trash","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},{"accessTime":1494520647770,"blockSize":134217728,"childrenNum":0,"encBit":true,"fileId":38869,"group":"hdfs","length":0,"modificationTime":1494520647770,"owner":"dvillarreal","pathSuffix":"Screen Shot 2017-04-28 at 3.40.25 PM.png","permission":"644","replication":3,"storagePolicy":0,"type":"FILE"},{"accessTime":1494521001429,"blockSize":134217728,"childrenNum":0,"encBit":true,"fileId":38879,"group":"hdfs","length":52624,"modificationTime":1494521002174,"owner":"dvillarreal","pathSuffix":"Screen Shot 2017-04-28 at 3.43.55 PM.png","permission":"644","replication":3,"storagePolicy":0,"type":"FILE"},{"accessTime":1494519148636,"blockSize":134217728,"childrenNum":0,"encBit":true,"fileId":38834,"group":"hdfs","length":0,"modificationTime":1494* Connection #0 to host localhost left intact * Closing connection #0 
519148636,"owner":"dvillarreal","pathSuffix":"mag7.jpg","permission":"644","replication":3,"storagePolicy":0,"type":"FILE"},{"accessTime":1494457615918,"blockSize":134217728,"childrenNum":0,"encBit":true,"fileId":37384,"group":"hdfs","length":28,"modificationTime":1494457616450,"owner":"dvillarreal","pathSuffix":"test.txt","permission":"644","replication":3,"storagePolicy":0,"type":"FILE [root@groot1 topologies]# curl -ivLk -u dvillarreal 'https://localhost:8443/gateway/default/webhdfs/v1/zone_encr/test.txt?op=OPEN' Enter host password for user 'dvillarreal': * About to connect() to localhost port 8443 (#0) * Trying 127.0.0.1... connected * Connected to localhost (127.0.0.1) port 8443 (#0) * Initializing NSS with certpath: sql:/etc/pki/nssdb * warning: ignoring value of ssl.verifyhost * skipping SSL peer certificate verification * SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 * Server certificate: * subject: CN=groot1.openstacklocal,OU=Test,O=Hadoop,L=Test,ST=Test,C=US * start date: Apr 27 20:08:39 2017 GMT * expire date: Apr 27 20:08:39 2018 GMT * common name: groot1.openstacklocal * issuer: CN=groot1.openstacklocal,OU=Test,O=Hadoop,L=Test,ST=Test,C=US * Server auth using Basic with user 'dvillarreal' > GET /gateway/default/webhdfs/v1/zone_encr/test.txt?op=OPEN HTTP/1.1 > Authorization: Basic ZHZpbGxhcnJlYWw6aGFkb29wMTIzNDUh > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2 > Host: localhost:8443 > Accept: */* > < HTTP/1.1 307 Temporary Redirect HTTP/1.1 307 Temporary Redirect < Date: Thu, 11 May 2017 23:33:53 GMT Date: Thu, 11 May 2017 23:33:53 GMT < Set-Cookie: JSESSIONID=cmi3xz9vz22aztv01vy60fje;Path=/gateway/default;Secure;HttpOnly Set-Cookie: JSESSIONID=cmi3xz9vz22aztv01vy60fje;Path=/gateway/default;Secure;HttpOnly < Expires: Thu, 01 Jan 1970 00:00:00 GMT Expires: Thu, 01 Jan 1970 00:00:00 GMT < Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; Expires=Wed, 10-May-2017 23:33:54 GMT Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; Expires=Wed, 10-May-2017 23:33:54 GMT < Cache-Control: no-cache Cache-Control: no-cache < Expires: Thu, 11 May 2017 23:33:54 GMT Expires: Thu, 11 May 2017 23:33:54 GMT < Date: Thu, 11 May 2017 23:33:54 GMT Date: Thu, 11 May 2017 23:33:54 GMT < Pragma: no-cache Pragma: no-cache < Expires: Thu, 11 May 2017 23:33:54 GMT Expires: Thu, 11 May 2017 23:33:54 GMT < Date: Thu, 11 May 2017 23:33:54 GMT Date: Thu, 11 May 2017 23:33:54 GMT < Pragma: no-cache Pragma: no-cache < X-FRAME-OPTIONS: SAMEORIGIN X-FRAME-OPTIONS: SAMEORIGIN < Content-Type: application/octet-stream Content-Type: application/octet-stream < Location: https://localhost:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/zone_encr/test.txt?_=AAAACAAAABAAAAEQGeIZcVX_mUa9HOTHUCBIZ7b_iNiz924O7UBVlI3ZPZeYbhzO8LW0SVhKlX3zUvhuykF7TisStFefLuYdHNSYIOmsoeB3MPAoVIGUvnTHmlEBko2aDm6r7OvYm0Ytkk4WhS5Xtn-TSWPt5OGYsa-trOUi2OyTY5lkGw0Iy-iKrlSV_svcO_0hX53C73NnCCMBJYVV8NiCHUX0qpv7IzcYZGCS2wyiuwwNnhPexTUpJcCZhT40MjMCCDauex_uaUdgYHPZKFH1BzFtIJKWYUbGKe_KiB4goWEyVqF2NHj0R58-jLcYewuPClbmquX3A8VHt9O2YSw-_WWtb_nIsTx1HMYFC5iPajfqsk9FKxtSTBtpP0dkhrjBnWfa15chNgfrZIaZ5cr5Er4 Location: 
https://localhost:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/zone_encr/test.txt?_=AAAACAAAABAAAAEQGeIZcVX_mUa9HOTHUCBIZ7b_iNiz924O7UBVlI3ZPZeYbhzO8LW0SVhKlX3zUvhuykF7TisStFefLuYdHNSYIOmsoeB3MPAoVIGUvnTHmlEBko2aDm6r7OvYm0Ytkk4WhS5Xtn-TSWPt5OGYsa-trOUi2OyTY5lkGw0Iy-iKrlSV_svcO_0hX53C73NnCCMBJYVV8NiCHUX0qpv7IzcYZGCS2wyiuwwNnhPexTUpJcCZhT40MjMCCDauex_uaUdgYHPZKFH1BzFtIJKWYUbGKe_KiB4goWEyVqF2NHj0R58-jLcYewuPClbmquX3A8VHt9O2YSw-_WWtb_nIsTx1HMYFC5iPajfqsk9FKxtSTBtpP0dkhrjBnWfa15chNgfrZIaZ5cr5Er4 < Server: Jetty(6.1.26.hwx) Server: Jetty(6.1.26.hwx) < Content-Length: 0 Content-Length: 0 < * Connection #0 to host localhost left intact * Issue another request to this URL: 'https://localhost:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/zone_encr/test.txt?_=AAAACAAAABAAAAEQGeIZcVX_mUa9HOTHUCBIZ7b_iNiz924O7UBVlI3ZPZeYbhzO8LW0SVhKlX3zUvhuykF7TisStFefLuYdHNSYIOmsoeB3MPAoVIGUvnTHmlEBko2aDm6r7OvYm0Ytkk4WhS5Xtn-TSWPt5OGYsa-trOUi2OyTY5lkGw0Iy-iKrlSV_svcO_0hX53C73NnCCMBJYVV8NiCHUX0qpv7IzcYZGCS2wyiuwwNnhPexTUpJcCZhT40MjMCCDauex_uaUdgYHPZKFH1BzFtIJKWYUbGKe_KiB4goWEyVqF2NHj0R58-jLcYewuPClbmquX3A8VHt9O2YSw-_WWtb_nIsTx1HMYFC5iPajfqsk9FKxtSTBtpP0dkhrjBnWfa15chNgfrZIaZ5cr5Er4' * Re-using existing connection! (#0) with host localhost * Connected to localhost (127.0.0.1) port 8443 (#0) * Server auth using Basic with user 'dvillarreal' > GET /gateway/default/webhdfs/data/v1/webhdfs/v1/zone_encr/test.txt?_=AAAACAAAABAAAAEQGeIZcVX_mUa9HOTHUCBIZ7b_iNiz924O7UBVlI3ZPZeYbhzO8LW0SVhKlX3zUvhuykF7TisStFefLuYdHNSYIOmsoeB3MPAoVIGUvnTHmlEBko2aDm6r7OvYm0Ytkk4WhS5Xtn-TSWPt5OGYsa-trOUi2OyTY5lkGw0Iy-iKrlSV_svcO_0hX53C73NnCCMBJYVV8NiCHUX0qpv7IzcYZGCS2wyiuwwNnhPexTUpJcCZhT40MjMCCDauex_uaUdgYHPZKFH1BzFtIJKWYUbGKe_KiB4goWEyVqF2NHj0R58-jLcYewuPClbmquX3A8VHt9O2YSw-_WWtb_nIsTx1HMYFC5iPajfqsk9FKxtSTBtpP0dkhrjBnWfa15chNgfrZIaZ5cr5Er4 HTTP/1.1 > Authorization: Basic ZHZpbGxhcnJlYWw6aGFkb29wMTIzNDUh > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2 > Host: localhost:8443 > Accept: */* > < HTTP/1.1 200 OK HTTP/1.1 200 OK < Date: Thu, 11 May 2017 23:33:54 GMT Date: Thu, 11 May 2017 23:33:54 GMT < Set-Cookie: JSESSIONID=vhf31ukoxmintdfe2h5ekg2y;Path=/gateway/default;Secure;HttpOnly Set-Cookie: JSESSIONID=vhf31ukoxmintdfe2h5ekg2y;Path=/gateway/default;Secure;HttpOnly < Expires: Thu, 01 Jan 1970 00:00:00 GMT Expires: Thu, 01 Jan 1970 00:00:00 GMT < Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; Expires=Wed, 10-May-2017 23:33:54 GMT Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; Expires=Wed, 10-May-2017 23:33:54 GMT < Access-Control-Allow-Methods: GET Access-Control-Allow-Methods: GET < Access-Control-Allow-Origin: * Access-Control-Allow-Origin: * < Content-Type: application/octet-stream Content-Type: application/octet-stream < Connection: close Connection: close < Server: Jetty(9.2.15.v20160210) Server: Jetty(9.2.15.v20160210) < This is a test for enc zone * Closing connection #0
05-11-2017
11:21 PM
Maybe I am not understanding the scenario completely but I don't think this is possible.
05-11-2017
11:16 PM
1 Kudo
One issue with this is that if you upgrade your version of Java in the future, it may overwrite your cacerts file. Best practice would be to put the truststore in a separate location, for example /etc/ssl/ranger/trust.jks.
05-11-2017
10:59 PM
Each service has its own default service account that Ambari will create, and these service accounts can belong to the hadoop group. You can customize these if needed. https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.0.3/bk_ambari-administration/content/defining_service_users_and_groups_for_a_hdp_2x_stack.html Is ec2-user the user account that the Ambari service runs as? Maybe that is what you did?
04-24-2017
04:44 PM
Hi @Kent Baxley, Looks like the doc is missing the plan for backing up the VERSION file.