Member since: 10-20-2015
Posts: 92
Kudos Received: 78
Solutions: 9
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 935 | 06-25-2018 04:01 PM
 | 1690 | 05-09-2018 05:36 PM
 | 464 | 03-16-2018 04:11 PM
 | 2316 | 05-18-2017 12:42 PM
 | 1852 | 03-28-2017 06:42 PM
04-24-2017
04:44 PM
Hi @Kent Baxley, Looks like the doc is missing the plan for backing up the VERSION file.
04-13-2017
05:44 PM
2 Kudos
delete-scripts.zip: Attached are scripts you can use with Oracle, PostgreSQL, and MySQL for deleting users. Directions for how these scripts work are included in each script. As a good practice, back up your database before attempting to use and test these scripts.
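For example, a minimal backup of a MySQL-backed Ranger database before running the scripts (a sketch assuming the database is named ranger and the database user is rangeradmin; adjust names and credentials for your environment):
[root@chupa1 ~]# mysqldump -u rangeradmin -p ranger > /tmp/ranger-backup-$(date +%F).sql
If something goes wrong, restore from that dump:
[root@chupa1 ~]# mysql -u rangeradmin -p ranger < /tmp/ranger-backup-2017-04-13.sql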
Tags: delete, groups, How-ToTutorial, Ranger, ranger-admin, Script, Security, users
03-28-2017
06:42 PM
Hi @Deepak Sharma, If you are using HDP version 2.5, there is a bug when using wire encryption with Hive and accessing it through Knox in a Kerberized cluster. See https://issues.apache.org/jira/browse/KNOX-762 . In the Knox Kerberos debug log you will see that Knox is trying to authenticate using the SPNEGO keytab with HTTPS instead of HTTP. To resolve this issue, downgrade the httpclient jar to httpclient-4.5.1.jar on Knox.
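A rough sketch of the jar swap on the Knox host, assuming the Knox dependency directory is /usr/hdp/current/knox-server/dep and the newer jar is httpclient-4.5.2.jar (both are assumptions; check your installation for the actual path and version):
[root@knoxhost ~]# mv /usr/hdp/current/knox-server/dep/httpclient-4.5.2.jar /tmp/
[root@knoxhost ~]# cp /path/to/httpclient-4.5.1.jar /usr/hdp/current/knox-server/dep/
Then restart Knox (for example, from Ambari).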
03-06-2017
09:11 PM
@badr bakkou This would probably be best answered if you submitted as a new question. Provide the gateway.log & gateway-audit.log outputs, topology, and lastly the configuration string you are using with its associated output. Best regards, David
03-06-2017
08:56 PM
@Hajime It is not mandatory for WebHDFS to work. However, it is good practice to make this change in a NameNode HA environment, as other services like Oozie use this for doing rewrites.
03-01-2017
12:40 AM
9 Kudos
1. An HA provider for WebHDFS is needed in your topology.
<provider>
<role>ha</role>
<name>HaProvider</name>
<enabled>true</enabled>
<param>
<name>WEBHDFS</name>
<value>maxFailoverAttempts=3;failoverSleep=1000;maxRetryAttempts=300;retrySleep=1000;enabled=true</value>
</param>
</provider>
2. The NAMENODE service URL value should contain your nameservice ID. (This is the value of the dfs.internal.nameservices parameter in your hdfs-site.xml.)
<service>
<role>NAMENODE</role>
<url>hdfs://chupa</url>
</service>
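For reference, the nameservice ID used above maps to an hdfs-site.xml entry like the following (a sketch assuming the nameservice is named chupa, as in this topology):
<property>
<name>dfs.internal.nameservices</name>
<value>chupa</value>
</property>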
3. Make sure the WebHDFS URL for each NameNode is added in your WEBHDFS service section.
<service>
<role>WEBHDFS</role>
<url>http://chupa1.openstacklocal:50070/webhdfs</url>
<url>http://chupa2.openstacklocal:50070/webhdfs</url>
</service>
4. Here is a working topology using the Knox default demo LDAP.
<topology>
<gateway>
<provider>
<role>authentication</role>
<name>ShiroProvider</name>
<enabled>true</enabled>
<param>
<name>sessionTimeout</name>
<value>30</value>
</param>
<param>
<name>main.ldapRealm</name>
<value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
</param>
<param>
<name>main.ldapRealm.userDnTemplate</name>
<value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
</param>
<param>
<name>main.ldapRealm.contextFactory.url</name>
<value>ldap://chupa1.openstacklocal:33389</value>
</param>
<param>
<name>main.ldapRealm.contextFactory.authenticationMechanism</name>
<value>simple</value>
</param>
<param>
<name>urls./**</name>
<value>authcBasic</value>
</param>
</provider>
<provider>
<role>identity-assertion</role>
<name>Default</name>
<enabled>true</enabled>
</provider>
<provider>
<role>authorization</role>
<name>XASecurePDPKnox</name>
<enabled>true</enabled>
</provider>
<provider>
<role>ha</role>
<name>HaProvider</name>
<enabled>true</enabled>
<param>
<name>WEBHDFS</name>
<value>maxFailoverAttempts=3;failoverSleep=1000;maxRetryAttempts=300;retrySleep=1000;enabled=true</value>
</param>
</provider>
</gateway>
<service>
<role>NAMENODE</role>
<url>hdfs://chupa</url>
</service>
<service>
<role>JOBTRACKER</role>
<url>rpc://chupa3.openstacklocal:8050</url>
</service>
<service>
<role>WEBHDFS</role>
<url>http://chupa1.openstacklocal:50070/webhdfs</url>
<url>http://chupa2.openstacklocal:50070/webhdfs</url>
</service>
<service>
<role>WEBHCAT</role>
<url>http://chupa2.openstacklocal:50111/templeton</url>
</service>
<service>
<role>OOZIE</role>
<url>http://chupa2.openstacklocal:11000/oozie</url>
</service>
<service>
<role>WEBHBASE</role>
<url>http://chupa1.openstacklocal:8080</url>
</service>
<service>
<role>HIVE</role>
<url>http://chupa2.openstacklocal:10001/cliservice</url>
</service>
<service>
<role>RESOURCEMANAGER</role>
<url>http://chupa3.openstacklocal:8088/ws</url>
</service>
<service>
<role>RANGERUI</role>
<url>http://chupa3.openstacklocal:6080</url>
</service>
</topology>
5. If you would like to test that it is working, you can issue the following command to manually fail over the NameNodes and then test:
hdfs haadmin -failover nn1 nn2
6. Test with the Knox connection string to WebHDFS:
curl -vik -u admin:admin-password 'https://localhost:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS'
Tags: HA, How-ToTutorial, Knox, knox-namenode-ha, namenode, namenode-ha, Security
02-27-2017
07:28 PM
The user search filter can be anything you would like to filter on further within the OUs, or you can leave it at a default setting. For example, in AD: sAMAccountName=* or sAMAccountName={0}; in OpenLDAP: cn=* or cn={0}.
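If you want to sanity-check a filter before putting it into your configuration, you can test it with ldapsearch (the host, bind DN, base DN, and username below are hypothetical; substitute your own):
ldapsearch -x -h ad.example.com -p 389 -D "binduser@example.com" -W -b "dc=example,dc=com" "(sAMAccountName=jdoe)"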
02-10-2017
11:16 PM
2 Kudos
1. Create User
[root@chupa1 ~]# curl -iv -u admin:admin -H "X-Requested-By: ambari" -X POST -d '{"Users/user_name":"dav","Users/password":"pass","Users/active":"true","Users/admin":"false"}' http://localhost:8080/api/v1/users
* About to connect() to localhost port 8080 (#0)
* Trying ::1... connected
* Connected to localhost (::1) port 8080 (#0)
* Server auth using Basic with user 'admin'
> POST /api/v1/users HTTP/1.1
> Authorization: Basic YWRtaW46YWRtaW4=
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:8080
> Accept: */*
> X-Requested-By: ambari
> Content-Length: 93
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 201 Created
HTTP/1.1 201 Created
2. Create Group
[root@chupa1 ~]# curl -iv -u admin:admin -H "X-Requested-By: ambari" -X POST -d '{"Groups/group_name":"davgroup"}' http://localhost:8080/api/v1/groups
* About to connect() to localhost port 8080 (#0)
* Trying ::1... connected
* Connected to localhost (::1) port 8080 (#0)
* Server auth using Basic with user 'admin'
> POST /api/v1/groups HTTP/1.1
> Authorization: Basic YWRtaW46YWRtaW4=
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:8080
> Accept: */*
> X-Requested-By: ambari
> Content-Length: 32
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 201 Created
HTTP/1.1 201 Created
3. Map user to Group
[root@chupa1 ~]# curl -iv -u admin:admin -H "X-Requested-By: ambari" -X POST -d '{"MemberInfo/user_name":"dav", "MemberInfo/group_name":"davgroup"}' http://localhost:8080/api/v1/groups/davgroup/members
* About to connect() to localhost port 8080 (#0)
* Trying ::1... connected
* Connected to localhost (::1) port 8080 (#0)
* Server auth using Basic with user 'admin'
> POST /api/v1/groups/davgroup/members HTTP/1.1
> Authorization: Basic YWRtaW46YWRtaW4=
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:8080
> Accept: */*
> X-Requested-By: ambari
> Content-Length: 66
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 201 Created
HTTP/1.1 201 Created
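To confirm the membership was created, you can read it back from the same local Ambari server (a quick check, not part of the original steps):
[root@chupa1 ~]# curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/groups/davgroup/members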
Tags: Ambari, groups, How-ToTutorial, solutions, users
02-09-2017
10:17 PM
@Saikiran Parepally Article created for future reference. https://community.hortonworks.com/content/kbentry/82544/how-to-create-ad-principal-accounts-using-openldap.html
02-09-2017
08:16 PM
12 Kudos
Your AD admins may be busy, and you may already know the Ambari admin principal used for enabling Kerberos. How would you go about adding a principal to AD with this information and adding it to your Kerberos keytab? Below is one way to do it. Thanks to @Robert Levas for collaborating with me on this.
1. Create the LDIF file add_user.ldif. (Make sure there are no trailing spaces at the end of any line, and separate the three entries below with blank lines.)
dn: CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM
changetype: add
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
distinguishedName: CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM
cn: HTTP/loadbalancerhost
userAccountControl: 514
accountExpires: 0
userPrincipalName: HTTP/loadbalancerhost@HOST.COM
servicePrincipalName: HTTP/loadbalancerhost

dn: CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=host,DC=com
changetype: modify
replace: unicodePwd
unicodePwd::IgBoAGEAZABvAG8AcABSAG8AYwBrAHMAMQAyADMAIQAiAA==

dn: CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM
changetype: modify
replace: userAccountControl
userAccountControl: 66048
Do not leave spaces at the ends of the lines above, or you will get an error like the following:
ldap_add: No such attribute (16)
additional info: 00000057: LdapErr: DSID-0C090D8A, comment: Error in attribute conversion operation, data 0, v2580
2. Create the Unicode password value for the above principal (here, the password is hadoopRocks123!) and use it to replace the unicodePwd value in step 1:
[root@host1 ~]# echo -n '"hadoopRocks123!"' | iconv -f UTF8 -t UTF16LE | base64 -w 0
IgBoAGEAZABvAG8AcABSAG8AYwBrAHMAMQAyADMAIQAiAA==
3. Add the account to AD:
[root@host1 ~]# ldapadd -x -H ldaps://sme-2012-ad.support.com:636 -D "test1@host.com" -W -f add_user.ldif
Enter LDAP Password:
adding new entry "CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM"
modifying entry "CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=com"
modifying entry "CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM"
4. Test the account with kinit:
[root@host1 ~]# kinit HTTP/loadbalancerhost@HOST.COM
Password for HTTP/loadbalancerhost@HOST.COM:
[root@host1 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: HTTP/loadbalancerhost@HOST.COM
Valid starting Expires Service principal
02/09/17 19:02:33 02/10/17 19:02:33 krbtgt/HOST.COM@HOST.COM
renew until 02/09/17 19:02:33
5. Take it one step further if you need to add the principal to a keytab file:
[root@host1 ~]# ktutil
ktutil: add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e aes128-cts-hmac-sha1-96
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil: add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e aes256-cts-hmac-sha1-96
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil: add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e arcfour-hmac-md5-exp
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil: add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e des3-cbc-sha1
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil: add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e des-cbc-md5
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil: write_kt spenego.service.keytab
ktutil: exit
[root@host1 ~]# klist -ket spenego.service.keytab
Keytab name: FILE:spenego.service.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (aes128-cts-hmac-sha1-96)
1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (aes256-cts-hmac-sha1-96)
1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (arcfour-hmac-exp)
1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (des3-cbc-sha1)
1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (des-cbc-md5)
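As a final check, verify the keytab itself works by using it with kinit instead of a password (same principal and keytab file as above):
[root@host1 ~]# kinit -kt spenego.service.keytab HTTP/loadbalancerhost@HOST.COM
[root@host1 ~]# klist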
Tags: Ambari, How-ToTutorial, Kerberos, Ranger, Security
02-08-2017
01:34 AM
2 Kudos
This is normal behavior: you have to authenticate with Kerberos on your client, and the browser has to negotiate Kerberos (SPNEGO). On the local host where the web browser is running:
1. Configure krb5.conf with the correct Hadoop realm/domain information.
2. Run kinit to obtain a Kerberos ticket from the KDC server.
3. Configure the web browser to use the Kerberos ticket; how to do this depends on the browser (more details at http://crimsonfu.github.io/2012/06/22/kerberos-browser.html). You can also verify the ticket outside the browser with the curl check below.
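A quick command-line check that the ticket from kinit is accepted by a SPNEGO-protected endpoint (the principal, hostname, and URL below are hypothetical; point it at your own service, for example the WebHDFS REST API):
$ kinit yourprincipal@EXAMPLE.COM
$ curl --negotiate -u : -i "http://namenode.example.com:50070/webhdfs/v1/?op=LISTSTATUS"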
01-19-2017
07:34 PM
Hi @Qi Wang, this should help you learn by example when it comes to configuring your Knox groups and how that relates to your ldapsearch. See Sample 4 specifically: https://cwiki.apache.org/confluence/display/KNOX/Using+Apache+Knox+with+ActiveDirectory Hope this helps.
01-18-2017
10:07 PM
3 Kudos
Hi @Qi Wang, this may also help; I answered a similar question here: https://community.hortonworks.com/questions/74501/how-knox-pass-the-user-information-to-ranger.html
01-17-2017
11:50 PM
Hi @Prasanta Sahoo, Please review the following lab: https://github.com/emaxwell-hw/Atlas-Ranger-Tag-Security
01-16-2017
07:05 PM
Hi @divya, only two UIs are certified for HDP 2.5.x: Ambari and Ranger. The Oozie UI does not work out of the box with Knox at this point, as I have tried it. The UI will come up but only gives you the Oozie UI header; the URLs within the framed page will have issues, so you will not see your jobs, etc.
01-09-2017
09:56 PM
Hi @Jasper, just move the offending keystore out of the Knox keystores directory (for example, on my server: /var/lib/knox/data-2.5.0.0-1133/security/keystores/) and restart Knox.
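For example (the keystore file name below is hypothetical; move the actual offending file, and keep a copy rather than deleting it):
[root@chupa1 ~]# mv /var/lib/knox/data-2.5.0.0-1133/security/keystores/gateway.jks /tmp/
Then restart Knox (for example, from Ambari).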
01-07-2017
01:24 AM
2 Kudos
@Michael Young, This describes the different use cases and why you would want to have that set to false or true. http://hortonworks.com/blog/best-practices-for-hive-authorization-using-apache-ranger-in-hdp-2-2/
01-07-2017
12:22 AM
I wrote about this a while back. It is not really a bug anymore, but rather a default setting that needs to be adjusted based on the data being passed. See https://community.hortonworks.com/articles/33875/knox-queries-fail-quickly-with-a-500-error.html
01-06-2017
08:25 PM
Hi @Dinesh Das,
[root@sandbox ~]# beeline
Beeline version 1.2.1000.2.5.0.0-1245 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000
Connecting to jdbc:hive2://localhost:10000
Enter username for jdbc:hive2://localhost:10000:
Enter password for jdbc:hive2://localhost:10000:
Connected to: Apache Hive (version 1.2.1000.2.5.0.0-1245)
Driver: Hive JDBC (version 1.2.1000.2.5.0.0-1245)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000> show databases;
+----------------+--+
| database_name |
+----------------+--+
| default |
| foodmart |
| xademo |
+----------------+--+
3 rows selected (0.107 seconds)
0: jdbc:hive2://localhost:10000> use xademo;
No rows affected (0.038 seconds)
0: jdbc:hive2://localhost:10000> show tables;
+----------------------+--+
| tab_name |
+----------------------+--+
| call_detail_records |
| customer_details |
| recharge_details |
+----------------------+--+
3 rows selected (0.126 seconds)
0: jdbc:hive2://localhost:10000> select * from customer_details;
+--------------------------------+------------------------+----------------------------+--------------------------+---------------------------+------------------------+--------------------------+--+
| customer_details.phone_number | customer_details.plan | customer_details.rec_date | customer_details.status | customer_details.balance | customer_details.imei | customer_details.region |
+--------------------------------+------------------------+----------------------------+--------------------------+---------------------------+------------------------+--------------------------+--+
| PHONE_NUM | PLAN | REC_DATE | STAUS | BALANCE | IMEI | REGION |
| 5553947406 | 6290 | 20130328 | 31 | 0 | 012565003040464 | R06 |
| 7622112093 | 2316 | 20120625 | 21 | 28 | 359896046017644 | R02 |
| 5092111043 | 6389 | 20120610 | 21 | 293 | 012974008373781 | R06 |
| 9392254909 | 4002 | 20110611 | 21 | 178 | 357004045763373 | R04 |
| 7783343634 | 2276 | 20121214 | 31 | 0 | 354643051707734 | R02 |
| 5534292073 | 6389 | 20120223 | 31 | 83 | 359896040168211 | R06 |
| 9227087403 | 4096 | 20081010 | 31 | 35 | 356927012514661 | R04 |
| 9226203167 | 4060 | 20060527 | 21 | 450 | 010589003666377 | R04 |
| 9221154050 | 4107 | 20100811 | 31 | 3 | 358665019197977 | R04 |
| 7434378689 | 2002 | 20100824 | 32 | 0 | 355000035507467 | R02 |
| 7482285225 | 2285 | 20121130 | 31 | 52 | 352212033537106 | R02 |
| 7788070992 | 2002 | 20101214 | 31 | 17 | 355384047786453 | R02 |
| 7982300380 | 2276 | 20121223 | 31 | 0 | 357210042170690 | R02 |
| 9790142194 | 4012 | 20090406 | 32 | 0 | 011336002603947 | R04 |
| 9226907642 | 4060 | 20070312 | 21 | 93 | | R04 |
| 9559185951 | 4276 | 20120924 | 31 | 70 | 355474044841748 | R04 |
| 7582299877 | 2389 | 20120610 | 31 | 33 | 356718041114890 | R02 |
| 9422182637 | 4060 | 20041201 | 31 | 117 | 010440007339548 | R04 |
| 9291295360 | 4324 | 20120614 | 21 | 172 | 353401045408575 | R04 |
| 9452775584 | 4325 | 20120206 | 21 | 185 | 011580001310174 | R04 |
| 9752115932 | 4002 | 20100526 | 21 | 542 | 358835035011748 | R04 |
| 9882259323 | 4012 | 20101201 | 31 | 68 | 012239002633949 | R04 |
| 5922179682 | 4282 | 20130316 | 21 | 110 | 354073042162536 | R04 |
| 7482229731 | 2368 | 20090110 | 21 | 142 | 357611009852016 | R02 |
| 7984779801 | 2276 | 20121223 | 31 | 14 | 013342000049057 | R02 |
| 9562127711 | 4107 | 20110627 | 21 | 387 | | R04 |
| 9882297052 | 4316 | 20120107 | 21 | 97 | 357118045463485 | R04 |
| 9227677218 | 4286 | 20130121 | 21 | 70 | 354894017753268 | R04 |
| 9002245938 | 4277 | 20130131 | 31 | 0 | 013111000005512 | R04 |
+--------------------------------+------------------------+----------------------------+--------------------------+---------------------------+------------------------+--------------------------+--+
30 rows selected (0.204 seconds)
01-06-2017
04:36 PM
Hi @Bruce Perez, if you go to http://hortonworks.com/training/ there is a Getting Started link on the right-hand side of the page, and within it a Contact Us phone number: 1.408.675.0983. See also http://hortonworks.com/training/certification/
01-05-2017
05:49 PM
1 Kudo
At this point you know it is an SSL certificate issue based on the error; you need to find where the problem is. Maybe the certificate you exported is not correct, so try validating it and exporting it again. Below are the commands I use for troubleshooting. Run through these, and if it still doesn't work, download SSLPoke and troubleshoot as described further down.
openssl s_client -connect <knox hostname>:8443 <<< '' | openssl x509 -out ./ssl.cert
keytool -import -alias <knoxhostname> -file ./ssl.cert -keystore /usr/jdk64/jdk1.8.0_77/jre/lib/security/cacerts
SYMPTOM:
Sometimes a Hadoop service may fail to connect to SSL and give an error like this:
javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
ROOT CAUSE:
Here are the possible reasons:
1. The JVM used by the Hadoop service is not using the correct certificate or the correct truststore
2. The certificate is not signed by the trusted CA
3. The Java trusted CA certificate chain is not available.
HOW TO DEBUG:
Here are the steps to narrow down the problem with the SSL certificate:
STEP 1: Analyze the SSL connection to the SSL-enabled service (either Ranger or Knox in this case) using the SSLPoke utility. Download it from:
https://confluence.atlassian.com/download/attachments/117455/SSLPoke.java
It's a simple Java program which connects to server:port over SSL and tries to write a byte and returns the response.
STEP 2: Compile and run SSLPoke like this:
$ javac SSLPoke.java
$ java SSLPoke <SSL-service-hostname> <SSL-service-port>
If there is an error, it should print similar error as shown above.
Next, test the connection with the truststore that the Hadoop service is supposed to be using.
STEP 3: If the Hadoop service is using the default JRE truststore then import the SSL-service certificate and run the SSLPoke again
3a. Extract the certificate from the SSL service:
$ openssl s_client -connect <SSL-service-hostname>:<SSL-service-port><<<'' | openssl x509 -out ./ssl.cert
3b. import certificate into JRE default truststore:
$ keytool -import -alias <SSL-service-hostname> -file ./ssl.cert -keystore $JAVA_HOME/jre/lib/security/cacerts
3c. Run SSLPoke again.
$ java SSLPoke <SSL-service-hostname> <SSL-service-port>
STEP 4: If the Hadoop service is using a custom SSL truststore then specify the truststore in SSLPoke command and test the connection:
$ java SSLPoke -Djavax.net.ssl.trustStore=/path/to/truststore <SSL-service-hostname> <SSL-service-port>
The STEP 3 and STEP 4 commands will show an error in case there is a problem. Work from those clues to reach the actual problem and fix it.
STEP 5: For the correct SSL setup, the SSLPoke would show success message:
$ java SSLPoke -Djavax.net.ssl.trustStore=/path/to/truststore <SSL-service-hostname> <SSL-service-port>
Successfully connected
Keep iterating until the SSL connection is successful, then replicate the same successful settings for the Hadoop service and it should work.
01-05-2017
04:30 PM
3 Kudos
That stack trace error in beeline seems clear to me:
org.apache.thrift.transport.TTransportException: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
To fix it you need to know which Java installation Beeline is using. Do a ps -ef | grep beeline to see, like so:
[root@chupa1 ~]# ps -ef | grep beeline
root 4239 4217 2 16:20 pts/0 00:00:01 /usr/jdk64/jdk1.8.0_77/bin/java -Xmx1024m -Dhdp.version=2.5.0.0-1133 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.5.0.0-1133 -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.5.0.0-1133/hadoop -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.5.0.0-1133/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1133/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx1024m -Xmx1024m -Djava.util.logging.config.file=/usr/hdp/2.5.0.0-1133/hive/conf/parquet-logging.properties -Dlog4j.configuration=beeline-log4j.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.5.0.0-1133/hive/lib/hive-beeline-1.2.1000.2.5.0.0-1133.jar org.apache.hive.beeline.BeeLine
Based on my output, I would import my Knox trust certificate into the cacerts file that my Beeline client's JDK is using, in my case /usr/jdk64/jdk1.8.0_77/jre/lib/security/cacerts. The import would look like:
keytool -import -trustcacerts -keystore /usr/jdk64/jdk1.8.0_77/jre/lib/security/cacerts -storepass changeit -noprompt -alias knox -file /tmp/knox.crt
Then restart the Beeline client to move past the error. The issue here is definitely with SSL.
01-04-2017
09:57 PM
1 Kudo
I was unable to find a way around this. The NameNode simply gives admin rights to the system user that started its process, by default the hdfs user. You can also give others superuser permissions with dfs.permissions.superusergroup and dfs.cluster.administrators. It seems Ranger doesn't disallow superusers except in the case of KMS encryption zones: KMS has a blacklist mechanism to disallow the superuser, and I don't think there is a similar feature for Ranger itself.
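For the KMS blacklist mentioned above, the relevant kind of setting looks like the following (the property name comes from Hadoop KMS; exactly where you set it for Ranger KMS depends on your setup, so treat this as a sketch):
<property>
<name>hadoop.kms.blacklist.DECRYPT_EEK</name>
<value>hdfs</value>
</property>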
01-04-2017
06:07 PM
I see. So you want to remove privileges from the Hadoop superuser? I think there are ways around this, but they are not recommended. Let me do a bit more research on this.
01-03-2017
11:53 PM
Hi @priyanshu bindal, check your ticket cache to see if it is still valid. The other thing you can do, if you are done with a ticket, is use kdestroy in your script to clean up the ticket cache.
[root@chupa1 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: dav@CHUPA.COM
Valid starting Expires Service principal
01/03/17 23:46:33 01/04/17 23:46:33 krbtgt/CHUPA.COM@CHUPA.COM
renew until 01/03/17 23:46:33
01-03-2017
11:29 PM
@Avijeet Dash I don't necessarily agree with your statement; maybe I am missing something here: "even if a directory is protected for a user/group - hdfs can always access it." If you have Kerberos enabled and you set the permissions of the directories correctly, even the hdfs user wouldn't have access unless specified in Ranger. http://hortonworks.com/blog/best-practices-in-hdfs-authorization-with-apache-ranger/
01-03-2017
11:00 PM
1 Kudo
ldapsearch helps me resolve 100% of all ldap cases.
ldapsearch -x -h <LDAP_SERVER_HOST> -p <PORT> -D "<bind_DN>" -w <bind_PASSWORD> -b "<BASE_DN>" "<USER_SEARCH_FILTER>=<USERNAME>"
01-03-2017
10:46 PM
You may find this useful as well for the future: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients Also, be advised Ranger only works with HiveServer2.
12-27-2016
07:51 PM
2 Kudos
PROBLEM: Some users may be associated with many groups, causing a very long list of groups to be passed through the REST API headers in Ranger and KMS.
ERROR: Error log from /var/log/ranger/kms/kms.log:
2016-12-01 14:04:12,048 INFO Http11Processor - Error parsing HTTP request header
Note: further occurrences of HTTP header parsing errors will be logged at DEBUG level.
java.lang.IllegalArgumentException: Request header is too large
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:515)
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:504)
at org.apache.coyote.http11.InternalInputBuffer.parseHeader(InternalInputBuffer.java:396)
at org.apache.coyote.http11.InternalInputBuffer.parseHeaders(InternalInputBuffer.java:271)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1007)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:625)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
2016-12-01 14:04:12,074 INFO Http11Processor - Error parsing HTTP request header
Note: further occurrences of HTTP header parsing errors will be logged at DEBUG level.
java.lang.IllegalArgumentException: Request header is too large
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:515)
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:504)
at org.apache.coyote.http11.InternalInputBuffer.parseHeader(InternalInputBuffer.java:396)
at org.apache.coyote.http11.InternalInputBuffer.parseHeaders(InternalInputBuffer.java:271)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1007)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:625)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
ROOT CAUSE: REST API calls are being passed with large header sizes; in this case, users with a large number of groups exceed the web server's maxHttpHeaderSize.
SOLUTION:
In Ambari, go to Ranger Admin -> Configs -> Advanced tab -> Custom ranger-admin-site -> Add Property. Put ranger.service.http.connector.property.maxHttpHeaderSize in the Key field and provide the required value for the maxHttpHeaderSize attribute in the Value field.
Save the changes, then go to Ranger KMS -> Configs -> Advanced tab -> Custom ranger-kms-site -> Add Property. Put ranger.service.http.connector.property.maxHttpHeaderSize in the Key field and provide the required value for the maxHttpHeaderSize attribute in the Value field.
Save the changes and restart all Ranger and Ranger KMS services. An example value is shown below.
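For example, the property as it would appear in both custom config sections (65536 is only an illustrative value; size it to your largest expected request header):
ranger.service.http.connector.property.maxHttpHeaderSize=65536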
Tags: Issue Resolution, Ranger, ranger-admin, ranger-kms, Security
12-26-2016
04:21 PM
It seems that HDFS is not syncing your groups. Try restarting the cluster to see if that helps. You can also check which groups HDFS resolves for a user, as shown below.
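For example (the username is hypothetical):
[root@chupa1 ~]# hdfs groups jdoe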