Member since 10-20-2015

92 Posts
79 Kudos Received
9 Solutions

        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 5873 | 06-25-2018 04:01 PM |
|  | 8459 | 05-09-2018 05:36 PM |
|  | 3264 | 03-16-2018 04:11 PM |
|  | 9103 | 05-18-2017 12:42 PM |
|  | 8118 | 03-28-2017 06:42 PM |

03-06-2017 08:56 PM

@Hajime It is not mandatory for WebHDFS to work. However, it is good practice to make this change in a NameNode HA environment, since other services such as Oozie use it for rewrites.
						
					
03-01-2017 12:40 AM
9 Kudos

1. An HA provider for WebHDFS is needed in your topology:
 <provider>
   <role>ha</role>
   <name>HaProvider</name>
   <enabled>true</enabled>
   <param>
      <name>WEBHDFS</name>
      <value>maxFailoverAttempts=3;failoverSleep=1000;maxRetryAttempts=300;retrySleep=1000;enabled=true</value>
   </param>
</provider>
  
2. The NAMENODE service URL value should contain your nameservice ID. (This can be found in hdfs-site.xml under the parameter dfs.internal.nameservices.)
 <service>
   <role>NAMENODE</role>
   <url>hdfs://chupa</url>
</service>
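If you are not sure of the nameservice ID, a quick way to confirm it (a sketch, assuming the HDFS client configuration is installed on the host you run this from) is:
 hdfs getconf -confKey dfs.internal.nameservices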
  
3. Make sure the WebHDFS URL for each NameNode is added in your WEBHDFS service entry:
 <service>
    <role>WEBHDFS</role>
    <url>http://chupa1.openstacklocal:50070/webhdfs</url>
    <url>http://chupa2.openstacklocal:50070/webhdfs</url>
</service>
  
4. Here is a working topology using the Knox default demo LDAP:
 <topology>
    <gateway>
        <provider>
            <role>authentication</role>
            <name>ShiroProvider</name>
            <enabled>true</enabled>
            <param>
                <name>sessionTimeout</name>
                <value>30</value>
            </param>
            <param>
                <name>main.ldapRealm</name>
                <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
            </param>
            <param>
                <name>main.ldapRealm.userDnTemplate</name>
                <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
            </param>
            <param>
                <name>main.ldapRealm.contextFactory.url</name>
                <value>ldap://chupa1.openstacklocal:33389</value>
            </param>
            <param>
                <name>main.ldapRealm.contextFactory.authenticationMechanism</name>
                <value>simple</value>
            </param>
            <param>
                <name>urls./**</name>
                <value>authcBasic</value>
            </param>
        </provider>
        <provider>
            <role>identity-assertion</role>
            <name>Default</name>
            <enabled>true</enabled>
        </provider>
        <provider>
            <role>authorization</role>
            <name>XASecurePDPKnox</name>
            <enabled>true</enabled>
        </provider>
        <provider>
            <role>ha</role>
            <name>HaProvider</name>
            <enabled>true</enabled>
            <param>
                <name>WEBHDFS</name>
                <value>maxFailoverAttempts=3;failoverSleep=1000;maxRetryAttempts=300;retrySleep=1000;enabled=true</value>
            </param>
        </provider>
    </gateway>
    <service>
        <role>NAMENODE</role>
        <url>hdfs://chupa</url>
    </service>
    <service>
        <role>JOBTRACKER</role>
        <url>rpc://chupa3.openstacklocal:8050</url>
    </service>
    <service>
        <role>WEBHDFS</role>
        <url>http://chupa1.openstacklocal:50070/webhdfs</url>
        <url>http://chupa2.openstacklocal:50070/webhdfs</url>
    </service>
    <service>
        <role>WEBHCAT</role>
        <url>http://chupa2.openstacklocal:50111/templeton</url>
    </service>
    <service>
        <role>OOZIE</role>
        <url>http://chupa2.openstacklocal:11000/oozie</url>
    </service>
    <service>
        <role>WEBHBASE</role>
        <url>http://chupa1.openstacklocal:8080</url>
    </service>
    <service>
        <role>HIVE</role>
        <url>http://chupa2.openstacklocal:10001/cliservice</url>
    </service>
    <service>
        <role>RESOURCEMANAGER</role>
        <url>http://chupa3.openstacklocal:8088/ws</url>
    </service>
    <service>
        <role>RANGERUI</role>
        <url>http://chupa3.openstacklocal:6080</url>
    </service>
</topology>
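If you manage topology files by hand rather than through Ambari, the file is typically dropped into the Knox topologies directory, where the gateway picks it up and redeploys it. The path and file name below are assumptions (the usual HDP layout); adjust them for your install:
 cp default.xml /usr/hdp/current/knox-server/conf/topologies/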
  
5. If you would like to test that it is working, you can issue the following command to manually fail over the NameNodes:
 hdfs haadmin -failover nn1 nn2
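To confirm which NameNode is active before and after the failover (nn1 and nn2 here are the NameNode IDs assumed by the command above), you can check the service state:
 hdfs haadmin -getServiceState nn1
 hdfs haadmin -getServiceState nn2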
6. Test with the Knox connection string to WebHDFS:
 curl -vik -u admin:admin-password 'https://localhost:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS'
						
					
02-27-2017 07:28 PM

The user search filter can be anything you would like to filter on further within the OUs, or you can leave it at a default setting: for example, in AD sAMAccountName=* or sAMAccountName={0}, or in the case of OpenLDAP cn=* or cn={0}.
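For illustration, here is roughly how such a filter can be set in a Knox ShiroProvider topology; the search base and filter values below are placeholders, not settings taken from this thread. Note that the & in the filter must be XML-escaped as &amp; inside the topology file:
 <param>
    <name>main.ldapRealm.searchBase</name>
    <value>dc=hadoop,dc=apache,dc=org</value>
 </param>
 <param>
    <name>main.ldapRealm.userSearchFilter</name>
    <value>(&amp;(objectclass=person)(sAMAccountName={0}))</value>
 </param>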
						
					
02-10-2017 11:16 PM
2 Kudos

1. Create User

[root@chupa1 ~]# curl -iv -u admin:admin -H "X-Requested-By: ambari" -X POST -d '{"Users/user_name":"dav","Users/password":"pass","Users/active":"true","Users/admin":"false"}' http://localhost:8080/api/v1/users
* About to connect() to localhost port 8080 (#0)
* Trying ::1... connected
* Connected to localhost (::1) port 8080 (#0)
* Server auth using Basic with user 'admin'
> POST /api/v1/users HTTP/1.1
> Authorization: Basic YWRtaW46YWRtaW4=
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:8080
> Accept: */*
> X-Requested-By: ambari
> Content-Length: 93
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 201 Created
HTTP/1.1 201 Created

2. Create Group

[root@chupa1 ~]# curl -iv -u admin:admin -H "X-Requested-By: ambari" -X POST -d '{"Groups/group_name":"davgroup"}' http://localhost:8080/api/v1/groups
* About to connect() to localhost port 8080 (#0)
* Trying ::1... connected
* Connected to localhost (::1) port 8080 (#0)
* Server auth using Basic with user 'admin'
> POST /api/v1/groups HTTP/1.1
> Authorization: Basic YWRtaW46YWRtaW4=
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:8080
> Accept: */*
> X-Requested-By: ambari
> Content-Length: 32
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 201 Created
HTTP/1.1 201 Created

3. Map User to Group

[root@chupa1 ~]# curl -iv -u admin:admin -H "X-Requested-By: ambari" -X POST -d '{"MemberInfo/user_name":"dav", "MemberInfo/group_name":"davgroup"}' http://localhost:8080/api/v1/groups/davgroup/members
* About to connect() to localhost port 8080 (#0)
* Trying ::1... connected
* Connected to localhost (::1) port 8080 (#0)
* Server auth using Basic with user 'admin'
> POST /api/v1/groups/davgroup/members HTTP/1.1
> Authorization: Basic YWRtaW46YWRtaW4=
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:8080
> Accept: */*
> X-Requested-By: ambari
> Content-Length: 66
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 201 Created
HTTP/1.1 201 Created
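To double-check the membership afterwards, a GET against the same members collection should list the new user (same Ambari host and admin credentials assumed as above):
 curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/groups/davgroup/members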
						
					
02-09-2017 10:17 PM

							@Saikiran Parepally Article created for future reference.  https://community.hortonworks.com/content/kbentry/82544/how-to-create-ad-principal-accounts-using-openldap.html 
						
					
02-09-2017 08:16 PM
13 Kudos

AD admins may be busy, and you may happen to know the admin principal that Ambari uses for enabling Kerberos. How would you go about adding a principal to AD with this information and then adding it to your Kerberos keytab? Below is one way to do it. Thanks to @Robert Levas for collaborating with me on this.
1. Create an LDIF file, ad_user.ldif. (Make sure there are no trailing spaces at the ends of these lines, and that the records below are separated by blank lines.)
dn: CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM
changetype: add
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
distinguishedName: CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM
cn: HTTP/loadbalancerhost
userAccountControl: 514
accountExpires: 0
userPrincipalName: HTTP/loadbalancerhost@HOST.COM
servicePrincipalName: HTTP/loadbalancerhost

dn: CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM
changetype: modify
replace: unicodePwd
unicodePwd::IgBoAGEAZABvAG8AcABSAG8AYwBrAHMAMQAyADMAIQAiAA==

dn: CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM
changetype: modify
replace: userAccountControl
userAccountControl: 66048
 
 Do not have spaces at the ends of the above lines or you will get an error like the following: 
 ldap_add: No such attribute (16)
      additional info: 00000057: LdapErr: DSID-0C090D8A, comment: Error in attribute conversion operation, data 0, v2580
 
2. Create the Unicode password for the above principal (the password here is hadoopRocks123!) and replace the unicodePwd value in step 1:
 [root@host1 ~]# echo -n '"hadoopRocks123!"' | iconv -f UTF8 -t UTF16LE | base64 -w 0
IgBoAGEAZABvAG8AcABSAG8AYwBrAHMAMQAyADMAIQAiAA== 
 3. Add the account to AD: 
[root@host1 ~]# ldapadd -x -H ldaps://sme-2012-ad.support.com:636 -D "test1@host.com" -W -f ad_user.ldif
Enter LDAP Password: 
adding new entry "CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM"
modifying entry "CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=com"
modifying entry "CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM" 
 4. Test the account with kinit: 
 [root@host1 ~]# kinit HTTP/loadbalancerhost@HOST.COM
Password for HTTP/loadbalancerhost@HOST.COM: 
[root@host1 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: HTTP/loadbalancerhost@HOST.COM
Valid starting     Expires            Service principal
02/09/17 19:02:33  02/10/17 19:02:33  krbtgt/HOST.COM@HOST.COM
	renew until 02/09/17 19:02:33 
5. Take it one step further if you need to add the principal to a keytab file:
 [root@host1 ~]# ktutil
ktutil:  add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e aes128-cts-hmac-sha1-96
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil:  add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e aes256-cts-hmac-sha1-96
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil:  add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e arcfour-hmac-md5-exp
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil:  add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e des3-cbc-sha1
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil:  add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e des-cbc-md5
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil:  write_kt spenego.service.keytab
ktutil:  exit 
 [root@host1 ~]# klist -ket spenego.service.keytab
Keytab name: FILE:spenego.service.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
   1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (aes128-cts-hmac-sha1-96)
   1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (aes256-cts-hmac-sha1-96)
   1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (arcfour-hmac-exp)
   1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (des3-cbc-sha1)
   1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (des-cbc-md5) 
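As an extra sanity check (not part of the original steps), you can verify the keytab itself authenticates without prompting for a password:
 kinit -kt spenego.service.keytab HTTP/loadbalancerhost@HOST.COM
 klist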
   
						
					
01-19-2017 07:34 PM

Hi @Qi Wang, this should help you learn by example when it comes to configuring your Knox group lookup and how it relates to your ldapsearch. See Sample 4 specifically: https://cwiki.apache.org/confluence/display/KNOX/Using+Apache+Knox+with+ActiveDirectory
Hope this helps.
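For illustration only, an ldapsearch along these lines is the kind of query whose results you would mirror in the Knox group settings; the host, bind DN, search base, and account name below are placeholders, not values from this thread:
 ldapsearch -x -H ldap://ad.example.com:389 -D "binduser@example.com" -W -b "dc=example,dc=com" "(sAMAccountName=qi.wang)" memberOf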
						
					
01-18-2017 10:07 PM
3 Kudos

Hi @Qi Wang, this may also help; I have answered a similar question here: https://community.hortonworks.com/questions/74501/how-knox-pass-the-user-information-to-ranger.html
						
					
01-07-2017 12:22 AM

I wrote about this a while back. It is not really a bug anymore, but more of a default setting that needs to be adjusted based on the data being passed. See https://community.hortonworks.com/articles/33875/knox-queries-fail-quickly-with-a-500-error.html
						
					
01-05-2017 05:49 PM
1 Kudo

At this point you know it is an SSL certificate issue based on the error. You need to find where the problem is. Maybe the certificate you exported is not correct; try validating it and exporting it again. Below is the approach I use for troubleshooting. Run through this, and if it still doesn't work, download SSLPoke and troubleshoot further.
 
 
openssl s_client -connect <knox hostname>:8443 <<< '' | openssl x509 -out ./ssl.cert
 
 
keytool -import -alias <knoxhostname> -file ./ssl.cert -keystore /usr/jdk64/jdk1.8.0_77/jre/lib/security/cacerts
 
 
SYMPTOM:
Sometimes a Hadoop service may fail to connect over SSL and give an error like this:
javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
 
 
	ROOT CAUSE:
Here are the possible reasons:
1. The JVM used by the Hadoop service is not using the correct certificate or the correct truststore
2. The certificate is not signed by the trusted CA
3. The Java trusted CA certificate chain is not available.
 
 
	HOW TO DEBUG:
	Here are the steps to narrow down the problem with the SSL certificate:
 
 
STEP 1: Analyze the SSL connection to the SSL-enabled service (either Ranger or Knox in this case) by using the SSLPoke utility. Download it from:
https://confluence.atlassian.com/download/attachments/117455/SSLPoke.java
It is a simple Java program that connects to server:port over SSL, tries to write a byte, and returns the response.
 
 
	STEP 2: Compile and run the SSLPoke like this:
$ java SSLPoke <SSL-service-hostname> <SSL-service-port>
If there is an error, it should print an error similar to the one shown above.
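If you have not compiled SSLPoke yet, a minimal sketch (assuming a JDK with javac on the PATH and SSLPoke.java in the current directory):
$ javac SSLPoke.java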
 
 
	Next, test the connection with the truststore that the Hadoop service is supposed to be using.
 
 
STEP 3: If the Hadoop service is using the default JRE truststore, then import the SSL service certificate and run SSLPoke again.
3a. Extract the certificate from the SSL service:
$ openssl s_client -connect <SSL-service-hostname>:<SSL-service-port><<<'' | openssl x509 -out ./ssl.cert
 
 
3b. Import the certificate into the default JRE truststore:
$ keytool -import -alias <SSL-service-hostname> -file ./ssl.cert -keystore $JAVA_HOME/jre/lib/security/cacerts
 
 
3c. Run SSLPoke again:
$ java SSLPoke <SSL-service-hostname> <SSL-service-port>
STEP 4: If the Hadoop service is using a custom SSL truststore, then specify the truststore in the SSLPoke command and test the connection:
$ java -Djavax.net.ssl.trustStore=/path/to/truststore SSLPoke <SSL-service-hostname> <SSL-service-port>
The STEP 3c and 4 commands will show an error in case there is any problem. Work from those clues to find the actual problem and fix it.
 
 
STEP 5: With the correct SSL setup, SSLPoke will show a success message:
$ java -Djavax.net.ssl.trustStore=/path/to/truststore SSLPoke <SSL-service-hostname> <SSL-service-port>
Successfully connected
 
 
Keep iterating until the SSL connection is successful, then replicate the same settings for the Hadoop service and it should work.
 
						
					