1. Now let's configure Knox to use our AD for authentication. Replace the content under Ambari > Knox > Config > Advanced topology with the following:

<topology>
        <gateway>
            <provider>
                <role>authentication</role>
                <name>ShiroProvider</name>
                <enabled>true</enabled>
                <param>
                    <name>sessionTimeout</name>
                    <value>30</value>
                </param>
                <param>
                    <name>main.ldapRealm</name>
                    <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value> 
                </param>
            <!-- changes for AD/user sync -->
                <param>
                    <name>main.ldapContextFactory</name>
                    <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory</value>
                </param>
            <!-- main.ldapRealm.contextFactory needs to be placed before other
                 main.ldapRealm.contextFactory* entries -->
                <param>
                    <name>main.ldapRealm.contextFactory</name>
                    <value>$ldapContextFactory</value>
                </param>
            <!-- AD url -->
                <param>
                    <name>main.ldapRealm.contextFactory.url</name>
                    <value>ldap://ad01.lab.hortonworks.net:389</value> 
                </param>
            <!-- system user -->
                <param>
                    <name>main.ldapRealm.contextFactory.systemUsername</name>
                    <value>cn=ldap-reader,ou=ServiceUsers,dc=lab,dc=hortonworks,dc=net</value>
                </param>


            <!-- pass in the password using the alias created earlier -->
                <param>
                    <name>main.ldapRealm.contextFactory.systemPassword</name>
                    <value>${ALIAS=knoxLdapSystemPassword}</value>
                </param>
                <param>
                    <name>main.ldapRealm.contextFactory.authenticationMechanism</name>
                    <value>simple</value>
                </param>
                <param>
                    <name>urls./**</name>
                    <value>authcBasic</value> 
                </param>
            <!--  AD groups of users to allow -->
                <param>
                    <name>main.ldapRealm.searchBase</name>
                    <value>ou=CorpUsers,dc=lab,dc=hortonworks,dc=net</value>
                </param>
                <param>
                    <name>main.ldapRealm.userObjectClass</name>
                    <value>person</value>
                </param>
                <param>
                    <name>main.ldapRealm.userSearchAttributeName</name>
                    <value>sAMAccountName</value>
                </param>
            <!-- changes needed for group sync-->
                <param>
                    <name>main.ldapRealm.authorizationEnabled</name>
                    <value>true</value>
                </param>
                <param>
                    <name>main.ldapRealm.groupSearchBase</name>
                    <value>ou=CorpUsers,dc=lab,dc=hortonworks,dc=net</value>
                </param>
                <param>
                    <name>main.ldapRealm.groupObjectClass</name>
                    <value>group</value>
                </param>
                <param>
                    <name>main.ldapRealm.groupIdAttribute</name>
                    <value>cn</value>
                </param>
            </provider>
            <provider>
                <role>identity-assertion</role>
                <name>Default</name>
                <enabled>true</enabled>
            </provider>
            <provider>
                <role>authorization</role>
                <name>XASecurePDPKnox</name>
                <enabled>true</enabled>
            </provider>
        <!-- Knox HaProvider for Hadoop services -->
            <provider>
                 <role>ha</role>
                 <name>HaProvider</name>
                 <enabled>true</enabled>
                 <param>
                     <name>OOZIE</name>
                     <value>maxFailoverAttempts=3;failoverSleep=1000;enabled=true</value>
                 </param>
                 <param>
                     <name>HBASE</name>
                     <value>maxFailoverAttempts=3;failoverSleep=1000;enabled=true</value>
                 </param>
                 <param>
                     <name>WEBHCAT</name>
                     <value>maxFailoverAttempts=3;failoverSleep=1000;enabled=true</value>
                 </param>
                 <param>
                     <name>WEBHDFS</name>
                     <value>maxFailoverAttempts=3;failoverSleep=1000;maxRetryAttempts=300;retrySleep=1000;enabled=true</value>
                 </param>
                 <param>
                    <name>HIVE</name>
                    <value>maxFailoverAttempts=3;failoverSleep=1000;enabled=true;zookeeperEnsemble=machine1:2181,machine2:2181,machine3:2181;zookeeperNamespace=hiveserver2</value>
                 </param>
            </provider>
        <!-- END Knox HaProvider for Hadoop services -->
        </gateway>

            <service>
                <role>NAMENODE</role>
                <url>hdfs://{{namenode_host}}:{{namenode_rpc_port}}</url>
            </service>
            <service>
                <role>JOBTRACKER</role>
                <url>rpc://{{rm_host}}:{{jt_rpc_port}}</url>
            </service>
            <service>
                <role>WEBHDFS</role>
                <url>http://{{namenode_host}}:{{namenode_http_port}}/webhdfs</url>
            </service>
            <service>
                <role>WEBHCAT</role>
                <url>http://{{webhcat_server_host}}:{{templeton_port}}/templeton</url>
            </service>
            <service>
                <role>OOZIE</role>
                <url>http://{{oozie_server_host}}:{{oozie_server_port}}/oozie</url>
            </service>
            <service>
                <role>WEBHBASE</role>
                <url>http://{{hbase_master_host}}:{{hbase_master_port}}</url>
            </service>
            <service>
                <role>HIVE</role>
                <url>http://{{hive_server_host}}:{{hive_http_port}}/{{hive_http_path}}</url>
            </service>
            <service>
                <role>RESOURCEMANAGER</role>
                <url>http://{{rm_host}}:{{rm_port}}/ws</url>
            </service>
    </topology>
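
The systemPassword parameter above references the credential alias knoxLdapSystemPassword ("the alias created earlier"). If you still need to create it, a minimal sketch with knoxcli is shown below; the topology name (default) and the bind password are placeholders for your environment.

/usr/hdp/current/knox-server/bin/knoxcli.sh create-alias knoxLdapSystemPassword --cluster default --value '<ldap-reader-password>'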

Restart the Knox gateway service using Ambari.
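
If you prefer the command line over the Ambari UI, the Ambari REST API can stop and start the KNOX service. A minimal sketch, assuming an Ambari server at ambari-host:8080, admin credentials, and a cluster named mycluster (all placeholders):

curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"Stop Knox"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://ambari-host:8080/api/v1/clusters/mycluster/services/KNOX

curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"Start Knox"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' http://ambari-host:8080/api/v1/clusters/mycluster/services/KNOX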

2. Make sure Knox is configured to use CA certificates

openssl s_client -showcerts -connect knoxhostname:8443
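
To quickly check who issued the gateway certificate and whether it is still valid, you can pipe the same output through openssl x509 (knoxhostname is a placeholder for your Knox gateway host):

echo | openssl s_client -connect knoxhostname:8443 2>/dev/null | openssl x509 -noout -issuer -subject -dates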

3. Validate the topology definition

/usr/hdp/current/knox-server/bin/knoxcli.sh validate-topology --cluster default
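
The same check can also be run against a topology file directly with --path; the file location below assumes the default HDP layout and should be adjusted to your deployment:

/usr/hdp/current/knox-server/bin/knoxcli.sh validate-topology --path /etc/knox/conf/topologies/default.xml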

4. Test LDAP Authentication and Authorization

/usr/hdp/current/knox-server/bin/knoxcli.sh user-auth-test [--cluster c] [--u username] [--p password] [--g] [--d]

This command will test a topology's ability to connect, authenticate, and authorize a user with an LDAP server. The only required argument is the --cluster argument to specify the name of the topology you wish to use. Refer to http://knox.apache.org/books/knox-0-12-0/user-guide.html#LDAP+Authentication+and+Authorization for more options.
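
For example, to test an AD account from the CorpUsers OU and also list the groups Knox resolves for it (sales1 and the password are placeholder credentials):

/usr/hdp/current/knox-server/bin/knoxcli.sh user-auth-test --cluster default --u sales1 --p '<password>' --g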

5. Test the ability to connect, bind, and authenticate with the LDAP server

 /usr/hdp/current/knox-server/bin/knoxcli.sh system-user-auth-test [--cluster c] [--d]

This command will test a given topology's ability to connect, bind, and authenticate with the LDAP server using the settings specified in the topology file. The bind currently only works with Shiro as the authentication provider.
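
For example, against the default topology with debug output enabled:

/usr/hdp/current/knox-server/bin/knoxcli.sh system-user-auth-test --cluster default --d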

6. Test the Knox connection string to WebHDFS

curl -vik -u admin:admin-password 'https://<hostname>:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS'
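
To confirm end-to-end LDAP authentication, repeat the call with one of the AD users from the CorpUsers OU instead of the demo admin account (sales1 is a placeholder username; curl will prompt for the password):

curl -ivk -u sales1 'https://<hostname>:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS'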

7. To make Knox use Ranger authorization

Edit the Advanced topology section and change the authorization provider from AclsAuthz to XASecurePDPKnox. For example, change:

<provider>
    <role>authorization</role>
    <name>AclsAuthz</name>
    <enabled>true</enabled>
</provider>
to:

<provider>
    <role>authorization</role>
    <name>XASecurePDPKnox</name>
    <enabled>true</enabled>
</provider>
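
After restarting Knox, you can check that Ranger is enforcing access: a user who is not covered by a Knox policy in Ranger should get an HTTP 403 Forbidden on the same WebHDFS call (sales1 is again a placeholder user):

curl -ik -u sales1 'https://<hostname>:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS'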

8. Configure Ranger Knox plugin debug logging

This log setting shows what is being passed to Ranger from the Knox plugin. Modify gateway-log4j.properties as shown below, restart Knox, and review the Ranger Knox plugin log in the file ranger.knoxagent.log.

#Ranger Knox Plugin debug
ranger.knoxagent.logger=DEBUG,console,KNOXAGENT
ranger.knoxagent.log.file=ranger.knoxagent.log
log4j.logger.org.apache.ranger=${ranger.knoxagent.logger}
log4j.additivity.org.apache.ranger=false
log4j.appender.KNOXAGENT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.KNOXAGENT.File=${app.log.dir}/${ranger.knoxagent.log.file}
log4j.appender.KNOXAGENT.layout=org.apache.log4j.PatternLayout
log4j.appender.KNOXAGENT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n %L
log4j.appender.KNOXAGENT.DatePattern=.yyyy-MM-dd
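
After restarting Knox, the plugin debug output can be followed in the new log file; the path below assumes the default HDP Knox log directory:

tail -f /var/log/knox/ranger.knoxagent.log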

Comments

Hi @Kartik Ramalingam,

Regarding Step 7: KNOX

To enable the Ranger plugin:

Replace instances of AclsAuthz with XASecurePDPKnox in topology.xml


To disable the Ranger plugin:

Replace instances of XASecurePDPKnox with AclsAuthz in topology.xml


@Pravin Bhagade thanks. Updated the document.


@Kartik Ramalingam Thank you for your wonderful and helpful post!

Ranger authorization is still incorrect in the post. Ranger authorization is already enabled in the initial topology. However, Step 7 should describe how to disable Ranger authorization by changing the parameter value from XASecurePDPKnox to AclsAuthz. Also, the Step 7 example needs to be corrected.

Regards,

Sakhuja