Community Articles

1. An HA provider for WebHDFS is needed in your topology.

<provider>
   <role>ha</role>
   <name>HaProvider</name>
   <enabled>true</enabled>
   <param>
      <name>WEBHDFS</name>
      <value>maxFailoverAttempts=3;failoverSleep=1000;maxRetryAttempts=300;retrySleep=1000;enabled=true</value>
   </param>
</provider>
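The WEBHDFS param value is a semicolon-delimited list of key=value settings. As a quick sanity check on what those numbers mean, here is a small sketch (using the exact value from the provider above) that parses the string and computes the worst-case time Knox would spend sleeping between attempts; the sleep values are in milliseconds:

```python
# Parse the Knox HA provider param string from the topology above and
# compute the total sleep time across all failover/retry attempts.
# Illustrative sketch only; Knox does this parsing internally.
param = "maxFailoverAttempts=3;failoverSleep=1000;maxRetryAttempts=300;retrySleep=1000;enabled=true"

settings = dict(kv.split("=") for kv in param.split(";"))

# Sleeps are in milliseconds.
failover_wait_ms = int(settings["maxFailoverAttempts"]) * int(settings["failoverSleep"])
retry_wait_ms = int(settings["maxRetryAttempts"]) * int(settings["retrySleep"])

print(settings["enabled"])  # true
print(failover_wait_ms)     # 3000
print(retry_wait_ms)        # 300000
```

With these defaults, Knox will fail over between NameNodes up to 3 times (sleeping 1 second between failovers) and retry a single endpoint up to 300 times.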

2. The NAMENODE service URL value should contain your nameservice ID. (This can be found in your hdfs-site.xml under the parameter dfs.internal.nameservices.)

<service>
   <role>NAMENODE</role>
   <url>hdfs://chupa</url>
</service>
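If you are not sure of your nameservice ID, it can be read straight out of hdfs-site.xml. A minimal sketch, assuming a standard Hadoop configuration file layout (the sample XML here is made up to match the example topology; on a real cluster you would read something like /etc/hadoop/conf/hdfs-site.xml):

```python
import xml.etree.ElementTree as ET

# Hypothetical hdfs-site.xml fragment matching the example cluster;
# substitute the contents of your cluster's real config file.
sample = """<configuration>
  <property>
    <name>dfs.internal.nameservices</name>
    <value>chupa</value>
  </property>
</configuration>"""

def get_conf_value(xml_text, key):
    """Return the <value> of the <property> whose <name> matches key."""
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == key:
            return prop.findtext("value")
    return None

ns_id = get_conf_value(sample, "dfs.internal.nameservices")
print(f"hdfs://{ns_id}")  # hdfs://chupa
```

The resulting hdfs:// URL is exactly what goes into the NAMENODE service entry above.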

3. Make sure the WebHDFS URL for each NameNode is added in your WEBHDFS service section.

<service>
    <role>WEBHDFS</role>
    <url>http://chupa1.openstacklocal:50070/webhdfs</url>
    <url>http://chupa2.openstacklocal:50070/webhdfs</url>
</service>
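The effect of listing both URLs is that Knox can fail over between the active and standby NameNodes. Conceptually the behavior looks like the rough sketch below; the fetch callable is a hypothetical stand-in, not Knox's actual implementation:

```python
def fetch_with_failover(urls, fetch, max_failover_attempts=3):
    """Try each WebHDFS endpoint in turn, cycling through the list up to
    max_failover_attempts times, roughly as the Knox HaProvider does.
    `fetch` is a hypothetical callable that raises ConnectionError when
    an endpoint is down or in standby."""
    last_error = None
    for _ in range(max_failover_attempts):
        for url in urls:
            try:
                return fetch(url)
            except ConnectionError as err:
                last_error = err  # endpoint unavailable: fail over to the next URL
    raise last_error  # all attempts exhausted

urls = [
    "http://chupa1.openstacklocal:50070/webhdfs",
    "http://chupa2.openstacklocal:50070/webhdfs",
]
```

This is why a single URL is not enough in an HA environment: with only one entry, a failover on the HDFS side leaves Knox pointing at the standby NameNode.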

4. Here is a working topology using the Knox default demo LDAP.

<topology>
    <gateway>
        <provider>
            <role>authentication</role>
            <name>ShiroProvider</name>
            <enabled>true</enabled>
            <param>
                <name>sessionTimeout</name>
                <value>30</value>
            </param>
            <param>
                <name>main.ldapRealm</name>
                <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
            </param>
            <param>
                <name>main.ldapRealm.userDnTemplate</name>
                <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
            </param>
            <param>
                <name>main.ldapRealm.contextFactory.url</name>
                <value>ldap://chupa1.openstacklocal:33389</value>
            </param>
            <param>
                <name>main.ldapRealm.contextFactory.authenticationMechanism</name>
                <value>simple</value>
            </param>
            <param>
                <name>urls./**</name>
                <value>authcBasic</value>
            </param>
        </provider>
        <provider>
            <role>identity-assertion</role>
            <name>Default</name>
            <enabled>true</enabled>
        </provider>
        <provider>
            <role>authorization</role>
            <name>XASecurePDPKnox</name>
            <enabled>true</enabled>
        </provider>
        <provider>
            <role>ha</role>
            <name>HaProvider</name>
            <enabled>true</enabled>
            <param>
                <name>WEBHDFS</name>
                <value>maxFailoverAttempts=3;failoverSleep=1000;maxRetryAttempts=300;retrySleep=1000;enabled=true</value>
            </param>
        </provider>
    </gateway>
    <service>
        <role>NAMENODE</role>
        <url>hdfs://chupa</url>
    </service>
    <service>
        <role>JOBTRACKER</role>
        <url>rpc://chupa3.openstacklocal:8050</url>
    </service>
    <service>
        <role>WEBHDFS</role>
        <url>http://chupa1.openstacklocal:50070/webhdfs</url>
        <url>http://chupa2.openstacklocal:50070/webhdfs</url>
    </service>
    <service>
        <role>WEBHCAT</role>
        <url>http://chupa2.openstacklocal:50111/templeton</url>
    </service>
    <service>
        <role>OOZIE</role>
        <url>http://chupa2.openstacklocal:11000/oozie</url>
    </service>
    <service>
        <role>WEBHBASE</role>
        <url>http://chupa1.openstacklocal:8080</url>
    </service>
    <service>
        <role>HIVE</role>
        <url>http://chupa2.openstacklocal:10001/cliservice</url>
    </service>
    <service>
        <role>RESOURCEMANAGER</role>
        <url>http://chupa3.openstacklocal:8088/ws</url>
    </service>
    <service>
        <role>RANGERUI</role>
        <url>http://chupa3.openstacklocal:6080</url>
    </service>
</topology>

5. To verify that HA is working, you can issue the following command to manually fail over the cluster, then test again through Knox.

hdfs haadmin -failover nn1 nn2

6. Test with a Knox connection string to WebHDFS.

curl -vik -u admin:admin-password 'https://localhost:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS'
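A successful LISTSTATUS call returns JSON in the WebHDFS FileStatuses format. As a sketch of how you might check the result programmatically, here is a trimmed, made-up response parsed with the standard library (field names follow the WebHDFS REST API; the directory entry itself is invented):

```python
import json

# Trimmed example of the JSON WebHDFS returns for op=LISTSTATUS.
# The field names are from the WebHDFS REST API; the entry is made up.
response = """{"FileStatuses":{"FileStatus":[
  {"pathSuffix":"tmp","type":"DIRECTORY","owner":"hdfs","permission":"777"}
]}}"""

statuses = json.loads(response)["FileStatuses"]["FileStatus"]
names = [s["pathSuffix"] for s in statuses]
print(names)  # ['tmp']
```

If you instead get an HTML error page or a StandbyException, re-check the HaProvider and the WEBHDFS URL list from steps 1 and 3.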
Comments

Good info -- thanks David


Hi @dvillarreal

I'm just wondering: do I need to use a nameservice ID for the NAMENODE role to use WebHDFS?


Hi, I have a problem with Knox: when I call WebHDFS, it returns an encoded result:

{"sub":null,"aud":null,"code":"eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJwMDgwMzM0IiwiaXNzIjoiS05PWFNTTyJ9.X8HojHZ_wdQ8h_osOw0p_qRaWKmVLSJKwdhKwdjjOGQwB5DJy5D5JB-49gEvfDWcPNFnKgqsdUrzFcVYGforRxRuVR8b91yL4T_EPwDeN4vlPr5HKgfvPeL2zudR0l7x82G8m5yx09veuwGkDAs6y0GJfY4JTmQgmIS-wRwqlUxjxK7GT6Ktvft7ciwrQny00qSwrrO-RunBbBugPDFvGjqgiufyMpLAqTG58iS5rcKghYS_mHKWIdcvGdNCzCFURvDKr8gqZeN9hj6QqLnjHsP0gmUJ5YzvoJtEVMxoxMy8w7f9KSo7BwPkHjknpa7yFEltXDUvWgDpjdFcn_TPfw","iss":"KNOXSSO","exp":null}

I think the SSL cert is not valid, but I can't fix it. Any ideas?


@Hajime It is not mandatory for WebHDFS to work. However, it is good practice to make this change in a NameNode HA environment, as other services such as Oozie use it for doing rewrites.


@badr bakkou This would probably be best answered if you submitted it as a new question. Provide the gateway.log and gateway-audit.log outputs, your topology, and the connection string you are using along with its associated output. Best regards, David