Support Questions

Find answers, ask questions, and share your expertise

HDFS GUI is giving an error

Contributor

Hi Team, 
HDFS was working before Kerberos was enabled; now, after enabling Kerberos, HDFS is not working. I am getting an error in the HDFS GUI. HDFS is running in Cloudera Manager.

The NameNode health test "Test of whether the NameNode is in safe mode" is showing:

```
Bad : This NameNode is in safe mode.

```

Error:

```
HTTP ERROR 502

URI:     /cmf/process/1546341404/logs
STATUS:  502
MESSAGE: Process information not available.
SERVLET: Spring MVC Dispatcher Servlet
```

3 REPLIES

New Contributor

Hi,

To fix the HDFS issue after enabling Kerberos, check the Kerberos configuration and ensure that all services are correctly authenticated. If the NameNode is in safe mode, you can try to exit it with the command hdfs dfsadmin -safemode leave.

For the HTTP ERROR 502, verify the network configuration and ensure that the Cloudera Manager services are running properly.

Best regards,
MyDestinyCard

Rising Star

@divyank The NameNode may stay in safe mode because it is waiting for the DataNodes to send their block reports; until those reports are complete, it will remain in safe mode. Ensure that all the DataNodes started properly, show no errors, and are connected to the NameNode. Review the NameNode logs to see what it is waiting for before it can exit safe mode. Manually exiting safe mode may cause data loss for unreported blocks. If you have any doubt, don't hesitate to contact Cloudera Support.
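The check described above can be sketched as a small script. This is an illustrative sketch only: the `report` variable below holds sample `hdfs dfsadmin -report`-style output, since the real command requires a running, Kerberos-authenticated cluster.

```shell
# Illustrative sketch: decide whether safe mode should clear on its own.
# "report" holds sample output; in practice use: report=$(hdfs dfsadmin -report)
report='Live datanodes (3):
Dead datanodes (0):'

if echo "$report" | grep -q 'Dead datanodes (0)'; then
  echo "All DataNodes reporting; safe mode should clear on its own"
else
  echo "Some DataNodes are dead; investigate before forcing safe mode exit"
fi
```

Only consider `hdfs dfsadmin -safemode leave` once the report shows all DataNodes alive and no missing blocks.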

Master Mentor

@divyank 

Have you resolved this issue? If not: the issue you're encountering is common when Kerberos is enabled for HDFS, as it introduces authentication requirements that must be properly configured. Here’s how to diagnose and resolve the problem:

1. Root Cause Analysis

When Kerberos is enabled:

  1. Authentication: Every interaction with HDFS now requires a Kerberos ticket.
  2. Misconfiguration: The HDFS service or client-side configurations may not be aligned with Kerberos requirements.
  3. Keytabs: Missing or improperly configured keytab files for the HDFS service or users accessing the service.
  4. Browser Access: The HDFS Web UI may not support unauthenticated access unless explicitly configured.

2. Steps to Resolve

Step 1: Verify Kerberos Configuration

  • Check the Kerberos principal and keytab file paths for HDFS in Cloudera Manager:
    • Navigate to HDFS Service > Configuration.
    • Look for settings like:
      • hadoop.security.authentication → Should be set to kerberos.
      • dfs.namenode.kerberos.principal → Should match the principal defined in the KDC.
      • dfs.namenode.keytab.file → Ensure the file exists on the NameNode and has correct permissions.
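The presence of these keys can be checked from the shell. The sketch below greps a sample file standing in for /etc/hadoop/conf/hdfs-site.xml; the EXAMPLE.COM realm and the sample values are placeholders, not your cluster's actual settings.

```shell
# Sketch: confirm the Kerberos-related keys exist in hdfs-site.xml.
# A sample file stands in for the real /etc/hadoop/conf/hdfs-site.xml.
sample=$(mktemp)
cat > "$sample" <<'EOF'
<property>
  <name>hadoop.security.authentication</name><value>kerberos</value>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name><value>hdfs/_HOST@EXAMPLE.COM</value>
</property>
EOF

for key in hadoop.security.authentication dfs.namenode.kerberos.principal dfs.namenode.keytab.file; do
  if grep -q "$key" "$sample"; then
    echo "found:   $key"
  else
    echo "MISSING: $key"
  fi
done
```

Any key reported as MISSING should be set through Cloudera Manager rather than by editing the file by hand, since CM regenerates these configs on deploy.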

Step 2: Validate Kerberos Ticket

  • Check if the HDFS service has a valid Kerberos ticket:

```
klist -kte /path/to/hdfs.keytab
```

    If missing, reinitialize the ticket:

```
kinit -kt /path/to/hdfs.keytab hdfs/<hostname>@<REALM>
```

  • Test HDFS access from the command line:

```
hdfs dfs -ls /
```

    If you get authentication errors, the Kerberos ticket might be invalid.
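The ticket check can be scripted. In this sketch the `klist_out` variable holds sample `klist` output (the principal and cache path are placeholders); in practice capture it with `klist_out=$(klist 2>/dev/null)` on the NameNode host.

```shell
# Sketch: check whether a ticket for the hdfs service principal exists.
# klist_out holds sample output; the principal below is a placeholder.
klist_out='Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs/nn1.example.com@EXAMPLE.COM'

if echo "$klist_out" | grep -q '^Default principal: hdfs/'; then
  echo "hdfs ticket present"
else
  echo "no hdfs ticket; run: kinit -kt /path/to/hdfs.keytab hdfs/<hostname>@<REALM>"
fi
```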

 

Step 3: Validate HDFS Web UI Access

  • Post-Kerberos, accessing the HDFS Web UI (e.g., http://namenode-host:50070) often requires authentication. By default:
    • Unauthenticated Access: May be blocked.
    • Browser Integration: Ensure your browser is configured for Kerberos authentication or the UI is set to allow unauthenticated users.
  • Enable unauthenticated access in Cloudera Manager (if needed):
    • Go to HDFS Service > Configuration.
    • Search for hadoop.http.authentication.type and set it to simple.

Step 4: Review Logs for Errors

  • Check the NameNode logs for Kerberos-related errors:

```
less /var/log/hadoop/hdfs/hadoop-hdfs-namenode.log
```

    Look for errors like:
    • "GSSException: No valid credentials provided"
    • "Principal not found in the keytab"
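Scanning for those signatures can be automated with grep. In this sketch a single sample line stands in for the real log file; point the grep at the actual NameNode log path in practice.

```shell
# Sketch: scan a NameNode log for common Kerberos failure signatures.
# A sample line stands in for the real log file.
logfile=$(mktemp)
echo '2024-01-01 12:00:00 WARN ipc.Server: GSSException: No valid credentials provided' > "$logfile"

if grep -E 'GSSException|Principal not found' "$logfile"; then
  echo "Kerberos authentication failure found in log"
fi
```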

Step 5: Synchronize Clocks

  • Kerberos is sensitive to time discrepancies. Ensure all nodes in the cluster have synchronized clocks (via ntpd or chrony), for example:

```
ntpdate <NTP-server>
```
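Kerberos typically rejects authentication when clocks differ by more than 300 seconds (the common default for the allowable skew). The sketch below shows the arithmetic; `kdc_time` is a placeholder that in practice you would obtain from the KDC host.

```shell
# Sketch: check clock skew against the KDC (default tolerance ~300s).
# kdc_time is a placeholder; in practice fetch it from the KDC host.
local_time=$(date +%s)
kdc_time=$local_time   # placeholder for the KDC's clock reading

skew=$(( local_time - kdc_time ))
[ "$skew" -lt 0 ] && skew=$(( -skew ))

if [ "$skew" -lt 300 ]; then
  echo "clock skew ${skew}s: OK"
else
  echo "clock skew ${skew}s: fix NTP before retrying Kerberos"
fi
```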

Step 6: Restart Services

  • Restart the affected HDFS services via Cloudera Manager after making changes:
    • Restart NameNode, DataNode, and HDFS services.

Test the status of HDFS:

```
hdfs dfsadmin -report
```

3. Confirm Resolution

  • Verify HDFS functionality:
    • Test browsing HDFS via the CLI:

```
hdfs dfs -ls /
```

  • Access the Web UI to confirm functionality:

```
http://<namenode-host>:50070
```

  If HDFS is working via CLI but not in the Web UI, revisit the Web UI settings in Cloudera Manager to allow browser access or configure browser Kerberos support.

4. Troubleshooting Tips

  • If the issue persists:
    • Check the Kerberos ticket validity with:

```
klist
```

    • Use the following commands to test basic HDFS write access:

```
hdfs dfs -mkdir /test
hdfs dfs -put <local-file> /test
```

Let me know how it goes or if further guidance is needed!