HDFS GUI is giving error
Created ‎05-21-2024 05:09 AM
Hi Team,
HDFS was working before Kerberos was enabled; now, after enabling Kerberos, HDFS is not working and we are getting an error in the HDFS GUI. HDFS is managed by Cloudera Manager.
The NameNode health check is showing: "Test of whether the NameNode is in safe mode."
Error:
```
HTTP ERROR 502 Process information not available.

URI:     /cmf/process/1546341404/logs
STATUS:  502
MESSAGE: Process information not available.
SERVLET: Spring MVC Dispatcher Servlet
```
Created ‎05-22-2024 02:27 AM
Hi,
To fix the HDFS issue after enabling Kerberos, check the Kerberos configuration and ensure that all services are authenticating correctly. If the NameNode is in safe mode, you can try to exit it with `hdfs dfsadmin -safemode leave`.
For the HTTP ERROR 502, verify the network configuration and ensure that the Cloudera Manager services are running properly.
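The safe-mode check and exit described above can be sketched as a small script. This is a minimal sketch, not an official procedure: it assumes the `hdfs` CLI is on the PATH and that a valid Kerberos ticket already exists, and the `hdfs_cmd` override is only there to make dry runs possible. Forcing the NameNode out of safe mode should only be done once you understand why it is stuck.

```shell
# Minimal sketch (assumptions: hdfs CLI on PATH, valid Kerberos ticket).
# hdfs_cmd can be overridden, e.g. for a dry run.
hdfs_cmd="${hdfs_cmd:-hdfs}"

leave_safemode_if_on() {
  status="$("$hdfs_cmd" dfsadmin -safemode get)"   # e.g. "Safe mode is ON"
  case "$status" in
    *" ON"*) "$hdfs_cmd" dfsadmin -safemode leave ;;
    *)       echo "not in safe mode: $status" ;;
  esac
}
```

Run `leave_safemode_if_on` as the hdfs superuser; it only issues `-safemode leave` when the status output actually reports safe mode as ON.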
Created ‎12-12-2024 10:13 AM
@divyank The NameNode may stay in safe mode because it is waiting for the DataNodes to send their block reports; until those complete, it remains in safe mode. Ensure all DataNodes started properly, show no errors, and are connected to the NameNode. Review the NameNode logs to see what it is waiting for before it exits safe mode. Manually forcing it out of safe mode can cause data loss for unreported blocks. If you are in doubt, don't hesitate to contact Cloudera Support.
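To see how far along the block reports are without forcing anything, the NameNode's safe-mode status message can be parsed. A rough sketch, with the caveat that the message wording ("The reported blocks N needs additional M blocks ...") is an assumption based on common Hadoop releases, and `summarize_safemode` is an illustrative helper, not a Cloudera tool:

```shell
# Sketch: extract block counts from a typical NameNode safe-mode message.
# The message wording is an assumption; adjust the patterns to your release.
summarize_safemode() {
  # stdin: one safe-mode status/log line
  awk '{
    for (i = 2; i <= NF; i++) {
      if ($(i-1) == "blocks" && $i ~ /^[0-9]+$/ && reported == "") reported = $i
      if ($(i-1) == "additional" && $i ~ /^[0-9]+$/) missing = $i
    }
  } END {
    if (missing != "") printf "reported=%s missing=%s\n", reported, missing
    else print "no safe-mode block counts found"
  }'
}
```

For example, piping in a line such as "Safe mode is ON. The reported blocks 950 needs additional 50 blocks to reach the threshold 0.9990 of total blocks 1000." prints `reported=950 missing=50`, which tells you roughly how many blocks are still unreported before forcing anything.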
Created ‎12-16-2024 02:10 PM
Have you resolved this issue? If not: the problem you're encountering is common when Kerberos is enabled for HDFS, as it introduces authentication requirements that must be properly configured. Here's how to diagnose and resolve it:
1. Root Cause Analysis
When Kerberos is enabled:
- Authentication: Every interaction with HDFS now requires a Kerberos ticket.
- Misconfiguration: The HDFS service or client-side configurations may not be aligned with Kerberos requirements.
- Keytabs: Missing or improperly configured keytab files for the HDFS service or users accessing the service.
- Browser Access: The HDFS Web UI may not support unauthenticated access unless explicitly configured.
2. Steps to Resolve
Step 1: Verify Kerberos Configuration
- Check the Kerberos principal and keytab file paths for HDFS in Cloudera Manager:
- Navigate to HDFS Service > Configuration.
- Look for settings like:
- hadoop.security.authentication → Should be set to kerberos (lowercase).
- dfs.namenode.kerberos.principal → Should match the principal defined in the KDC.
- dfs.namenode.keytab.file → Ensure the file exists on the NameNode and has correct permissions.
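The three settings above can also be checked outside the CM UI by reading them from the generated configuration files. A sketch, with the caveat that the paths and the `get_prop` helper are assumptions (Cloudera Manager writes per-process configs under /var/run/cloudera-scm-agent/process/, while plain Hadoop keeps client configs in /etc/hadoop/conf):

```shell
# Sketch: extract a property value from an hdfs-site.xml / core-site.xml
# style file. Helper name and file paths are illustrative, not a standard tool.
get_prop() {  # usage: get_prop <property-name> <config-file>
  grep -A1 "<name>$1</name>" "$2" |
    sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
}
```

For example, `get_prop hadoop.security.authentication /etc/hadoop/conf/core-site.xml` should print `kerberos` on a secured cluster.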
Step 2: Validate Kerberos Ticket
- Check whether the HDFS keytab contains the expected principals:
```
klist -kte /path/to/hdfs.keytab
```
- If the ticket is missing, reinitialize it:
```
kinit -kt /path/to/hdfs.keytab hdfs/<hostname>@<REALM>
```
- Test HDFS access from the command line; if you get authentication errors, the Kerberos ticket might be invalid:
```
hdfs dfs -ls /
```
Step 3: Validate HDFS Web UI Access
- Post-Kerberos, accessing the HDFS Web UI (e.g., http://namenode-host:9870, or 50070 on older Hadoop 2 releases) often requires authentication. By default:
- Unauthenticated Access: May be blocked.
- Browser Integration: Ensure your browser is configured for Kerberos authentication or the UI is set to allow unauthenticated users.
- Enable unauthenticated access in Cloudera Manager (if needed):
- Go to HDFS Service > Configuration.
- Search for hadoop.http.authentication.type and set it to simple.
Step 4: Review Logs for Errors
- Check the NameNode logs for Kerberos-related errors:
```
less /var/log/hadoop/hdfs/hadoop-hdfs-namenode.log
```
- Look for errors like:
  - "GSSException: No valid credentials provided"
  - "Principal not found in the keytab"
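Those messages (and a few related ones) can be pulled out of a large log in one pass. A small sketch; the helper name and grep patterns are illustrative, and the log path should be adjusted to your installation:

```shell
# Sketch: show the most recent Kerberos-related errors in a NameNode log.
# Helper name and patterns are illustrative assumptions.
grep_krb_errors() {  # usage: grep_krb_errors <logfile>
  grep -Ei 'GSSException|No valid credentials|Principal not found' "$1" |
    tail -n 20
}
```

For example, `grep_krb_errors /var/log/hadoop/hdfs/hadoop-hdfs-namenode.log` lists the last 20 Kerberos-related error lines.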
Step 5: Synchronize Clocks
- Kerberos is sensitive to time discrepancies. Ensure all nodes in the cluster have synchronized clocks:
```
ntpdate <NTP-server>
```
Step 6: Restart Services
- Restart the affected HDFS services via Cloudera Manager after making changes:
- Restart NameNode, DataNode, and HDFS services.
- Test the status of HDFS afterwards.
3. Confirm Resolution
Verify HDFS functionality:
- Test browsing HDFS via the CLI:
```
hdfs dfs -ls /
```
- Access the Web UI to confirm it loads without errors.
4. Troubleshooting Tips
- If the issue persists, let me know how it goes or whether further guidance is needed!
