Member since: 03-06-2019
Posts: 113
Kudos Received: 5
Solutions: 4
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2348 | 05-14-2024 06:44 AM |
 | 2538 | 05-14-2024 06:08 AM |
 | 927 | 11-23-2023 05:58 AM |
 | 1246 | 11-23-2023 05:45 AM |
06-07-2024
04:07 PM
1 Kudo
@Shelton I'm using Ubuntu 22.04 with ODP (https://clemlabs.s3.eu-west-3.amazonaws.com/ubuntu22/odp-release/1.2.2.0-46/ODP).
05-24-2024
02:38 AM
1 Kudo
Hi @hiralal, here is another link if you would like to check it out: https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/hadoop_tokens.html
05-24-2024
02:34 AM
Hi @NaveenBlaze, you can get more info from https://github.com/c9n/hadoop/blob/master/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java#L196. Notice these lines in the doTailEdits method:
FSImage image = namesystem.getFSImage();
streams = editLog.selectInputStreams(lastTxnId + 1, 0, null, false);
editsLoaded = image.loadEdits(streams, namesystem);
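If you want to watch this edit tailing from outside the process, one option (the standby NameNode hostname below is a placeholder, and the HTTP port is typically 9870 on recent releases or 50070 on older ones) is to query the NameNode JMX endpoint; the NameNodeInfo bean includes a JournalTransactionInfo attribute reporting the last applied/written transaction id and the most recent checkpoint transaction id:

# Query the standby NameNode's JMX for journal/transaction information
curl -s 'http://standby-nn.example.com:9870/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo'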
04-03-2024
09:30 AM
Hi @s198, you do not need a Hadoop filesystem or a DataNode role on the remote server. You just need to set up an HDFS gateway on the remote server and pull the data using distcp. If you are using HDP or CDP, you can add the remote server as a gateway and run distcp on the remote server. Another option is to share one of the directories on the remote server, mount it on a Hadoop cluster node, and run distcp to that mounted directory (a rough sketch follows below).
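To make the second option concrete, here is a minimal sketch; the server name, export path, and mount point are made up for illustration, and NFS is assumed as the sharing mechanism:

# Mount a shared directory from the remote server onto a cluster node
sudo mount -t nfs remote-server.example.com:/exports/backup /mnt/remote_backup

# Copy an HDFS directory to the mounted share with distcp
# Note: distcp map tasks run on worker nodes, so the mount must be visible on those nodes;
# for a purely client-side copy from the node holding the mount, use "hdfs dfs -get" instead
hadoop distcp hdfs:///data/warehouse/mydb file:///mnt/remote_backup/mydb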
03-22-2024
06:34 AM
Introduction:
In large Hadoop clusters, efficiently managing block replication and the decommissioning of DataNodes is crucial for maintaining system performance and reliability. However, updating NameNode configuration parameters to optimize these processes often requires a NameNode restart, causing downtime and potential disruption to cluster operations. In this article, we'll explore a procedure to expedite block replication and DataNode decommissioning in HDFS without the need for a NameNode restart.

Procedure:

1. Identify the NameNode process directory: Locate the process directory of the current active NameNode. It typically resides under /var/run/cloudera-scm-agent/process/ in a folder named like "###-hdfs-NAMENODE" (see the example after this article).

2. Modify configuration parameters: Edit the hdfs-site.xml file in the NameNode process directory and adjust the following parameters to the recommended values:
- dfs.namenode.replication.max-streams: increase to a recommended value (e.g., 100).
- dfs.namenode.replication.max-streams-hard-limit: increase to a recommended value (e.g., 200).
- dfs.namenode.replication.work.multiplier.per.iteration: increase to a recommended value (e.g., 100).

3. Apply the configuration changes: Execute the command below to initiate the reconfiguration process. <namenode_address> is the value of "dfs.namenode.rpc-address" in hdfs-site.xml.
# hdfs dfsadmin -reconfig namenode <namenode_address> start

4. Verify the configuration changes: Monitor the reconfiguration status using:
# hdfs dfsadmin -reconfig namenode <namenode_address> status
Upon completion, verify that the configuration changes have been applied successfully. The output looks something like this:
# hdfs dfsadmin -reconfig namenode namenode_hostname:8020 status
Reconfiguring status for node [namenode_hostname:8020]: started at Fri Mar 22 08:15:12 UTC 2024 and finished at Fri Mar 22 08:15:12 UTC 2024.
SUCCESS: Changed property dfs.namenode.replication.max-streams-hard-limit
        From: "40"
        To: "200"
SUCCESS: Changed property dfs.namenode.replication.work.multiplier.per.iteration
        From: "10"
        To: "100"
SUCCESS: Changed property dfs.namenode.replication.max-streams
        From: "20"
        To: "100"

5. Revert the configuration changes (optional): If needed, revert to the original configuration values by repeating the above steps with the original parameter values.

Conclusion:
By following the outlined procedure, administrators can expedite block replication and DataNode decommissioning in HDFS without a NameNode restart. This approach minimizes downtime and ensures efficient cluster management, even in environments where NameNode High Availability is not yet implemented or desired.

Note: It's recommended to test configuration changes in a non-production environment before applying them to a live cluster to avoid potential disruptions. Additionally, consult the Hadoop documentation and consider any specific requirements or constraints of your cluster environment before making configuration modifications.
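As a quick illustration of steps 1 and 3, a sketch of how to locate the process directory and the RPC address on a Cloudera Manager managed host; the numbered directory "1234-hdfs-NAMENODE" is a placeholder for whatever directory the first command returns:

# List NameNode process directories, newest first; run this on the active NameNode host
ls -td /var/run/cloudera-scm-agent/process/*-hdfs-NAMENODE | head -1

# Read the NameNode RPC address from that directory's hdfs-site.xml
# (replace 1234-hdfs-NAMENODE with the directory found above)
grep -A1 'dfs.namenode.rpc-address' /var/run/cloudera-scm-agent/process/1234-hdfs-NAMENODE/hdfs-site.xml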
01-08-2024
08:36 AM
Hi, that's your Standby NameNode (SBNN). Please verify whether it is performing checkpointing or not. Perform one checkpoint from Cloudera Manager to clear the health test.
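If Cloudera Manager is not handy, a checkpoint can also be forced from the command line as the HDFS superuser; note this is a sketch to adapt, not a drop-in step, since it briefly puts HDFS into safemode and blocks writes:

# Run as the HDFS superuser (e.g. after kinit with the hdfs principal on a secured cluster)
hdfs dfsadmin -safemode enter    # block new writes while the image is saved
hdfs dfsadmin -saveNamespace     # write a new fsimage (i.e. a checkpoint)
hdfs dfsadmin -safemode leave    # resume normal operation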
12-21-2023
05:14 AM
Hi @michalLi, I have been trying this in CDP PvC now but it does not seem to work. Here is the behavior I see for the Spark History Server web UI (7.1.7.2000):
TLS enabled and Kerberos enabled, without keytab: https://172.25.42.2:18088 works fine.
TLS disabled and Kerberos enabled, with/without keytab: http://172.25.42.2:18088 fails with 401 Auth on macOS/Chrome.
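As a diagnostic idea (not something from the product docs), a SPNEGO-enabled curl after kinit can help separate browser-side negotiation problems from server-side auth problems; the principal below is a placeholder and the host/port are the ones from my test above:

# Requires a valid Kerberos ticket and a curl build with GSS-API/SPNEGO support
kinit user@EXAMPLE.REALM                       # hypothetical principal
curl -v --negotiate -u : http://172.25.42.2:18088/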
12-08-2023
05:00 AM
Hi @akshaydalvi, I am not sure if you are using Cloudera Manager or not; please confirm. Make sure your log4j.properties reflects what you are trying to change. You could add the line "hadoop.security.logger=ERROR,RFAS" under the log4j safety valve. Be sure that you are using the right RollingFileAppender name, RFA or RFAS. We generally use RFAS for SecurityLogger, as shown in the snippet below:
hadoop.security.log.file=SecurityAuth-${user.name}.audit
log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
log4j.appender.RFAS.File=/var/log/hadoop-hdfs/SecurityAuth-${user.name}.audit
Here is how we configure the log4j safety valve in Cloudera Manager under HDFS -> Configuration.
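As a quick sanity check (the paths are typical Cloudera Manager defaults and may differ in your cluster), you can confirm the safety-valve line actually reached the role's generated log4j.properties and that the audit file is being written:

# Check the generated log4j.properties in the current HDFS role process directories
grep -n 'hadoop.security.logger' /var/run/cloudera-scm-agent/process/*-hdfs-*/log4j.properties

# Watch the security audit log produced by the RFAS appender (user.name expands to the role user, e.g. hdfs)
tail -f /var/log/hadoop-hdfs/SecurityAuth-hdfs.audit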
11-28-2023
09:17 AM
1 Kudo
Thanks @Majeti. Indeed, with krb5 debug enabled it showed an error when connecting to port 88 even though the port was open over TCP. I opened port 88 over UDP as well and it worked.
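In case it helps anyone else hitting this, a quick reachability check against the KDC on both protocols looks roughly like the below; the KDC hostname is a placeholder, and UDP results are best-effort since UDP is connectionless (exact flags vary by netcat variant):

# TCP check against the KDC
nc -vz kdc.example.com 88
# UDP check; small Kerberos requests go over UDP first by default unless udp_preference_limit is tuned in krb5.conf
nc -vzu kdc.example.com 88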