Support Questions


HDFS Checkpoint Status Errors

Contributor

I am getting the following Checkpoint Status error and am wondering if someone has an idea of how I can solve it. This is a brand-new cluster that is not in use yet: Cloudera Manager v7.13.1, Runtime v7.3.1, on RHEL 9.

Bad : The filesystem checkpoint is 12 hour(s), 41 minute(s) old. This is 1,269.03% of the configured checkpoint period of 1 hour(s). Critical threshold: 400.00%. 7,501 transactions have occurred since the last filesystem checkpoint. This is 0.75% of the configured checkpoint transaction target of 1,000,000.
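For context, the two percentages in the alert are plain ratios: checkpoint age against the configured checkpoint period (`dfs.namenode.checkpoint.period`, 1 hour here) and transactions since the last checkpoint against the transaction target (`dfs.namenode.checkpoint.txns`, 1,000,000 here). A rough reconstruction of the arithmetic (the alert rounds the age to whole minutes, so the figure differs slightly):

```shell
# Reconstruct the Cloudera Manager checkpoint health-check arithmetic
# using the values shown in the alert above.
period_s=3600                    # dfs.namenode.checkpoint.period: 1 hour
age_s=$(( 12*3600 + 41*60 ))     # reported checkpoint age: 12 h 41 m
txns=7501                        # transactions since last checkpoint
txn_target=1000000               # dfs.namenode.checkpoint.txns

age_pct=$(( age_s * 100 / period_s ))    # ~1268% (alert shows 1,269.03%)
txn_pct=$(awk "BEGIN { printf \"%.2f\", $txns / $txn_target * 100 }")
echo "age: ${age_pct}%  txns: ${txn_pct}%"
```

The alert fires because the age ratio is far past the 400% critical threshold, even though the transaction count is tiny; the problem is that checkpoints are not happening at all, not that the cluster is busy.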

Role Log:

11:34:31.289 AM  INFO  FSNamesystem         Roll Edit Log from 192.168.158.2
11:34:31.289 AM  INFO  FSEditLog            Rolling edit logs
11:34:31.289 AM  INFO  FSEditLog            Ending log segment 95836, 95842
11:34:31.290 AM  INFO  FSEditLog            Number of transactions: 8 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 2 Number of syncs: 6 SyncTimes(ms): 5
11:34:31.290 AM  INFO  FSEditLog            Number of transactions: 8 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 2 Number of syncs: 7 SyncTimes(ms): 5
11:34:31.292 AM  INFO  FileJournalManager   Finalizing edits file /opt/dfs/nn/current/edits_inprogress_0000000000000095836 -> /opt/dfs/nn/current/edits_0000000000000095836-0000000000000095843
11:34:31.292 AM  INFO  FSEditLog            Starting log segment at 95844
11:34:44.142 AM  INFO  BlockPlacementPolicy Not enough replicas was chosen. Reason:{NO_REQUIRED_STORAGE_TYPE=1}
11:34:44.142 AM  INFO  BlockPlacementPolicy Not enough replicas was chosen. Reason:{NO_REQUIRED_STORAGE_TYPE=1}
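The two `BlockPlacementPolicy` lines are likely a separate issue from the checkpoint alert: on a small, newly built cluster they usually mean the placement policy could not find a DataNode offering the storage type a storage policy requires. A few read-only commands (a sketch, assuming the `hdfs` CLI is on the PATH on a cluster host) that can help narrow that down:

```shell
# Read-only checks for NO_REQUIRED_STORAGE_TYPE placement warnings.
hdfs dfsadmin -report                 # live DataNodes, capacity, storage per node
hdfs storagepolicies -listPolicies    # storage policies known to the cluster
hdfs fsck / -files -blocks -locations # where existing blocks actually landed
```

If only one or two DataNodes are up, or a policy such as ONE_SSD is set without matching media, these warnings are expected until more suitable storage is available.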

Stdout:

Tue Jul 15 10:51:23 PM CDT 2025
JAVA_HOME=/usr/lib/jvm/java-openjdk
using /usr/lib/jvm/java-openjdk as JAVA_HOME
using 7 as CDH_VERSION
using /var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait as CONF_DIR
using  as SECURE_USER
using  as SECURE_GROUP
CONF_DIR=/var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait
CMF_CONF_DIR=
unlimited
Safe mode is ON
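That `Safe mode is ON` line is probably the key. A NameNode stuck in safe mode (here the launcher is still in the `nnRpcWait` phase) will keep the checkpoint alert firing: in an HA pair the standby NameNode performs checkpoints, and in a non-HA layout the SecondaryNameNode does, but neither can make progress until the NameNode is fully up and out of safe mode. As a sketch (assuming the `hdfs` CLI and HDFS superuser privileges), you can inspect the state and, as a last resort, force a checkpoint manually; note that `-saveNamespace` requires safe mode, which blocks writes for the duration:

```shell
# Inspect safe mode state; leave it only once you know why it is stuck ON
# (e.g. waiting on DataNode block reports that will never arrive).
hdfs dfsadmin -safemode get
hdfs dfsadmin -safemode leave

# Manually force a checkpoint (merges edit logs into a new fsimage):
hdfs dfsadmin -safemode enter
hdfs dfsadmin -saveNamespace
hdfs dfsadmin -safemode leave
```

On a healthy cluster this should not be needed; if checkpoints still do not happen afterwards, the standby NameNode / SecondaryNameNode role logs are the next place to look.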

Stderr:

[15/Jul/2025 22:51:23 -0500] 3566604 MainThread redactor     INFO     Started launcher: /opt/cloudera/cm-agent/service/hdfs/hdfs.sh nnRpcWait hdfs://dmidlkprdls01.svr.luc.edu:8020
[15/Jul/2025 22:51:23 -0500] 3566604 MainThread redactor     INFO     Re-exec watcher: /opt/cloudera/cm-agent/bin/cm proc_watcher 3566630
[15/Jul/2025 22:51:23 -0500] 3566631 MainThread redactor     INFO     Re-exec redactor: /opt/cloudera/cm-agent/bin/cm redactor --fds 3 5
[15/Jul/2025 22:51:23 -0500] 3566631 MainThread redactor     INFO     Started redactor
Tue Jul 15 10:51:23 PM CDT 2025
+ source_parcel_environment
+ '[' '!' -z /opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/meta/cdh_env.sh ']'
+ OLD_IFS=' 	
'
+ IFS=:
+ SCRIPT_ARRAY=($SCM_DEFINES_SCRIPTS)
+ DIRNAME_ARRAY=($PARCEL_DIRNAMES)
+ IFS=' 	
'
+ COUNT=1
++ seq 1 1
+ for i in `seq 1 $COUNT`
+ SCRIPT=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/meta/cdh_env.sh
+ PARCEL_DIRNAME=CDH-7.3.1-1.cdh7.3.1.p0.60371244
+ . /opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/meta/cdh_env.sh
++ CDH_DIRNAME=CDH-7.3.1-1.cdh7.3.1.p0.60371244
++ export CDH_HADOOP_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop
++ CDH_HADOOP_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop
++ export CDH_ICEBERG_REPLICATION_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/iceberg-replication
++ CDH_ICEBERG_REPLICATION_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/iceberg-replication
++ export CDH_MR1_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-0.20-mapreduce
++ CDH_MR1_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-0.20-mapreduce
++ export CDH_HDFS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-hdfs
++ CDH_HDFS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-hdfs
++ export CDH_OZONE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-ozone
++ CDH_OZONE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-ozone
++ export CDH_HTTPFS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-httpfs
++ CDH_HTTPFS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-httpfs
++ export CDH_MR2_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-mapreduce
++ CDH_MR2_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-mapreduce
++ export CDH_YARN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-yarn
++ CDH_YARN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-yarn
++ export CDH_HBASE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hbase
++ CDH_HBASE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hbase
++ export CDH_HBASE_FILESYSTEM_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hbase_filesystem
++ CDH_HBASE_FILESYSTEM_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hbase_filesystem
++ export CDH_HBASE_CONNECTORS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hbase_connectors
++ CDH_HBASE_CONNECTORS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hbase_connectors
++ export CDH_ZOOKEEPER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/zookeeper
++ CDH_ZOOKEEPER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/zookeeper
++ export CDH_ZEPPELIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/zeppelin
++ CDH_ZEPPELIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/zeppelin
++ export CDH_HIVE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hive
++ CDH_HIVE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hive
++ export CDH_HUE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hue
++ CDH_HUE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hue
++ export HUE_QP_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hue-query-processor
++ HUE_QP_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hue-query-processor
++ export CDH_OOZIE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/oozie
++ CDH_OOZIE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/oozie
++ export CDH_HUE_PLUGINS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop
++ CDH_HUE_PLUGINS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop
++ export CDH_HCAT_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hive-hcatalog
++ CDH_HCAT_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hive-hcatalog
++ export CDH_SENTRY_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/sentry
++ CDH_SENTRY_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/sentry
++ export JSVC_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/bigtop-utils
++ JSVC_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/bigtop-utils
++ export CDH_HADOOP_BIN=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop/bin/hadoop
++ CDH_HADOOP_BIN=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop/bin/hadoop
++ export CDH_IMPALA_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/impala
++ CDH_IMPALA_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/impala
++ export CDH_SOLR_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/solr
++ CDH_SOLR_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/solr
++ export CDH_HBASE_INDEXER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hbase-solr
++ CDH_HBASE_INDEXER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hbase-solr
++ export SEARCH_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/search
++ SEARCH_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/search
++ export CDH_SPARK_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/spark
++ CDH_SPARK_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/spark
++ export CDH_SPARK3_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/spark3
++ CDH_SPARK3_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/spark3
++ export WEBHCAT_DEFAULT_XML=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/etc/hive-webhcat/conf.dist/webhcat-default.xml
++ WEBHCAT_DEFAULT_XML=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/etc/hive-webhcat/conf.dist/webhcat-default.xml
++ export CDH_KMS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-kms
++ CDH_KMS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-kms
++ export CDH_PARQUET_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/parquet
++ CDH_PARQUET_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/parquet
++ export CDH_AVRO_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/avro
++ CDH_AVRO_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/avro
++ export CDH_KAFKA_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/kafka
++ CDH_KAFKA_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/kafka
++ export CDH_SCHEMA_REGISTRY_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/schemaregistry
++ CDH_SCHEMA_REGISTRY_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/schemaregistry
++ export CDH_STREAMS_MESSAGING_MANAGER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/streams_messaging_manager
++ CDH_STREAMS_MESSAGING_MANAGER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/streams_messaging_manager
++ export CDH_STREAMS_MESSAGING_MANAGER_UI_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/streams_messaging_manager_ui
++ CDH_STREAMS_MESSAGING_MANAGER_UI_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/streams_messaging_manager_ui
++ export CDH_STREAMS_REPLICATION_MANAGER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/streams_replication_manager
++ CDH_STREAMS_REPLICATION_MANAGER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/streams_replication_manager
++ export CDH_CRUISE_CONTROL_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/cruise_control
++ CDH_CRUISE_CONTROL_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/cruise_control
++ export CDH_KNOX_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/knox
++ CDH_KNOX_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/knox
++ export CDH_KUDU_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/kudu
++ CDH_KUDU_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/kudu
++ export CDH_RANGER_ADMIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-admin
++ CDH_RANGER_ADMIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-admin
++ export CDH_RANGER_TAGSYNC_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-tagsync
++ CDH_RANGER_TAGSYNC_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-tagsync
++ export CDH_RANGER_USERSYNC_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-usersync
++ CDH_RANGER_USERSYNC_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-usersync
++ export CDH_RANGER_KMS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-kms
++ CDH_RANGER_KMS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-kms
++ export CDH_RANGER_RAZ_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-raz
++ CDH_RANGER_RAZ_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-raz
++ export CDH_RANGER_RMS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-rms
++ CDH_RANGER_RMS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-rms
++ export CDH_ATLAS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/atlas
++ CDH_ATLAS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/atlas
++ export CDH_TEZ_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/tez
++ CDH_TEZ_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/tez
++ export CDH_PHOENIX_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/phoenix
++ CDH_PHOENIX_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/phoenix
++ export CDH_PHOENIX_QUERYSERVER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/phoenix_queryserver
++ CDH_PHOENIX_QUERYSERVER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/phoenix_queryserver
++ export DAS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/data_analytics_studio
++ DAS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/data_analytics_studio
++ export QUEUEMANAGER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/queuemanager
++ QUEUEMANAGER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/queuemanager
++ export CDH_RANGER_HBASE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-hbase-plugin
++ CDH_RANGER_HBASE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-hbase-plugin
++ export CDH_RANGER_HIVE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-hive-plugin
++ CDH_RANGER_HIVE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-hive-plugin
++ export CDH_RANGER_ATLAS_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-atlas-plugin
++ CDH_RANGER_ATLAS_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-atlas-plugin
++ export CDH_RANGER_SOLR_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-solr-plugin
++ CDH_RANGER_SOLR_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-solr-plugin
++ export CDH_RANGER_HDFS_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-hdfs-plugin
++ CDH_RANGER_HDFS_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-hdfs-plugin
++ export CDH_RANGER_KNOX_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-knox-plugin
++ CDH_RANGER_KNOX_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-knox-plugin
++ export CDH_RANGER_YARN_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-yarn-plugin
++ CDH_RANGER_YARN_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-yarn-plugin
++ export CDH_RANGER_OZONE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-ozone-plugin
++ CDH_RANGER_OZONE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-ozone-plugin
++ export CDH_RANGER_KAFKA_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-kafka-plugin
++ CDH_RANGER_KAFKA_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-kafka-plugin
++ export CDH_PROFILER_ADMIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/profiler_manager/profileradmin
++ CDH_PROFILER_ADMIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/profiler_manager/profileradmin
++ export CDH_PROFILER_METRICS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/profiler_manager/profilermetrics
++ CDH_PROFILER_METRICS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/profiler_manager/profilermetrics
++ export CDH_DATA_DISCOVERY_SERVICE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/profiler_manager/data-discovery-service
++ CDH_DATA_DISCOVERY_SERVICE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/profiler_manager/data-discovery-service
++ export CDH_PROFILER_SCHEDULER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/profiler_scheduler
++ CDH_PROFILER_SCHEDULER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/profiler_scheduler
+ locate_cdh_java_home
+ '[' -z '' ']'
+ '[' -z /opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/bigtop-utils ']'
+ local BIGTOP_DETECT_JAVAHOME=
+ for candidate in "${JSVC_HOME}" "${JSVC_HOME}/.." "/usr/lib/bigtop-utils" "/usr/libexec"
+ '[' -e /opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/bigtop-utils/bigtop-detect-javahome ']'
+ BIGTOP_DETECT_JAVAHOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/bigtop-utils/bigtop-detect-javahome
+ break
+ '[' -z /opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/bigtop-utils/bigtop-detect-javahome ']'
+ . /opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/bigtop-utils/bigtop-detect-javahome
++ BIGTOP_DEFAULTS_DIR=/etc/default
++ '[' -n /etc/default -a -r /etc/default/bigtop-utils ']'
++ OPENJAVA17_HOME_CANDIDATES=('/usr/lib/jvm/java-17' '/usr/lib/jvm/jdk-17' '/usr/lib/jvm/jdk1.17' '/usr/lib/jvm/zulu-17' '/usr/lib/jvm/zulu17' '/usr/lib64/jvm/java-17' '/usr/lib64/jvm/jdk1.17')
++ JAVA11_HOME_CANDIDATES=('/usr/java/jdk-11' '/usr/lib/jvm/jdk-11' '/usr/lib/jvm/java-11-oracle')
++ OPENJAVA11_HOME_CANDIDATES=('/usr/java/jdk-11' '/usr/lib/jvm/java-11' '/usr/lib/jvm/jdk-11' '/usr/lib64/jvm/jdk-11' '/usr/lib/jvm/zulu-11' '/usr/lib/jvm/zulu11' '/usr/lib/jvm/java-11-zulu-openjdk')
++ JAVA8_HOME_CANDIDATES=('/usr/java/jdk1.8' '/usr/java/jdk8' '/usr/java/jre1.8' '/usr/lib/jvm/j2sdk1.8-oracle' '/usr/lib/jvm/j2sdk1.8-oracle/jre' '/usr/lib/jvm/java-8-oracle')
++ OPENJAVA8_HOME_CANDIDATES=('/usr/lib/jvm/java-1.8.0-openjdk' '/usr/lib/jvm/java-8' '/usr/lib/jvm/java-8-openjdk' '/usr/lib64/jvm/java-1.8.0-openjdk' '/usr/lib64/jvm/java-8-openjdk' '/usr/lib/jvm/zulu-8' '/usr/lib/jvm/zulu8' '/usr/lib/jvm/java-8-zulu-openjdk')
++ MISCJAVA_HOME_CANDIDATES=('/Library/Java/Home' '/usr/java/default' '/usr/lib/jvm/default-java' '/usr/lib/jvm/java-openjdk' '/usr/lib/jvm/jre-openjdk')
++ case ${BIGTOP_JAVA_MAJOR} in
++ JAVA_HOME_CANDIDATES=(${OPENJAVA17_HOME_CANDIDATES[@]} ${JAVA11_HOME_CANDIDATES[@]} ${OPENJAVA11_HOME_CANDIDATES[@]} ${JAVA8_HOME_CANDIDATES[@]} ${MISCJAVA_HOME_CANDIDATES[@]} ${OPENJAVA8_HOME_CANDIDATES[@]})
++ '[' -z '' ']'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/java-17*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/jdk-17*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/jdk1.17*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/zulu-17*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/zulu17*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib64/jvm/java-17*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib64/jvm/jdk1.17*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/java/jdk-11*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/jdk-11*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/java-11-oracle*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/java/jdk-11*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/java-11*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/jdk-11*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib64/jvm/jdk-11*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/zulu-11*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/zulu11*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/java-11-zulu-openjdk*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/java/jdk1.8*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/java/jdk8*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/java/jre1.8*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/j2sdk1.8-oracle*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/j2sdk1.8-oracle/jre*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/java-8-oracle*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/Library/Java/Home*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/java/default*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/default-java*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd /usr/lib/jvm/java-openjdk
++ for candidate in `ls -rvd ${candidate_regex}* 2>/dev/null`
++ '[' -e /usr/lib/jvm/java-openjdk/bin/java ']'
++ export JAVA_HOME=/usr/lib/jvm/java-openjdk
++ JAVA_HOME=/usr/lib/jvm/java-openjdk
++ break 2
+ get_java_major_version JAVA_MAJOR
+ '[' -z /usr/lib/jvm/java-openjdk/bin/java ']'
++ /usr/lib/jvm/java-openjdk/bin/java -version
+ local 'VERSION_STRING=openjdk version "1.8.0_432"
OpenJDK Runtime Environment (build 1.8.0_432-b06)
OpenJDK 64-Bit Server VM (build 25.432-b06, mixed mode)'
+ local 'RE_JAVA=[java|openjdk][[:space:]]version[[:space:]]\"1\.([0-9][0-9]*)\.?+'
+ [[ openjdk version "1.8.0_432"
OpenJDK Runtime Environment (build 1.8.0_432-b06)
OpenJDK 64-Bit Server VM (build 25.432-b06, mixed mode) =~ [java|openjdk][[:space:]]version[[:space:]]\"1\.([0-9][0-9]*)\.?+ ]]
+ eval JAVA_MAJOR=8
++ JAVA_MAJOR=8
+ '[' 8 -lt 8 ']'
+ verify_java_home
+ '[' -z /usr/lib/jvm/java-openjdk ']'
+ echo JAVA_HOME=/usr/lib/jvm/java-openjdk
+ . /opt/cloudera/cm-agent/service/common/cdh-default-hadoop
++ [[ -z 7 ]]
++ '[' 7 = 3 ']'
++ '[' 7 = -3 ']'
++ '[' 7 -ge 4 ']'
++ export HADOOP_HOME_WARN_SUPPRESS=true
++ HADOOP_HOME_WARN_SUPPRESS=true
++ export HADOOP_PREFIX=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop
++ HADOOP_PREFIX=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop
++ export HADOOP_LIBEXEC_DIR=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop/libexec
++ HADOOP_LIBEXEC_DIR=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop/libexec
++ export HADOOP_CONF_DIR=/var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait
++ HADOOP_CONF_DIR=/var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait
++ export HADOOP_COMMON_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop
++ HADOOP_COMMON_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop
++ export HADOOP_HDFS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-hdfs
++ HADOOP_HDFS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-hdfs
++ export HADOOP_MAPRED_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-mapreduce
++ HADOOP_MAPRED_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-mapreduce
++ '[' 7 = 4 ']'
++ '[' 7 -ge 5 ']'
++ export HADOOP_YARN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-yarn
++ HADOOP_YARN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-yarn
+ export HADOOP_OPTS=
+ HADOOP_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#3566630#g'
+ export HDFS_ZKFC_OPTS=
+ HDFS_ZKFC_OPTS=
++ replace_pid -Xms4294967296 -Xmx4294967296 '{{JAVA_GC_ARGS}}' -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh
++ echo -Xms4294967296 -Xmx4294967296 '{{JAVA_GC_ARGS}}' -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh
++ sed 's#{{PID}}#3566630#g'
+ export 'HADOOP_NAMENODE_OPTS=-Xms4294967296 -Xmx4294967296 {{JAVA_GC_ARGS}} -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh'
+ HADOOP_NAMENODE_OPTS='-Xms4294967296 -Xmx4294967296 {{JAVA_GC_ARGS}} -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh'
++ replace_pid
++ echo
++ sed 's#{{PID}}#3566630#g'
+ export HADOOP_DATANODE_OPTS=
+ HADOOP_DATANODE_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#3566630#g'
+ export HADOOP_SECONDARYNAMENODE_OPTS=
+ HADOOP_SECONDARYNAMENODE_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#3566630#g'
+ export HADOOP_NFS3_OPTS=
+ HADOOP_NFS3_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#3566630#g'
+ export HADOOP_JOURNALNODE_OPTS=
+ HADOOP_JOURNALNODE_OPTS=
+ get_jdk11plus_fips_java_opts
+ export CLDR_JDK11PLUS_FIPS_JAVA_ARGS=
+ CLDR_JDK11PLUS_FIPS_JAVA_ARGS=
+ get_generic_java_opts
+ jmx_exporter_option=
++ find /opt/cloudera/cm/lib -name 'jmx_prometheus_javaagent-*.jar'
++ tail -n 1
+ jmx_exporter_jar=/opt/cloudera/cm/lib/jmx_prometheus_javaagent-0.20.0.jar
+ '[' -n '' -a -n /opt/cloudera/cm/lib/jmx_prometheus_javaagent-0.20.0.jar -a True '!=' True ']'
+ export 'GENERIC_JAVA_OPTS= -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ GENERIC_JAVA_OPTS=' -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ HADOOP_NAMENODE_OPTS='-Xms4294967296 -Xmx4294967296 {{JAVA_GC_ARGS}} -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ HADOOP_DATANODE_OPTS='  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ HADOOP_SECONDARYNAMENODE_OPTS='  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ HADOOP_NFS3_OPTS='  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ HADOOP_JOURNALNODE_OPTS='  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ get_additional_jvm_args
+ JAVA17_ADDITIONAL_JVM_ARGS='--add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED --add-exports=java.base/sun.net.dns=ALL-UNNAMED --add-exports=java.base/sun.net.util=ALL-UNNAMED'
+ set_additional_jvm_args_based_on_java_version
+ get_java_major_version JAVA_MAJOR
+ '[' -z /usr/lib/jvm/java-openjdk/bin/java ']'
++ /usr/lib/jvm/java-openjdk/bin/java -version
+ local 'VERSION_STRING=openjdk version "1.8.0_432"
OpenJDK Runtime Environment (build 1.8.0_432-b06)
OpenJDK 64-Bit Server VM (build 25.432-b06, mixed mode)'
+ local 'RE_JAVA=[java|openjdk][[:space:]]version[[:space:]]\"1\.([0-9][0-9]*)\.?+'
+ [[ openjdk version "1.8.0_432"
OpenJDK Runtime Environment (build 1.8.0_432-b06)
OpenJDK 64-Bit Server VM (build 25.432-b06, mixed mode) =~ [java|openjdk][[:space:]]version[[:space:]]\"1\.([0-9][0-9]*)\.?+ ]]
+ eval JAVA_MAJOR=8
++ JAVA_MAJOR=8
+ ADDITIONAL_JVM_ARGS=
+ case $JAVA_MAJOR in
+ ADDITIONAL_JVM_ARGS=
+ HADOOP_OPTS=' '
+ HDFS_ZKFC_OPTS=' '
+ HADOOP_NAMENODE_OPTS='-Xms4294967296 -Xmx4294967296 {{JAVA_GC_ARGS}} -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2 '
+ HADOOP_DATANODE_OPTS='  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2 '
+ HADOOP_SECONDARYNAMENODE_OPTS='  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2 '
+ HADOOP_NFS3_OPTS='  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2 '
+ HADOOP_JOURNALNODE_OPTS='  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2 '
+ get_gc_args
++ echo /var/log/hadoop-hdfs
+ GC_LOG_DIR=/var/log/hadoop-hdfs
++ date +%Y-%m-%d_%H-%M-%S
+ GC_DATE=2025-07-15_22-51-24
+ JAVA8_VERBOSE_GC_VAR='-Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps'
+ JAVA8_GC_LOG_ROTATION_ARGS='-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M'
+ JAVA8_GC_TUNING_ARGS='-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M'
+ JAVA11_VERBOSE_GC_VAR=-Xlog:gc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log:uptime,level,tags:filecount=10,filesize=200M
+ JAVA11_GC_TUNING_ARGS='-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xlog:gc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log:uptime,level,tags:filecount=10,filesize=200M'
+ set_basic_gc_tuning_args_based_on_java_version
+ get_java_major_version JAVA_MAJOR
+ '[' -z /usr/lib/jvm/java-openjdk/bin/java ']'
++ /usr/lib/jvm/java-openjdk/bin/java -version
+ local 'VERSION_STRING=openjdk version "1.8.0_432"
OpenJDK Runtime Environment (build 1.8.0_432-b06)
OpenJDK 64-Bit Server VM (build 25.432-b06, mixed mode)'
+ local 'RE_JAVA=[java|openjdk][[:space:]]version[[:space:]]\"1\.([0-9][0-9]*)\.?+'
+ [[ openjdk version "1.8.0_432"
OpenJDK Runtime Environment (build 1.8.0_432-b06)
OpenJDK 64-Bit Server VM (build 25.432-b06, mixed mode) =~ [java|openjdk][[:space:]]version[[:space:]]\"1\.([0-9][0-9]*)\.?+ ]]
+ eval JAVA_MAJOR=8
++ JAVA_MAJOR=8
+ BASIC_GC_TUNING_ARGS=
+ case $JAVA_MAJOR in
+ BASIC_GC_TUNING_ARGS='-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M'
+ NAMENODE_GC_ARGS='-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M'
+ DATANODE_GC_ARGS='-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M'
+ SECONDARY_NAMENODE_GC_ARGS='-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M'
+ [[ ! -z '' ]]
+ [[ ! -z '' ]]
+ [[ ! -z '' ]]
++ replace_gc_args '-Xms4294967296 -Xmx4294967296 {{JAVA_GC_ARGS}} -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2 ' '-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M'
++ echo -Xms4294967296 -Xmx4294967296 '{{JAVA_GC_ARGS}}' -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2
++ sed 's#{{JAVA_GC_ARGS}}#-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M#g'
+ export 'HADOOP_NAMENODE_OPTS=-Xms4294967296 -Xmx4294967296 -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ HADOOP_NAMENODE_OPTS='-Xms4294967296 -Xmx4294967296 -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
++ replace_gc_args '  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2 ' '-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M'
++ echo -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2
++ sed 's#{{JAVA_GC_ARGS}}#-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M#g'
+ export 'HADOOP_DATANODE_OPTS=-Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ HADOOP_DATANODE_OPTS='-Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
++ replace_gc_args '  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2 ' '-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M'
++ echo -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2
++ sed 's#{{JAVA_GC_ARGS}}#-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M#g'
+ export 'HADOOP_SECONDARYNAMENODE_OPTS=-Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ HADOOP_SECONDARYNAMENODE_OPTS='-Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ export 'HADOOP_OPTS=  '
+ HADOOP_OPTS='  '
+ '[' -n /etc/krb5.conf ']'
+ export 'HADOOP_OPTS=-Djava.security.krb5.conf=/etc/krb5.conf   '
+ HADOOP_OPTS='-Djava.security.krb5.conf=/etc/krb5.conf   '
+ '[' 7 -ge 4 ']'
+ HDFS_BIN=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-hdfs/bin/hdfs
+ export 'HADOOP_OPTS=-Djava.net.preferIPv4Stack=true -Djava.security.krb5.conf=/etc/krb5.conf   '
+ HADOOP_OPTS='-Djava.net.preferIPv4Stack=true -Djava.security.krb5.conf=/etc/krb5.conf   '
+ '[' -n '' ']'
+ KEYTAB=/var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait/hdfs.keytab
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ echo 'using /usr/lib/jvm/java-openjdk as JAVA_HOME'
+ echo 'using 7 as CDH_VERSION'
+ echo 'using /var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait as CONF_DIR'
+ echo 'using  as SECURE_USER'
+ echo 'using  as SECURE_GROUP'
+ set_hadoop_classpath
+ set_classpath_in_var HADOOP_CLASSPATH
+ '[' -z HADOOP_CLASSPATH ']'
+ [[ -n /opt/cloudera/cm ]]
++ find /opt/cloudera/cm/lib/plugins -maxdepth 1 -name '*.jar'
++ tr '\n' :
+ ADD_TO_CP=/opt/cloudera/cm/lib/plugins/event-publish-7.13.1-shaded.jar:/opt/cloudera/cm/lib/plugins/tt-instrumentation-7.13.1.jar:
+ [[ -n navigator/cdh6 ]]
+ for DIR in $CM_ADD_TO_CP_DIRS
++ find /opt/cloudera/cm/lib/plugins/navigator/cdh6 -maxdepth 1 -name '*.jar'
++ tr '\n' :
find: ‘/opt/cloudera/cm/lib/plugins/navigator/cdh6’: No such file or directory
+ PLUGIN=
+ ADD_TO_CP=/opt/cloudera/cm/lib/plugins/event-publish-7.13.1-shaded.jar:/opt/cloudera/cm/lib/plugins/tt-instrumentation-7.13.1.jar:
+ eval 'OLD_VALUE=$HADOOP_CLASSPATH'
++ OLD_VALUE=
+ NEW_VALUE=/opt/cloudera/cm/lib/plugins/event-publish-7.13.1-shaded.jar:/opt/cloudera/cm/lib/plugins/tt-instrumentation-7.13.1.jar:
+ export HADOOP_CLASSPATH=/opt/cloudera/cm/lib/plugins/event-publish-7.13.1-shaded.jar:/opt/cloudera/cm/lib/plugins/tt-instrumentation-7.13.1.jar
+ HADOOP_CLASSPATH=/opt/cloudera/cm/lib/plugins/event-publish-7.13.1-shaded.jar:/opt/cloudera/cm/lib/plugins/tt-instrumentation-7.13.1.jar
+ set -x
+ PYTHON_COMMAND_DEFAULT_INVOKER=/opt/cloudera/cm-agent/service/../bin/python
+ PYTHON_COMMAND_INVOKER=/opt/cloudera/cm-agent/service/../bin/python
+ CM_PYTHON2_BEHAVIOR=0
+ replace_conf_dir
+ echo CONF_DIR=/var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait
+ echo CMF_CONF_DIR=
+ EXCLUDE_CMF_FILES=('cloudera-config.sh' 'hue.sh' 'impala.sh' 'sqoop.sh' 'supervisor.conf' 'config.zip' 'proc.json' '*.log' '*.keytab' '*jceks' '*bcfks' 'supervisor_status')
++ printf '! -name %s ' cloudera-config.sh hue.sh impala.sh sqoop.sh supervisor.conf config.zip proc.json '*.log' hdfs.keytab '*jceks' '*bcfks' supervisor_status
+ find /var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait -type f '!' -path '/var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait/logs/*' '!' -name cloudera-config.sh '!' -name hue.sh '!' -name impala.sh '!' -name sqoop.sh '!' -name supervisor.conf '!' -name config.zip '!' -name proc.json '!' -name '*.log' '!' -name hdfs.keytab '!' -name '*jceks' '!' -name '*bcfks' '!' -name supervisor_status -exec perl -pi -e 's#\{\{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait#g' '{}' ';'
+ make_scripts_executable
+ find /var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait -regex '.*\.\(py\|sh\)$' -exec chmod u+x '{}' ';'
+ '[' DATANODE_MAX_LOCKED_MEMORY '!=' '' ']'
+ ulimit -l
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ '[' mkdir '!=' nnRpcWait ']'
+ acquire_kerberos_tgt /var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait/hdfs.keytab '' true
+ '[' -z /var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait/hdfs.keytab ']'
+ KERBEROS_PRINCIPAL=
+ '[' '!' -z '' ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = nnRpcWait ']'
+ '[' file-operation = nnRpcWait ']'
+ '[' bootstrap = nnRpcWait ']'
+ '[' failover = nnRpcWait ']'
+ '[' transition-to-active = nnRpcWait ']'
+ '[' initializeSharedEdits = nnRpcWait ']'
+ '[' initialize-znode = nnRpcWait ']'
+ '[' format-namenode = nnRpcWait ']'
+ '[' monitor-decommission = nnRpcWait ']'
+ '[' jnSyncWait = nnRpcWait ']'
+ '[' nnRpcWait = nnRpcWait ']'
+ true
+ /opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-hdfs/bin/hdfs --config /var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait dfsadmin -fs hdfs://dmidlkprdls01.svr.luc.edu:8020 -safemode get
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
+ '[' 0 -ne 0 ']'
+ break

 

16 Replies

Community Manager

Hi @pajoshi @vaishaakb @blizano Do you have any insights here? Thanks!


Regards,

Diana Torres,
Senior Community Moderator



Expert Contributor

Hello @jkoral 

The log snippet you posted is not enough for us to identify the problem.

The "Not enough replicas was chosen" messages are mostly harmless; although annoying, they don't pose a threat to the process.

Is it possible for you to share the logs from both namenodes to check?

Contributor

Hi, thank you very much for your response. What logs would you need? The cloudera-scm-server and/or cloudera-scm-agent?

Contributor

In cloudera-scm-server.log, when I do a tail -f, I get a bunch of these logs. Does this mean anything?

 

2025-07-16 17:48:22,763 WARN avro-servlet-hb-processor-24:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=164 name=null host=45264634-8596-4805-b797-998d053db296/dmidlkprdls01.svr.luc.edu
2025-07-16 17:48:22,763 WARN avro-servlet-hb-processor-24:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=166 name=null host=45264634-8596-4805-b797-998d053db296/dmidlkprdls01.svr.luc.edu
2025-07-16 17:48:22,763 WARN avro-servlet-hb-processor-24:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=224 name=null host=45264634-8596-4805-b797-998d053db296/dmidlkprdls01.svr.luc.edu
2025-07-16 17:48:22,763 WARN avro-servlet-hb-processor-24:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=226 name=null host=45264634-8596-4805-b797-998d053db296/dmidlkprdls01.svr.luc.edu
2025-07-16 17:48:22,763 WARN avro-servlet-hb-processor-24:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=225 name=null host=45264634-8596-4805-b797-998d053db296/dmidlkprdls01.svr.luc.edu
2025-07-16 17:48:22,763 WARN avro-servlet-hb-processor-24:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=227 name=null host=45264634-8596-4805-b797-998d053db296/dmidlkprdls01.svr.luc.edu
2025-07-16 17:48:22,763 WARN avro-servlet-hb-processor-24:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=228 name=null host=45264634-8596-4805-b797-998d053db296/dmidlkprdls01.svr.luc.edu
2025-07-16 17:48:22,763 WARN avro-servlet-hb-processor-24:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=288 name=null host=45264634-8596-4805-b797-998d053db296/dmidlkprdls01.svr.luc.edu
2025-07-16 17:48:24,737 INFO scm-web-20423:com.cloudera.cmf.cluster.AbstractParallelClusterServiceCommand: Cluster Start command with purpose START found all the services already in started state, no further action to perform on cluster DAMICluster
2025-07-16 17:48:25,556 WARN avro-servlet-hb-processor-6:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=133 name=null host=009ec263-928b-4af1-8088-785b315f3e21/dmidlkprdls02.svr.luc.edu
2025-07-16 17:48:34,937 INFO scm-web-20423:com.cloudera.cmf.cluster.AbstractParallelClusterServiceCommand: Cluster Start command with purpose START found all the services already in started state, no further action to perform on cluster DAMICluster
2025-07-16 17:48:45,064 INFO scm-web-21021:com.cloudera.cmf.cluster.AbstractParallelClusterServiceCommand: Cluster Start command with purpose START found all the services already in started state, no further action to perform on cluster DAMICluster
2025-07-16 17:48:46,516 WARN avro-servlet-hb-processor-18:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=134 name=null host=2406c3be-dd14-481f-8a19-462efa8c5f8c/dmidlkprdls03.svr.luc.edu
2025-07-16 17:48:55,306 INFO avro-servlet-hb-processor-10:com.cloudera.server.common.AgentAvroServlet: (35 skipped) AgentAvroServlet: heartbeat processing stats: average=20ms, min=11ms, max=67ms.
2025-07-16 17:48:55,352 INFO scm-web-20423:com.cloudera.cmf.cluster.AbstractParallelClusterServiceCommand: Cluster Start command with purpose START found all the services already in started state, no further action to perform on cluster DAMICluster
2025-07-16 17:48:57,424 INFO pool-10-thread-1:com.cloudera.server.cmf.components.CmServerStateSynchronizer: (30 skipped) Synced up
2025-07-16 17:49:05,667 INFO scm-web-20422:com.cloudera.cmf.cluster.AbstractParallelClusterServiceCommand: Cluster Start command with purpose START found all the services already in started state, no further action to perform on cluster DAMICluster
2025-07-16 17:49:14,429 INFO pool-10-thread-1:com.cloudera.server.cmf.components.CmServerStateSynchronizer: (30 skipped) Cleaned up
2025-07-16 17:49:15,826 INFO scm-web-20423:com.cloudera.cmf.cluster.AbstractParallelClusterServiceCommand: Cluster Start command with purpose START found all the services already in started state, no further action to perform on cluster DAMICluster
2025-07-16 17:49:26,104 INFO scm-web-20423:com.cloudera.cmf.cluster.AbstractParallelClusterServiceCommand: Cluster Start command with purpose START found all the services already in started state, no further action to perform on cluster DAMICluster
2025-07-16 17:49:36,228 INFO scm-web-20422:com.cloudera.cmf.cluster.AbstractParallelClusterServiceCommand: Cluster Start command with purpose START found all the services already in started state, no further action to perform on cluster DAMICluster
2025-07-16 17:49:47,376 INFO scm-web-21021:com.cloudera.cmf.cluster.AbstractParallelClusterServiceCommand: Cluster Start command with purpose START found all the services already in started state, no further action to perform on cluster DAMICluster
2025-07-16 17:49:55,368 INFO avro-servlet-hb-processor-4:com.cloudera.server.common.AgentAvroServlet: (35 skipped) AgentAvroServlet: heartbeat processing stats: average=21ms, min=11ms, max=67ms.
2025-07-16 17:49:59,425 INFO pool-10-thread-1:com.cloudera.server.cmf.components.CmServerStateSynchronizer: (30 skipped) Synced up

Expert Contributor

The logs from both NameNode servers, so we can investigate why the checkpointing process is failing.

Contributor

Sorry, I meant which logs do you want from both of those servers? Are there specific logs that you want? HDFS, agent, alert publisher, event server, firehose, etc?

Expert Contributor

I meant the NameNode process logs.

If you didn't customize the location, they should be under /var/log/hadoop-hdfs, where you will see a bunch of logs. Get the latest one that says NAMENODE (it's in caps) and, if possible, share it here. Please get them from both NameNode servers.
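In case it helps, here's a quick sketch for grabbing the newest NameNode log on each host. The /var/log/hadoop-hdfs path and the NAMENODE filename pattern are the Cloudera defaults; adjust if your layout differs:

```shell
# latest_nn_log: print the newest NameNode log file in a directory
# (defaults to Cloudera's /var/log/hadoop-hdfs).
latest_nn_log() {
  local dir="${1:-/var/log/hadoop-hdfs}"
  # ls -t sorts newest-first; suppress errors if the dir/pattern is absent
  ls -t "$dir"/*NAMENODE*.log* 2>/dev/null | head -1
}

# Tail the newest NameNode log, if one exists
log_file="$(latest_nn_log)"
if [ -n "$log_file" ]; then
  tail -n 200 "$log_file"
else
  echo "No NAMENODE logs found"
fi
```

Run it on both NameNode hosts and attach the output of each.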

Contributor

Thank you. I have attached both logs.

Expert Contributor

The issue seems to be in your secondary namenode:

2025-07-07 11:56:37,798 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Log not rolled. Name node is in safe mode.
The reported blocks 0 has reached the threshold 0.9990 of total blocks 0. The number of live datanodes 0 needs an additional 1 live datanodes to reach the minimum number 1.
Safe mode will be turned off automatically once the thresholds have been reached. NamenodeHostName:dmidlkprdls01.svr.luc.edu

It looks like the NameNode can't communicate with your DataNodes, so it can't come out of safe mode and crashes.

Maybe there's a network problem that doesn't allow communication between those two roles?

Can you ping the secondary NameNode from your DataNodes and vice versa?

Are the required ports open on the secondary NameNode and DataNodes?
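To rule out connectivity, a sketch like this may help. The hostnames below are just the ones appearing in the logs above (substitute your actual secondary NameNode and DataNode hosts), and the ports are the HDFS defaults for a non-privileged setup: 8020 NameNode RPC, 9870 NameNode web UI, 9866 DataNode data transfer, 9867 DataNode IPC:

```shell
# check_port: report whether host:port accepts a TCP connection,
# using bash's /dev/tcp so it works even without nc installed.
check_port() {
  local host="$1" port="$2"
  if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port open"
  else
    echo "$host:$port UNREACHABLE"
  fi
}

# Hosts taken from the logs above -- substitute your own.
for port in 8020 9870; do check_port dmidlkprdls01.svr.luc.edu "$port"; done
for port in 9866 9867; do check_port dmidlkprdls02.svr.luc.edu "$port"; done

# From a NameNode host, also confirm safe mode state and live DataNodes:
if command -v hdfs >/dev/null; then
  hdfs dfsadmin -safemode get
  hdfs dfsadmin -report -live | head -20
fi
```

If `hdfs dfsadmin -report -live` shows 0 live DataNodes, that matches the safe-mode message in the log and points at the DataNode-to-NameNode path rather than the checkpoint itself.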