<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: HDFS Checkpoint Status Errors in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411707#M253135</link>
    <description>&lt;P&gt;I just found this information on my validations in assets. I have made these changes and will report back tomorrow if it helps.&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;The Checkpoint transaction-limit set to 1000000. Cloudera recommends a limit of 4,000,000. The checkpoint period is set to 3600 seconds. Cloudera recommends at least 7200 seconds (2 hours) in production clusters. Please see the following documentation for complete details: &lt;A href="https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/data-protection/topics/hdfs-configuration-properties.html" target="_blank"&gt;https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/data-protection/topics/hdfs-configuration-properties.html&lt;/A&gt;.&lt;/SPAN&gt;&lt;/P&gt;</description>
    <pubDate>Wed, 23 Jul 2025 20:48:47 GMT</pubDate>
    <dc:creator>jkoral</dc:creator>
    <dc:date>2025-07-23T20:48:47Z</dc:date>
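    <!--
      A minimal sketch of how to confirm the checkpoint settings referenced above, assuming shell
      access to a NameNode host with the HDFS client configured. dfs.namenode.checkpoint.txns and
      dfs.namenode.checkpoint.period are the standard HDFS properties behind the quoted
      "checkpoint transaction target" and "checkpoint period" values (defaults 1,000,000 and 3600 s).

        # Print the currently effective checkpoint thresholds
        hdfs getconf -confKey dfs.namenode.checkpoint.txns
        hdfs getconf -confKey dfs.namenode.checkpoint.period

      Raising them to 4,000,000 transactions and 7200 seconds matches the recommendation quoted in
      this post; in Cloudera Manager the same values are exposed as the HDFS filesystem checkpoint
      transaction threshold and checkpoint period settings.
    -->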
    <item>
      <title>HDFS Checkpoint Status Errors</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411576#M253088</link>
      <description>&lt;P&gt;I am getting the following Checkpoint Status error and wondering if someone has an idea of how I can solve this? Brand new cluster not in use yet. Cloudera Manager v7.13.1, Runtime v7.3.1 on RHEL9.&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;SPAN class="bold"&gt;Bad&lt;/SPAN&gt; :&lt;/SPAN&gt; &lt;SPAN class="bold"&gt;The filesystem checkpoint is 12 hour(s), 41 minute(s) old. This is 1,269.03% of the configured checkpoint period of 1 hour(s). Critical threshold: 400.00%. 7,501 transactions have occurred since the last filesystem checkpoint. This is 0.75% of the configured checkpoint transaction target of 1,000,000.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="bold"&gt;Role Log:&lt;/SPAN&gt;&lt;/P&gt;&lt;TABLE&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;11:34:31.289 AM&lt;/TD&gt;&lt;TD&gt;INFO&lt;/TD&gt;&lt;TD&gt;FSNamesystem&lt;/TD&gt;&lt;TD&gt;&lt;PRE&gt;Roll Edit Log from 192.168.158.2&lt;/PRE&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;11:34:31.289 AM&lt;/TD&gt;&lt;TD&gt;INFO&lt;/TD&gt;&lt;TD&gt;FSEditLog&lt;/TD&gt;&lt;TD&gt;&lt;PRE&gt;Rolling edit logs&lt;/PRE&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;11:34:31.289 AM&lt;/TD&gt;&lt;TD&gt;INFO&lt;/TD&gt;&lt;TD&gt;FSEditLog&lt;/TD&gt;&lt;TD&gt;&lt;PRE&gt;Ending log segment 95836, 95842&lt;/PRE&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;11:34:31.290 AM&lt;/TD&gt;&lt;TD&gt;INFO&lt;/TD&gt;&lt;TD&gt;FSEditLog&lt;/TD&gt;&lt;TD&gt;&lt;PRE&gt;Number of transactions: 8 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 2 Number of syncs: 6 SyncTimes(ms): 5 &lt;/PRE&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;11:34:31.290 AM&lt;/TD&gt;&lt;TD&gt;INFO&lt;/TD&gt;&lt;TD&gt;FSEditLog&lt;/TD&gt;&lt;TD&gt;&lt;PRE&gt;Number of transactions: 8 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 2 Number of syncs: 7 SyncTimes(ms): 5 &lt;/PRE&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;11:34:31.292 AM&lt;/TD&gt;&lt;TD&gt;INFO&lt;/TD&gt;&lt;TD&gt;FileJournalManager&lt;/TD&gt;&lt;TD&gt;&lt;PRE&gt;Finalizing edits file /opt/dfs/nn/current/edits_inprogress_0000000000000095836 -&amp;gt; /opt/dfs/nn/current/edits_0000000000000095836-0000000000000095843&lt;/PRE&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;11:34:31.292 AM&lt;/TD&gt;&lt;TD&gt;INFO&lt;/TD&gt;&lt;TD&gt;FSEditLog&lt;/TD&gt;&lt;TD&gt;&lt;PRE&gt;Starting log segment at 95844&lt;/PRE&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;11:34:44.142 AM&lt;/TD&gt;&lt;TD&gt;INFO&lt;/TD&gt;&lt;TD&gt;BlockPlacementPolicy&lt;/TD&gt;&lt;TD&gt;&lt;PRE&gt;Not enough replicas was chosen. Reason:{NO_REQUIRED_STORAGE_TYPE=1}&lt;/PRE&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;11:34:44.142 AM&lt;/TD&gt;&lt;TD&gt;INFO&lt;/TD&gt;&lt;TD&gt;BlockPlacementPolicy&lt;/TD&gt;&lt;TD&gt;&lt;PRE&gt;Not enough replicas was chosen. Reason:{NO_REQUIRED_STORAGE_TYPE=1}&lt;/PRE&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;Stdout:&lt;/P&gt;&lt;PRE&gt;Tue Jul 15 10:51:23 PM CDT 2025
JAVA_HOME=/usr/lib/jvm/java-openjdk
using /usr/lib/jvm/java-openjdk as JAVA_HOME
using 7 as CDH_VERSION
using /var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait as CONF_DIR
using  as SECURE_USER
using  as SECURE_GROUP
CONF_DIR=/var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait
CMF_CONF_DIR=
unlimited
Safe mode is ON&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;Stderr:&lt;/P&gt;&lt;PRE&gt;[15/Jul/2025 22:51:23 -0500] 3566604 MainThread redactor     INFO     Started launcher: /opt/cloudera/cm-agent/service/hdfs/hdfs.sh nnRpcWait hdfs://dmidlkprdls01.svr.luc.edu:8020
[15/Jul/2025 22:51:23 -0500] 3566604 MainThread redactor     INFO     Re-exec watcher: /opt/cloudera/cm-agent/bin/cm proc_watcher 3566630
[15/Jul/2025 22:51:23 -0500] 3566631 MainThread redactor     INFO     Re-exec redactor: /opt/cloudera/cm-agent/bin/cm redactor --fds 3 5
[15/Jul/2025 22:51:23 -0500] 3566631 MainThread redactor     INFO     Started redactor
Tue Jul 15 10:51:23 PM CDT 2025
+ source_parcel_environment
+ '[' '!' -z /opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/meta/cdh_env.sh ']'
+ OLD_IFS=' 	
'
+ IFS=:
+ SCRIPT_ARRAY=($SCM_DEFINES_SCRIPTS)
+ DIRNAME_ARRAY=($PARCEL_DIRNAMES)
+ IFS=' 	
'
+ COUNT=1
++ seq 1 1
+ for i in `seq 1 $COUNT`
+ SCRIPT=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/meta/cdh_env.sh
+ PARCEL_DIRNAME=CDH-7.3.1-1.cdh7.3.1.p0.60371244
+ . /opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/meta/cdh_env.sh
++ CDH_DIRNAME=CDH-7.3.1-1.cdh7.3.1.p0.60371244
++ export CDH_HADOOP_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop
++ CDH_HADOOP_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop
++ export CDH_ICEBERG_REPLICATION_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/iceberg-replication
++ CDH_ICEBERG_REPLICATION_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/iceberg-replication
++ export CDH_MR1_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-0.20-mapreduce
++ CDH_MR1_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-0.20-mapreduce
++ export CDH_HDFS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-hdfs
++ CDH_HDFS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-hdfs
++ export CDH_OZONE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-ozone
++ CDH_OZONE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-ozone
++ export CDH_HTTPFS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-httpfs
++ CDH_HTTPFS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-httpfs
++ export CDH_MR2_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-mapreduce
++ CDH_MR2_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-mapreduce
++ export CDH_YARN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-yarn
++ CDH_YARN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-yarn
++ export CDH_HBASE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hbase
++ CDH_HBASE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hbase
++ export CDH_HBASE_FILESYSTEM_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hbase_filesystem
++ CDH_HBASE_FILESYSTEM_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hbase_filesystem
++ export CDH_HBASE_CONNECTORS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hbase_connectors
++ CDH_HBASE_CONNECTORS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hbase_connectors
++ export CDH_ZOOKEEPER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/zookeeper
++ CDH_ZOOKEEPER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/zookeeper
++ export CDH_ZEPPELIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/zeppelin
++ CDH_ZEPPELIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/zeppelin
++ export CDH_HIVE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hive
++ CDH_HIVE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hive
++ export CDH_HUE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hue
++ CDH_HUE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hue
++ export HUE_QP_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hue-query-processor
++ HUE_QP_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hue-query-processor
++ export CDH_OOZIE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/oozie
++ CDH_OOZIE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/oozie
++ export CDH_HUE_PLUGINS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop
++ CDH_HUE_PLUGINS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop
++ export CDH_HCAT_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hive-hcatalog
++ CDH_HCAT_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hive-hcatalog
++ export CDH_SENTRY_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/sentry
++ CDH_SENTRY_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/sentry
++ export JSVC_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/bigtop-utils
++ JSVC_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/bigtop-utils
++ export CDH_HADOOP_BIN=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop/bin/hadoop
++ CDH_HADOOP_BIN=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop/bin/hadoop
++ export CDH_IMPALA_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/impala
++ CDH_IMPALA_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/impala
++ export CDH_SOLR_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/solr
++ CDH_SOLR_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/solr
++ export CDH_HBASE_INDEXER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hbase-solr
++ CDH_HBASE_INDEXER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hbase-solr
++ export SEARCH_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/search
++ SEARCH_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/search
++ export CDH_SPARK_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/spark
++ CDH_SPARK_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/spark
++ export CDH_SPARK3_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/spark3
++ CDH_SPARK3_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/spark3
++ export WEBHCAT_DEFAULT_XML=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/etc/hive-webhcat/conf.dist/webhcat-default.xml
++ WEBHCAT_DEFAULT_XML=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/etc/hive-webhcat/conf.dist/webhcat-default.xml
++ export CDH_KMS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-kms
++ CDH_KMS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-kms
++ export CDH_PARQUET_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/parquet
++ CDH_PARQUET_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/parquet
++ export CDH_AVRO_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/avro
++ CDH_AVRO_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/avro
++ export CDH_KAFKA_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/kafka
++ CDH_KAFKA_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/kafka
++ export CDH_SCHEMA_REGISTRY_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/schemaregistry
++ CDH_SCHEMA_REGISTRY_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/schemaregistry
++ export CDH_STREAMS_MESSAGING_MANAGER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/streams_messaging_manager
++ CDH_STREAMS_MESSAGING_MANAGER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/streams_messaging_manager
++ export CDH_STREAMS_MESSAGING_MANAGER_UI_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/streams_messaging_manager_ui
++ CDH_STREAMS_MESSAGING_MANAGER_UI_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/streams_messaging_manager_ui
++ export CDH_STREAMS_REPLICATION_MANAGER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/streams_replication_manager
++ CDH_STREAMS_REPLICATION_MANAGER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/streams_replication_manager
++ export CDH_CRUISE_CONTROL_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/cruise_control
++ CDH_CRUISE_CONTROL_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/cruise_control
++ export CDH_KNOX_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/knox
++ CDH_KNOX_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/knox
++ export CDH_KUDU_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/kudu
++ CDH_KUDU_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/kudu
++ export CDH_RANGER_ADMIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-admin
++ CDH_RANGER_ADMIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-admin
++ export CDH_RANGER_TAGSYNC_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-tagsync
++ CDH_RANGER_TAGSYNC_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-tagsync
++ export CDH_RANGER_USERSYNC_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-usersync
++ CDH_RANGER_USERSYNC_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-usersync
++ export CDH_RANGER_KMS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-kms
++ CDH_RANGER_KMS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-kms
++ export CDH_RANGER_RAZ_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-raz
++ CDH_RANGER_RAZ_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-raz
++ export CDH_RANGER_RMS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-rms
++ CDH_RANGER_RMS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-rms
++ export CDH_ATLAS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/atlas
++ CDH_ATLAS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/atlas
++ export CDH_TEZ_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/tez
++ CDH_TEZ_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/tez
++ export CDH_PHOENIX_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/phoenix
++ CDH_PHOENIX_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/phoenix
++ export CDH_PHOENIX_QUERYSERVER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/phoenix_queryserver
++ CDH_PHOENIX_QUERYSERVER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/phoenix_queryserver
++ export DAS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/data_analytics_studio
++ DAS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/data_analytics_studio
++ export QUEUEMANAGER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/queuemanager
++ QUEUEMANAGER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/queuemanager
++ export CDH_RANGER_HBASE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-hbase-plugin
++ CDH_RANGER_HBASE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-hbase-plugin
++ export CDH_RANGER_HIVE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-hive-plugin
++ CDH_RANGER_HIVE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-hive-plugin
++ export CDH_RANGER_ATLAS_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-atlas-plugin
++ CDH_RANGER_ATLAS_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-atlas-plugin
++ export CDH_RANGER_SOLR_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-solr-plugin
++ CDH_RANGER_SOLR_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-solr-plugin
++ export CDH_RANGER_HDFS_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-hdfs-plugin
++ CDH_RANGER_HDFS_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-hdfs-plugin
++ export CDH_RANGER_KNOX_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-knox-plugin
++ CDH_RANGER_KNOX_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-knox-plugin
++ export CDH_RANGER_YARN_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-yarn-plugin
++ CDH_RANGER_YARN_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-yarn-plugin
++ export CDH_RANGER_OZONE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-ozone-plugin
++ CDH_RANGER_OZONE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-ozone-plugin
++ export CDH_RANGER_KAFKA_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-kafka-plugin
++ CDH_RANGER_KAFKA_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/ranger-kafka-plugin
++ export CDH_PROFILER_ADMIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/profiler_manager/profileradmin
++ CDH_PROFILER_ADMIN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/profiler_manager/profileradmin
++ export CDH_PROFILER_METRICS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/profiler_manager/profilermetrics
++ CDH_PROFILER_METRICS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/profiler_manager/profilermetrics
++ export CDH_DATA_DISCOVERY_SERVICE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/profiler_manager/data-discovery-service
++ CDH_DATA_DISCOVERY_SERVICE_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/profiler_manager/data-discovery-service
++ export CDH_PROFILER_SCHEDULER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/profiler_scheduler
++ CDH_PROFILER_SCHEDULER_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/profiler_scheduler
+ locate_cdh_java_home
+ '[' -z '' ']'
+ '[' -z /opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/bigtop-utils ']'
+ local BIGTOP_DETECT_JAVAHOME=
+ for candidate in "${JSVC_HOME}" "${JSVC_HOME}/.." "/usr/lib/bigtop-utils" "/usr/libexec"
+ '[' -e /opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/bigtop-utils/bigtop-detect-javahome ']'
+ BIGTOP_DETECT_JAVAHOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/bigtop-utils/bigtop-detect-javahome
+ break
+ '[' -z /opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/bigtop-utils/bigtop-detect-javahome ']'
+ . /opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/bigtop-utils/bigtop-detect-javahome
++ BIGTOP_DEFAULTS_DIR=/etc/default
++ '[' -n /etc/default -a -r /etc/default/bigtop-utils ']'
++ OPENJAVA17_HOME_CANDIDATES=('/usr/lib/jvm/java-17' '/usr/lib/jvm/jdk-17' '/usr/lib/jvm/jdk1.17' '/usr/lib/jvm/zulu-17' '/usr/lib/jvm/zulu17' '/usr/lib64/jvm/java-17' '/usr/lib64/jvm/jdk1.17')
++ JAVA11_HOME_CANDIDATES=('/usr/java/jdk-11' '/usr/lib/jvm/jdk-11' '/usr/lib/jvm/java-11-oracle')
++ OPENJAVA11_HOME_CANDIDATES=('/usr/java/jdk-11' '/usr/lib/jvm/java-11' '/usr/lib/jvm/jdk-11' '/usr/lib64/jvm/jdk-11' '/usr/lib/jvm/zulu-11' '/usr/lib/jvm/zulu11' '/usr/lib/jvm/java-11-zulu-openjdk')
++ JAVA8_HOME_CANDIDATES=('/usr/java/jdk1.8' '/usr/java/jdk8' '/usr/java/jre1.8' '/usr/lib/jvm/j2sdk1.8-oracle' '/usr/lib/jvm/j2sdk1.8-oracle/jre' '/usr/lib/jvm/java-8-oracle')
++ OPENJAVA8_HOME_CANDIDATES=('/usr/lib/jvm/java-1.8.0-openjdk' '/usr/lib/jvm/java-8' '/usr/lib/jvm/java-8-openjdk' '/usr/lib64/jvm/java-1.8.0-openjdk' '/usr/lib64/jvm/java-8-openjdk' '/usr/lib/jvm/zulu-8' '/usr/lib/jvm/zulu8' '/usr/lib/jvm/java-8-zulu-openjdk')
++ MISCJAVA_HOME_CANDIDATES=('/Library/Java/Home' '/usr/java/default' '/usr/lib/jvm/default-java' '/usr/lib/jvm/java-openjdk' '/usr/lib/jvm/jre-openjdk')
++ case ${BIGTOP_JAVA_MAJOR} in
++ JAVA_HOME_CANDIDATES=(${OPENJAVA17_HOME_CANDIDATES[@]} ${JAVA11_HOME_CANDIDATES[@]} ${OPENJAVA11_HOME_CANDIDATES[@]} ${JAVA8_HOME_CANDIDATES[@]} ${MISCJAVA_HOME_CANDIDATES[@]} ${OPENJAVA8_HOME_CANDIDATES[@]})
++ '[' -z '' ']'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/java-17*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/jdk-17*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/jdk1.17*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/zulu-17*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/zulu17*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib64/jvm/java-17*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib64/jvm/jdk1.17*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/java/jdk-11*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/jdk-11*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/java-11-oracle*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/java/jdk-11*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/java-11*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/jdk-11*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib64/jvm/jdk-11*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/zulu-11*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/zulu11*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/java-11-zulu-openjdk*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/java/jdk1.8*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/java/jdk8*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/java/jre1.8*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/j2sdk1.8-oracle*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/j2sdk1.8-oracle/jre*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/java-8-oracle*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/Library/Java/Home*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/java/default*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd '/usr/lib/jvm/default-java*'
++ for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}
+++ ls -rvd /usr/lib/jvm/java-openjdk
++ for candidate in `ls -rvd ${candidate_regex}* 2&amp;gt;/dev/null`
++ '[' -e /usr/lib/jvm/java-openjdk/bin/java ']'
++ export JAVA_HOME=/usr/lib/jvm/java-openjdk
++ JAVA_HOME=/usr/lib/jvm/java-openjdk
++ break 2
+ get_java_major_version JAVA_MAJOR
+ '[' -z /usr/lib/jvm/java-openjdk/bin/java ']'
++ /usr/lib/jvm/java-openjdk/bin/java -version
+ local 'VERSION_STRING=openjdk version "1.8.0_432"
OpenJDK Runtime Environment (build 1.8.0_432-b06)
OpenJDK 64-Bit Server VM (build 25.432-b06, mixed mode)'
+ local 'RE_JAVA=[java|openjdk][[:space:]]version[[:space:]]\"1\.([0-9][0-9]*)\.?+'
+ [[ openjdk version "1.8.0_432"
OpenJDK Runtime Environment (build 1.8.0_432-b06)
OpenJDK 64-Bit Server VM (build 25.432-b06, mixed mode) =~ [java|openjdk][[:space:]]version[[:space:]]\"1\.([0-9][0-9]*)\.?+ ]]
+ eval JAVA_MAJOR=8
++ JAVA_MAJOR=8
+ '[' 8 -lt 8 ']'
+ verify_java_home
+ '[' -z /usr/lib/jvm/java-openjdk ']'
+ echo JAVA_HOME=/usr/lib/jvm/java-openjdk
+ . /opt/cloudera/cm-agent/service/common/cdh-default-hadoop
++ [[ -z 7 ]]
++ '[' 7 = 3 ']'
++ '[' 7 = -3 ']'
++ '[' 7 -ge 4 ']'
++ export HADOOP_HOME_WARN_SUPPRESS=true
++ HADOOP_HOME_WARN_SUPPRESS=true
++ export HADOOP_PREFIX=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop
++ HADOOP_PREFIX=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop
++ export HADOOP_LIBEXEC_DIR=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop/libexec
++ HADOOP_LIBEXEC_DIR=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop/libexec
++ export HADOOP_CONF_DIR=/var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait
++ HADOOP_CONF_DIR=/var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait
++ export HADOOP_COMMON_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop
++ HADOOP_COMMON_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop
++ export HADOOP_HDFS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-hdfs
++ HADOOP_HDFS_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-hdfs
++ export HADOOP_MAPRED_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-mapreduce
++ HADOOP_MAPRED_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-mapreduce
++ '[' 7 = 4 ']'
++ '[' 7 -ge 5 ']'
++ export HADOOP_YARN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-yarn
++ HADOOP_YARN_HOME=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-yarn
+ export HADOOP_OPTS=
+ HADOOP_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#3566630#g'
+ export HDFS_ZKFC_OPTS=
+ HDFS_ZKFC_OPTS=
++ replace_pid -Xms4294967296 -Xmx4294967296 '{{JAVA_GC_ARGS}}' -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh
++ echo -Xms4294967296 -Xmx4294967296 '{{JAVA_GC_ARGS}}' -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh
++ sed 's#{{PID}}#3566630#g'
+ export 'HADOOP_NAMENODE_OPTS=-Xms4294967296 -Xmx4294967296 {{JAVA_GC_ARGS}} -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh'
+ HADOOP_NAMENODE_OPTS='-Xms4294967296 -Xmx4294967296 {{JAVA_GC_ARGS}} -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh'
++ replace_pid
++ echo
++ sed 's#{{PID}}#3566630#g'
+ export HADOOP_DATANODE_OPTS=
+ HADOOP_DATANODE_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#3566630#g'
+ export HADOOP_SECONDARYNAMENODE_OPTS=
+ HADOOP_SECONDARYNAMENODE_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#3566630#g'
+ export HADOOP_NFS3_OPTS=
+ HADOOP_NFS3_OPTS=
++ replace_pid
++ echo
++ sed 's#{{PID}}#3566630#g'
+ export HADOOP_JOURNALNODE_OPTS=
+ HADOOP_JOURNALNODE_OPTS=
+ get_jdk11plus_fips_java_opts
+ export CLDR_JDK11PLUS_FIPS_JAVA_ARGS=
+ CLDR_JDK11PLUS_FIPS_JAVA_ARGS=
+ get_generic_java_opts
+ jmx_exporter_option=
++ find /opt/cloudera/cm/lib -name 'jmx_prometheus_javaagent-*.jar'
++ tail -n 1
+ jmx_exporter_jar=/opt/cloudera/cm/lib/jmx_prometheus_javaagent-0.20.0.jar
+ '[' -n '' -a -n /opt/cloudera/cm/lib/jmx_prometheus_javaagent-0.20.0.jar -a True '!=' True ']'
+ export 'GENERIC_JAVA_OPTS= -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ GENERIC_JAVA_OPTS=' -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ HADOOP_NAMENODE_OPTS='-Xms4294967296 -Xmx4294967296 {{JAVA_GC_ARGS}} -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ HADOOP_DATANODE_OPTS='  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ HADOOP_SECONDARYNAMENODE_OPTS='  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ HADOOP_NFS3_OPTS='  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ HADOOP_JOURNALNODE_OPTS='  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ get_additional_jvm_args
+ JAVA17_ADDITIONAL_JVM_ARGS='--add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED --add-exports=java.base/sun.net.dns=ALL-UNNAMED --add-exports=java.base/sun.net.util=ALL-UNNAMED'
+ set_additional_jvm_args_based_on_java_version
+ get_java_major_version JAVA_MAJOR
+ '[' -z /usr/lib/jvm/java-openjdk/bin/java ']'
++ /usr/lib/jvm/java-openjdk/bin/java -version
+ local 'VERSION_STRING=openjdk version "1.8.0_432"
OpenJDK Runtime Environment (build 1.8.0_432-b06)
OpenJDK 64-Bit Server VM (build 25.432-b06, mixed mode)'
+ local 'RE_JAVA=[java|openjdk][[:space:]]version[[:space:]]\"1\.([0-9][0-9]*)\.?+'
+ [[ openjdk version "1.8.0_432"
OpenJDK Runtime Environment (build 1.8.0_432-b06)
OpenJDK 64-Bit Server VM (build 25.432-b06, mixed mode) =~ [java|openjdk][[:space:]]version[[:space:]]\"1\.([0-9][0-9]*)\.?+ ]]
+ eval JAVA_MAJOR=8
++ JAVA_MAJOR=8
+ ADDITIONAL_JVM_ARGS=
+ case $JAVA_MAJOR in
+ ADDITIONAL_JVM_ARGS=
+ HADOOP_OPTS=' '
+ HDFS_ZKFC_OPTS=' '
+ HADOOP_NAMENODE_OPTS='-Xms4294967296 -Xmx4294967296 {{JAVA_GC_ARGS}} -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2 '
+ HADOOP_DATANODE_OPTS='  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2 '
+ HADOOP_SECONDARYNAMENODE_OPTS='  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2 '
+ HADOOP_NFS3_OPTS='  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2 '
+ HADOOP_JOURNALNODE_OPTS='  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2 '
+ get_gc_args
++ echo /var/log/hadoop-hdfs
+ GC_LOG_DIR=/var/log/hadoop-hdfs
++ date +%Y-%m-%d_%H-%M-%S
+ GC_DATE=2025-07-15_22-51-24
+ JAVA8_VERBOSE_GC_VAR='-Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps'
+ JAVA8_GC_LOG_ROTATION_ARGS='-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M'
+ JAVA8_GC_TUNING_ARGS='-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M'
+ JAVA11_VERBOSE_GC_VAR=-Xlog:gc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log:uptime,level,tags:filecount=10,filesize=200M
+ JAVA11_GC_TUNING_ARGS='-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xlog:gc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log:uptime,level,tags:filecount=10,filesize=200M'
+ set_basic_gc_tuning_args_based_on_java_version
+ get_java_major_version JAVA_MAJOR
+ '[' -z /usr/lib/jvm/java-openjdk/bin/java ']'
++ /usr/lib/jvm/java-openjdk/bin/java -version
+ local 'VERSION_STRING=openjdk version "1.8.0_432"
OpenJDK Runtime Environment (build 1.8.0_432-b06)
OpenJDK 64-Bit Server VM (build 25.432-b06, mixed mode)'
+ local 'RE_JAVA=[java|openjdk][[:space:]]version[[:space:]]\"1\.([0-9][0-9]*)\.?+'
+ [[ openjdk version "1.8.0_432"
OpenJDK Runtime Environment (build 1.8.0_432-b06)
OpenJDK 64-Bit Server VM (build 25.432-b06, mixed mode) =~ [java|openjdk][[:space:]]version[[:space:]]\"1\.([0-9][0-9]*)\.?+ ]]
+ eval JAVA_MAJOR=8
++ JAVA_MAJOR=8
+ BASIC_GC_TUNING_ARGS=
+ case $JAVA_MAJOR in
+ BASIC_GC_TUNING_ARGS='-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M'
+ NAMENODE_GC_ARGS='-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M'
+ DATANODE_GC_ARGS='-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M'
+ SECONDARY_NAMENODE_GC_ARGS='-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M'
+ [[ ! -z '' ]]
+ [[ ! -z '' ]]
+ [[ ! -z '' ]]
++ replace_gc_args '-Xms4294967296 -Xmx4294967296 {{JAVA_GC_ARGS}} -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2 ' '-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M'
++ echo -Xms4294967296 -Xmx4294967296 '{{JAVA_GC_ARGS}}' -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2
++ sed 's#{{JAVA_GC_ARGS}}#-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M#g'
+ export 'HADOOP_NAMENODE_OPTS=-Xms4294967296 -Xmx4294967296 -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ HADOOP_NAMENODE_OPTS='-Xms4294967296 -Xmx4294967296 -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
++ replace_gc_args '  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2 ' '-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M'
++ echo -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2
++ sed 's#{{JAVA_GC_ARGS}}#-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M#g'
+ export 'HADOOP_DATANODE_OPTS=-Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ HADOOP_DATANODE_OPTS='-Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
++ replace_gc_args '  -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2 ' '-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M'
++ echo -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2
++ sed 's#{{JAVA_GC_ARGS}}#-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xloggc:/var/log/hadoop-hdfs/gc-2025-07-15_22-51-24.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=200M#g'
+ export 'HADOOP_SECONDARYNAMENODE_OPTS=-Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ HADOOP_SECONDARYNAMENODE_OPTS='-Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dcom.sun.management.jmxremote.ssl.enabled.protocols=TLSv1.2'
+ export 'HADOOP_OPTS=  '
+ HADOOP_OPTS='  '
+ '[' -n /etc/krb5.conf ']'
+ export 'HADOOP_OPTS=-Djava.security.krb5.conf=/etc/krb5.conf   '
+ HADOOP_OPTS='-Djava.security.krb5.conf=/etc/krb5.conf   '
+ '[' 7 -ge 4 ']'
+ HDFS_BIN=/opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-hdfs/bin/hdfs
+ export 'HADOOP_OPTS=-Djava.net.preferIPv4Stack=true -Djava.security.krb5.conf=/etc/krb5.conf   '
+ HADOOP_OPTS='-Djava.net.preferIPv4Stack=true -Djava.security.krb5.conf=/etc/krb5.conf   '
+ '[' -n '' ']'
+ KEYTAB=/var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait/hdfs.keytab
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ echo 'using /usr/lib/jvm/java-openjdk as JAVA_HOME'
+ echo 'using 7 as CDH_VERSION'
+ echo 'using /var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait as CONF_DIR'
+ echo 'using  as SECURE_USER'
+ echo 'using  as SECURE_GROUP'
+ set_hadoop_classpath
+ set_classpath_in_var HADOOP_CLASSPATH
+ '[' -z HADOOP_CLASSPATH ']'
+ [[ -n /opt/cloudera/cm ]]
++ find /opt/cloudera/cm/lib/plugins -maxdepth 1 -name '*.jar'
++ tr '\n' :
+ ADD_TO_CP=/opt/cloudera/cm/lib/plugins/event-publish-7.13.1-shaded.jar:/opt/cloudera/cm/lib/plugins/tt-instrumentation-7.13.1.jar:
+ [[ -n navigator/cdh6 ]]
+ for DIR in $CM_ADD_TO_CP_DIRS
++ find /opt/cloudera/cm/lib/plugins/navigator/cdh6 -maxdepth 1 -name '*.jar'
++ tr '\n' :
find: ‘/opt/cloudera/cm/lib/plugins/navigator/cdh6’: No such file or directory
+ PLUGIN=
+ ADD_TO_CP=/opt/cloudera/cm/lib/plugins/event-publish-7.13.1-shaded.jar:/opt/cloudera/cm/lib/plugins/tt-instrumentation-7.13.1.jar:
+ eval 'OLD_VALUE=$HADOOP_CLASSPATH'
++ OLD_VALUE=
+ NEW_VALUE=/opt/cloudera/cm/lib/plugins/event-publish-7.13.1-shaded.jar:/opt/cloudera/cm/lib/plugins/tt-instrumentation-7.13.1.jar:
+ export HADOOP_CLASSPATH=/opt/cloudera/cm/lib/plugins/event-publish-7.13.1-shaded.jar:/opt/cloudera/cm/lib/plugins/tt-instrumentation-7.13.1.jar
+ HADOOP_CLASSPATH=/opt/cloudera/cm/lib/plugins/event-publish-7.13.1-shaded.jar:/opt/cloudera/cm/lib/plugins/tt-instrumentation-7.13.1.jar
+ set -x
+ PYTHON_COMMAND_DEFAULT_INVOKER=/opt/cloudera/cm-agent/service/../bin/python
+ PYTHON_COMMAND_INVOKER=/opt/cloudera/cm-agent/service/../bin/python
+ CM_PYTHON2_BEHAVIOR=0
+ replace_conf_dir
+ echo CONF_DIR=/var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait
+ echo CMF_CONF_DIR=
+ EXCLUDE_CMF_FILES=('cloudera-config.sh' 'hue.sh' 'impala.sh' 'sqoop.sh' 'supervisor.conf' 'config.zip' 'proc.json' '*.log' '*.keytab' '*jceks' '*bcfks' 'supervisor_status')
++ printf '! -name %s ' cloudera-config.sh hue.sh impala.sh sqoop.sh supervisor.conf config.zip proc.json '*.log' hdfs.keytab '*jceks' '*bcfks' supervisor_status
+ find /var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait -type f '!' -path '/var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait/logs/*' '!' -name cloudera-config.sh '!' -name hue.sh '!' -name impala.sh '!' -name sqoop.sh '!' -name supervisor.conf '!' -name config.zip '!' -name proc.json '!' -name '*.log' '!' -name hdfs.keytab '!' -name '*jceks' '!' -name '*bcfks' '!' -name supervisor_status -exec perl -pi -e 's#\{\{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait#g' '{}' ';'
+ make_scripts_executable
+ find /var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait -regex '.*\.\(py\|sh\)$' -exec chmod u+x '{}' ';'
+ '[' DATANODE_MAX_LOCKED_MEMORY '!=' '' ']'
+ ulimit -l
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ '[' mkdir '!=' nnRpcWait ']'
+ acquire_kerberos_tgt /var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait/hdfs.keytab '' true
+ '[' -z /var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait/hdfs.keytab ']'
+ KERBEROS_PRINCIPAL=
+ '[' '!' -z '' ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = nnRpcWait ']'
+ '[' file-operation = nnRpcWait ']'
+ '[' bootstrap = nnRpcWait ']'
+ '[' failover = nnRpcWait ']'
+ '[' transition-to-active = nnRpcWait ']'
+ '[' initializeSharedEdits = nnRpcWait ']'
+ '[' initialize-znode = nnRpcWait ']'
+ '[' format-namenode = nnRpcWait ']'
+ '[' monitor-decommission = nnRpcWait ']'
+ '[' jnSyncWait = nnRpcWait ']'
+ '[' nnRpcWait = nnRpcWait ']'
+ true
+ /opt/cloudera/parcels/CDH-7.3.1-1.cdh7.3.1.p0.60371244/lib/hadoop-hdfs/bin/hdfs --config /var/run/cloudera-scm-agent/process/302-hdfs-NAMENODE-nnRpcWait dfsadmin -fs hdfs://dmidlkprdls01.svr.luc.edu:8020 -safemode get
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
+ '[' 0 -ne 0 ']'
+ break&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 16 Jul 2025 16:42:53 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411576#M253088</guid>
      <dc:creator>jkoral</dc:creator>
      <dc:date>2025-07-16T16:42:53Z</dc:date>
    </item>
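    <!--
      Context for the health alert quoted above: 12 hour(s) 41 minute(s) is about 12.69 hours, and
      12.69 / 1 hour (the configured checkpoint period) is the reported 1,269.03%, well past the
      400% critical threshold. The launcher stdout also ends with "Safe mode is ON", and a
      checkpoint cannot complete while the NameNode stays in safe mode (the edit log roll is
      rejected, as the SafeModeException later in this thread shows). A minimal sketch of checking
      that state, assuming shell access as the hdfs user on the NameNode host:

        # Is the NameNode still in safe mode?
        hdfs dfsadmin -safemode get

        # How many DataNodes have registered and reported their blocks?
        hdfs dfsadmin -report
    -->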
    <item>
      <title>Re: HDFS Checkpoint Status Errors</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411581#M253089</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/47030"&gt;@pajoshi&lt;/a&gt;&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/29989"&gt;@vaishaakb&lt;/a&gt;&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/80648"&gt;@blizano&lt;/a&gt;&amp;nbsp;Do you have any insights here? Thanks!&lt;/P&gt;</description>
      <pubDate>Wed, 16 Jul 2025 17:08:41 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411581#M253089</guid>
      <dc:creator>DianaTorres</dc:creator>
      <dc:date>2025-07-16T17:08:41Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Checkpoint Status Errors</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411582#M253090</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/128383"&gt;@jkoral&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The log snippet you posted is not enough for us to identify the problem.&lt;/P&gt;&lt;P&gt;Those messages about:&lt;/P&gt;&lt;PRE&gt;Not enough replicas was chosen&lt;/PRE&gt;&lt;P&gt;They are mostly harmless, and although annoying, don't pose a threat to the process.&lt;/P&gt;&lt;P&gt;Is it possible for you to share the logs from both namenodes to check?&lt;/P&gt;</description>
      <pubDate>Wed, 16 Jul 2025 17:13:55 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411582#M253090</guid>
      <dc:creator>jromero</dc:creator>
      <dc:date>2025-07-16T17:13:55Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Checkpoint Status Errors</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411584#M253091</link>
      <description>&lt;P&gt;Hi, thank you very much for your response. What logs would you need? The cloudera-scm-server and/or cloudera-scm-agent?&lt;/P&gt;</description>
      <pubDate>Wed, 16 Jul 2025 17:19:58 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411584#M253091</guid>
      <dc:creator>jkoral</dc:creator>
      <dc:date>2025-07-16T17:19:58Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Checkpoint Status Errors</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411600#M253093</link>
      <description>&lt;P&gt;In cloudera-scm-server.log, when I do a tail -f, I get a bunch of these logs. Does this mean anything?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2025-07-16 17:48:22,763 WARN avro-servlet-hb-processor-24:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=164 name=null host=45264634-8596-4805-b797-998d053db296/dmidlkprdls01.svr.luc.edu&lt;BR /&gt;2025-07-16 17:48:22,763 WARN avro-servlet-hb-processor-24:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=166 name=null host=45264634-8596-4805-b797-998d053db296/dmidlkprdls01.svr.luc.edu&lt;BR /&gt;2025-07-16 17:48:22,763 WARN avro-servlet-hb-processor-24:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=224 name=null host=45264634-8596-4805-b797-998d053db296/dmidlkprdls01.svr.luc.edu&lt;BR /&gt;2025-07-16 17:48:22,763 WARN avro-servlet-hb-processor-24:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=226 name=null host=45264634-8596-4805-b797-998d053db296/dmidlkprdls01.svr.luc.edu&lt;BR /&gt;2025-07-16 17:48:22,763 WARN avro-servlet-hb-processor-24:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=225 name=null host=45264634-8596-4805-b797-998d053db296/dmidlkprdls01.svr.luc.edu&lt;BR /&gt;2025-07-16 17:48:22,763 WARN avro-servlet-hb-processor-24:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=227 name=null host=45264634-8596-4805-b797-998d053db296/dmidlkprdls01.svr.luc.edu&lt;BR /&gt;2025-07-16 17:48:22,763 WARN avro-servlet-hb-processor-24:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=228 name=null host=45264634-8596-4805-b797-998d053db296/dmidlkprdls01.svr.luc.edu&lt;BR /&gt;2025-07-16 17:48:22,763 WARN avro-servlet-hb-processor-24:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). 
id=288 name=null host=45264634-8596-4805-b797-998d053db296/dmidlkprdls01.svr.luc.edu&lt;BR /&gt;2025-07-16 17:48:24,737 INFO scm-web-20423:com.cloudera.cmf.cluster.AbstractParallelClusterServiceCommand: Cluster Start command with purpose START found all the services already in started state, no further action to perform on cluster DAMICluster&lt;BR /&gt;2025-07-16 17:48:25,556 WARN avro-servlet-hb-processor-6:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=133 name=null host=009ec263-928b-4af1-8088-785b315f3e21/dmidlkprdls02.svr.luc.edu&lt;BR /&gt;2025-07-16 17:48:34,937 INFO scm-web-20423:com.cloudera.cmf.cluster.AbstractParallelClusterServiceCommand: Cluster Start command with purpose START found all the services already in started state, no further action to perform on cluster DAMICluster&lt;BR /&gt;2025-07-16 17:48:45,064 INFO scm-web-21021:com.cloudera.cmf.cluster.AbstractParallelClusterServiceCommand: Cluster Start command with purpose START found all the services already in started state, no further action to perform on cluster DAMICluster&lt;BR /&gt;2025-07-16 17:48:46,516 WARN avro-servlet-hb-processor-18:com.cloudera.server.cmf.AgentProtocolImpl: (119 skipped) Received Process Heartbeat for unknown (or duplicate) process. Ignoring. This is expected to happen once after old process eviction or process deletion (as happens in restarts). id=134 name=null host=2406c3be-dd14-481f-8a19-462efa8c5f8c/dmidlkprdls03.svr.luc.edu&lt;BR /&gt;2025-07-16 17:48:55,306 INFO avro-servlet-hb-processor-10:com.cloudera.server.common.AgentAvroServlet: (35 skipped) AgentAvroServlet: heartbeat processing stats: average=20ms, min=11ms, max=67ms.&lt;BR /&gt;2025-07-16 17:48:55,352 INFO scm-web-20423:com.cloudera.cmf.cluster.AbstractParallelClusterServiceCommand: Cluster Start command with purpose START found all the services already in started state, no further action to perform on cluster DAMICluster&lt;BR /&gt;2025-07-16 17:48:57,424 INFO pool-10-thread-1:com.cloudera.server.cmf.components.CmServerStateSynchronizer: (30 skipped) Synced up&lt;BR /&gt;2025-07-16 17:49:05,667 INFO scm-web-20422:com.cloudera.cmf.cluster.AbstractParallelClusterServiceCommand: Cluster Start command with purpose START found all the services already in started state, no further action to perform on cluster DAMICluster&lt;BR /&gt;2025-07-16 17:49:14,429 INFO pool-10-thread-1:com.cloudera.server.cmf.components.CmServerStateSynchronizer: (30 skipped) Cleaned up&lt;BR /&gt;2025-07-16 17:49:15,826 INFO scm-web-20423:com.cloudera.cmf.cluster.AbstractParallelClusterServiceCommand: Cluster Start command with purpose START found all the services already in started state, no further action to perform on cluster DAMICluster&lt;BR /&gt;2025-07-16 17:49:26,104 INFO scm-web-20423:com.cloudera.cmf.cluster.AbstractParallelClusterServiceCommand: Cluster Start command with purpose START found all the services already in started state, no further action to perform on cluster DAMICluster&lt;BR /&gt;2025-07-16 17:49:36,228 INFO scm-web-20422:com.cloudera.cmf.cluster.AbstractParallelClusterServiceCommand: Cluster Start command with purpose START found all the services already in started state, no further action to perform on cluster DAMICluster&lt;BR /&gt;2025-07-16 17:49:47,376 INFO scm-web-21021:com.cloudera.cmf.cluster.AbstractParallelClusterServiceCommand: 
Cluster Start command with purpose START found all the services already in started state, no further action to perform on cluster DAMICluster&lt;BR /&gt;2025-07-16 17:49:55,368 INFO avro-servlet-hb-processor-4:com.cloudera.server.common.AgentAvroServlet: (35 skipped) AgentAvroServlet: heartbeat processing stats: average=21ms, min=11ms, max=67ms.&lt;BR /&gt;2025-07-16 17:49:59,425 INFO pool-10-thread-1:com.cloudera.server.cmf.components.CmServerStateSynchronizer: (30 skipped) Synced up&lt;/P&gt;</description>
      <pubDate>Wed, 16 Jul 2025 22:51:29 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411600#M253093</guid>
      <dc:creator>jkoral</dc:creator>
      <dc:date>2025-07-16T22:51:29Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Checkpoint Status Errors</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411627#M253097</link>
      <description>&lt;P&gt;The logs from both namenode servers, to investigate why the checkpointing process is failing.&lt;/P&gt;</description>
      <pubDate>Thu, 17 Jul 2025 16:49:57 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411627#M253097</guid>
      <dc:creator>jromero</dc:creator>
      <dc:date>2025-07-17T16:49:57Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Checkpoint Status Errors</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411632#M253101</link>
      <description>&lt;P&gt;Sorry, I meant which logs do you want from both of those servers? Are there specific logs that you want? HDFS, agent, alert publisher, event server, firehose, etc?&lt;/P&gt;</description>
      <pubDate>Thu, 17 Jul 2025 19:55:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411632#M253101</guid>
      <dc:creator>jkoral</dc:creator>
      <dc:date>2025-07-17T19:55:18Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Checkpoint Status Errors</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411638#M253105</link>
      <description>&lt;P&gt;I meant the namenode process logs.&lt;/P&gt;&lt;P&gt;If you didn't customize the location, it should be under /var/log/hadoop-hdfs, where you will see a bunch of logs.&amp;nbsp; Get the latest one that says NAMENODE (it's in caps) and if possible share it here.&amp;nbsp; Get them from both namenode servers please.&lt;/P&gt;</description>
      <pubDate>Fri, 18 Jul 2025 15:35:57 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411638#M253105</guid>
      <dc:creator>jromero</dc:creator>
      <dc:date>2025-07-18T15:35:57Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Checkpoint Status Errors</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411643#M253109</link>
      <description>&lt;P&gt;Thank you. I have attached both logs.&lt;/P&gt;</description>
      <pubDate>Fri, 18 Jul 2025 20:03:41 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411643#M253109</guid>
      <dc:creator>jkoral</dc:creator>
      <dc:date>2025-07-18T20:03:41Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Checkpoint Status Errors</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411654#M253113</link>
      <description>&lt;P&gt;The issue seems to be in your secondary namenode:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;2025-07-07 11:56:37,798 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Log not rolled. Name node is in safe mode.
The reported blocks 0 has reached the threshold 0.9990 of total blocks 0. The number of live datanodes 0 needs an additional 1 live datanodes to reach the minimum number 1.
Safe mode will be turned off automatically once the thresholds have been reached. NamenodeHostName:dmidlkprdls01.svr.luc.edu&lt;/LI-CODE&gt;&lt;P&gt;It looks like the namenode can't communicate with your datanodes, so it can't come out of safe mode and the checkpoint fails.&lt;/P&gt;&lt;P&gt;Maybe there's a network problem that doesn't allow communication between those two roles?&lt;/P&gt;&lt;P&gt;Can you ping the secondary namenode from your datanodes and vice versa?&lt;/P&gt;&lt;P&gt;Are the required ports open on the secondary namenode and datanodes?&lt;/P&gt;
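&lt;P&gt;For example, you could run something like this to see how many datanodes the namenode actually sees and whether it is still in safe mode, and to test basic port reachability (8020 and 9868 are common defaults; substitute your configured hosts and ports):&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;# On the namenode host, as the hdfs user
hdfs dfsadmin -safemode get                        # should report that safe mode is OFF
hdfs dfsadmin -report | grep -i 'live datanodes'   # how many datanodes the NN sees as live

# From a datanode host, check that the namenode RPC port and the secondary
# namenode HTTP port are reachable (replace your-secondary-namenode-host)
nc -zv dmidlkprdls01.svr.luc.edu 8020
nc -zv your-secondary-namenode-host 9868&lt;/LI-CODE&gt;</description>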
      <pubDate>Mon, 21 Jul 2025 05:02:51 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411654#M253113</guid>
      <dc:creator>jromero</dc:creator>
      <dc:date>2025-07-21T05:02:51Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Checkpoint Status Errors</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411676#M253117</link>
      <description>&lt;P&gt;Yes, they are having no problems communicating with each other. They all have two IPs and all of the internal communication is going over a private 192.168.x.x network. I can ping back and forth with no problem. I also turned the firewall off and that doesn't seem to be an issue either.&lt;/P&gt;</description>
      <pubDate>Mon, 21 Jul 2025 18:29:09 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411676#M253117</guid>
      <dc:creator>jkoral</dc:creator>
      <dc:date>2025-07-21T18:29:09Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Checkpoint Status Errors</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411677#M253118</link>
      <description>&lt;P&gt;Can you share any of the datanode logs here? We can try to find out what the problem reaching the secondary namenode might be.&lt;/P&gt;</description>
      <pubDate>Mon, 21 Jul 2025 18:35:43 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411677#M253118</guid>
      <dc:creator>jromero</dc:creator>
      <dc:date>2025-07-21T18:35:43Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Checkpoint Status Errors</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411679#M253120</link>
      <description>&lt;P&gt;The weird thing is, when I restart HDFS, it seems fine for about a day and then I get those alerts again. One thing I just did, though, was run ssh-copy-id from the secondary namenode to all of the datanodes. Not sure if that will help or not.&lt;/P&gt;</description>
      <pubDate>Mon, 21 Jul 2025 19:00:43 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411679#M253120</guid>
      <dc:creator>jkoral</dc:creator>
      <dc:date>2025-07-21T19:00:43Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Checkpoint Status Errors</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411696#M253132</link>
      <description>&lt;P&gt;Here are the logs from one of the datanode servers. Thank you very much.&lt;/P&gt;</description>
      <pubDate>Tue, 22 Jul 2025 20:58:31 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411696#M253132</guid>
      <dc:creator>jkoral</dc:creator>
      <dc:date>2025-07-22T20:58:31Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Checkpoint Status Errors</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411706#M253134</link>
      <description>&lt;P&gt;I am going to create a support ticket for this as well. I was hoping this was going to be an easy one.&lt;/P&gt;</description>
      <pubDate>Wed, 23 Jul 2025 20:05:35 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411706#M253134</guid>
      <dc:creator>jkoral</dc:creator>
      <dc:date>2025-07-23T20:05:35Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Checkpoint Status Errors</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411707#M253135</link>
      <description>&lt;P&gt;I just found this information in my validations in assets. I have made these changes and will report back tomorrow on whether it helps.&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;The Checkpoint transaction-limit set to 1000000. Cloudera recommends a limit of 4,000,000. The checkpoint period is set to 3600 seconds. Cloudera recommends at least 7200 seconds (2 hours) in production clusters. Please see the following documentation for complete details: &lt;A href="https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/data-protection/topics/hdfs-configuration-properties.html" target="_blank"&gt;https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/data-protection/topics/hdfs-configuration-properties.html&lt;/A&gt;.&lt;/SPAN&gt;&lt;/P&gt;
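&lt;P&gt;For reference, those two settings map to the standard HDFS properties dfs.namenode.checkpoint.txns and dfs.namenode.checkpoint.period. After redeploying the configuration, something like this should show the effective values (hdfs getconf reads the client configuration on the host where you run it):&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;# Confirm the effective checkpoint settings after the change
hdfs getconf -confKey dfs.namenode.checkpoint.txns     # recommended: 4000000
hdfs getconf -confKey dfs.namenode.checkpoint.period   # recommended: 7200 (seconds)&lt;/LI-CODE&gt;</description>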
      <pubDate>Wed, 23 Jul 2025 20:48:47 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411707#M253135</guid>
      <dc:creator>jkoral</dc:creator>
      <dc:date>2025-07-23T20:48:47Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Checkpoint Status Errors</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411716#M253141</link>
      <description>&lt;P&gt;Still getting the same error.&lt;/P&gt;</description>
      <pubDate>Thu, 24 Jul 2025 19:30:26 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/411716#M253141</guid>
      <dc:creator>jkoral</dc:creator>
      <dc:date>2025-07-24T19:30:26Z</dc:date>
    </item>
    <item>
      <title>Re: HDFS Checkpoint Status Errors</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/413317#M254002</link>
      <description>&lt;P&gt;&lt;A href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/128383"&gt;@jkoral&lt;/A&gt;&amp;nbsp;FYI&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;DIV&gt;➤ Based on the logs provided, the checkpoint failure is caused by an authentication mismatch during the FSImage upload process, further complicated by an underlying storage type configuration issue.&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;➤ Primary Reason: Authentication Failure (403 Forbidden)&lt;/DIV&gt;&lt;DIV&gt;The Standby NameNode (SNN) successfully performs the checkpoint locally but fails to upload the merged fsimage back to the Active NameNode (NN).&lt;/DIV&gt;&lt;DIV&gt;- The Error: The SNN logs report: java.io.IOException: Exception during image upload: Response: 403 (Forbidden), Message: Non-exception fault: Authentication failed.&lt;/DIV&gt;&lt;DIV&gt;&lt;BR /&gt;- The Mechanism: After merging the edits, the SNN attempts to POST the new image to the NN via HTTP. The NN rejects this request because it cannot verify the identity of the SNN, which is common in new clusters where Kerberos or shared secret configurations are not fully synchronized.&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;➤ Recommended Fixes&lt;/DIV&gt;&lt;DIV&gt;&lt;UL class="Apple-dash-list"&gt;&lt;LI&gt;Verify HTTP Authentication: Check the dfs.namenode.secondary.http-address and dfs.namenode.http-address settings. Ensure the hdfs user has consistent permissions across both hosts.&lt;/LI&gt;&lt;LI&gt;Check Firewall/SELinux: Since this is RHEL9, ensure that the SNN can communicate with the NN on port 9870 (or 9871 if SSL is enabled); see the quick checks below.&lt;/LI&gt;&lt;/UL&gt;&lt;/DIV&gt;
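&lt;P&gt;As a starting point, you could run something like this from the secondary/standby namenode host to confirm the NameNode web port is reachable, that SELinux or firewalld are not in the way, and that both hosts resolve the same HTTP addresses (9870/9871 are the usual defaults; adjust if you changed them):&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;# From the secondary/standby namenode host
nc -zv dmidlkprdls01.svr.luc.edu 9870   # NameNode HTTP port (use 9871 if TLS is enabled)
getenforce                              # current SELinux mode
sudo firewall-cmd --state               # whether firewalld is running

# Confirm both hosts agree on the configured HTTP addresses
hdfs getconf -confKey dfs.namenode.http-address
hdfs getconf -confKey dfs.namenode.secondary.http-address&lt;/LI-CODE&gt;</description>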
      <pubDate>Sun, 11 Jan 2026 06:22:20 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDFS-Checkpoint-Status-Errors/m-p/413317#M254002</guid>
      <dc:creator>9een</dc:creator>
      <dc:date>2026-01-11T06:22:20Z</dc:date>
    </item>
  </channel>
</rss>

