<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Upgrading HDP 3.1.0 to 3.1.4 : Cannot restart datanode in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Upgrading-HDP-3-1-0-to-3-1-4-Cannot-restart-datanode/m-p/300241#M220101</link>
    <description>&lt;P&gt;In fact, I can't restart the datanode from the Ambari UI, but I can restart it by executing the following command directly on the server where the datanode should run:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="c"&gt;/var/lib/ambari-agent/ambari-sudo.sh -H -E /usr/hdp/3.1.0.0-78/hadoop/bin/hdfs --config /usr/hdp/3.1.0.0-78/hadoop/conf --daemon start datanode&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Therefore I think that the operating system limit&amp;nbsp;&lt;SPAN&gt;max locked memory is set correctly on the server&amp;nbsp;where the datanode should run&lt;/SPAN&gt;&lt;/P&gt;</description>
    <pubDate>Wed, 22 Jul 2020 12:35:15 GMT</pubDate>
    <dc:creator>Stephbat</dc:creator>
    <dc:date>2020-07-22T12:35:15Z</dc:date>
    <item>
      <title>Upgrading HDP 3.1.0 to 3.1.4 : Cannot restart datanode</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Upgrading-HDP-3-1-0-to-3-1-4-Cannot-restart-datanode/m-p/300121#M220046</link>
      <description>&lt;P&gt;I'm facing an issue during the upgrade of HDP 3.1.0.0-78 to 3.1.4.0-315 on Ubuntu 18.&lt;/P&gt;&lt;P&gt;The upgrade process is not able to restart the datanodes.&lt;/P&gt;&lt;P&gt;I get the error:&amp;nbsp;java.lang.RuntimeException: Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) of 2147483648 bytes is more than the datanode's available RLIMIT_MEMLOCK ulimit of 16777216 bytes.&lt;/P&gt;&lt;P&gt;I don't understand why this error happens. The datanodes started correctly before the upgrade process began, and the system setting&amp;nbsp;RLIMIT_MEMLOCK hasn't been changed.&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Thanks in advance for your help&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 21 Jul 2020 12:42:22 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Upgrading-HDP-3-1-0-to-3-1-4-Cannot-restart-datanode/m-p/300121#M220046</guid>
      <dc:creator>Stephbat</dc:creator>
      <dc:date>2020-07-21T12:42:22Z</dc:date>
    </item>
    <item>
      <title>Re: Upgrading HDP 3.1.0 to 3.1.4 : Cannot restart datanode</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Upgrading-HDP-3-1-0-to-3-1-4-Cannot-restart-datanode/m-p/300140#M220050</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/74359"&gt;@Stephbat&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Could you please check these two values: &lt;FONT color="#FF6600"&gt;dfs.datanode.max.locked.memory&lt;/FONT&gt; and &lt;FONT color="#FF6600"&gt;ulimit&lt;/FONT&gt;?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#FF6600"&gt;dfs.datanode.max.locked.memory&lt;/FONT&gt; determines the maximum amount of memory a DataNode will use for caching. The "locked-in-memory size" corresponds to the ulimit (ulimit -l) of the DataNode user, which needs to be increased to match this parameter.&lt;BR /&gt;Your current &lt;FONT color="#FF6600"&gt;dfs.datanode.max.locked.memory&lt;/FONT&gt; is &lt;FONT color="#FF6600"&gt;2&lt;/FONT&gt; GB, while the &lt;FONT color="#FF6600"&gt;RLIMIT_MEMLOCK&lt;/FONT&gt; is only &lt;FONT color="#FF6600"&gt;16&lt;/FONT&gt; MB.&lt;/P&gt;&lt;P&gt;If you get the error “&lt;FONT color="#FF0000"&gt;Cannot start datanode because the configured max locked memory size… is more than the datanode’s available RLIMIT_MEMLOCK ulimit,&lt;/FONT&gt;” it means the operating system is imposing a lower limit on the amount of memory you can lock than what you have configured. To fix this, you must adjust the &lt;FONT color="#FF0000"&gt;ulimit -l&lt;/FONT&gt; value that the &lt;FONT color="#FF0000"&gt;DataNode&lt;/FONT&gt; runs with.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Usually, this value is configured in &lt;FONT color="#FF0000"&gt;/etc/security/limits.conf&lt;/FONT&gt;. However, it will vary depending on which operating system and distribution you are using, so please adjust the values accordingly. Remember that you will need memory for other things as well, such as the DataNode and application JVM heaps and the operating system page cache.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Once adjusted, the datanode should start like a charm &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Hope that helps&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 21 Jul 2020 13:50:05 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Upgrading-HDP-3-1-0-to-3-1-4-Cannot-restart-datanode/m-p/300140#M220050</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2020-07-21T13:50:05Z</dc:date>
    </item>
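    <!-- The advice above (matching dfs.datanode.max.locked.memory against ulimit -l) can be sketched concretely. This is a hedged illustration, not from the thread: the DataNode user name "hdfs" and the limits.conf entries are assumptions to verify against your own cluster. Note that ulimit -l and limits.conf use KB, while dfs.datanode.max.locked.memory is in bytes. -->

```shell
# Convert dfs.datanode.max.locked.memory (bytes) into the KB units that
# ulimit -l and limits.conf use: 2147483648 bytes -> 2097152 KB
echo $((2147483648 / 1024))

# Check the current max-locked-memory limit for the DataNode user
# ("hdfs" is an assumed user name; verify on your cluster):
sudo -u hdfs bash -c 'ulimit -l'

# Example /etc/security/limits.conf entries raising the soft and hard
# memlock limits above 2097152 KB for that user (illustrative values):
#   hdfs  soft  memlock  2197152
#   hdfs  hard  memlock  2197152
```

    <!-- The 2197152 KB figure mirrors the value the poster later reports from ulimit -a; any value at or above 2097152 KB would satisfy the 2 GB cache setting. -->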
    <item>
      <title>Re: Upgrading HDP 3.1.0 to 3.1.4 : Cannot restart datanode</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Upgrading-HDP-3-1-0-to-3-1-4-Cannot-restart-datanode/m-p/300147#M220054</link>
      <description>&lt;P&gt;Thanks for your reply,&lt;/P&gt;&lt;P&gt;but I don't understand why the datanode started correctly before the upgrade process yet fails during it, without any change to the OS limit RLIMIT_MEMLOCK&lt;/P&gt;</description>
      <pubDate>Tue, 21 Jul 2020 14:39:52 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Upgrading-HDP-3-1-0-to-3-1-4-Cannot-restart-datanode/m-p/300147#M220054</guid>
      <dc:creator>Stephbat</dc:creator>
      <dc:date>2020-07-21T14:39:52Z</dc:date>
    </item>
    <item>
      <title>Re: Upgrading HDP 3.1.0 to 3.1.4 : Cannot restart datanode</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Upgrading-HDP-3-1-0-to-3-1-4-Cannot-restart-datanode/m-p/300149#M220056</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/74359"&gt;@Stephbat&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Those are Cloudera internals, and it confirms the saying that &lt;FONT color="#FF6600"&gt;migrations/upgrades&lt;/FONT&gt; are never smooth; we still need humans &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt; Please make those changes and let me know if your datanodes fire up correctly.&lt;/P&gt;</description>
      <pubDate>Tue, 21 Jul 2020 14:55:39 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Upgrading-HDP-3-1-0-to-3-1-4-Cannot-restart-datanode/m-p/300149#M220056</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2020-07-21T14:55:39Z</dc:date>
    </item>
    <item>
      <title>Re: Upgrading HDP 3.1.0 to 3.1.4 : Cannot restart datanode</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Upgrading-HDP-3-1-0-to-3-1-4-Cannot-restart-datanode/m-p/300199#M220084</link>
      <description>&lt;P&gt;In fact, I can’t restart the datanode after the upgrade of Ambari from 2.7.3.0 to 2.7.4.0 (not during the upgrade of HDP), whereas the restart worked fine before the upgrade.&lt;/P&gt;&lt;P&gt;Below are the logs of the restart with the error. The operating system limit max locked memory is set to 2197152 kbytes, which is more than the value of the parameter&amp;nbsp;dfs.datanode.max.locked.memory (2147483648 bytes)&lt;/P&gt;&lt;LI-CODE lang="c"&gt;core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 257446
max locked memory       (kbytes, -l) 2197152
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==&amp;gt; /var/log/hadoop/hdfs/hadoop-hdfs-root-datanode-di-dbdne-fe-develophdpwkr-01.log &amp;lt;==
2020-07-22 06:42:20,156 INFO  datanode.DataNode (LogAdapter.java:info(51)) - registered UNIX signal handlers for [TERM, HUP, INT]
2020-07-22 06:42:20,422 INFO  security.UserGroupInformation (UserGroupInformation.java:loginUserFromKeytab(1009)) - Login successful for user dn/di-dbdne-fe-develophdpwkr-01.node.fe.sd.diod.tech@DIOD.TECH using keytab file /etc/security/keytabs/dn.service.keytab
2020-07-22 06:42:20,574 INFO  checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(137)) - Scheduling a check for [DISK]file:/mnt/hdd0/hadoop/hdfs/data
2020-07-22 06:42:20,581 INFO  checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(137)) - Scheduling a check for [DISK]file:/mnt/hdd1/hadoop/hdfs/data
2020-07-22 06:42:20,582 INFO  checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(137)) - Scheduling a check for [DISK]file:/mnt/hdd2/hadoop/hdfs/data
2020-07-22 06:42:20,582 INFO  checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(137)) - Scheduling a check for [DISK]file:/mnt/hdd3/hadoop/hdfs/data
2020-07-22 06:42:20,582 INFO  checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(137)) - Scheduling a check for [RAM_DISK]file:/mnt/dn-tmpfs
2020-07-22 06:42:20,656 INFO  impl.MetricsConfig (MetricsConfig.java:loadFirst(118)) - Loaded properties from hadoop-metrics2.properties
2020-07-22 06:42:20,911 INFO  timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(85)) - Initializing Timeline metrics sink.
2020-07-22 06:42:20,912 INFO  timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(105)) - Identified hostname = di-dbdne-fe-develophdpwkr-01.node.fe.sd.diod.tech, serviceName = datanode
2020-07-22 06:42:20,943 INFO  availability.MetricSinkWriteShardHostnameHashingStrategy (MetricSinkWriteShardHostnameHashingStrategy.java:findCollectorShard(42)) - Calculated collector shard di-dbdne-fe-develophdpadm-01.node.fe.sd.diod.tech based on hostname: di-dbdne-fe-develophdpwkr-01.node.fe.sd.diod.tech
2020-07-22 06:42:20,943 INFO  timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(135)) - Collector Uri: http://di-dbdne-fe-develophdpadm-01.node.fe.sd.diod.tech:6188/ws/v1/timeline/metrics
2020-07-22 06:42:20,943 INFO  timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(136)) - Container Metrics Uri: http://di-dbdne-fe-develophdpadm-01.node.fe.sd.diod.tech:6188/ws/v1/timeline/containermetrics
2020-07-22 06:42:20,948 INFO  impl.MetricsSinkAdapter (MetricsSinkAdapter.java:start(204)) - Sink timeline started
2020-07-22 06:42:20,988 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(374)) - Scheduled Metric snapshot period at 10 second(s).
2020-07-22 06:42:20,989 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) - DataNode metrics system started
2020-07-22 06:42:21,068 INFO  common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2020-07-22 06:42:21,070 INFO  datanode.BlockScanner (BlockScanner.java:&amp;lt;init&amp;gt;(184)) - Initialized block scanner with targetBytesPerSec 1048576
2020-07-22 06:42:21,073 INFO  datanode.DataNode (DataNode.java:&amp;lt;init&amp;gt;(486)) - File descriptor passing is enabled.
2020-07-22 06:42:21,074 INFO  datanode.DataNode (DataNode.java:&amp;lt;init&amp;gt;(499)) - Configured hostname is di-dbdne-fe-develophdpwkr-01.node.fe.sd.diod.tech
2020-07-22 06:42:21,074 INFO  common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2020-07-22 06:42:21,076 ERROR datanode.DataNode (DataNode.java:secureMain(2883)) - Exception in secureMain
java.lang.RuntimeException: Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) of 2147483648 bytes is more than the datanode's available RLIMIT_MEMLOCK ulimit of 16777216 bytes.&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 22 Jul 2020 07:18:43 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Upgrading-HDP-3-1-0-to-3-1-4-Cannot-restart-datanode/m-p/300199#M220084</guid>
      <dc:creator>Stephbat</dc:creator>
      <dc:date>2020-07-22T07:18:43Z</dc:date>
    </item>
    <item>
      <title>Re: Upgrading HDP 3.1.0 to 3.1.4 : Cannot restart datanode</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Upgrading-HDP-3-1-0-to-3-1-4-Cannot-restart-datanode/m-p/300241#M220101</link>
      <description>&lt;P&gt;In fact, I can't restart the datanode from the Ambari UI, but I can restart it by executing the following command directly on the server where the datanode should run:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="c"&gt;/var/lib/ambari-agent/ambari-sudo.sh -H -E /usr/hdp/3.1.0.0-78/hadoop/bin/hdfs --config /usr/hdp/3.1.0.0-78/hadoop/conf --daemon start datanode&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Therefore I think that the operating system limit&amp;nbsp;&lt;SPAN&gt;max locked memory is set correctly on the server&amp;nbsp;where the datanode should run&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 22 Jul 2020 12:35:15 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Upgrading-HDP-3-1-0-to-3-1-4-Cannot-restart-datanode/m-p/300241#M220101</guid>
      <dc:creator>Stephbat</dc:creator>
      <dc:date>2020-07-22T12:35:15Z</dc:date>
    </item>
    <item>
      <title>Re: Upgrading HDP 3.1.0 to 3.1.4 : Cannot restart datanode</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Upgrading-HDP-3-1-0-to-3-1-4-Cannot-restart-datanode/m-p/300368#M220176</link>
      <description>&lt;P&gt;I was able to restart the datanode from the Ambari UI after restarting the ambari-agent on the servers where the datanodes run&lt;/P&gt;</description>
      <pubDate>Fri, 24 Jul 2020 06:38:09 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Upgrading-HDP-3-1-0-to-3-1-4-Cannot-restart-datanode/m-p/300368#M220176</guid>
      <dc:creator>Stephbat</dc:creator>
      <dc:date>2020-07-24T06:38:09Z</dc:date>
    </item>
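    <!-- The resolution above is consistent with how RLIMIT_MEMLOCK works: a running ambari-agent keeps the limits it was started with, and processes it spawns (such as a DataNode launched from the Ambari UI) inherit them, so limits raised after the agent started only take effect once the agent restarts. A hedged sketch of how one might verify this; the pid-file path is an assumption, so adjust it for your distribution. -->

```shell
# Inspect the limits of the live ambari-agent process. The pid-file path
# /run/ambari-agent/ambari-agent.pid is an assumed location for Ubuntu 18.
AGENT_PID=$(cat /run/ambari-agent/ambari-agent.pid)
grep "Max locked memory" "/proc/${AGENT_PID}/limits"

# Restart the agent so the DataNode it spawns inherits the new limit:
ambari-agent restart
```

    <!-- If the grep output still shows the old limit (e.g. 16777216 bytes), the agent predates the limits.conf change and a restart is needed. -->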
  </channel>
</rss>

