Member since
08-18-2021
4
Posts
0
Kudos Received
0
Solutions
09-29-2021
08:35 AM
1. I can't open the Hive WebUI. The browser can't connect to it and shows "ERR_CONNECTION_REFUSED". 2. No, it never connects to the site. 3. There is no other message about it. 4. I have run "hadoop version" on the command line; it reports no CDH version information. All the code is at https://github.com/KarenPHS/HadoopCluster
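In case it helps to narrow this down, here is a minimal check (a sketch, assuming HiveServer2 runs on active-nn and the WebUI port is the 10002 from my hive-site.xml) to confirm the process is up and the port is actually bound:
# HiveServer2 usually shows up as a RunJar process
jps | grep -i runjar
# is anything listening on the WebUI port?
ss -tlnp | grep 10002
# expect an HTTP response if the UI is up
curl -I http://active-nn:10002/
If nothing is listening on 10002, the refusal comes from HiveServer2 itself rather than from the network.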
08-26-2021
02:38 PM
hive-site.xml
<property>
<name>system:java.io.tmpdir</name>
<value>/usr/local/hive/tmp</value>
</property>
<property>
<name>system:user.name</name>
<value>${user.name}</value>
</property>
<property>
<name>hive.server2.webui.host</name>
<value>active-nn</value>
<description>The host address the HiveServer2 WebUI will listen on</description>
</property>
<property>
<name>hive.server2.webui.port</name>
<value>10002</value>
<description>The port the HiveServer2 WebUI will listen on. This can be set to 0 or a negative integer to disable the web UI</description>
</property>
core-site.xml
<property>
<name>hadoop.proxyuser.hive.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hive.hosts</name>
<value>*</value>
</property>
I can open Hive, but I can't open the WebUI. My files are above. What is wrong? (Versions: Ubuntu 20.04, Hadoop 3.3.1 including YARN, OpenJDK 8, Hive 2.3.9)
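A sketch of what I would try next (assumptions: the HiveServer2 log is in the default /tmp/$USER/hive.log location, and hiveserver2 is on the PATH). First confirm from the log that the WebUI actually started, then, if it bound to a hostname the browser can't reach, try binding it to all interfaces:
# Did the WebUI start at all? (default log location assumed)
grep -i "web ui" /tmp/$USER/hive.log | tail -n 5
# If the UI bound to a host the browser cannot reach, restart HiveServer2
# with the WebUI on all interfaces (an assumption to test, not a confirmed fix)
hiveserver2 --hiveconf hive.server2.webui.host=0.0.0.0 &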
Labels:
- Apache Hive
08-21-2021
02:17 PM
core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/data</value>
<description>Temporary Directory.</description>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://nncluster</value>
<description>Use HDFS as file storage engine</description>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>journalnode1:2181,journalnode2:2181,journalnode3:2181</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>nncluster</value>
</property>
</configuration>
hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.nameservices</name>
<value>nncluster</value>
</property>
<property>
<name>dfs.ha.namenodes.nncluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nncluster.nn1</name>
<value>active-nn:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.nncluster.nn1</name>
<value>active-nn:9870</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nncluster.nn2</name>
<value>standby-nn:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.nncluster.nn2</name>
<value>standby-nn:9870</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>journalnode1:2181,journalnode2:2181,journalnode3:2181</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://journalnode1:8485;journalnode2:8485;journalnode3:8485/nncluster</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/usr/local/hadoop/data/journalnode</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/local/hadoop/data/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/usr/local/hadoop/data/datanode</value>
</property>
<!--
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_dsa</value>
</property>
-->
<property>
<name>dfs.ha.fencing.methods</name>
<value>shell(/bin/true)</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.nncluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
</configuration>
mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>yarn.app.mapreduce.am.resource.vcores</name>
<value>3</value>
</property>
<property>
<name>yarn.app.mapreduce.am.resource.memory-mb</name>
<value>6656</value>
</property>
<property>
<name>yarn.app.mapreduce.am.command-opts</name>
<value>-Xmx5324m</value>
</property>
<!--
<property>
<name>mapreduce.job.heap.memory-mb.ratio</name>
<value>0.8</value>
</property>
-->
<property>
<name>mapreduce.map.resource.vcores</name>
<value>3</value>
</property>
<property>
<name>mapreduce.map.resource.memory-mb</name>
<value>6656</value>
</property>
<property>
<name>mapreduce.map.java.opts</name>
<value>-Xmx5324m</value>
</property>
<property>
<name>mapreduce.task.io.sort.mb</name>
<value>1331</value>
</property>
<property>
<name>mapreduce.reduce.resource.vcores</name>
<value>3</value>
</property>
<property>
<name>mapreduce.reduce.resource.memory-mb</name>
<value>6656</value>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>-Xmx5324m</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>historysever:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>historysever:19888</value>
</property>
</configuration>
yarn-site.xml
<?xml version="1.0"?>
<!-- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law
or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file. -->
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>3</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>20480</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-vcores</name>
<value>3</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>1024</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>20480</value>
</property>
<property>
<name>yarn.resource-types.memory-mb.increment-allocation</name>
<value>512</value>
</property>
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>rmcluster</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>active-rm</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>standby-rm</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm1</name>
<value>active-rm:8088</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm2</name>
<value>standby-rm:8088</value>
</property>
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>journalnode1:2181,journalnode2:2181,journalnode3:2181</value>
</property>
<!-- <property> <name>yarn.nodemanager.resource.detect-hardware-capabilities</name> <value>true</value> </property> <property> <name>yarn.nodemanager.vmem-pmem-ratio</name> <value>4.2</value> </property> -->
</configuration>
Below is the startup sequence, which I found on the Net.
# [Zookeeper start]
docker exec -it journalnode1 /bin/bash -c "/usr/local/zookeeper/bin/zkServer.sh start" && docker exec -it journalnode2 /bin/bash -c "/usr/local/zookeeper/bin/zkServer.sh start" && docker exec -it journalnode3 /bin/bash -c "/usr/local/zookeeper/bin/zkServer.sh start"
sleep 5
# [Journal start]
docker exec -it journalnode1 /bin/bash -c "/usr/local/hadoop/bin/hdfs --daemon start journalnode" && docker exec -it journalnode2 /bin/bash -c "/usr/local/hadoop/bin/hdfs --daemon start journalnode" && docker exec -it journalnode3 /bin/bash -c "/usr/local/hadoop/bin/hdfs --daemon start journalnode"
sleep 5
# [NN format]
docker exec -it active-nn /bin/bash -c "/usr/local/hadoop/bin/hdfs namenode -format"
sleep 5
# [NN start]
docker exec -it active-nn /bin/bash -c "/usr/local/hadoop/bin/hdfs --daemon start namenode"
sleep 5
# [Standby get data]
docker exec -it standby-nn /bin/bash -c "/usr/local/hadoop/bin/hdfs namenode -bootstrapStandby"
sleep 5
# [NN zookeeper start]
docker exec -it active-nn /bin/bash -c "/usr/local/hadoop/bin/hdfs zkfc -formatZK"
sleep 5
# [All NN start]
docker exec -it active-nn /bin/bash -c "/usr/local/hadoop/sbin/start-dfs.sh"
sleep 5
# [RM && HS start]
docker exec -it active-rm /bin/bash -c "/usr/local/hadoop/sbin/start-yarn.sh"
sleep 5
docker exec -it historyserver /bin/bash -c "/usr/local/hadoop/bin/mapred --daemon start historyserver"
sleep 5
Below is the error response.
2021-08-22 03:56:09,499 INFO namenode.FSNamesystem: Stopping services started for active state
2021-08-22 03:56:09,500 INFO namenode.FSNamesystem: Stopping services started for standby state
2021-08-22 03:56:09,500 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 3 exceptions thrown:
172.22.0.13:8485: Call From active-nn/172.22.0.2 to journalnode2:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
172.22.0.5:8485: Call From active-nn/172.22.0.2 to journalnode1:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
172.22.0.7:8485: Call From active-nn/172.22.0.2 to journalnode3:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:305)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:282)
at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:1185)
at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:212)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1272)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1724)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1832)
2021-08-22 03:56:09,502 INFO util.ExitUtil: Exiting with status 1: org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 3 exceptions thrown:
172.22.0.13:8485: Call From active-nn/172.22.0.2 to journalnode2:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
172.22.0.5:8485: Call From active-nn/172.22.0.2 to journalnode1:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
172.22.0.7:8485: Call From active-nn/172.22.0.2 to journalnode3:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2021-08-22 03:56:09,504 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at active-nn/172.22.0.2
************************************************************/
What is the correct startup sequence for the first run on Docker? When I run the NN format after starting the NN, there is no error, so I wonder what the right order is. Thank you.
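For what it's worth, a minimal sketch of how I would guard the format step (hostnames and port 8485 are taken from my hdfs-site.xml; the 60-try limit is arbitrary): poll each JournalNode until its RPC port answers instead of relying on a fixed sleep 5, and only then run the format.
#!/bin/bash
# Wait until every JournalNode answers on 8485 before formatting the NameNode.
for jn in journalnode1 journalnode2 journalnode3; do
  for i in $(seq 1 60); do
    if docker exec active-nn bash -c "exec 3<>/dev/tcp/$jn/8485" 2>/dev/null; then
      echo "$jn:8485 is up"; break
    fi
    sleep 2
  done
done
docker exec -it active-nn /bin/bash -c "/usr/local/hadoop/bin/hdfs namenode -format"
The fixed sleeps otherwise race against the JournalNode daemons, which can return from "--daemon start" before the port is actually open.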
Labels:
- Apache Hadoop
- Docker
08-18-2021
07:43 PM
I use docker-compose to create a Hadoop cluster with HA, but it always shows the problem below. Here are the log output, the setup files, and the startup sequence. Thank you for any answer.
2021-08-19 09:08:15,718 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2021-08-19 09:08:15,718 INFO util.GSet: VM type = 64-bit
2021-08-19 09:08:15,718 INFO util.GSet: 0.029999999329447746% max memory 14.0 GB = 4.3 MB
2021-08-19 09:08:15,718 INFO util.GSet: capacity = 2^19 = 524288 entries
2021-08-19 09:08:17,009 INFO ipc.Client: Retrying connect to server: journalnode2/172.18.0.5:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-08-19 09:08:17,009 INFO ipc.Client: Retrying connect to server: journalnode3/172.18.0.11:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-08-19 09:08:17,009 INFO ipc.Client: Retrying connect to server: journalnode1/172.18.0.12:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-08-19 09:08:18,010 INFO ipc.Client: Retrying connect to server: journalnode2/172.18.0.5:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-08-19 09:08:18,010 INFO ipc.Client: Retrying connect to server: journalnode1/172.18.0.12:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-08-19 09:08:18,010 INFO ipc.Client: Retrying connect to server: journalnode3/172.18.0.11:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-08-19 09:08:19,011 INFO ipc.Client: Retrying connect to server: journalnode3/172.18.0.11:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-08-19 09:08:19,011 INFO ipc.Client: Retrying connect to server: journalnode2/172.18.0.5:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-08-19 09:08:19,011 INFO ipc.Client: Retrying connect to server: journalnode1/172.18.0.12:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-08-19 09:08:20,013 INFO ipc.Client: Retrying connect to server: journalnode2/172.18.0.5:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-08-19 09:08:20,013 INFO ipc.Client: Retrying connect to server: journalnode1/172.18.0.12:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/data</value>
<description>Temporary Directory.</description>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://nncluster</value>
<description>Use HDFS as file storage engine</description>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>journalnode1:2181,journalnode2:2181,journalnode3:2181</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>nncluster</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.nameservices</name>
<value>nncluster</value>
</property>
<property>
<name>dfs.ha.namenodes.nncluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nncluster.nn1</name>
<value>active-nn:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.nncluster.nn1</name>
<value>active-nn:9870</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nncluster.nn2</name>
<value>standby-nn:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.nncluster.nn2</name>
<value>standby-nn:9870</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>journalnode1:2181,journalnode2:2181,journalnode3:2181</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://journalnode1:8485;journalnode2:8485;journalnode3:8485/nncluster</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/usr/local/hadoop/data/journalnode</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/local/hadoop/data/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/usr/local/hadoop/data/datanode</value>
</property>
<!--
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_dsa</value>
</property>
-->
<property>
<name>dfs.ha.fencing.methods</name>
<value>shell(/bin/true)</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.nncluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>yarn.app.mapreduce.am.resource.vcores</name>
<value>3</value>
</property>
<property>
<name>yarn.app.mapreduce.am.resource.memory-mb</name>
<value>6656</value>
</property>
<property>
<name>yarn.app.mapreduce.am.command-opts</name>
<value>-Xmx5324m</value>
</property>
<!--
<property>
<name>mapreduce.job.heap.memory-mb.ratio</name>
<value>0.8</value>
</property>
-->
<property>
<name>mapreduce.map.resource.vcores</name>
<value>3</value>
</property>
<property>
<name>mapreduce.map.resource.memory-mb</name>
<value>6656</value>
</property>
<property>
<name>mapreduce.map.java.opts</name>
<value>-Xmx5324m</value>
</property>
<property>
<name>mapreduce.task.io.sort.mb</name>
<value>1331</value>
</property>
<property>
<name>mapreduce.reduce.resource.vcores</name>
<value>3</value>
</property>
<property>
<name>mapreduce.reduce.resource.memory-mb</name>
<value>6656</value>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>-Xmx5324m</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>historysever:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>historysever:19888</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>3</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>20480</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-vcores</name>
<value>3</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>1024</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>20480</value>
</property>
<property>
<name>yarn.resource-types.memory-mb.increment-allocation</name>
<value>512</value>
</property>
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>rmcluster</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>active-rm</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>standby-rm</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm1</name>
<value>active-rm:8088</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm2</name>
<value>standby-rm:8088</value>
</property>
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>journalnode1:2181,journalnode2:2181,journalnode3:2181</value>
</property>
<!-- <property> <name>yarn.nodemanager.resource.detect-hardware-capabilities</name> <value>true</value> </property> <property> <name>yarn.nodemanager.vmem-pmem-ratio</name> <value>4.2</value> </property> -->
</configuration>
The startup sequence:
# [Zookeeper start]
docker exec -it journalnode1 /bin/bash -c "/usr/local/zookeeper/bin/zkServer.sh start" && docker exec -it journalnode2 /bin/bash -c "/usr/local/zookeeper/bin/zkServer.sh start" && docker exec -it journalnode3 /bin/bash -c "/usr/local/zookeeper/bin/zkServer.sh start"
# [Journal start]
docker exec -it journalnode1 /bin/bash -c "/usr/local/hadoop/bin/hdfs --daemon start journalnode" && docker exec -it journalnode2 /bin/bash -c "/usr/local/hadoop/bin/hdfs --daemon start journalnode" && docker exec -it journalnode3 /bin/bash -c "/usr/local/hadoop/bin/hdfs --daemon start journalnode"
# [NN format]
docker exec -it active-nn /bin/bash -c "/usr/local/hadoop/bin/hdfs namenode -format"
# [NN zookeeper start]
docker exec -it active-nn /bin/bash -c "/usr/local/hadoop/bin/hdfs zkfc -formatZK"
docker exec -it active-nn /bin/bash -c "/usr/local/hadoop/sbin/start-dfs.sh"
# [Standby get data]
docker exec -it standby-nn /bin/bash -c "/usr/local/hadoop/bin/hdfs namenode -bootstrapStandby"
docker exec -it standby-nn /bin/bash -c "/usr/local/hadoop/bin/hdfs --daemon start namenode"
# [NN && RM && HS start]
docker exec -it active-nn /bin/bash -c "/usr/local/hadoop/sbin/start-dfs.sh"
docker exec -it active-rm /bin/bash -c "/usr/local/hadoop/sbin/start-yarn.sh"
docker exec -it historyserver /bin/bash -c "/usr/local/hadoop/bin/mapred --daemon start historyserver"
The error response appears when I run the NN format step, docker exec -it active-nn /bin/bash -c "/usr/local/hadoop/bin/hdfs namenode -format". All of this code is at https://github.com/KarenPHS/HadoopCluster
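A quick check I would add right after the "Journal start" step (a sketch; it assumes jps and ss are available inside the containers) to confirm each JournalNode is really up and listening on 8485 before formatting:
# Confirm the JournalNode daemons are running and bound to 8485
# before running "hdfs namenode -format" on active-nn.
for jn in journalnode1 journalnode2 journalnode3; do
  echo "== $jn =="
  docker exec "$jn" jps | grep -i journalnode || echo "JournalNode process not found"
  docker exec "$jn" ss -tln | grep 8485 || echo "port 8485 not listening yet"
done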
Labels:
- Apache Hadoop