Support Questions

Clusterdock: best methods


New Contributor

I recently tried to deploy clusterdock on a CentOS 7 VM. It took forever to configure, and I still could not fully get it working, though I was very close: the HDFS NameNode was not able to find my DataNodes, and the canary check would fail. Nothing I tried (opening the required ports and such) would fix this problem. Is there a better way for me to set this up? I was running the CentOS VM on my Mac with 48 GB of RAM. I also have a Windows 10 computer; should I just deploy the cluster on Docker on that machine and skip the virtual-machine-within-a-virtual-machine headache?
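If it helps to narrow down the "NameNode cannot find DataNodes" symptom, a quick TCP reachability probe against the DataNode ports from the NameNode host can rule firewalling in or out. The hostnames below are placeholders for your clusterdock container names, and the ports are the Hadoop 2 / CDH 5 non-secure DataNode defaults (dfs.datanode.address 50010, dfs.datanode.ipc.address 50020, dfs.datanode.http.address 50075); verify them against your own hdfs-site.xml:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, timed out, or unresolvable host
        return False

if __name__ == "__main__":
    # Hypothetical clusterdock container hostnames; substitute your own.
    datanodes = ["node-1.cluster", "node-2.cluster"]
    # Hadoop 2 / CDH 5 non-secure DataNode defaults; check hdfs-site.xml.
    ports = [50010, 50020, 50075]
    for host in datanodes:
        for port in ports:
            state = "open" if port_open(host, port) else "unreachable"
            print(f"{host}:{port} {state}")
```

If a port is unreachable from the NameNode host but open inside the DataNode container, the CentOS 7 firewall (firewalld) or the Docker network setup is the likely culprit.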

2 Replies

Re: Clusterdock: best methods

New Contributor
I cannot deploy on macOS because of routing issues inherent in the OS. Or is there a way?

Re: Clusterdock: best methods

New Contributor
My errors:

Jun 12, 6:00:54.815 PM WARN org.apache.hadoop.net.NetworkTopology
Failed to find datanode (scope="" excludedScope="/default").
Jun 12, 6:00:54.815 PM WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
Jun 12, 6:00:54.815 PM WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy
Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
Jun 12, 6:00:54.815 PM WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All required storage types are unavailable: unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
IPC Server handler 16 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 192.168.127.2:50326 Call#19 Retry#0
java.io.IOException: File /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2017_06_12-18_01_01 could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1610)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3315)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:679)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:214)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:489)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
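For what it's worth, the key line here is the IOException: the NameNode sees 2 DataNodes registered (their heartbeats arrive), yet it could place 0 of the minimum 1 replicas. That combination usually points at the DataNodes' storage being rejected or their data-transfer ports being unreachable, rather than the nodes being down. A small illustrative parser that pulls those numbers out of the exception line (the regex and function names are my own, not any Hadoop API):

```python
import re

# Matches the NameNode's replica-placement IOException message shown above.
_REPL_ERR = re.compile(
    r"File (?P<path>\S+) could only be replicated to (?P<placed>\d+) "
    r"nodes? instead of minReplication \(=(?P<min_repl>\d+)\)\. "
    r"There are (?P<running>\d+) datanode\(s\) running"
)

def parse_replication_error(line):
    """Return a dict of placement details from the exception line, or None."""
    m = _REPL_ERR.search(line)
    if not m:
        return None
    d = m.groupdict()
    for key in ("placed", "min_repl", "running"):
        d[key] = int(d[key])
    return d

line = ("java.io.IOException: File /tmp/.cloudera_health_monitoring_canary_files/"
        ".canary_file_2017_06_12-18_01_01 could only be replicated to 0 nodes "
        "instead of minReplication (=1). There are 2 datanode(s) running and "
        "no node(s) are excluded in this operation.")
print(parse_replication_error(line))
```

Here placed=0 with running=2 matches the BlockPlacementPolicy warnings above: the cluster knows about the DataNodes but cannot use any of their storage, which is consistent with a connectivity or firewall problem between the containers.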