Cloning a standalone Cloudera Stack using VMware getting hostname errors

We have a working standalone Cloudera stack with ZooKeeper, HDFS, Solr, Accumulo, and Kafka. When we clone it to another VM, everything comes up on the new IP address, but the Accumulo tablets fail while trying to reach the original hostname. Does anyone know whether the hostname is embedded in HDFS, or is there a config entry we are missing? The error is below. The original host was cafe-144 and the clone is cafe-191; I redacted the names, so something may be missing, but the hostnames and FQDNs are configured on both servers.

exception trying to assign tablet +r<< hdfs://cafe-144:8020/accumulo/tables/+r/root_tablet
	java.net.NoRouteToHostException: No Route to Host from  cafe-191/X.X.X.191 to cafe-144.:8020 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see:  http://wiki.apache.org/hadoop/NoRouteToHost
		at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
		at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
		at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
		at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
		at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
		at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:757)
		at org.apache.hadoop.ipc.Client.call(Client.java:1475)
		at org.apache.hadoop.ipc.Client.call(Client.java:1408)
		at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
		at com.sun.proxy.$Proxy14.getListing(Unknown Source)
		at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:559)
		at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
		at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
		at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
		at java.lang.reflect.Method.invoke(Method.java:498)
		at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
		at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
		at com.sun.proxy.$Proxy15.getListing(Unknown Source)
		at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2099)
		at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2082)
		at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:701)
		at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:106)
		at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:763)
		at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:759)
		at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
		at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:759)
		at org.apache.accumulo.server.fs.VolumeManagerImpl.listStatus(VolumeManagerImpl.java:335)
		at org.apache.accumulo.tserver.Tablet.lookupDatafiles(Tablet.java:1108)
		at org.apache.accumulo.tserver.Tablet.<init>(Tablet.java:1211)
		at org.apache.accumulo.tserver.Tablet.<init>(Tablet.java:1067)
		at org.apache.accumulo.tserver.Tablet.<init>(Tablet.java:1056)
		at org.apache.accumulo.tserver.TabletServer$AssignmentHandler.run(TabletServer.java:2911)
		at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
		at java.lang.Thread.run(Thread.java:745)
	Caused by: java.net.NoRouteToHostException: No route to host
		at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
		at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
		at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
		at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
		at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
		at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
		at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713)
		at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
		at org.apache.hadoop.ipc.Client.getConnection(Client.java:1524)
		at org.apache.hadoop.ipc.Client.call(Client.java:1447)
		... 27 more
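The `hdfs://cafe-144:8020/accumulo/tables/+r/root_tablet` URI in the exception suggests the old hostname is recorded inside Accumulo's own metadata (the root tablet pointer and data-file references), not just in a local config file. As a hedged diagnostic sketch, assuming Accumulo 1.6+ and typical CDH config paths (both assumptions), the volume URIs Accumulo has recorded can be checked like this:

```shell
# Sketch, assuming Accumulo 1.6+ on the clone (run as the accumulo user).
# Lists every HDFS volume URI currently referenced by Accumulo's metadata;
# if hdfs://cafe-144:8020 appears here, the old hostname is stored inside
# Accumulo itself rather than in an HDFS or OS config file.
accumulo admin volumes

# A plain grep of the local configs (these paths are assumptions and vary
# by install) rules out a leftover reference on the OS side:
grep -r "cafe-144" /etc/hadoop/conf /etc/accumulo/conf 2>/dev/null
```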


1 Reply


Edit:

With the same hostname, the VM OVF can be moved, re-IP'd, and brought up with no issues; however, the hostname must stay the same. Does anyone know a way around this?
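One possible workaround, assuming Accumulo 1.6 or later (a sketch, not verified on this particular stack): Accumulo stores absolute `hdfs://host:port` URIs for its volumes, and the `instance.volumes.replacements` property in `accumulo-site.xml` exists to map a retired URI to a new one after exactly this kind of hostname change. The hostnames below are this thread's examples, and the port assumes the default NameNode RPC port:

```xml
<!-- Sketch for accumulo-site.xml on the clone. instance.volumes.replacements
     takes space-separated "old new" URI pairs (comma-separated for multiple
     pairs), so stored references to cafe-144 resolve to cafe-191. -->
<property>
  <name>instance.volumes</name>
  <value>hdfs://cafe-191:8020/accumulo</value>
</property>
<property>
  <name>instance.volumes.replacements</name>
  <value>hdfs://cafe-144:8020/accumulo hdfs://cafe-191:8020/accumulo</value>
</property>
```

HDFS has the same concern if `fs.defaultFS` in core-site.xml still names the old host, so that should point at the new hostname on the clone as well.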
