Support Questions
Find answers, ask questions, and share your expertise

I am trying to run an MR job that transfers data from one cluster to another cluster on an OpenStack cloud, but it gives an UnknownHostException for the destination cluster's NameNode.

Contributor

16/07/06 18:26:47 INFO hive.MetastoreReplicationJob: Starting job for step 1...
16/07/06 18:26:48 INFO impl.TimelineClientImpl: Timeline service address: http://hdp-2.2slave1.datametica.org:8188/ws/v1/timeline/
16/07/06 18:26:48 INFO client.RMProxy: Connecting to ResourceManager at hdp-2.2master.datametica.org/10.200.80.34:8050
16/07/06 18:26:48 INFO input.FileInputFormat: Total input paths to process : 1
16/07/06 18:26:49 INFO mapreduce.JobSubmitter: number of splits:1
16/07/06 18:26:49 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1467790525837_0054
16/07/06 18:26:49 INFO impl.YarnClientImpl: Submitted application application_1467790525837_0054
16/07/06 18:26:49 INFO mapreduce.Job: The url to track the job: http://hdp-2.2master.datametica.org:8088/proxy/application_1467790525837_0054/
16/07/06 18:26:49 INFO mapreduce.Job: Running job: job_1467790525837_0054
16/07/06 18:26:54 INFO mapreduce.Job: Job job_1467790525837_0054 running in uber mode : false
16/07/06 18:26:54 INFO mapreduce.Job: map 0% reduce 0%
16/07/06 18:26:58 INFO mapreduce.Job: Task Id : attempt_1467790525837_0054_m_000000_0, Status : FAILED
Error: java.lang.IllegalArgumentException: java.net.UnknownHostException: hdp-test
    at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:411)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:311)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:678)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:150)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at com.airbnb.reair.common.FsUtils.equalDirs(FsUtils.java:294)
    at com.airbnb.reair.incremental.DirectoryCopier.equalDirs(DirectoryCopier.java:105)
    at com.airbnb.reair.incremental.primitives.TaskEstimator.analyzeTableSpec(TaskEstimator.java:111)
    at com.airbnb.reair.incremental.primitives.TaskEstimator.analyze(TaskEstimator.java:67)
    at com.airbnb.reair.batch.hive.TableCompareWorker.processTable(TableCompareWorker.java:136)
    at com.airbnb.reair.batch.hive.MetastoreReplicationJob$Stage1ProcessTableMapperWithTextInput.map(MetastoreReplicationJob.java:559)
    at com.airbnb.reair.batch.hive.MetastoreReplicationJob$Stage1ProcessTableMapperWithTextInput.map(MetastoreReplicationJob.java:538)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.net.UnknownHostException: hdp-test
    ... 27 more

1 ACCEPTED SOLUTION

Explorer

Check the /etc/hosts file. Every node in the cluster should have a hostname-to-IP entry for every other node, including the destination cluster's NameNode.
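Since the map tasks fail with `UnknownHostException: hdp-test`, the resolution check has to pass on every worker node, not just the edge node. A minimal sketch of the check (the IP address below is a placeholder, not taken from the thread):

```shell
#!/bin/sh
# Check whether the destination NameNode hostname resolves on this node.
# Run this on every node in the source cluster, since a map task can be
# scheduled anywhere.
if getent hosts hdp-test > /dev/null 2>&1; then
    echo "hdp-test resolves"
else
    echo "hdp-test does not resolve -- add an /etc/hosts entry"
fi

# Example /etc/hosts entry (10.200.80.50 is a placeholder; use the real
# IP of the destination NameNode):
#   10.200.80.50   hdp-test
```

For more than a handful of nodes, distributing the entry via DNS or a configuration-management tool is easier to keep consistent than editing each /etc/hosts by hand.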


2 REPLIES 2

Contributor

It's working for me now.

Thanks a lot! 🙂