Unable to list files and directories in a remote cluster

Hi Team,

I am unable to list the files and directories in a remote cluster using the NameNode hostname, even though the cluster is not Kerberos enabled. Kindly help us fix the issue.

Note: the same command works fine from Dev to Test, but fails from Dev to QA. The working Dev-to-Test listing and the failing Dev-to-QA attempt are shown below.


Dev to Test (working):

[hdfs@ ~]$ hdfs dfs -ls hdfs://namenode/
Found 17 items
drwxr-xr-x - hdfs hdfs 0 2019-06-03 12:25 hdfs://namenode/app
drwxrwxrwt - yarn hadoop 0 2019-10-15 06:39 hdfs://namenode/app-logs
drwxr-xr-x - hdfs hdfs 0 2019-05-29 22:13 hdfs://namenode/apps
drwxr-xr-x - yarn hadoop 0 2019-05-29 22:05 hdfs://namenode/ats
drwxr-xr-x - hdfs hdfs 0 2019-05-29 22:05 hdfs://namenode/atsv2
drwxrwxr-x+ - nifi hadoop 0 2019-06-03 12:00 hdfs://namenode/data
drwxr-xr-x - hdfs hdfs 0 2019-05-29 22:05 hdfs://namenode/hdp
drwxr-xr-x - hive hdfs 0 2019-12-31 07:24 hdfs://namenode/home
drwx------ - livy hdfs 0 2019-05-29 22:06 hdfs://namenode/livy2-recovery
drwxr-xr-x - mapred hdfs 0 2019-05-29 22:05 hdfs://namenode/mapred
drwxrwxrwx - mapred hadoop 0 2019-05-29 22:05 hdfs://namenode/mr-history
drwxr-xr-x - hdfs hdfs 0 2019-05-29 22:05 hdfs://namenode/ranger
drwxrwxrwx - spark hadoop 0 2020-01-14 08:03 hdfs://namenode/spark2-history
drwxrwxrwx - hdfs hdfs 0 2019-12-06 13:52 hdfs://namenode/system
drwxrwxrwx - hdfs hdfs 0 2019-09-30 05:21 hdfs://namenode/tmp
drwxrwxr-x - hdfs hdfs 0 2019-10-14 04:57 hdfs://namenode/user
drwxr-xr-x - hdfs hdfs 0 2019-05-29 22:06 hdfs://namenode/warehouse


Dev to QA (failing):

[hdfs@ ~]$ hdfs dfs -ls hdfs://namenode/
ls: DestHost:destPort namenode:8020 , LocalHost:localPort namenode/10.49.194.171:0. Failed on local exception: java.io.IOException: Connection reset by peer


4 Replies

Master Guru

@saivenkatg55 Can you confirm that both the source and destination clusters are not Kerberized?
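
For reference, a quick way to confirm this is to query the client configuration directly; hdfs getconf reads the local core-site.xml, so run it on a node of each cluster:

# Prints "simple" on a non-Kerberized cluster and "kerberos" on a secured one
hdfs getconf -confKey hadoop.security.authentication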

Please upload the full stack trace from the error message.

Also, it is worth checking whether this use case is better served by HDFS's NFS Gateway role [1], which is designed for exactly this kind of remote cluster access.

[1] Adding and Configuring an NFS Gateway - https://www.cloudera.com/documentation/enterprise/5-12-x/topics/admin_hdfs_nfsgateway.html
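
As a sketch of what client access would look like once a gateway role is running on the remote cluster (the gateway hostname and mount point below are placeholders; the mount options follow the linked documentation for NFSv3):

# Mount the remote cluster's HDFS through its NFS Gateway, then browse it like a local filesystem
mount -t nfs -o vers=3,proto=tcp,nolock <gateway_host>:/ /mnt/remote-hdfs
ls /mnt/remote-hdfs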


Cheers!

@GangWar Both the source and destination clusters are not Kerberized yet.

Please find the full stack trace below.

[hdfs@w0lxthdp01 ~]$ HADOOP_ROOT_LOGGER=DEBUG,console hdfs dfs -ls hdfs://w0lxqhdp01
20/01/17 02:45:31 DEBUG util.Shell: setsid exited with exit code 0
20/01/17 02:45:31 DEBUG conf.Configuration: parsing URL jar:file:/usr/hdp/3.0.1.0-187/hadoop/hadoop-common-3.1.1.3.0.1.0-187.jar!/core-default.xml
20/01/17 02:45:31 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@66480dd7
20/01/17 02:45:31 DEBUG conf.Configuration: parsing URL file:/etc/hadoop/3.0.1.0-187/0/core-site.xml
20/01/17 02:45:31 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@1877ab81
20/01/17 02:45:31 DEBUG security.SecurityUtil: Setting hadoop.security.token.service.use_ip to true
20/01/17 02:45:31 DEBUG security.Groups: Creating new Groups object
20/01/17 02:45:31 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
20/01/17 02:45:31 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
20/01/17 02:45:31 DEBUG security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
20/01/17 02:45:31 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
20/01/17 02:45:31 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
20/01/17 02:45:31 DEBUG core.Tracer: sampler.classes = ; loaded no samplers
20/01/17 02:45:31 DEBUG core.Tracer: span.receiver.classes = ; loaded no span receivers
20/01/17 02:45:31 DEBUG security.UserGroupInformation: hadoop login
20/01/17 02:45:31 DEBUG security.UserGroupInformation: hadoop login commit
20/01/17 02:45:31 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: hdfs
20/01/17 02:45:31 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: hdfs" with name hdfs
20/01/17 02:45:31 DEBUG security.UserGroupInformation: User entry: "hdfs"
20/01/17 02:45:31 DEBUG security.UserGroupInformation: UGI loginUser:hdfs (auth:SIMPLE)
20/01/17 02:45:31 DEBUG core.Tracer: sampler.classes = ; loaded no samplers
20/01/17 02:45:31 DEBUG core.Tracer: span.receiver.classes = ; loaded no span receivers
20/01/17 02:45:31 DEBUG fs.FileSystem: Loading filesystems
20/01/17 02:45:31 DEBUG fs.FileSystem: file:// = class org.apache.hadoop.fs.LocalFileSystem from /usr/hdp/3.0.1.0-187/hadoop/hadoop-common-3.1.1.3.0.1.0-187.jar
20/01/17 02:45:31 DEBUG fs.FileSystem: viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /usr/hdp/3.0.1.0-187/hadoop/hadoop-common-3.1.1.3.0.1.0-187.jar
20/01/17 02:45:31 DEBUG fs.FileSystem: har:// = class org.apache.hadoop.fs.HarFileSystem from /usr/hdp/3.0.1.0-187/hadoop/hadoop-common-3.1.1.3.0.1.0-187.jar
20/01/17 02:45:31 DEBUG fs.FileSystem: http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /usr/hdp/3.0.1.0-187/hadoop/hadoop-common-3.1.1.3.0.1.0-187.jar
20/01/17 02:45:31 DEBUG fs.FileSystem: https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /usr/hdp/3.0.1.0-187/hadoop/hadoop-common-3.1.1.3.0.1.0-187.jar
20/01/17 02:45:31 DEBUG fs.FileSystem: hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /usr/hdp/3.0.1.0-187/hadoop-hdfs/hadoop-hdfs-client-3.1.1.3.0.1.0-187.jar
20/01/17 02:45:31 DEBUG fs.FileSystem: webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /usr/hdp/3.0.1.0-187/hadoop-hdfs/hadoop-hdfs-client-3.1.1.3.0.1.0-187.jar
20/01/17 02:45:31 DEBUG fs.FileSystem: swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /usr/hdp/3.0.1.0-187/hadoop-hdfs/hadoop-hdfs-client-3.1.1.3.0.1.0-187.jar
20/01/17 02:45:32 DEBUG gcs.GoogleHadoopFileSystemBase: GHFS version: 1.9.0.3.0.1.0-187
20/01/17 02:45:32 DEBUG fs.FileSystem: gs:// = class com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem from /usr/hdp/3.0.1.0-187/hadoop-mapreduce/gcs-connector-1.9.0.3.0.1.0-187-shaded.jar
20/01/17 02:45:32 DEBUG fs.FileSystem: s3n:// = class org.apache.hadoop.fs.s3native.NativeS3FileSystem from /usr/hdp/3.0.1.0-187/hadoop-mapreduce/hadoop-aws-3.1.1.3.0.1.0-187.jar
20/01/17 02:45:32 DEBUG fs.FileSystem: Looking for FS supporting hdfs
20/01/17 02:45:32 DEBUG fs.FileSystem: looking for configuration option fs.hdfs.impl
20/01/17 02:45:32 DEBUG fs.FileSystem: Looking in service filesystems for implementation class
20/01/17 02:45:32 DEBUG fs.FileSystem: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
20/01/17 02:45:32 DEBUG impl.DfsClientConf: dfs.client.use.legacy.blockreader.local = false
20/01/17 02:45:32 DEBUG impl.DfsClientConf: dfs.client.read.shortcircuit = true
20/01/17 02:45:32 DEBUG impl.DfsClientConf: dfs.client.domain.socket.data.traffic = false
20/01/17 02:45:32 DEBUG impl.DfsClientConf: dfs.domain.socket.path = /var/lib/hadoop-hdfs/dn_socket
20/01/17 02:45:32 DEBUG hdfs.DFSClient: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
20/01/17 02:45:32 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
20/01/17 02:45:32 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@51f116b8
20/01/17 02:45:32 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@29e495ff
20/01/17 02:45:32 DEBUG unix.DomainSocketWatcher: org.apache.hadoop.net.unix.DomainSocketWatcher$2@bdfd7f3: starting with interruptCheckPeriodMs = 60000
20/01/17 02:45:32 DEBUG shortcircuit.DomainSocketFactory: The short-circuit local reads feature is enabled.
20/01/17 02:45:32 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
20/01/17 02:45:32 DEBUG ipc.Client: The ping interval is 60000 ms.
20/01/17 02:45:32 DEBUG ipc.Client: Connecting to w0lxqhdp01/10.49.70.13:8020
20/01/17 02:45:32 DEBUG ipc.Client: IPC Client (1603198149) connection to w0lxqhdp01/10.49.70.13:8020 from hdfs: starting, having connections 1
20/01/17 02:45:32 DEBUG ipc.Client: IPC Client (1603198149) connection to w0lxqhdp01/10.49.70.13:8020 from hdfs sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
20/01/17 02:45:32 DEBUG ipc.Client: closing ipc connection to w0lxqhdp01/10.49.70.13:8020: Connection reset by peer
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:554)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1802)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1167)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1063)
20/01/17 02:45:32 DEBUG ipc.Client: IPC Client (1603198149) connection to w0lxqhdp01/10.49.70.13:8020 from hdfs: closed
20/01/17 02:45:32 DEBUG ipc.Client: IPC Client (1603198149) connection to w0lxqhdp01/10.49.70.13:8020 from hdfs: stopped, remaining connections 0
20/01/17 02:45:32 DEBUG retry.RetryInvocationHandler: Exception while invoking call #0 ClientNamenodeProtocolTranslatorPB.getFileInfo over null. Not retrying because try once and fail.
java.io.IOException: DestHost:destPort w0lxqhdp01:8020 , LocalHost:localPort w0lxthdp01.ifc.org/10.49.194.14:0. Failed on local exception: java.io.IOException: Connection reset by peer
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1501)
at org.apache.hadoop.ipc.Client.call(Client.java:1443)
at org.apache.hadoop.ipc.Client.call(Client.java:1353)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:900)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1654)
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1583)
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1580)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1595)
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:65)
at org.apache.hadoop.fs.Globber.doGlob(Globber.java:270)
at org.apache.hadoop.fs.Globber.glob(Globber.java:149)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:2067)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:353)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:250)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:233)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:104)
at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
Caused by: java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:554)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1802)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1167)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1063)
ls: DestHost:destPort w0lxqhdp01:8020 , LocalHost:localPort w0lxthdp01.ifc.org/10.49.194.14:0. Failed on local exception: java.io.IOException: Connection reset by peer
20/01/17 02:45:32 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@29e495ff
20/01/17 02:45:32 DEBUG ipc.Client: removing client from cache: org.apache.hadoop.ipc.Client@29e495ff
20/01/17 02:45:32 DEBUG ipc.Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@29e495ff
20/01/17 02:45:32 DEBUG ipc.Client: Stopping client
20/01/17 02:45:32 DEBUG util.ShutdownHookManager: Completed shutdown in 0.003 seconds; Timeouts: 0
20/01/17 02:45:32 DEBUG util.ShutdownHookManager: ShutdownHookManger completed shutdown.
[hdfs@w0lxthdp01 ~]$

New Contributor

What about firewalls? Please check connectivity to the NameNode RPC port from the source host:

# nc -v <Dest IP> 8020
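
If the port connects but the RPC connection still resets, it may also be worth confirming on the QA NameNode host itself that the NameNode process is the one bound to 8020 (a generic Linux check; assumes the ss utility from iproute2 is available):

# ss -tlnp | grep 8020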


Thanks

Naresh.


@ramineni Please find the netcat output below:


[hdfs@w0lxdhdp01 ~]$ nc -v w0lxqhdp01 8020
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 10.49.70.13:8020.