
Warnings in copyFromLocal

Hi everybody,

Setup (on-premises):

Cloudera Runtime 7.1.7-1.cdh7.1.7.p0.15945976

Cloudera Manager 7.5.4

When I copy files from local to the cluster with copyFromLocal, I get the exceptions below. The files are copied correctly, but the command always prints these warnings while it runs:

[user1@machine01 ~]$ hdfs dfs -copyFromLocal -f files01/* /user/user1/files01
22/03/17 13:31:22 WARN hdfs.DataStreamer: Caught exception
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1252)
    at java.lang.Thread.join(Thread.java:1326)
    at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:1001)
    at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:640)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:813)
22/03/17 13:31:24 WARN hdfs.DataStreamer: Caught exception
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1252)
    at java.lang.Thread.join(Thread.java:1326)
    at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:1001)
    at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:640)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:813)
22/03/17 13:31:25 WARN hdfs.DataStreamer: Caught exception
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1252)
    at java.lang.Thread.join(Thread.java:1326)
    at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:1001)
    at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:640)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:813)
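
Reading the stack trace, the streamer thread seems to get interrupted while it join()s its responder thread in DataStreamer.closeResponder(), i.e. on the block-close path after the data has already been sent. Below is a minimal, self-contained Java sketch of that join/interrupt pattern (all names are hypothetical, this is not Hadoop code) that reproduces the same exception, which would explain why the files arrive intact despite the warning:

// Minimal sketch (hypothetical names, not Hadoop code) of the pattern in
// the stack trace: a thread interrupted while join()-ing a helper thread
// gets an InterruptedException out of Thread.join(), just like
// DataStreamer.closeResponder().
public class JoinInterruptSketch {
    public static void main(String[] args) throws Exception {
        // Stands in for the responder thread that drains packet acks.
        Thread responder = new Thread(() -> {
            try {
                Thread.sleep(1_000);
            } catch (InterruptedException ignored) {
                // The responder exits when interrupted, as on stream close.
            }
        });
        responder.start();

        // Stands in for the streamer thread ending a block: it is
        // interrupted while it waits in join().
        Thread streamer = new Thread(() -> {
            try {
                Thread.currentThread().interrupt(); // simulate the interrupt from the close path
                responder.join(); // throws immediately: the interrupt status is set
            } catch (InterruptedException e) {
                e.printStackTrace(); // same shape as the WARN above
            } finally {
                responder.interrupt(); // let the responder finish promptly
            }
        });
        streamer.start();
        streamer.join();
    }
}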

I've restarted HDFS and YARN, but the problem remains.

When I enable DEBUG logging (export HADOOP_ROOT_LOGGER=DEBUG,console), this is the output:

22/03/17 11:09:25 DEBUG hdfs.DataStreamer: Connecting to datanode 10.0.12.5:9866
22/03/17 11:09:25 DEBUG hdfs.DataStreamer: Send buf size 60928
22/03/17 11:09:25 DEBUG sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
22/03/17 11:09:25 DEBUG sasl.SaslDataTransferClient: SASL client doing general handshake for addr = /10.0.12.5, datanodeId = DatanodeInfoWithStorage[10.0.12.5:9866,DS-59f009fa-27bb-42b6-b5f6-eb9c81825032,DISK]
22/03/17 11:09:25 DEBUG sasl.DataTransferSaslUtil: Verifying QOP, requested QOP = [auth-conf], negotiated QOP = auth-conf
22/03/17 11:09:25 DEBUG sasl.SaslDataTransferClient: Client using cipher suite AES/CTR/NoPadding with server /10.0.12.5
22/03/17 11:09:25 DEBUG sasl.DataTransferSaslUtil: Creating IOStreamPair of CryptoInputStream and CryptoOutputStream.
22/03/17 11:09:25 DEBUG crypto.OpensslAesCtrCryptoCodec: Using org.apache.hadoop.crypto.random.OpensslSecureRandom as random number generator.
22/03/17 11:09:25 DEBUG util.PerformanceAdvisory: Using crypto codec org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.
22/03/17 11:09:25 DEBUG hdfs.DataStreamer: nodes [DatanodeInfoWithStorage[10.0.12.5:9866,DS-59f009fa-27bb-42b6-b5f6-eb9c81825032,DISK], DatanodeInfoWithStorage[10.53.138.147:9866,DS-8e258c72-b3a4-4d8f-9acf-3b6907111d3f,DISK], DatanodeInfoWithStorage[10.53.138.146:9866,DS-66df5a2f-7f83-42a6-9148-db4a2878392b,DISK]] storageTypes [DISK, DISK, DISK] storageIDs [DS-59f009fa-27bb-42b6-b5f6-eb9c81825032, DS-8e258c72-b3a4-4d8f-9acf-3b6907111d3f, DS-66df5a2f-7f83-42a6-9148-db4a2878392b]
22/03/17 11:09:25 DEBUG hdfs.DataStreamer: blk_1077805458_4217312 sending packet seqno: 0 offsetInBlock: 0 lastPacketInBlock: false lastByteOffsetInBlock: 1582
22/03/17 11:09:25 DEBUG hdfs.DataStreamer: stage=DATA_STREAMING, blk_1077805458_4217312
22/03/17 11:09:25 DEBUG hdfs.DataStreamer: DFSClient seqno: 0 reply: SUCCESS reply: SUCCESS reply: SUCCESS downstreamAckTimeNanos: 691728 flag: 0 flag: 0 flag: 0
22/03/17 11:09:25 DEBUG hdfs.DataStreamer: blk_1077805458_4217312 sending packet seqno: 1 offsetInBlock: 1582 lastPacketInBlock: true lastByteOffsetInBlock: 1582
22/03/17 11:09:25 DEBUG hdfs.DataStreamer: DFSClient seqno: 1 reply: SUCCESS reply: SUCCESS reply: SUCCESS downstreamAckTimeNanos: 2126389 flag: 0 flag: 0 flag: 0
22/03/17 11:09:25 DEBUG hdfs.DataStreamer: Closing old block BP-1293499530-10.53.138.144-1637067060765:blk_1077805458_4217312
22/03/17 11:09:25 DEBUG ipc.Client: IPC Client (355115154) connection to host12.2subdomain.subdomain.domain.net/10.0.12.8:8020 from user01@realm.subdomain.domain.net sending #203296 org.apache.hadoop.hdfs.protocol.ClientProtocol.complete
22/03/17 11:09:25 DEBUG ipc.Client: IPC Client (355115154) connection to host12.2subdomain.subdomain.domain.net/10.0.12.8:8020 from user01@realm.subdomain.domain.net got value #203296
22/03/17 11:09:25 DEBUG ipc.ProtobufRpcEngine: Call: complete took 2ms
22/03/17 11:09:25 WARN hdfs.DataStreamer: Caught exception
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1252)
    at java.lang.Thread.join(Thread.java:1326)
    at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:1001)
    at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:640)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:813)
22/03/17 11:09:25 DEBUG ipc.Client: IPC Client (355115154) connection to host12.2subdomain.subdomain.domain.net/10.0.12.8:8020 from user01@realm.subdomain.domain.net sending #203297 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
22/03/17 11:09:25 DEBUG ipc.Client: IPC Client (355115154) connection to host12.2subdomain.subdomain.domain.net/10.0.12.8:8020 from user01@realm.subdomain.domain.net got value #203297
22/03/17 11:09:25 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
22/03/17 11:09:25 DEBUG hdfs.DFSOutputStream: Closing an already closed stream. [Stream:true, streamer:true]
22/03/17 11:09:25 DEBUG ipc.Client: IPC Client (355115154) connection to host12.2subdomain.subdomain.domain.net/10.0.12.8:8020 from user01@realm.subdomain.domain.net sending #203298 org.apache.hadoop.hdfs.protocol.ClientProtocol.delete
22/03/17 11:09:25 DEBUG ipc.Client: IPC Client (355115154) connection to host12.2subdomain.subdomain.domain.net/10.0.12.8:8020 from user01@realm.subdomain.domain.net got value #203298
22/03/17 11:09:25 DEBUG ipc.ProtobufRpcEngine: Call: delete took 3ms
22/03/17 11:09:25 DEBUG ipc.Client: IPC Client (355115154) connection to host12.2subdomain.subdomain.domain.net/10.0.12.8:8020 from user01@realm.subdomain.domain.net sending #203299 org.apache.hadoop.hdfs.protocol.ClientProtocol.rename
22/03/17 11:09:25 DEBUG ipc.Client: IPC Client (355115154) connection to host12.2subdomain.subdomain.domain.net/10.0.12.8:8020 from user01@realm.subdomain.domain.net got value #203299
22/03/17 11:09:25 DEBUG ipc.ProtobufRpcEngine: Call: rename took 2ms
22/03/17 11:09:25 DEBUG ipc.Client: IPC Client (355115154) connection to host12.2subdomain.subdomain.domain.net/10.0.12.8:8020 from user01@realm.subdomain.domain.net sending #203300 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
22/03/17 11:09:25 DEBUG ipc.Client: IPC Client (355115154) connection to host12.2subdomain.subdomain.domain.net/10.0.12.8:8020 from user01@realm.subdomain.domain.net got value #203300
22/03/17 11:09:25 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 0ms
22/03/17 11:09:25 DEBUG ipc.Client: IPC Client (355115154) connection to host12.2subdomain.subdomain.domain.net/10.0.12.8:8020 from user01@realm.subdomain.domain.net sending #203301 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
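
If I read the DEBUG output correctly, both packets get SUCCESS acks from all three datanodes and the complete RPC to the NameNode returns before the WARN appears, so the write itself finishes cleanly and the warning only fires while the stream is torn down ("Closing an already closed stream"). For comparison, the same upload can also be done through the Java FileSystem API, which writes through the same DFSOutputStream/DataStreamer close path as the shell command; a minimal sketch (the paths are placeholders, and it assumes the cluster's client configuration is on the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch of the same upload via the Java API (placeholder paths).
// copyFromLocalFile() goes through DFSOutputStream, so it exercises the
// same DataStreamer close path as "hdfs dfs -copyFromLocal".
public class CopyFromLocalSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // picks up core-site.xml / hdfs-site.xml
        try (FileSystem fs = FileSystem.get(conf)) {
            fs.copyFromLocalFile(
                false,                              // do not delete the source
                true,                               // overwrite, like -f
                new Path("files01/example.txt"),    // hypothetical local file
                new Path("/user/user1/files01"));   // destination directory
        }
    }
}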

Could this be something network-related? Any ideas?

Thanks.
