
Apache NiFi 1.0.0-Beta PutHDFS Giving an Error

Master Guru

2016-08-10 15:27:18,995 ERROR [Timer-Driven Process Thread-7] o.apache.nifi.processors.hadoop.PutHDFS
org.apache.nifi.processor.exception.ProcessException: IOException thrown from PutHDFS[id=48a217b5-174b-192a-9896-4ccd470d25cc]: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /email/headers/.506170560796063 could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1588)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3116)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3040)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:789)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
    at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:1902) ~[na:na]
    at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:1851) ~[na:na]
    at org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:278) ~[nifi-hdfs-processors-1.0.0-BETA.jar:1.0.0-BETA]
    at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.0.0-BETA.jar:1.0.0-BETA]
    at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1060) [nifi-framework-core-1.0.0-BETA.jar:1.0.0-BETA]
    at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.0.0-BETA.jar:1.0.0-BETA]
    at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.0.0-BETA.jar:1.0.0-BETA]
    at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:127) [nifi-framework-core-1.0.0-BETA.jar:1.0.0-BETA]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_91]
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_91]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_91]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_91]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_91]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_91]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
Caused by: org.apache.hadoop.ipc.RemoteException: File /email/headers/.506170560796063 could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1588)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3116)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3040)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:789)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)

1 ACCEPTED SOLUTION

Super Collaborator

Hi,

Where are you trying to load the files from? I got the same error when I tried to upload files from my local HDF onto my VM.

I solved the issue with the three steps below (see the client-side sketch after the list).

1. Added the following property to hdfs-site.xml:

<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>

2. Added a port-forwarding rule in my VM for port 50010 (the default DataNode data-transfer port, dfs.datanode.address).

3. Added "127.0.0.1 localhost sandbox.hortonworks.com" to my local hosts file.
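For reference, here is a minimal Java sketch of what the same fix looks like from the HDFS client's side, assuming a Hadoop 2.x client on the classpath. The NameNode URI, target path, and payload are placeholders for illustration, not values from this thread:

    import java.net.URI;
    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PutHdfsByHostname {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Same setting as the hdfs-site.xml property above: connect to
            // DataNodes by hostname rather than the internal IP the NameNode
            // reports, which is unroutable from outside the VM.
            conf.setBoolean("dfs.client.use.datanode.hostname", true);

            // Placeholder NameNode URI and target path.
            FileSystem fs = FileSystem.get(
                    URI.create("hdfs://sandbox.hortonworks.com:8020"), conf);
            Path target = new Path("/tmp/hostname-test.txt");

            // With the setting above plus the 50010 port forward and the
            // hosts-file entry, this write should reach the DataNode.
            try (FSDataOutputStream out = fs.create(target, true)) {
                out.write("hello hdfs".getBytes(StandardCharsets.UTF_8));
            }
        }
    }

NiFi's PutHDFS picks up the same hdfs-site.xml through its Hadoop Configuration Resources property, so setting the property in that file is the equivalent fix for the flow in question.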


4 REPLIES

Expert Contributor

Based on the error below, first check whether your single datanode is actually running. If it is, make sure it is not listed in dfs.hosts.exclude in hdfs-site.xml and that it has enough free space to store block files. A quick programmatic check follows the quoted error.

    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_91]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
Caused by: org.apache.hadoop.ipc.RemoteException: File /email/headers/.506170560796063 could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
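As a rough way to check both conditions from code, here is a minimal sketch against the public Hadoop 2.x client API (the NameNode URI is a placeholder); it mirrors what "hdfs dfsadmin -report" prints on the command line:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FsStatus;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
    import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;

    public class DatanodeHealthCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder NameNode URI; replace with your cluster's address.
            FileSystem fs = FileSystem.get(
                    URI.create("hdfs://sandbox.hortonworks.com:8020"),
                    new Configuration());

            // Cluster-wide capacity as seen by the NameNode.
            FsStatus status = fs.getStatus();
            System.out.printf("capacity=%d used=%d remaining=%d%n",
                    status.getCapacity(), status.getUsed(), status.getRemaining());

            // Per-DataNode view: is the single DataNode live, and has it space?
            if (fs instanceof DistributedFileSystem) {
                DistributedFileSystem dfs = (DistributedFileSystem) fs;
                for (DatanodeInfo dn : dfs.getDataNodeStats(DatanodeReportType.LIVE)) {
                    System.out.printf("live: %s remaining=%d bytes%n",
                            dn.getHostName(), dn.getRemaining());
                }
                for (DatanodeInfo dn : dfs.getDataNodeStats(DatanodeReportType.DEAD)) {
                    System.out.println("dead: " + dn.getHostName());
                }
            }
        }
    }

If the datanode shows up as live with free space, the "1 node(s) are excluded" part usually points at a connectivity problem between the client and the datanode (as in the accepted solution) rather than at the exclude file.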


Master Guru

Local HDF to VM? Let me check.

Master Guru

This was from my local Mac to the Sandbox.