
HDFS NFS copy command fails with "cannot create regular file: Input/output error"

Contributor

cp /tmp/Data_src/1MB_File /tmp/tmp_mnt/NFS_DIR/1MB_File

cp: cannot create regular file ‘/tmp/tmp_mnt/NFS_DIR/1MB_File’: Input/output error
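
For context, /tmp/tmp_mnt is assumed to be an HDFS NFS gateway mount; a typical setup looks roughly like the sketch below (the gateway host is a placeholder, and the mount options follow the usual NFSv3 recommendations for the gateway).

# Mount HDFS through the NFS gateway (NFSv3 only); <nfs_gateway_host> is a placeholder
mount -t nfs -o vers=3,proto=tcp,nolock,sync <nfs_gateway_host>:/ /tmp/tmp_mnt

# The failing copy from the question
cp /tmp/Data_src/1MB_File /tmp/tmp_mnt/NFS_DIR/1MB_File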

1 ACCEPTED SOLUTION

@Raghav Kumar Gautam

As you can see from the logs, you need to set dfs.namenode.accesstime.precision to 3600000 (the HDFS default of one hour). The older property dfs.access.time.precision is deprecated, and Ambari sets the new property, dfs.namenode.accesstime.precision, to 0, which disables access time updates entirely. Setting the new property should resolve the issue.

View solution in original post

5 REPLIES


@Raghav Kumar Gautam - Can you please share the HDFS NFS server logs?

Contributor

From the HDFS NFS logs, I see the exception below:

org.apache.hadoop.ipc.RemoteException(java.io.IOException): Access time for hdfs is not configured.  Please set dfs.namenode.accesstime.precision configuration parameter.
        at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setTimes(FSDirAttrOp.java:105)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:1953)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:1360)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:926)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1833)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
        at org.apache.hadoop.ipc.Client.call(Client.java:1496)
        at org.apache.hadoop.ipc.Client.call(Client.java:1396)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
        at com.sun.proxy.$Proxy12.setTimes(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setTimes(ClientNamenodeProtocolTranslatorPB.java:901)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

I have set the dfs.access.time.precision value to 360000 in hdfs-site.xml. Please let me know what I am missing here.
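
A quick way to confirm which value the configuration actually resolves to is to query the current (non-deprecated) key; a sketch, assuming the hdfs client is on the path. Note that this reads the local configuration files, so the NameNode still needs a restart for any change to take effect.

# Prints the value resolved from the local Hadoop configuration, in milliseconds; 0 means access time updates are disabled
hdfs getconf -confKey dfs.namenode.accesstime.precision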

Contributor

Hi Raghav, I am also facing the same issue. Is it resolved for you?

@Raghav Kumar Gautam

As you can see from the logs, you need to set dfs.namenode.accesstime.precision to 3600000 (the HDFS default of one hour). The older property dfs.access.time.precision is deprecated, and Ambari sets the new property, dfs.namenode.accesstime.precision, to 0, which disables access time updates entirely. Setting the new property should resolve the issue.
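
A minimal sketch of the corresponding hdfs-site.xml entry (when the cluster is managed by Ambari, set it through the HDFS configuration there rather than editing the file by hand, and restart the NameNode afterwards):

<!-- hdfs-site.xml: enable access time updates with 1-hour precision (3600000 ms) -->
<property>
  <name>dfs.namenode.accesstime.precision</name>
  <value>3600000</value>
</property>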

New Contributor

In CDP 7.1.8 you can't set the access time parameter dfs.access.time.precision to zero; it doesn't accept this value when updating the configuration.