
Unable to copy files from HDFS to mounted device on Local FS

Explorer

We have a device mounted on our client machine that is accessible from both Windows and Unix environments as a root folder (/nas_store).

We are able to -get or -copyToLocal to our local home directories such as /home/abhinay/, but copying to /nas_store fails with the following error:

get: Operation not permitted

Can anyone suggest whether any changes need to be made in the Hadoop config files?

6 REPLIES

Re: Unable to copy files from HDFS to mounted device on Local FS

Master Guru
Try running your command with the TRACE logger level enabled:

export HADOOP_ROOT_LOGGER=TRACE,console
hadoop fs -get /hdfs/file /nas/path/

Are you also certain that the user you are running this as has adequate permissions to write to the target?
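
A quick way to check the local side is to exercise the mount directly with regular shell commands. A minimal sketch, assuming /nas_store from the original post is the target (the test file name is just a placeholder):

id                                                        # which user and groups the command will run as
ls -ld /nas_store                                         # ownership and mode of the mount point
touch /nas_store/write_test && rm /nas_store/write_test   # can this user create and delete a file there?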

Re: Unable to copy files from HDFS to mounted device on Local FS

Explorer

Hi, I tried it with the debug/trace logger set and got the console trace below when running:

hadoop fs -get /user/grfe/test

16/01/29 07:57:32 DEBUG util.Shell: setsid exited with exit code 0
16/01/29 07:57:32 DEBUG conf.Configuration: parsing URL jar:file:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/jars/hadoop-common-2.6.0-cdh5.4.2.jar!/core-default.xml
16/01/29 07:57:32 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@6c164690
16/01/29 07:57:32 DEBUG conf.Configuration: parsing URL file:/etc/hadoop/conf.cloudera.yarn/core-site.xml
16/01/29 07:57:32 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@45e70a5a
16/01/29 07:57:32 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, about=, value=[Rate of successful kerberos logins and latency (milliseconds)], always=false, type=DEFAULT, sampleName=Ops)
16/01/29 07:57:32 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, about=, value=[Rate of failed kerberos logins and latency (milliseconds)], always=false, type=DEFAULT, sampleName=Ops)
16/01/29 07:57:32 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, about=, value=[GetGroups], always=false, type=DEFAULT, sampleName=Ops)
16/01/29 07:57:32 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
16/01/29 07:57:32 DEBUG security.Groups:  Creating new Groups object
16/01/29 07:57:32 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000; warningDeltaMs=5000
16/01/29 07:57:32 DEBUG security.UserGroupInformation: hadoop login
16/01/29 07:57:32 DEBUG security.UserGroupInformation: hadoop login commit
16/01/29 07:57:32 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: grfe
16/01/29 07:57:32 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: grfe" with name grfe
16/01/29 07:57:32 DEBUG security.UserGroupInformation: User entry: "grfe"
16/01/29 07:57:32 DEBUG security.UserGroupInformation: UGI loginUser:grfe (auth:SIMPLE)
16/01/29 07:57:32 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
16/01/29 07:57:32 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
16/01/29 07:57:32 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
16/01/29 07:57:32 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
16/01/29 07:57:32 DEBUG hdfs.HAUtil: No HA service delegation token found for logical URI hdfs://nameservice1
16/01/29 07:57:32 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
16/01/29 07:57:32 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
16/01/29 07:57:32 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
16/01/29 07:57:32 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
16/01/29 07:57:32 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
16/01/29 07:57:32 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@36fce9d7
16/01/29 07:57:32 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@734a81fb
16/01/29 07:57:33 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
16/01/29 07:57:33 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
16/01/29 07:57:33 DEBUG unix.DomainSocketWatcher: org.apache.hadoop.net.unix.DomainSocketWatcher$1@37420057: starting with interruptCheckPeriodMs = 60000
16/01/29 07:57:33 TRACE unix.DomainSocketWatcher: DomainSocketWatcher(1817643324): adding notificationSocket 155, connected to 154
16/01/29 07:57:33 DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
16/01/29 07:57:33 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
16/01/29 07:57:33 TRACE ipc.ProtobufRpcEngine: 1: Call -> lhrrhegapp026.enterprisenet.org/10.90.50.36:8020: getFileInfo {src: "/user/grfe/test"}
16/01/29 07:57:33 DEBUG ipc.Client: The ping interval is 60000 ms.
16/01/29 07:57:33 DEBUG ipc.Client: Connecting to lhrrhegapp026.enterprisenet.org/10.90.50.36:8020
16/01/29 07:57:33 DEBUG ipc.Client: IPC Client (2074563683) connection to lhrrhegapp026.enterprisenet.org/10.90.50.36:8020 from grfe: starting, having connections 1
16/01/29 07:57:33 DEBUG ipc.Client: IPC Client (2074563683) connection to lhrrhegapp026.enterprisenet.org/10.90.50.36:8020 from grfe sending #0
16/01/29 07:57:33 DEBUG ipc.Client: IPC Client (2074563683) connection to lhrrhegapp026.enterprisenet.org/10.90.50.36:8020 from grfe got value #0
16/01/29 07:57:33 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 153ms
16/01/29 07:57:33 TRACE ipc.ProtobufRpcEngine: 1: Response <- lhrrhegapp026.enterprisenet.org/10.90.50.36:8020: getFileInfo {fs { fileType: IS_FILE path: "" length: 34 permission { perm: 438 } owner: "grfe" group: "grfe" modification_time: 1454072096250 access_time: 1454072096030 block_replication: 3 blocksize: 134217728 fileId: 2238492 childrenNum: 0 storagePolicy: 0 }}
16/01/29 07:57:33 TRACE ipc.ProtobufRpcEngine: 1: Call -> lhrrhegapp026.enterprisenet.org/10.90.50.36:8020: getBlockLocations {src: "/user/grfe/test" offset: 0 length: 1342177280}
16/01/29 07:57:33 DEBUG ipc.Client: IPC Client (2074563683) connection to lhrrhegapp026.enterprisenet.org/10.90.50.36:8020 from grfe sending #1
16/01/29 07:57:33 DEBUG ipc.Client: IPC Client (2074563683) connection to lhrrhegapp026.enterprisenet.org/10.90.50.36:8020 from grfe got value #1
16/01/29 07:57:33 DEBUG ipc.ProtobufRpcEngine: Call: getBlockLocations took 2ms
16/01/29 07:57:33 TRACE ipc.ProtobufRpcEngine: 1: Response <- lhrrhegapp026.enterprisenet.org/10.90.50.36:8020: getBlockLocations {locations { fileLength: 34 blocks { b { poolId: "BP-1678753434-10.90.50.35-1443072063430" blockId: 1075179851 generationStamp: 1440553 numBytes: 34 } offset: 0 locs { id { ipAddr: "10.90.50.38" hostName: "lhrrhegapp028.enterprisenet.org" datanodeUuid: "a2a69514-b1c3-4e5f-9440-ef23f17f2959" xferPort: 50010 infoPort: 50075 ipcPort: 50020 infoSecurePort: 0 } capacity: 21730938363904 dfsUsed: 4905367643925 remaining: 16820301402601 blockPoolUsed: 4905367643925 lastUpdate: 1454072251642 xceiverCount: 10 location: "/default" adminState: NORMAL cacheCapacity: 15605956608 cacheUsed: 0 } locs { id { ipAddr: "10.90.50.36" hostName: "lhrrhegapp026.enterprisenet.org" datanodeUuid: "eff94a9c-520f-4b56-9b18-c0cc4d061b99" xferPort: 50010 infoPort: 50075 ipcPort: 50020 infoSecurePort: 0 } capacity: 21730938363904 dfsUsed: 4888710815233 remaining: 16836645290441 blockPoolUsed: 4888710815233 lastUpdate: 1454072252599 xceiverCount: 6 location: "/default" adminState: NORMAL cacheCapacity: 15605956608 cacheUsed: 0 } locs { id { ipAddr: "10.90.50.39" hostName: "lhrrhegapp029.enterprisenet.org" datanodeUuid: "8bc97cd5-0603-4195-91a4-f534b21eca93" xferPort: 50010 infoPort: 50075 ipcPort: 50020 infoSecurePort: 0 } capacity: 21730938363904 dfsUsed: 4733740282405 remaining: 16992415831649 blockPoolUsed: 4733740282405 lastUpdate: 1454072252381 xceiverCount: 8 location: "/default" adminState: NORMAL cacheCapacity: 15605956608 cacheUsed: 0 } corrupt: false blockToken { identifier: "" password: "" kind: "" service: "" } isCached: false isCached: false isCached: false storageTypes: DISK storageTypes: DISK storageTypes: DISK storageIDs: "DS-e94ca06b-0787-44a4-a6e7-6881d7ec4ab4" storageIDs: "DS-58bf5d34-4505-4117-a145-4af53b8e5eed" storageIDs: "DS-9dfad515-1207-45b5-a9d0-1c78ffc94fd6" } underConstruction: false lastBlock { b { poolId: "BP-1678753434-10.90.50.35-1443072063430" blockId: 1075179851 generationStamp: 1440553 numBytes: 34 } offset: 0 locs { id { ipAddr: "10.90.50.39" hostName: "lhrrhegapp029.enterprisenet.org" datanodeUuid: "8bc97cd5-0603-4195-91a4-f534b21eca93" xferPort: 50010 infoPort: 50075 ipcPort: 50020 infoSecurePort: 0 } capacity: 21730938363904 dfsUsed: 4733740282405 remaining: 16992415831649 blockPoolUsed: 4733740282405 lastUpdate: 1454072252381 xceiverCount: 8 location: "/default" adminState: NORMAL cacheCapacity: 15605956608 cacheUsed: 0 } locs { id { ipAddr: "10.90.50.36" hostName: "lhrrhegapp026.enterprisenet.org" datanodeUuid: "eff94a9c-520f-4b56-9b18-c0cc4d061b99" xferPort: 50010 infoPort: 50075 ipcPort: 50020 infoSecurePort: 0 } capacity: 21730938363904 dfsUsed: 4888710815233 remaining: 16836645290441 blockPoolUsed: 4888710815233 lastUpdate: 1454072252599 xceiverCount: 6 location: "/default" adminState: NORMAL cacheCapacity: 15605956608 cacheUsed: 0 } locs { id { ipAddr: "10.90.50.38" hostName: "lhrrhegapp028.enterprisenet.org" datanodeUuid: "a2a69514-b1c3-4e5f-9440-ef23f17f2959" xferPort: 50010 infoPort: 50075 ipcPort: 50020 infoSecurePort: 0 } capacity: 21730938363904 dfsUsed: 4905367643925 remaining: 16820301402601 blockPoolUsed: 4905367643925 lastUpdate: 1454072251642 xceiverCount: 10 location: "/default" adminState: NORMAL cacheCapacity: 15605956608 cacheUsed: 0 } corrupt: false blockToken { identifier: "" password: "" kind: "" service: "" } isCached: false isCached: false isCached: false storageTypes: DISK storageTypes: DISK storageTypes: DISK storageIDs: 
"DS-9dfad515-1207-45b5-a9d0-1c78ffc94fd6" storageIDs: "DS-58bf5d34-4505-4117-a145-4af53b8e5eed" storageIDs: "DS-e94ca06b-0787-44a4-a6e7-6881d7ec4ab4" } isLastBlockComplete: true }}
16/01/29 07:57:33 DEBUG hdfs.DFSClient: newInfo = LocatedBlocks{
  fileLength=34
  underConstruction=false
  blocks=[LocatedBlock{BP-1678753434-10.90.50.35-1443072063430:blk_1075179851_1440553; getBlockSize()=34; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.90.50.38:50010,DS-e94ca06b-0787-44a4-a6e7-6881d7ec4ab4,DISK], DatanodeInfoWithStorage[10.90.50.36:50010,DS-58bf5d34-4505-4117-a145-4af53b8e5eed,DISK], DatanodeInfoWithStorage[10.90.50.39:50010,DS-9dfad515-1207-45b5-a9d0-1c78ffc94fd6,DISK]]}]
  lastLocatedBlock=LocatedBlock{BP-1678753434-10.90.50.35-1443072063430:blk_1075179851_1440553; getBlockSize()=34; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.90.50.39:50010,DS-9dfad515-1207-45b5-a9d0-1c78ffc94fd6,DISK], DatanodeInfoWithStorage[10.90.50.36:50010,DS-58bf5d34-4505-4117-a145-4af53b8e5eed,DISK], DatanodeInfoWithStorage[10.90.50.38:50010,DS-e94ca06b-0787-44a4-a6e7-6881d7ec4ab4,DISK]]}
  isLastBlockComplete=true}
16/01/29 07:57:33 DEBUG nativeio.NativeIO: Initialized cache for IDs to User/Group mapping with a  cache timeout of 14400 seconds.
get: Operation not permitted
16/01/29 07:57:33 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@734a81fb
16/01/29 07:57:33 DEBUG ipc.Client: removing client from cache: org.apache.hadoop.ipc.Client@734a81fb
16/01/29 07:57:33 DEBUG ipc.Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@734a81fb
16/01/29 07:57:33 DEBUG ipc.Client: Stopping client
16/01/29 07:57:33 DEBUG ipc.Client: IPC Client (2074563683) connection to lhrrhegapp026.enterprisenet.org/10.90.50.36:8020 from grfe: closed
16/01/29 07:57:33 DEBUG ipc.Client: IPC Client (2074563683) connection to lhrrhegapp026.enterprisenet.org/10.90.50.36:8020 from grfe: stopped, remaining connections 0

 

The user name is grfe, and yes, grfe has permission to that path.
When I execute hadoop fs -cat it works; the problem is with -get and -getmerge.
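
For clarity, the commands being compared are roughly the following (a sketch; the /nas_store destination is taken from the original post and the exact file names are illustrative):

hadoop fs -cat /user/grfe/test > /nas_store/test        # works: the local write is done by the shell redirect
hadoop fs -get /user/grfe/test /nas_store/test          # fails with "get: Operation not permitted"
hadoop fs -getmerge /user/grfe/test /nas_store/test     # fails the same way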

Re: Unable to copy files from HDFS to mounted device on Local FS

Explorer

Maybe you don't have write permission to the current working directory? Can you try:

hadoop fs -get /user/grfe/test /tmp/test

Re: Unable to copy files from HDFS to mounted device on Local FS

Explorer

The Unix user name is also grfe.

Only the grfe user has permission to access that path.

hadoop fs -get /user/grfe/test /tmp/test works fine.

As I mentioned, hadoop fs -cat works fine, and hadoop fs -put from this folder also works.

-get and -getmerge are not working.

 

Re: Unable to copy files from HDFS to mounted device on Local FS

Explorer

-cat does not write to the current directory, and -put does not write to the current directory either. The only command that writes to the current directory is -get without a second argument. If you give /tmp/test as the second argument and it writes without problems, then, as I said before, you don't have write permission to the current working directory. Just to try it, you can run "cd /tmp && hadoop fs -get /user/grfe/test" and it should also work.
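
In other words, when the local destination is omitted, -get writes into the current working directory. A short illustration (the paths are examples only):

cd /tmp && hadoop fs -get /user/grfe/test       # no destination given: the copy lands in /tmp/test
hadoop fs -get /user/grfe/test /tmp/            # equivalent, with the directory given explicitly
hadoop fs -get /user/grfe/test /tmp/test        # explicit file destination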

Re: Unable to copy files from HDFS to mounted device on Local FS

Explorer

@scobanx

 

When I say cat, I mean I tried:

hadoop fs -cat /user/grfe/test > /naspath/test

Here I am able to create a file on the NAS path.

It is a similar case for the put command: I am able to go to /naspath and then copy files to HDFS. I am just trying to say that my Unix login has access to /naspath.
