Move file from one HDFS directory to another using Scala/Java

I have files in one HDFS folder, and after checking a few things I want to move a file to another directory on HDFS.

Currently I am using a FileSystem object with rename(); it does the job, but it is effectively renaming the file with its complete path.

Is there any other way to do it?

Appreciate your help.

Thanks,

1 ACCEPTED SOLUTION

Expert Contributor

@RAUI

The answer is no. Renaming is the way to move files on HDFS: FileSystem.rename(). Actually, this is exactly what the HDFS shell command "-mv" does as well; you can check it in the source code. If you think about it, it's pretty logical: when you move a file on the distributed file system, you don't really move any blocks of the file, you just update the "path" metadata of the file in the NameNode.


12 REPLIES

Mentor

@RAUI

Can you please give a concrete example of what you intend to do? It is hard to conceptualize from your explanation.

@Geoffrey Shelton Okot, I have a few files in an HDFS directory. I simply want to move files from one HDFS directory to another.

For example: I have the file abc.txt in the pqr directory and want to move it to the lmn directory.

i.e. move /apps/pqr/abc.txt to /apps/lmn/abc.txt

Mentor

@RAUI

To copy files between HDFS directories you need to have the correct permissions, i.e. in your example moving /apps/pqr/abc.txt to /apps/lmn/abc.txt.

I assume the HDFS directory owners are pqr and lmn respectively, where the former has to have write permission to /apps/lmn/; otherwise you run the copy command as the HDFS superuser hdfs and then change the file ownership as demonstrated below.

Switch to the hdfs user

# su - hdfs

Now copy the abc.txt from source to destination

$ hdfs dfs -cp  /apps/pqr/abc.txt    /apps/lmn/

Check the permissions; see the example:

$ hdfs dfs -ls /apps/lmn
Found 3 items
drwxr-xr-x+  - lmn   hdfs          0 2018-05-24 00:40 /apps/lmn/acls
-rw-r--r--   3 hdfs  hdfs          0 2018-05-24 00:40 /apps/lmn/abc.txt
-rw-r--r--   3 lmn   hdfs        642 2018-05-24 08:45 /apps/lmn/derby.log

Change the ownership recursively for the directory; this will also change the owner of abc.txt:

$ hdfs dfs -chown  -R lmn /apps/lmn

I hope that helps

@Geoffrey Shelton Okot, Thanks for your time, but I was not looking for the command-line option (which everyone knows).

Expert Contributor

@RAUI

The answer is no. Renaming is the way to move files on HDFS: FileSystem.rename(). Actually, this is exactly what the HDFS shell command "-mv" does as well; you can check it in the source code. If you think about it, it's pretty logical: when you move a file on the distributed file system, you don't really move any blocks of the file, you just update the "path" metadata of the file in the NameNode.
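
For reference, a minimal Java sketch of this approach using the example paths from this thread (the same calls work from Scala):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Move /apps/pqr/abc.txt to /apps/lmn/abc.txt by renaming it.
// Only the path metadata in the NameNode changes; no blocks are copied.
FileSystem fs = FileSystem.get(new Configuration());
boolean moved = fs.rename(new Path("/apps/pqr/abc.txt"), new Path("/apps/lmn/abc.txt"));
if (!moved) {
    // rename() reports problems (e.g. a missing target directory) via its return value
    System.err.println("Move failed");
}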

@gnovak Thanks for understanding my question correctly. That is exactly what I have done in my Scala code; however, I thought I would get others' opinions on this.

@gnovak, In order to satisfy my need I am calling FileSystem.rename(src, tgt). If the target path does not exist, will it be created?

My understanding is that it will create the target path. In my case I am able to move the file as expected on my local machine, but when the same code is deployed on the cluster I am not able to move the file to the desired location. It is not giving me any exception; it is simply not doing the job.

Expert Contributor

@RAUI No, it won't create it; the target directory must exist. However, if the target directory doesn't exist, it won't throw an exception; it will only indicate the error via the return value (as described in the documentation).

So 1) you should create the target directory before you call rename() and 2) you should check the return value, like this:

// fs is an initialized org.apache.hadoop.fs.FileSystem instance
fs.mkdirs(new Path("/your/target/path"));        // 1) make sure the target directory exists
boolean result = fs.rename(                      // 2) rename() reports success/failure via its return value
    new Path("/your/source/path/your.file"),
    new Path("/your/target/path/your.file"));
if (!result) {
  ...                                            // handle the failed move here
}

@gnovak thanks for your time 🙂

@gnovak, I am still wondering why it created the directory on my local machine? Kind of weird...

Related to this, I have another issue: I am also reading files from an HDFS directory using wholeTextFile(). My HDFS input directory has text files and subdirectories in it. On my local development machine I was able to read the files and wholeTextFile() did not consider the subdirectories; however, when I deployed the same code to the cluster, it started to consider the subdirectories as well. Do you have any idea about this? Appreciate your help.

Expert Contributor

@RAUI wholeTextFile() is not part of the HDFS API; I'm assuming you're using Spark, with which I'm not too familiar. I suggest you post another question about this on HCC.

New Contributor

@RAUI

Yes, there is another way of achieving this. You can use the copy() method from the FileUtil class and pass your FileSystem object to it to effectively copy your files from the source HDFS location to the target. As with rename(), you will need to ensure your target directory is created before calling copy(). FileUtil.copy() has a signature where you provide a source and a destination FileSystem, and in this case you would provide the same FileSystem object, since you are looking to copy files to a different location on the same HDFS. There is also a boolean option to delete the source file after the copy if that fits your use case; see the sketch after the link below.

Here is a link to the FileUtil API: http://hadoop.apache.org/docs/r2.8.0/api/org/apache/hadoop/fs/FileUtil.html
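
For illustration, a minimal Java sketch of this approach using the example paths from this thread (FileUtil.copy() has several overloads; the one shown takes the source and destination FileSystem, a deleteSource flag, and a Configuration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

// Copy /apps/pqr/abc.txt to /apps/lmn/abc.txt within the same cluster and
// delete the source afterwards, so the net effect is a move.
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
fs.mkdirs(new Path("/apps/lmn"));                 // make sure the target directory exists
boolean ok = FileUtil.copy(
    fs, new Path("/apps/pqr/abc.txt"),            // source FileSystem and path
    fs, new Path("/apps/lmn/abc.txt"),            // destination FileSystem and path (same FS here)
    true,                                         // deleteSource: remove the original after copying
    conf);
if (!ok) {
    System.err.println("Copy failed");
}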
