distcp copy to local directory

New Contributor

I want to do a fast copy from HDFS to the local file system via distcp. I have a one-node cluster; otherwise an NFS directory could be mounted on all datanodes.
The target directory exists.

ls -ld /tmp/backup/h1
drwxrwxrwx 3 datameer datameer 4.0K Sep 29 16:52 /tmp/backup/h1

time hadoop distcp -prbugpct -m 100 -overwrite /wol/h1 file:/tmp/backup/h1

17/09/29 16:52:10 INFO mapreduce.Job: Task Id : attempt_1506624084746_0006_m_000001_0, Status : FAILED
Error: ExitCodeException exitCode=1: chown: changing ownership of `/tmp/backup/h1/part-m-00000': Operation not permitted

The distcp job runs under the yarn user (mapreduce.framework.name=yarn) but is submitted by a different user.

ls -l /tmp/backup/h1/part-m-00002
-rw-r--r-- 1 yarn hadoop 100M Sep 29 16:52 /tmp/backup/h1/part-m-00002

I could set mapreduce.framework.name to local, but then it is no faster than an HDFS FS shell command.
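For completeness, local mode can also be forced for a single run via a generic -D option, without editing mapred-site.xml; a sketch reusing the paths from above:

# Run this one distcp in local mode: the copy then runs as the
# submitting user, so the chown succeeds, but without YARN parallelism.
time hadoop distcp -Dmapreduce.framework.name=local -prbugpct -overwrite /wol/h1 file:/tmp/backup/h1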

4 Replies

New Contributor

Happens only with preserve.
The stack trace of the error is:

at org.apache.hadoop.util.Shell.runCommand(Shell.java:933)
at org.apache.hadoop.util.Shell.run(Shell.java:844)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1123)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1217)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1199)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1077)
at org.apache.hadoop.fs.FileUtil.setOwner(FileUtil.java:879)
at org.apache.hadoop.fs.RawLocalFileSystem.setOwner(RawLocalFileSystem.java:746)
at org.apache.hadoop.fs.ChecksumFileSystem$2.apply(ChecksumFileSystem.java:519)
at org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:488)
at org.apache.hadoop.fs.ChecksumFileSystem.setOwner(ChecksumFileSystem.java:516)
at org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:257)

Super Guru
@Wolfgang nobody

Can you give "yarn" user the right to impersonate in core-site.xml? Try the following:

<property>
  <name>hadoop.proxyuser.yarn.groups</name>
  <value>group your user is part of. Can be a comma-separated list, or '*' (without quotes) for all</value>
  <description>Allow the superuser yarn to impersonate any member of the groups listed</description>
</property>
<property>
  <name>hadoop.proxyuser.yarn.hosts</name>
  <value>host1,host2</value>
  <description>The superuser can connect only from host1 and host2 to impersonate a user. Make it '*' so it can connect from any host</description>
</property>
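
After changing core-site.xml, the proxyuser settings can usually be reloaded without a full restart; for example, assuming HDFS/YARN admin privileges:

# Reload the superuser/proxyuser mappings on the NameNode and ResourceManager
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration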

Expert Contributor

@Wolfgang nobody

In RawLocalFileSystem, setOwner on a file/directory is allowed only if the user has sufficient privileges.

It seems the user who runs the container does not have sufficient privileges to change the file ownership.

Please drop the preserve options USER (u), GROUP (g) and PERMISSION (p) when copying files to the local file system, or set the ownership of the destination file/directory to match the source.

Examples:

$ hadoop distcp hdfs://source file://destination

$ hadoop distcp -pbct hdfs://source file://destination
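
If ownership on the backup must still match the source, one workaround is a post-copy fixup; a sketch, assuming root access on the destination host and that the source owner is datameer, as in the listing above:

# Copy without -u/-g so the yarn containers never need to chown, then
# re-apply the expected ownership as root on the local target.
$ hadoop distcp -pbct -m 100 -overwrite /wol/h1 file:/tmp/backup/h1
$ sudo chown -R datameer:datameer /tmp/backup/h1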

New Contributor

Hi,

thanks for your quick responses.

I want to use this for backups, so I need the preserve flag.

Furthermore, I do not want to have to care about users and permissions; the copy has to end up with the same user and permission settings as the source, just like with the Hadoop FS shell commands. These commands are quite fast for smaller data, like 100 files of 100 MB each, but quite slow for large files.
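
(For reference, the FS shell route would be something like this; copyToLocal's -p flag preserves access/modification times, ownership and the mode, assuming the local user is permitted to chown:)

# Single-process copy; -p preserves times, ownership and mode, but it
# does not parallelize across the cluster the way distcp on YARN does.
hadoop fs -copyToLocal -p /wol/h1 /tmp/backup/h1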

I want to exploit YARN to use all datanodes of the cluster for the backup; each datanode has the backup volume mounted via NFS. This test is just on a single node.

The user who runs the backup needs root permission to execute the chown. The only chance I see is using the LinuxContainerExecutor, but superusers are banned from running YARN jobs:

-rw-r--r-- 1 root   hadoop 1.1K Oct  1 22:02 container-executor.cfg

yarn.nodemanager.local-dirs=/var/hadoop/yarn/local
yarn.nodemanager.log-dirs=/var/hadoop/yarn/log
yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hdfs,yarn,mapred,bin
min.user.id=500

I found an example using

allowed.system.users=mapr

Is this recommended on Hortonworks? It does not look like it, as Ambari overwrites the settings.
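
(For illustration only, the change would presumably look like the lines below in container-executor.cfg. An untested sketch, not a recommendation: unbanning system users weakens the isolation the LinuxContainerExecutor is meant to provide.)

# untested sketch: unban the backup user and, since its UID may be
# below min.user.id, whitelist it via allowed.system.users
banned.users=hdfs,mapred,bin
allowed.system.users=yarn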

Mqureshi, I tried your approach as well. It does not help. Thank you anyway.

[datameer@dn190 wconf]$ diff core-site.xml.ori core-site.xml
34a35,54
>     <property>
>       <name>hadoop.proxyuser.yarn.groups</name>
>       <value>*</value>
>     </property>
>
>     <property>
>       <name>hadoop.proxyuser.yarn.hosts</name>
>       <value>*</value>
>     </property>
>

[datameer@dn190 ~]$ time hadoop --config wconf distcp  -prbugpct  -m 100 -overwrite /wol/h1 file:/tmp/backup1/h1
17/10/02 12:34:15 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, overwrite=true, skipCRC=false, blocking=true, numListstatusThreads=0, maxMaps=100, mapBandwidth=100, sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[REPLICATION, BLOCKSIZE, USER, GROUP, PERMISSION, CHECKSUMTYPE, TIMES], preserveRawXattrs=false, atomicWorkPath=null, logPath=null, sourceFileListing=null, sourcePaths=[/wol/h1], targetPath=file:/tmp/backup1/h1, targetPathExists=true, filtersFile='null'}
17/10/02 12:34:15 INFO impl.TimelineClientImpl: Timeline service address: http://dn190.pf4h.local:8188/ws/v1/timeline/
17/10/02 12:34:15 INFO client.RMProxy: Connecting to ResourceManager at dn190.pf4h.local/192.168.239.190:8050
17/10/02 12:34:16 INFO client.AHSProxy: Connecting to Application History server at dn190.pf4h.local/192.168.239.190:10200
17/10/02 12:34:16 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 97; dirCnt = 0
17/10/02 12:34:16 INFO tools.SimpleCopyListing: Build file listing completed.
17/10/02 12:34:16 INFO tools.DistCp: Number of paths in the copy list: 97
17/10/02 12:34:16 INFO tools.DistCp: Number of paths in the copy list: 97
17/10/02 12:34:17 INFO impl.TimelineClientImpl: Timeline service address: http://dn190.pf4h.local:8188/ws/v1/timeline/
17/10/02 12:34:17 INFO client.RMProxy: Connecting to ResourceManager at dn190.pf4h.local/192.168.239.190:8050
17/10/02 12:34:17 INFO client.AHSProxy: Connecting to Application History server at dn190.pf4h.local/192.168.239.190:10200
17/10/02 12:34:17 INFO mapreduce.JobSubmitter: number of splits:97
17/10/02 12:34:17 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1506885218764_0007
17/10/02 12:34:17 INFO impl.YarnClientImpl: Submitted application application_1506885218764_0007
17/10/02 12:34:17 INFO mapreduce.Job: The url to track the job: http://dn190.pf4h.local:8088/proxy/application_1506885218764_0007/
17/10/02 12:34:17 INFO tools.DistCp: DistCp job-id: job_1506885218764_0007
17/10/02 12:34:17 INFO mapreduce.Job: Running job: job_1506885218764_0007
17/10/02 12:34:23 INFO mapreduce.Job: Job job_1506885218764_0007 running in uber mode : false
17/10/02 12:34:23 INFO mapreduce.Job:  map 0% reduce 0%
17/10/02 12:34:26 INFO mapreduce.Job: Task Id : attempt_1506885218764_0007_m_000000_0, Status : FAILED
Error: ExitCodeException exitCode=1: chown: changing ownership of `/tmp/backup1/h1/_SUCCESS': Operation not permitted
[datameer@dn190 ~]$ ll /tmp/backup1/h1/
total 895M
-rw-r--r-- 1 yarn hadoop 100M Oct  2 12:34 part-m-00007
-rw-r--r-- 1 yarn hadoop 100M Oct  2 12:34 part-m-00001
-rw-r--r-- 1 yarn hadoop 100M Oct  2 12:34 part-m-00000
-rw-r--r-- 1 yarn hadoop 100M Oct  2 12:34 part-m-00004
: