Member since: 06-02-2016
Posts: 4
Kudos Received: 0
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
|  | 32653 | 05-19-2017 10:48 PM |
11-16-2021 10:28 PM
@BigData-suk, as this is an older post, you would have a better chance of getting a resolution by starting a new thread. That will also give you the opportunity to provide details specific to your environment, which will help others give you a more accurate answer. You can link this thread as a reference in your new post.
03-28-2018 10:13 PM
@Jacky Hung As root, use scp to copy the users' files from the source machine to a local directory (e.g. /tmp) on one of the cluster nodes:

# cd /home
# scp -r * root@destination:/tmp

Then, as hdfs (the HDFS superuser), create a home directory in HDFS for each user you copied earlier. For example, to create the home directory for user1:

$ hdfs dfs -mkdir /user/user1
$ hdfs dfs -chown user1 /user/user1

If you also want to create subdirectories and change their permissions and ownership recursively:

$ hdfs dfs -mkdir -p /user/user1/test/another/final
$ hdfs dfs -chown -R user1 /user/user1/test/another/final

Then, still as the hdfs user, go to the directory you scp'ed into earlier (e.g. /tmp) and load the local files into HDFS with -put (not -cp, since the files are on the local filesystem, not in HDFS):

$ cd /tmp
$ hdfs dfs -put user1_objects /user/user1

or

$ hdfs dfs -put user1_objects /user/user1/test/another/final

Check the permissions and ownership:

$ hdfs dfs -ls /user/user1

You will need to repeat this for all the other users. Unfortunately you can't use DistCp here, since the source isn't a Hadoop cluster. Hope that helps.
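If you have several users to migrate, the same sequence can be scripted. The following is a minimal sketch, assuming the files have already been scp'ed into a local staging directory /tmp/<username> for each user on a cluster node, that the usernames are listed one per line in a file named users.txt, and that the script is run as the hdfs superuser; the staging path, user list file, and ownership are illustrative, so adjust them to your environment.

#!/bin/bash
# Minimal sketch (assumptions: run as the hdfs superuser, staged files in /tmp/<user>, usernames in users.txt)
while read -r user; do
  # Create the user's HDFS home directory (idempotent thanks to -p)
  hdfs dfs -mkdir -p "/user/${user}"
  # Load the staged local files into the user's HDFS home directory
  hdfs dfs -put "/tmp/${user}/"* "/user/${user}/"
  # Hand ownership of everything under the home directory to the user
  hdfs dfs -chown -R "${user}" "/user/${user}"
  # Quick sanity check of what landed and who owns it
  hdfs dfs -ls "/user/${user}"
done < users.txt

Running something like this once per environment saves repeating the mkdir/chown/put cycle by hand for every user.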