Import data.

Explorer

Hi guys, how are you? I have a question and I believe you can help me solve it!

I have a folder with txt files in the directory /home/files, and I need to make this folder available for a Hadoop query with Hive. I tried to execute the command

"hdfs move -p /home/files | -f /home/new"

but it has not worked.

I found this manual: https://hadoop.apache.org/docs/r2.6.5/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html


Explorer

@Hugo Cosme are you trying to copy data from your local host to HDFS? The correct command to use is hdfs dfs -put. You can use this command to copy files from your local system to HDFS, create Hive tables over those files, and then execute queries on them.
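For example (a minimal sketch; the local path /home/files and the HDFS target /user/hugo/files are assumptions taken from this thread, not verified paths):

$ hdfs dfs -mkdir -p /user/hugo/files                 # create the target directory in HDFS
$ hdfs dfs -put /home/files/*.txt /user/hugo/files    # copy the local txt files into it
$ hdfs dfs -ls /user/hugo/files                       # verify the copy

A Hive external table can then point at that directory (assuming one-line-per-record text files; adjust the schema to match your data):

$ hive -e "CREATE EXTERNAL TABLE files_txt (line STRING) LOCATION '/user/hugo/files';"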

Explorer

Hi Paulo, could you give me an example of how to execute this command?
Another question: in the Ambari dashboard I see a large amount of space available, but when I run df -h on the system, the size is not even close to what the Ambari panel presents.

Ambari:

[Screenshot 80597-selection-091.png: Ambari capacity view]

Terminal:

[Screenshot 80598-selection-092.png: df -h output]

Explorer

Check this link for -put examples:

https://hadoop.apache.org/docs/r2.6.5/hadoop-project-dist/hadoop-common/FileSystemShell.html#put

Is the Ambari server running on the same system as the DataNode?
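As a side check (a sketch; hdfs dfs -df reports the HDFS capacity of the whole cluster, which is roughly what Ambari's HDFS widget aggregates, while a plain df -h reports only the local filesystems of one host):

$ hdfs dfs -df -h /    # HDFS capacity across the cluster
$ df -h                # local filesystems of this one host only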

Mentor

@Hugo Cosme

The command you ran, hdfs move -p, was wrong. Copy (-cp) and move (-mv) are subcommands of hdfs dfs; to see them, invoke it as below:

$ hdfs dfs
Usage: hadoop fs [generic options]
        [-appendToFile <localsrc> ... <dst>]
        [-cat [-ignoreCrc] <src> ...]
        .........
        [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
        [-chown [-R] [OWNER][:[GROUP]] PATH...]
        [-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
        [-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-count [-q] [-h] [-v] [-t [<storage type>]] [-u] <path> ...]
        [-cp [-f] [-p | -p[topax]] <src> ... <dst>]
        [-createSnapshot <snapshotDir> [<snapshotName>]]
        [-deleteSnapshot <snapshotDir> <snapshotName>]
        [-df [-h] [<path> ...]]
        [-du [-s] [-h] <path> ...]
        [-expunge]
        [-find <path> ... <expression> ...]
        [-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-getfacl [-R] <path>]
        [-getfattr [-R] {-n name | -d} [-e en] <path>]
        [-getmerge [-nl] <src> <localdst>]
        [-help [cmd ...]]
        [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...]]
        [-mkdir [-p] <path> ...]
        [-moveFromLocal <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
        [-mv <src> ... <dst>]
        [-put [-f] [-p] [-l] <localsrc> ... <dst>]
        [-renameSnapshot <snapshotDir> <oldName> <newName>]
        [-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
        [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
        .......
        [-test -[defsz] <path>]
        [-text [-ignoreCrc] <src> ...]
        [-touchz <path> ...]
        [-truncate [-w] <length> <path> ...]
        [-usage [cmd ...]]

Just to validate the directories in your HDFS, can you run the below:

$ hdfs dfs -ls /

Below I am assuming you have, or have created, a directory with the correct permissions at /user/hugo. If you want to copy a local file to HDFS, your options are -put, -cp, and -copyFromLocal. You can also run the same commands from inside the source directory instead of giving the full file path:

$ hdfs dfs -copyFromLocal  /home/files   /user/hugo

There is also an option for copying recursively; please explore.
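For instance (a sketch, assuming /home/files is a directory; -put copies a directory and everything under it):

$ hdfs dfs -put /home/files /user/hugo    # copies the whole directory tree
$ hdfs dfs -ls -R /user/hugo/files        # list the copied tree recursively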

HTH

Explorer

Hi @Geoffrey Shelton Okot, how are you?

Thanks for the tip, it helped me. My problem now is with the file name: it is too long. How can I get around this?

Explorer

@Pedro Andrade

Yes, Ambari is on the same disk. I'm using an IBM managed service.

Mentor

@Hugo Cosme

Nice to know that helped a bit!
Do you mean the file name is too long?

If the file has a very long name, you can use the wildcard (*) notation; see the example below.

Long file name

$ ls
dhdjqhdjqkhewhfwejkfhwejkhewrkjerkhtrjkthrqjkthqrtjkherjkhtkertjerktjerwerter.txt 

Copy using the *

$ hdfs dfs -put * /user/test 

Check the copy was successful

$ hdfs dfs -ls /user/test
Found 1 items
-rw-r--r--   3 hdfs hdfs          0 2018-07-20 16:06 /user/test/dhdjqhdjqkhewhfwejkfhwejkhewrkjerkhtrjkthrqjkthqrtjkherjkhtkertjerktjerwerter.txt

Voila!
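Another way around a long name (a hedged sketch; -put also accepts an explicit destination filename, so the file can be renamed while copying, assuming the wildcard matches exactly one file):

$ hdfs dfs -put dhdjqh*.txt /user/test/short-name.txt    # copy and rename in one step
$ hdfs dfs -ls /user/test                                # confirm the shorter name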

Mentor

@Hugo Cosme

Any updates on the copy issue with the long name?

Please update.

Explorer

Hi @Geoffrey Shelton Okot, sorry for the delay in getting back to you. The problem still continues: if I have files with short names it works, but files with long names generate the error below.


[Screenshot 83428-error-copy-hadoop-bigname.png: error when copying a file with a long name to HDFS]
