/usr/bin/hadoop: Argument list too long during hadoop fs -put
Labels: Apache Hadoop
Created 10-19-2018 01:31 PM
I have around 1 million small files that I want to transfer from local to HDFS. I am getting the error below.
Command: hadoop fs -put path/* hdfspath/
/usr/bin/hadoop: Argument list too long
Options tried:
1. Increased GCOverheadLimit.
2. Increased ulimit -s to 102 MB.
None of the options were successful. Can anyone please assist me?
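For background on the error itself: "Argument list too long" comes from the operating system's ARG_MAX limit on the combined size of a command's arguments, which is exceeded when the shell expands path/* into roughly a million file names before hadoop even starts, so JVM settings such as GCOverheadLimit do not influence it. One way to see the limit on the local machine (standard POSIX getconf, shown purely as an illustration):
getconf ARG_MAX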
Created 10-24-2018 06:35 PM
Can I get any help on this?
Created 10-25-2018 07:32 AM
I haven't seen that error before, but I bet you can get around it by using find & xargs.
For example, can you give this a try?
find /your/dir -name '*.txt' -print0 | xargs -0 -P 4 -I % hadoop fs -put % /your/hdfs/destination
Let me know if that helps. If it resolves your problem, please take a moment to log in and click accept answer 🙂
PS. You can tweak the -P flag to increase or decrease parallelism. More is described here.
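A possible variation on the same idea (a sketch, not from the original reply, reusing the placeholder paths /your/dir and /your/hdfs/destination): the -I % form starts one hadoop process per file, so with around a million files the JVM startup cost dominates; batching with -n lets each hadoop fs -put copy many files at once.
find /your/dir -name '*.txt' -print0 | xargs -0 -n 500 -P 4 sh -c 'hadoop fs -put "$@" /your/hdfs/destination' _
Here -n 500 caps each batch at 500 files (hadoop fs -put accepts multiple local sources followed by a destination directory), and -P 4 still runs four batches in parallel; larger batches mean fewer hadoop processes but longer individual copies.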
