Created on 03-02-2016 04:15 PM - edited 09-16-2022 03:06 AM
We are trying to upload an 80GB file to HDFS. The file is first written to /opt/Hadoop/tmp/.hdfs-nfs. This works fine with small files, but not with larger ones.
Does anyone know where the file should be written temporarily before it is moved into HDFS?
Is there some other setting we need to consider?
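For context, here is roughly how the gateway-side temp directory can be checked; I'm assuming the relevant setting is the nfs.dump.dir key in hdfs-site.xml (the path below is the one from our setup):

# Ask HDFS which dump directory the NFS gateway is configured to use
# (nfs.dump.dir; the stock default is /tmp/.hdfs-nfs)
hdfs getconf -confKey nfs.dump.dir

# The dump directory buffers out-of-order writes, so it needs enough
# free space to hold the largest file being written (80GB+ here)
df -h /opt/Hadoop/tmp/.hdfs-nfs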
Created 03-02-2016 05:00 PM
@Jeremy Salazar you can upload to HDFS directly. If you're using the NFS gateway, it is not designed for files of that size; better large-file support is on the roadmap for NFSv4. I recommend zipping the file before you upload it with the -put command:
hdfs dfs -put file /user/username
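For example, a rough end-to-end sequence (the file name and target path are placeholders):

# Compress the file first to reduce transfer time (gzip shown as one option)
gzip bigfile.dat

# Upload straight into HDFS with the CLI, bypassing the NFS gateway
hdfs dfs -put bigfile.dat.gz /user/username/

# Confirm the upload and check its size (human-readable)
hdfs dfs -ls -h /user/username/bigfile.dat.gz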
Created 03-02-2016 07:31 PM
Sorry, I should have specified that we are using Ambari to upload into HDFS.
Created 03-02-2016 07:34 PM
@Jeremy Salazar you mean the HDFS Ambari view? That won't work; an 80GB file is too big to upload through the Ambari view. Consider using the CLI instead.
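If you don't have direct access to a cluster node, something along these lines would do it (hostname and paths are placeholders):

# Copy the file from your workstation to an edge/gateway node first
scp bigfile.dat user@edge-node:/tmp/

# Then, on the edge node, push it into HDFS with the hdfs CLI
hdfs dfs -put /tmp/bigfile.dat /user/username/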