Created 02-18-2016 06:41 AM
I am trying to upload files from Hue to HDFS using File Browser. Every time I upload a large file (in the GB range), a temporary file of the same size (e.g. tmpxxx.upload) is created in /tmp, which fills the / partition up to 100%. My question is: how can I move these temporary files from /tmp to some other location that has sufficient space?
Created 02-18-2016 07:05 AM
This sounds like the issue mentioned here: https://github.com/cloudera/hue/issues/304. However, I don't know of a valid workaround for our Hue version at the moment. I strongly encourage you to use different ways to ingest large amounts of data into your cluster, e.g. a separate data ingestion node (plus hdfs commands to move files into HDFS), NiFi, DistCp, ...
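For reference, a minimal sketch of the command-line alternatives mentioned above (hostnames, ports, and paths are placeholders, not taken from the original thread):

```shell
# From an ingestion/edge node: copy a local file into HDFS
hdfs dfs -put /data/archive.zip /user/ingest/

# DistCp: copy large files between (or within) clusters using MapReduce
hadoop distcp hdfs://source-nn:8020/user/ingest/archive.zip \
              hdfs://dest-nn:8020/user/ingest/
```

Both approaches stream data directly to HDFS DataNodes, so nothing is staged on a single machine's /tmp the way a Hue browser upload is.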
Created 02-18-2016 07:18 AM
Thanks for the link. This issue is being faced by one of our teams, and they only have access to Hue, not the command line. They are uploading zipped archive files to HDFS, ranging in size from 2 GB to 900 GB. I have already told them that Hue is not meant for uploading such big files to HDFS, and I am not able to find any documentation of the file uploader's background behavior online.
Created 02-18-2016 01:35 PM
@Anshul Sisodia You may want to begin transitioning from Hue to Ambari Views. There is a Files view you can use to upload files.
Created 02-19-2016 11:43 AM
@Anshul Sisodia Stick with the Files view. It uses the WebHDFS protocol.
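For context, WebHDFS uploads are a two-step REST exchange, which any HTTP client can drive directly. A hedged sketch using curl (hostnames, ports, and the user.name value are placeholders; the second URL comes from the first response, not from this thread):

```shell
# Step 1: ask the NameNode to create the file; it answers with a
# 307 redirect whose Location header points at a DataNode
curl -i -X PUT \
  "http://namenode:50070/webhdfs/v1/user/ingest/archive.zip?op=CREATE&user.name=hue"

# Step 2: send the file body to the URL from the Location header
curl -i -X PUT -T archive.zip "<URL-from-Location-header>"
```

Because the data goes straight to a DataNode in step 2, this path avoids the local /tmp staging that the Hue uploader performs.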