If we use the file transfer via Hue, a temporary file is created in the /tmp directory and afterwards copied to HDFS.
We need to change kredentials_dir="/tmp" in hue.ini to another directory due to space issues. If we change it, for example to /data, it does not work: we restarted the services, but the change has no effect.
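For reference, this is exactly what we changed (with /data just as the example target):

# hue.ini, before:
kredentials_dir="/tmp"

# hue.ini, after:
kredentials_dir="/data"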
Another question: is there a file size limit for copying files with the Hue file transfer?
I highly recommend that you start using Ambari Views. I can't even find any reference to that parameter on the internet, except in my own two-year-old hue.ini file.
Generally, changing that location and restarting the service "should" work. What's in hue.log?
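To check, tail the log while you retry an upload. On a typical HDP install the Hue logs live under /var/log/hue (the exact path and file names may differ on your cluster):

# watch the main server and error logs while reproducing the upload
tail -f /var/log/hue/error.log /var/log/hue/runcpserver.log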
@michael märz To add to the above: the Ambari Files View leverages the WebHDFS protocol, so it copies directly to HDFS.
WebHDFS uses HTTP, so ideally there shouldn't be a limit on the file size.
Ambari Files View is the way to go forward. As for the file size limit: Chrome doesn't like anything over 2 GB, so try Firefox, but if you're uploading to the Sandbox, that has its own limitations. The best way to upload large files is still the hdfs dfs -put command: SCP the file to your edge node (where the HDFS client is installed) and then upload it to HDFS using the CLI.
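A minimal sketch of that workflow (host names and paths are placeholders for your environment):

# 1. Copy the file from your workstation to the edge node
scp /local/path/bigfile.csv user@edgenode.example.com:/home/user/

# 2. On the edge node, push it into HDFS with the CLI
hdfs dfs -put /home/user/bigfile.csv /data/bigfile.csv

# 3. Verify the upload
hdfs dfs -ls /data/bigfile.csv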
Thank you for the quick response.
Our users can only access the Hue interface, so for the moment that's the only way for them to load data into HDFS.
Which hue.log are you talking about? I can find several logs with that name.
@michael märz If this is urgent, you can upload a file to the cluster using WebHDFS directly, without Hue, for example with curl; here is the link. Set the host to your active NameNode and the port to 50070. It's a two-step process, but you can also try to do it with a single curl command using the -L redirect option. In the first step curl talks to the NameNode, and in the second step it PUTs the file to a DataNode provided by the NameNode in step 1.
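A sketch of both variants, assuming a local file myfile.csv and a target HDFS path /data/myfile.csv (host names and the user.name value are placeholders):

# Step 1: ask the active NameNode where to write; the response is a
# 307 redirect whose Location header points at a DataNode
curl -i -X PUT "http://namenode.example.com:50070/webhdfs/v1/data/myfile.csv?op=CREATE&user.name=hdfs&overwrite=true"

# Step 2: PUT the file body to the DataNode URL returned in step 1
curl -i -X PUT -T myfile.csv "<Location header from step 1>"

# Single-command variant: let curl follow the redirect itself with -L
curl -L -i -X PUT -T myfile.csv "http://namenode.example.com:50070/webhdfs/v1/data/myfile.csv?op=CREATE&user.name=hdfs&overwrite=true"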
Thank you for your help. Is there any public documentation stating that the Hue file transfer is not designed for uploading large amounts of data to HDFS? Our clients wanted to use Hue as a file uploader to HDFS, but we experienced so many problems (stopped Hadoop services, etc.) that we can't use it for that purpose at all.
We need to discuss this concept and change it.