Created on 04-29-2016 09:24 AM - edited 09-16-2022 03:16 AM
I have installed Hue on my Mac, but after opening Hue I found a couple of configuration problems. Can anyone help with these? Thanks.
Check | Error
hadoop.hdfs_clusters.default.webhdfs_url | Current value: http://localhost:50070/webhdfs/v1. Failed to access filesystem root
Resource Manager | Failed to contact an active Resource Manager: ('Connection aborted.', error(61, 'Connection refused'))
desktop.secret_key | Current value: (empty). Secret key should be configured as a random string. All sessions will be lost on restart
SQLITE_NOT_FOR_PRODUCTION_USE | SQLite is only recommended for small development environments with a few users.
Hive | Failed to access Hive warehouse: /user/hive/warehouse
HBase Browser | The application won't work without a running HBase Thrift Server v1.
Impala | No available Impalad to send queries to.
Oozie Editor/Dashboard | The app won't work without a running Oozie server
Pig Editor | The app won't work without a running Oozie server
Spark | The app won't work without a running Livy Spark Server
Created 05-15-2016 04:55 AM
I did the configuration steps, but nothing changed...
Created 01-27-2017 07:23 AM
Is WebHdfs configured? e.g. https://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_cdh_hue_configure.html#topic_...
Created 03-09-2017 01:18 AM
Hi Romainr, is your problem solved? I have not solved mine yet. Help me! Thank you!
Created 03-09-2017 06:44 PM
Hi Vivian, please help me, thank you! This question has been bothering me for 3 days!
Created 08-02-2017 01:36 AM
Hi @Spyros, @balance002, @Vivian, @Romainr, I have been having the same problems and have finally arrived at a solution, which I thought I should share here:
My setup: I have Hadoop/HDFS/Yarn running in one docker container and Hue running in a separate docker container (both on the same machine). However, the following should work with any other setup as well, e.g. if you have Hadoop and Hue installed in one container or installed directly on your machine. It is important, however, that you have a running instance of Hadoop/HDFS.
Step 1: Configuration of your HDFS instance
Add the following property to hdfs-site.xml (in my installation located in /usr/local/hadoop/etc/hadoop/hdfs-site.xml):
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
Add the following properties to core-site.xml (in my installation located in /usr/local/hadoop/etc/hadoop/core-site.xml):
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>
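If you want to double-check that the new values were picked up, you can query them with hdfs getconf (a standard HDFS client command that reads the configuration files directly):

hdfs getconf -confKey dfs.webhdfs.enabled
hdfs getconf -confKey hadoop.proxyuser.hue.hosts
hdfs getconf -confKey hadoop.proxyuser.hue.groups

Each call should print the value you just configured (true, *, and *).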
Restart HDFS and Yarn:
cd to the directory where the start/stop scripts are located (in my case it's /usr/local/hadoop/sbin/) and call:
./stop-yarn.sh
./stop-dfs.sh
./start-dfs.sh
./start-yarn.sh
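Once HDFS is back up, a quick way to confirm that WebHDFS is answering is to list the filesystem root over its REST API (this assumes the default NameNode HTTP port 50070 and hdfs as the requesting user; adjust both to your setup):

curl -i "http://localhost:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hdfs"

A 200 response with a JSON FileStatuses payload means WebHDFS is enabled; a connection refused here is the same symptom Hue reports in its "Failed to access filesystem root" check.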
Step 2: Add new directory and adjust access rights in HDFS
Prerequisite: You can call the hdfs command directly on your HDFS host. If not, add the directory containing the respective binary to your PATH (on my machine: /usr/local/hadoop/bin).
First, change the owner of / to hdfs:
hdfs dfs -chown hdfs:hdfs /
Then create a directory for the user hdfs (this is the user Hue uses; it can be configured in hue.ini via default_hdfs_superuser) and adjust its ownership:
hdfs dfs -mkdir /user/hdfs
hdfs dfs -chown hdfs:hdfs /user/hdfs
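To verify that the ownership changes took effect, you can list the affected paths (the -d flag lists the directory itself rather than its contents; exact output columns vary by Hadoop version):

hdfs dfs -ls -d /
hdfs dfs -ls /user

Both / and /user/hdfs should now show hdfs hdfs as owner and group.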
Step 3: Configuration of your Hue instance
Find your hue.ini (instructions here: http://gethue.com/how-to-configure-hue-in-your-hadoop-cluster/); in my case it's /hue/desktop/conf/pseudo-distributed.ini.
Find the line
webhdfs_url=http://localhost:50070/webhdfs/v1
and uncomment it if it is commented out. Then, if your HDFS instance is not running in the same container or on the same host, change localhost to the IP address of the machine where your HDFS instance is running.
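For orientation, that line sits in the HDFS section of the ini file; a minimal sketch of the relevant block (the hdfs-host placeholder and the ports are assumptions from a default setup, replace them with your own values):

[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      # Placeholder: replace hdfs-host with the hostname or IP of your NameNode
      fs_defaultfs=hdfs://hdfs-host:8020
      webhdfs_url=http://hdfs-host:50070/webhdfs/v1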
Step 4 (only if Hue and HDFS are not on the same container/host): Add HDFS host to /etc/hosts
This step depends on your setup: If you have Hue and HDFS running in two containers, you need to link them, i.e. when calling docker run for your Hue container, add the following parameter (I know that --link is deprecated, but it works and is easier to explain):
--link <HDFS-container-ID>:<HDFS-container-ID/Hostname>
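As a hypothetical example, if the Hadoop container is named hadoop-master and you run the stock gethue/hue image (both names are placeholders for whatever you use), the command could look like this:

docker run -d --name hue --link hadoop-master:hadoop-master -p 8888:8888 gethue/hue

The alias after the colon is what Docker writes into the Hue container's /etc/hosts, so it must match the hostname Hue will try to resolve.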
If you are not using containers, simply add the following line to the /etc/hosts file of your Hue host:
<IP of HDFS host> <hostname of HDFS host>
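For example, if the HDFS host has IP 172.17.0.2 and hostname hadoop-master (both placeholder values), the entry would be:

172.17.0.2 hadoop-master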
Here is why you need this: Hue will first address the HDFS host via its IP (as you configured it in hue.ini). However, for some later requests it uses the hostname instead of the IP, which the Hue host can only resolve if it is in the /etc/hosts file.
Then restart Hue (in my case I simply stop and start the container) and you're done 🙂