
Hue cannot access database, Failed to access filesystem root

New Contributor

I have installed Hue on my Mac, but after opening it I found a couple of configuration problems. Can anyone help with these? Thanks.

hadoop.hdfs_clusters.default.webhdfs_url (current value: http://localhost:50070/webhdfs/v1): Failed to access filesystem root
Resource Manager: Failed to contact an active Resource Manager: ('Connection aborted.', error(61, 'Connection refused'))
desktop.secret_key (current value: empty): Secret key should be configured as a random string. All sessions will be lost on restart.
SQLITE_NOT_FOR_PRODUCTION_USE: SQLite is only recommended for small development environments with a few users.
Hive: Failed to access Hive warehouse: /user/hive/warehouse
HBase Browser: The application won't work without a running HBase Thrift Server v1.
Impala: No available Impalad to send queries to.
Oozie Editor/Dashboard: The app won't work without a running Oozie server.
Pig Editor: The app won't work without a running Oozie server.
Spark: The app won't work without a running Livy Spark Server.
10 REPLIES

New Contributor
I'm facing the same problem. Did you make any progress?

Super Guru
You will need to put the addresses of the services in hue.ini so that Hue can communicate with them:
http://gethue.com/how-to-configure-hue-in-your-hadoop-cluster/
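
For reference, here is a minimal sketch of the relevant hue.ini sections (section and key names as in a stock hue.ini; the localhost hosts and ports below are placeholders for wherever your services actually run):

[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      # HDFS NameNode and the WebHDFS endpoint Hue talks to
      fs_defaultfs=hdfs://localhost:8020
      webhdfs_url=http://localhost:50070/webhdfs/v1
  [[yarn_clusters]]
    [[[default]]]
      # Active Resource Manager and its REST API
      resourcemanager_host=localhost
      resourcemanager_api_url=http://localhost:8088

[beeswax]
  # HiveServer2, used by the Hive app
  hive_server_host=localhost
  hive_server_port=10000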

Super Guru
Did you configure hue.ini to point to the services?
http://gethue.com/how-to-configure-hue-in-your-hadoop-cluster/

New Contributor

I did the configuration steps, but nothing changed...

Contributor

Hi @Romainr, did you solve your problem? I haven't solved mine yet. Please help me! Thank you!

Contributor

Hi @Vivian, my friend, did you solve your problem? I haven't solved mine yet. Please help me! Thank you!

Contributor

Hi @Vivian, please help me, thank you! This question has been bothering me for 3 days!

Explorer

Hi @Spyros, @balance002, @Vivian, @Romainr, I have been having the same problems and have finally arrived at a solution, which I thought I should share here:

My setup: I have Hadoop/HDFS/Yarn running in one Docker container and Hue running in a separate Docker container (both on the same machine). However, the following should work with any other setup as well, e.g. with Hadoop and Hue installed in one container or directly on your machine. What matters is that you have a running instance of Hadoop/HDFS.

Step 1: Configuration of your HDFS instance
Add the following property to hdfs-site.xml (in my installation located in /usr/local/hadoop/etc/hadoop/hdfs-site.xml):

<!-- Enable the WebHDFS REST API, which Hue uses to talk to HDFS -->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>


Add the following properties to core-site.xml (in my installation located in /usr/local/hadoop/etc/hadoop/core-site.xml):

<!-- Allow the hue user to impersonate other users from any host and group -->
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>


Restart HDFS and Yarn:
cd to the sbin directory where the start/stop scripts are located (in my case /usr/local/hadoop/sbin/) and call:

# Stop Yarn first, then HDFS; bring them back up in the reverse order
./stop-yarn.sh && ./stop-dfs.sh
./start-dfs.sh && ./start-yarn.sh
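
If you want to check that WebHDFS is actually reachable after the restart, you can query its REST API directly (a quick sanity check, assuming the default NameNode HTTP port 50070 used elsewhere in this thread):

# List the HDFS root via WebHDFS; HTTP 200 with a JSON FileStatuses payload means it works
curl -i "http://localhost:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hdfs"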


Step 2: Add new directory and adjust access rights in HDFS
Prerequisite: You can call the hdfs command directly on your HDFS host. If not, add the directory containing the respective binary to your PATH (on my machine: /usr/local/hadoop/bin).

First, change the owner of / to hdfs:

hdfs dfs -chown hdfs:hdfs /


Then create a home directory for the user hdfs (this user is used by Hue; it can be configured in hue.ini via default_hdfs_superuser) and adjust its ownership:

hdfs dfs -mkdir /user/hdfs
hdfs dfs -chown hdfs:hdfs /user/hdfs
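
As a quick check that the ownership changes took effect (the exact output will vary with your cluster):

hdfs dfs -ls -d /       # shows the owner of / itself, now hdfs:hdfs
hdfs dfs -ls /user      # /user/hdfs should be listed as owned by hdfs:hdfs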


Step 3: Configuration of your Hue instance
Find your hue.ini (instructions here: http://gethue.com/how-to-configure-hue-in-your-hadoop-cluster/); in my case it's /hue/desktop/conf/pseudo-distributed.ini.
Find the line

webhdfs_url=http://localhost:50070/webhdfs/v1

and uncomment it in case it is commented out. Then, if your HDFS instance is not running in the same container or on the same host, change localhost to the IP address of the machine where your HDFS instance is running.
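
If HDFS runs in a Docker container and you need that IP, one way to look it up is the following (hadoop is a placeholder for your container's name):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' hadoop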

Step 4 (only if Hue and HDFS are not on the same container/host): Add HDFS host to /etc/hosts
This step depends on your setup: if you have Hue and HDFS running in two containers, you need to link them, i.e. when calling docker run on your Hue container, add the following parameter (I know that --link is deprecated, but it works and is easier to explain):

--link <HDFS-container-ID>:<HDFS-container-ID/Hostname>

If you are not using containers, simply add the following line to the /etc/hosts file of your Hue host:

<IP of HDFS host>  <hostname of HDFS host>


Here is why you need this: Hue will first address the HDFS host via its IP (as you configured it in hue.ini). However, for some requests it will later use the hostname instead of the IP, which the Hue host can only resolve if it's listed in its /etc/hosts file.
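
To confirm that the Hue host can now resolve the name, you can test from the Hue container/host (hadoop-master is a hypothetical hostname standing in for yours):

getent hosts hadoop-master    # should print the IP you put into /etc/hosts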

Then restart Hue (in my case I simply stop and start the container) and you're done 🙂
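
For a containerized setup like mine, that restart is simply (hue being a placeholder for your Hue container's name):

docker restart hue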