
No FileSystem for scheme: hdfs

New Contributor

I'm getting this exception when trying to start my HBase master:


2016-01-26 08:08:21,235 INFO org.apache.hadoop.hbase.mob.MobFileCache: MobFileCache is initialized, and the cache size is 1000
2016-01-26 08:08:21,310 ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
	at org.apache.hadoop.hbase.master.HMaster.constructMaster(
	at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(
	at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(
	at org.apache.hadoop.hbase.master.HMaster.main(
Caused by: No FileSystem for scheme: hdfs
	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(
	at org.apache.hadoop.fs.FileSystem.createFileSystem(
	at org.apache.hadoop.fs.FileSystem.access$200(
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(
	at org.apache.hadoop.fs.FileSystem$Cache.get(
	at org.apache.hadoop.fs.FileSystem.get(
	at org.apache.hadoop.fs.Path.getFileSystem(
	at org.apache.hadoop.hbase.util.FSUtils.getRootDir(
	at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(
	at org.apache.hadoop.hbase.master.HMaster.<init>(
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
	at java.lang.reflect.Constructor.newInstance(
	at org.apache.hadoop.hbase.master.HMaster.constructMaster(
	... 5 more


What could be causing this issue? I've tried adding an HDFS Gateway role to the host, but that made no difference.


Re: No FileSystem for scheme: hdfs

Expert Contributor

Hello Conor,


This error occurs at times when the classpath to the Hadoop jars isn't correct. I would also request you to please verify that the hbase.rootdir URL is fully qualified (i.e. starts with hdfs://) and is correct.
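For reference, a fully qualified hbase.rootdir in hbase-site.xml looks like the following; the NameNode host and port here are placeholders, so substitute your cluster's values:

```xml
<!-- hbase-site.xml: the value must carry the hdfs:// scheme explicitly -->
<!-- namenode.example.com:8020 is a placeholder for your NameNode address -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://namenode.example.com:8020/hbase</value>
</property>
```

If the scheme is missing (e.g. a bare path), HBase cannot resolve which FileSystem implementation to load, which produces exactly this kind of error.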



Re: No FileSystem for scheme: hdfs

Master Guru
Assuming you are running CDH via CM (given you talk of Gateways), this ideally shouldn't happen on a new setup.

I can think of a couple of reasons, but it depends on the mode of installation you are using.

If you are using parcels, ensure that no /usr/lib/hadoop* directories exist anymore on the machine. Their existence may otherwise confuse the classpath-automating scripts into not finding all the relevant jars required for the "hdfs://" scheme service discovery.

What are your outputs for the commands "hadoop classpath" and "ls -ld /opt/cloudera/parcels/CDH"?
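As a quick sketch, the leftover-directory check above can be scripted like this; the paths are the standard packaged-install and parcel locations, so adjust them for your hosts:

```shell
# check_stale_hadoop_dirs: report packaged-install dirs that can shadow parcel jars
# (/usr/lib/hadoop* is the standard RPM/deb install location)
check_stale_hadoop_dirs() {
    if ls -d /usr/lib/hadoop* >/dev/null 2>&1; then
        echo "found /usr/lib/hadoop* directories -- possible classpath conflict"
    else
        echo "no /usr/lib/hadoop* directories -- OK"
    fi
}

check_stale_hadoop_dirs

# Also confirm the parcel symlink points at the active CDH version
ls -ld /opt/cloudera/parcels/CDH 2>/dev/null || echo "no CDH parcel symlink found"
```

If the first check reports a conflict, remove or relocate the stale packaged-install directories before retrying.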

Re: No FileSystem for scheme: hdfs

New Contributor

Hello Harsh,


I ran into the same problem as the OP. I found no /usr/lib/hadoop directories on the machine.


The output of hadoop classpath is



The output of ls -ld /opt/cloudera/parcels/CDH is 

/opt/cloudera/parcels/CDH -> CDH-5.12.0-1.cdh5.12.0.p0.29


When running Spark jobs, I am able to work around this issue by adding /opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/jars/hadoop-hdfs-2.6.0-cdh5.12.0.jar to the --jars flag of spark-submit. Hence, I think that for some reason the jar is not being loaded into the dependencies automatically by Cloudera Manager. Would you know of a fix for this?

Re: No FileSystem for scheme: hdfs

Master Guru
A few checks:

- Does the host where you invoke spark-submit carry a valid Spark Gateway role, with deployed configs under /etc/spark/conf/? There's also a classpath file under that location, which you may want to check to see if it includes all HDFS and YARN jars.
- Do you bundle any HDFS/YARN project jars in your Spark App jar (such as a fat-jar assembly)? You may want to check the version matches with what is on the cluster classpath.
- Are there any global environment variables (run 'env' to check) that end in or carry 'CLASSPATH' in their name? Try unsetting these and retrying.
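The third check can be sketched as follows; the subshell simulates a stray override (the HADOOP_HDFS_HOME value below is a deliberately bogus example) so the grep has something to find:

```shell
# Run in a subshell so the simulated override doesn't leak into your session
(
    export HADOOP_HDFS_HOME=/opt/hadoop/share/hadoop/hdfs  # assumed bad override
    echo "before unset:"
    env | grep -E 'CLASSPATH|HADOOP_'
    unset HADOOP_HDFS_HOME
    echo "after unset:"
    env | grep -E 'CLASSPATH|HADOOP_' || echo "no overrides left"
)
```

In a real session you would run the `env | grep` pipeline directly, and `unset` anything it reveals before retrying.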

Re: No FileSystem for scheme: hdfs

New Contributor

Hello Harsh,


Thanks for getting back to me. On the checks:


- The host is shown as commissioned with a Spark Gateway role in Cloudera Manager. Under /etc/spark/conf I see, among other files: slaves.template, spark-defaults.conf.template, fairscheduler.xml.template, and spark-defaults.conf.


Is there an explicit classpath file that I should see, or are you referring to the SPARK_DIST_CLASSPATH variable? Should I add hadoop-hdfs-2.6.0-cdh5.12.0.jar to this classpath?


- I don't bundle any project jars in the Spark App. 

- Running 'env' showed no global environment variables that ended in or carried 'CLASSPATH' in their name.


Re: No FileSystem for scheme: hdfs

Master Guru
Thank you for the added info. I notice now that your 'hadoop classpath' oddly does not mention any hadoop-hdfs library paths.

Can you post the output of 'env' and the contents of the environment script under /etc/hadoop/conf/ from the same host where the hadoop classpath output was generated?

CDH scripts auto-add the /opt/cloudera/parcels/CDH/lib/hadoop-hdfs/ paths, unless environment variables such as HADOOP_HDFS_HOME have been overridden to point to an invalid path. The requested output above is to help check that, among the other factors that influence the classpath-building script.
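The fallback behaviour described above can be illustrated with the shell's default-expansion pattern; this mirrors how such wrapper scripts typically resolve the home directory, though the exact CDH script internals may differ:

```shell
# resolve_hdfs_home: use HADOOP_HDFS_HOME if set, else the parcel default
resolve_hdfs_home() {
    echo "${HADOOP_HDFS_HOME:-/opt/cloudera/parcels/CDH/lib/hadoop-hdfs}"
}

unset HADOOP_HDFS_HOME
resolve_hdfs_home        # falls back to the parcel path

HADOOP_HDFS_HOME=/usr/local/hadoop-hdfs   # a stale override, e.g. from /etc/environment
resolve_hdfs_home        # the override wins, so the parcel jars are never found
```

With the variable unset, the parcel path is used; with a stale override in place, the bad path wins and the hadoop-hdfs jars silently drop out of the classpath.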

Re: No FileSystem for scheme: hdfs

New Contributor

Hey Harsh, 


Here is the requested info:



PS1=(py27) \[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$
CONDA_PS1_BACKUP=\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$
LESSOPEN=| /usr/bin/lesspipe %s
LESSCLOSE=/usr/bin/lesspipe %s %s



# Prepend/Append plugin parcel classpaths

if [ "$HADOOP_USER_CLASSPATH_FIRST" = 'true' ]; then

export HADOOP_MAPRED_HOME=$( ([[ ! '/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce' =~ CDH_MR2_HOME ]] && echo /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce ) || echo ${CDH_MR2_HOME:-/usr/lib/hadoop-map$
export YARN_OPTS="-Xmx825955249 $YARN_OPTS"



Re: No FileSystem for scheme: hdfs

Master Guru
Thank you,

Please try 'unset HADOOP_HDFS_HOME' and retry your command(s), without including the hadoop-hdfs jars this time. Does it succeed?

Can you figure out who/what is setting the HADOOP_HDFS_HOME env-var in your user session? It must not be set manually, as CDH scripts set it to the correct path on their own. You can check .bashrc/.bash_profile to start with, perhaps.
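A quick way to hunt for the setter is to grep the usual startup files. The demo below uses a temporary file standing in for /etc/environment, with an assumed offending line:

```shell
# Stand-in for /etc/environment; the HADOOP_HDFS_HOME line is an assumed example
tmpfile=$(mktemp)
echo 'HADOOP_HDFS_HOME=/usr/local/hadoop' > "$tmpfile"

# In a real session you would grep ~/.bashrc, ~/.bash_profile, /etc/profile.d/*,
# and /etc/environment the same way
grep -Hn 'HADOOP_HDFS_HOME' "$tmpfile" && echo "remove this line, then log in again"

rm -f "$tmpfile"
```

Any hit from the grep shows the file and line number where the variable is being forced into the session.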

Re: No FileSystem for scheme: hdfs

New Contributor

Hello Harsh,


Running 'unset HADOOP_HDFS_HOME' did the trick! I can now run spark-submit without including the hadoop-hdfs jar, and can also run 'hadoop fs -ls' in the local terminal to view the HDFS directories.


The problem was in my /etc/environment file, which included a line setting HADOOP_HDFS_HOME. I think I must have inserted it while following some installation guide, but it was the cause of this issue. Removing that line from /etc/environment permanently fixes the issue: I can open a new terminal and run spark-submit without running 'unset HADOOP_HDFS_HOME' first. Thank you so much for helping me fix this!