WARN : Your Hadoop installation does not include the StreamCapabilities class from HDFS-11644

New Contributor

Hi all,

I've set up a Kerberized HBase cluster on top of HDFS. When I start HBase on my master node, I get this warning:

DEBUG [Thread-15] util.CommonFSUtils: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder not available, will not use builder API for file creation.

WARN  [Thread-15] util.CommonFSUtils: Your Hadoop installation does not include the StreamCapabilities class from HDFS-11644, so we will skip checking if any FSDataOutputStreams actually support hflush/hsync. If you are running on top of HDFS this probably just means you have an older version and this can be ignored. If you are running on top of an alternate FileSystem implementation you should manually verify that hflush and hsync are implemented; otherwise you risk data loss and hard to diagnose errors when our assumptions are violated.

DEBUG [Thread-15] util.CommonFSUtils: The first request to check for StreamCapabilities came from this stacktrace.
java.lang.ClassNotFoundException: org.apache.hadoop.fs.StreamCapabilities
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.util.CommonFSUtils$StreamCapabilities.<clinit>(CommonFSUtils.java:986)
    at org.apache.hadoop.hbase.util.CommonFSUtils.hasCapability(CommonFSUtils.java:1024)
    at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:1082)
    at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(WALProcedureStore.java:421)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.init(ProcedureExecutor.java:611)
    at org.apache.hadoop.hbase.master.HMaster.createProcedureExecutor(HMaster.java:1407)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:853)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2241)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:567)
    at java.lang.Thread.run(Thread.java:748)

I'm not really sure what it means. I'm running Hadoop 3.1.2 and HBase 2.0.5.

2 Replies

You are running against a version of Hadoop which does not have the expected classes that HBase wants to check. I find it very unlikely that you are using Hadoop 3.1.2 on the HBase classpath.

HBase relies on very specific semantics from the underlying filesystem to guarantee no data loss. This warning is telling you that HBase failed to make this automatic check, and that you should investigate it to make sure you don't experience data loss going forward.

New Contributor

Thanks for your answer, @Josh Elser.

I remember reading on the HBase web UI that the Hadoop version was 2.7.4, which is weird since I installed 3.1.2.

How can I check the HBase classpath, and what should it be set to?