I had the same issue with Spark2 and HDP3.1, using Isilon/OneFS as storage instead of HDFS.
The OneFS service management pack doesn't provide configuration for some of the HDFS parameters that Spark2 expects (they aren't available at all in Ambari), such as dfs.datanode.kerberos.principal. Without these parameters, the Spark2 History Server may fail to start and report errors such as "Failed to specify server's principal name".
I added the following properties to OneFS under Custom hdfs-site:
dfs.datanode.kerberos.principal=hdfs/_HOST@&lt;MY REALM&gt;
dfs.datanode.keytab.file=/etc/security/keytabs/hdfs.service.keytab
dfs.namenode.kerberos.principal=hdfs/_HOST@&lt;MY REALM&gt;
dfs.namenode.keytab.file=/etc/security/keytabs/hdfs.service.keytab
This resolved the initial error. Thereafter I was getting an error of the following form:
Server has invalid Kerberos principal: hdfs/&lt;isilon hostname&gt;@&lt;ISILON REALM&gt;, expecting: hdfs/&lt;isilon hostname&gt;@&lt;MY REALM&gt;
This was caused by cross-realm authentication. I resolved it by adding the setting below to Custom hdfs-site:
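The original answer doesn't actually show the property. As an assumption on my part (not confirmed by the answer above), the usual fix for this class of "Server has invalid Kerberos principal" error is to relax the principal-matching pattern on the client side:

```
# Assumed setting, added under Custom hdfs-site; "*" accepts server
# principals from any realm, so tighten the pattern if you can.
dfs.namenode.kerberos.principal.pattern=*
```

Note that "*" disables the realm check entirely; a pattern scoped to your trusted realms is safer if cross-realm trust is configured.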
(Reposting my answer from https://stackoverflow.com/questions/35325720/connecting-to-kerberrized-hdfs-java-lang-illegalargumen... )
In the OP's case, it may be that the hdfs-site file needs to be available when trying to connect to HBase.
If I recall correctly, some HBase clients (such as the NiFi processors) need the Hadoop configuration files core-site.xml and hdfs-site.xml to be specified explicitly. If those files can't be found, or are missing the properties above, the client can fail with the same error.
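As a sketch of what such a client-side file might contain (the realm and keytab paths here are placeholders, not values from the original answer), the hdfs-site.xml handed to the client would carry the same Kerberos properties that were set in Ambari above:

```xml
<!-- Sketch of a client-side hdfs-site.xml fragment for a Kerberized
     HDFS/OneFS endpoint. MY.REALM and the keytab path are placeholders;
     adjust them to match your cluster. -->
<configuration>
  <property>
    <name>dfs.namenode.kerberos.principal</name>
    <value>hdfs/_HOST@MY.REALM</value>
  </property>
  <property>
    <name>dfs.datanode.kerberos.principal</name>
    <value>hdfs/_HOST@MY.REALM</value>
  </property>
  <property>
    <name>dfs.namenode.keytab.file</name>
    <value>/etc/security/keytabs/hdfs.service.keytab</value>
  </property>
</configuration>
```

In NiFi, for example, these files are typically referenced via the "Hadoop Configuration Files" property of the HBase client service, so the processor picks up the principals without any code changes.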