
Failed to specify server's Kerberos principal name when connecting to HBase in a Kerberos cluster



While connecting to HBase in a Kerberos cluster, I'm getting the error below:

Caused by: Couldn't setup connection for hbase/ to null
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$
    at Method)
    at
    at
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.handleSaslConnectionFailure(
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(
    at
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(
    at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(
    at
    at
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(
    ... 4 more
Caused by: Failed to specify server's Kerberos principal name
    at<init>(
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$
    at Method)
    at
    at
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(
    ... 17 more
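For context: as far as I know, this particular message means the HBase client could not find the server principal names in its configuration, so they resolved to null (hence "to null" in the first cause). The principals are normally supplied by the client-side hbase-site.xml. A hypothetical minimal fragment, assuming the EXAMPLE.COM realm from the krb5.conf below and the default hbase service user, would be:

```xml
<!-- sketch of a client-side hbase-site.xml; realm and service user are
     assumptions based on the krb5.conf in this post -->
<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hbase.master.kerberos.principal</name>
  <value>hbase/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hbase.regionserver.kerberos.principal</name>
  <value>hbase/_HOST@EXAMPLE.COM</value>
</property>
```

The `_HOST` token is expanded by Hadoop's security layer to the fully qualified hostname of the server being contacted.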

My krb5.conf is:



[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = EXAMPLE.COM
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5

[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log

[realms]
EXAMPLE.COM = {
  admin_server =
  kdc =
}


Please help me with this.



I had the same issue with Spark2 and HDP3.1, using Isilon/OneFS as storage instead of HDFS.

The OneFS service management pack doesn't provide configuration for some of the HDFS parameters that are expected by Spark2 (they aren't available at all in Ambari), such as dfs.datanode.kerberos.principal. Without these parameters Spark2 HistoryServer may fail to start and report errors such as "Failed to specify server's principal name".

I added the following properties to OneFS under Custom hdfs-site:

dfs.datanode.kerberos.principal=hdfs/_HOST@<MY REALM>
dfs.datanode.keytab.file=/etc/security/keytabs/hdfs.service.keytab
dfs.namenode.kerberos.principal=hdfs/_HOST@<MY REALM>
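For clients that read hdfs-site.xml from disk rather than from Ambari-managed configs, the same properties would presumably be written like this (`<MY REALM>` is still a placeholder for the actual Kerberos realm):

```xml
<!-- sketch: the same three properties in hdfs-site.xml form -->
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>hdfs/_HOST@<MY REALM></value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/etc/security/keytabs/hdfs.service.keytab</value>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hdfs/_HOST@<MY REALM></value>
</property>
```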

This resolved the initial error. Thereafter I was getting an error of the following form:

Server has invalid Kerberos principal: hdfs/<isilon>, expecting: hdfs/

This was related to cross-realm authentication. It was resolved by adding the setting below under Custom hdfs-site:


(Reposting my answer from )


In the OP's case, it might be that the hdfs-site file needs to be available when trying to connect to HBase.
If I recall correctly, some HBase clients (such as the NiFi processors) need the Hadoop configuration files core-site.xml and hdfs-site.xml to be specified. If those files can't be found, or don't contain the properties above, the same error can occur.
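As a concrete example (if memory serves, NiFi's HBase client controller service exposes a "Hadoop Configuration Files" property for exactly this), the value is a comma-separated list of the client config files; the paths below are hypothetical and depend on where the cluster configs are deployed:

```
# hypothetical paths - adjust to the actual config locations on the NiFi host
Hadoop Configuration Files:
  /etc/hbase/conf/hbase-site.xml,/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
```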