
Configuring ambari views on Kerberized Cluster


Hi Folks,

In the Kerberized cluster, we integrated AD for Ambari authentication. Using the AD users, I am able to log in to Ambari, and by default it lands on the views. But when I click any of the views, I see an error.

500 Authentication required

Authentication required
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$

While configuring the Files view, here are the properties I used:


WebHDFS Username = ${username}

WebHDFS Authorization = auth=KERBEROS;proxyuser=admin

Cluster Configuration

Set to the cluster's HDFS and NameNode details.

After Kerberization I created a user "ambari-user/

I also created a keytab and copied it to the Ambari Server machine.

I stopped the Ambari Server and then ran:

$ ambari-server setup-security

I specified the keytab of the ambari-user principal (newly created in the KDC) and started the Ambari Server.
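The principal/keytab flow above can be sketched as follows. This is a hypothetical outline, not the exact commands from this cluster: the realm EXAMPLE.COM, the keytab path, and the principal name are assumptions you would adjust to your own KDC.

```shell
# 1. On the KDC host, create the Ambari principal and export its keytab
#    (names/paths are placeholders -- use your own realm and conventions):
kadmin.local -q "addprinc -randkey ambari-user@EXAMPLE.COM"
kadmin.local -q "xst -k /etc/security/keytabs/ambari.keytab ambari-user@EXAMPLE.COM"

# 2. Copy the keytab to the Ambari Server host, then configure Ambari for Kerberos:
ambari-server stop
ambari-server setup-security   # choose the Kerberos JAAS option and supply
                               # the principal and keytab path when prompted
ambari-server start
```

These commands require a working KDC and Ambari installation, so they are meant as a checklist rather than a copy-paste script.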

When I try to access the Ambari view, I get the above error.

Did anyone face a similar issue?

I am following the HDP documentation section "Configuring Ambari User Views with a Secure Cluster":




@Darpan Patel

If the cluster your views will communicate with is Kerberos-enabled, you need to configure the Ambari Server instance(s) for Kerberos and be sure to configure the views to work with Kerberos.



@Neeraj Sabharwal, @Eric Walk


Some comments suggest that Ambari views have issues in HA setups.

Are there limitations of the Pig and Hive Ambari views that prevent them from working with an HDP cluster in High Availability? Could you please confirm?

@Darpan Patel This thread is getting off track from the original question. I don't see HA support for Pig and Hive yet. Please accept one of the answers to close the thread if any of them did help.



@Neeraj Sabharwal

I tried configuring the Hive/Pig views as per the documentation.

If you confirm that the Pig/Hive views are not supported in a Kerberized cluster with NN High Availability, then I will close the thread 🙂

Thank you very much.


So I had a bunch of trouble with these; here are some things to note:

  1. When creating the view in Ambari, don't use the "Local Ambari Managed Cluster" option; always use the custom option when you have a Kerberized cluster.
  2. Definitely read the instructions carefully (i.e. this one: per @Neeraj Sabharwal).
  3. Stop Ambari Server, do a kdestroy for the user ambari-server runs as, do a kinit for the ambari user using its proper keytab as the ambari Linux user, then start ambari-server again. Do this procedure each time you restart Ambari Server.
  4. For the Pig view, there was a known issue where you needed to add ,/usr/hdp/${hdp.version}/hive/lib/hive-common.jar to your templeton.libjars for WebHCat ( Check your Ambari version...
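Step 3 above can be sketched as a small restart procedure. The Linux user name "ambari", the keytab path, and the principal are assumptions; substitute whatever your cluster actually uses.

```shell
# Restart cycle for Ambari Server on a Kerberized cluster (per step 3 above).
# "ambari" is the assumed Linux user Ambari Server runs as; adjust to your setup.
ambari-server stop
su - ambari -c 'kdestroy'
su - ambari -c 'kinit -kt /etc/security/keytabs/ambari.keytab ambari-user@EXAMPLE.COM'
su - ambari -c 'klist'        # sanity check: the new ticket should be listed
ambari-server start
```

Since this depends on a live Kerberos environment, treat it as an outline of the procedure rather than a tested script.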

@Eric Walk Thank you for sharing these details. @jeff @Paul Codding


No worries. I hope some of these things have been fixed since I went through this back in September (#4 should be resolved in Ambari 2.1.2). The kdestroy/kinit thing was definitely strange; I never did work out why that was needed.


Thanks will check and update in a few hours. 🙂


@Eric Walk, @Neeraj Sabharwal

I could access the Files view but am still facing issues with Pig and Hive. I followed the documentation steps for Pig/Hive as well.

While I am trying to create a new script in Pig, I get an error referring to the HA nameservice hahdfs.

For Hive: Client cannot authenticate via:[TOKEN, KERBEROS];


@Darpan Patel

I haven't tried setting that up in a NameNode HA environment yet, but it seems that it is trying to resolve the NN service name in DNS and failing.

As for the Hive error, I'd suggest stopping ambari-server, doing a kdestroy for the user that ambari-server runs as, and a kinit as the ambari-server user before starting it again.


@Eric Walk

For Hive, as per your suggestion: I stopped Ambari, did a kdestroy, did a kinit with the ambari-server keytab, and then tried accessing the Hive page. But I still see the same error.

 Failed on local exception: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "gateway/"; destination host is: "NameNode1_Host":8020;

 H020 Could not establish connecton to gateway_Host:10000: org.apache.thrift.transport.TTransportException: Connection refused: 


@Darpan Patel

I would double check those host names and that the ports are open.

Hi @Darpan Patel, great content. I have given you a few points as a reward.


Thanks Mark.


@Darpan Patel

The Pig view doesn't seem to support NN HA. We encountered issues with Pig during our recent upgrade.

To fix this, we created two Pig views, one for each NameNode.

+@Predrag Minovic


@Hemant Kumar @Predrag Minovic

I think this is not true for a non-Kerberized cluster. I remember configuring the Pig view for an HA-enabled cluster on HDP 2.3, and it was working fine. Though after Kerberization I did not check the Pig views; yesterday when I checked, all of them are breaking.

@Darpan Patel I'm not sure you have set your Ambari principal correctly. If you use:

WebHDFS Authorization: auth=KERBEROS;proxyuser=admin

Then you need an Ambari principal called admin/

However, you said that you created: ambari-user/

Make sure that proxyuser name is matching the principal's user name. Then, you also need to add the following properties to your custom core-site.xml (assuming the proxyuser name is "admin") and restart HDFS.
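For reference, the standard Hadoop proxyuser properties for a proxyuser named "admin" look like this in core-site.xml (the wildcard values are the permissive defaults often used for views; narrow them to specific groups and hosts if your security policy allows):

```xml
<!-- Allow the "admin" principal to impersonate other users via WebHDFS -->
<property>
  <name>hadoop.proxyuser.admin.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.admin.hosts</name>
  <value>*</value>
</property>
```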


Also, to run Pig view you need to add webhcat.proxyuser.admin.groups=* and webhcat.proxyuser.admin.hosts=* to your webhcat-site.xml, and restart Hive. This should be enough to have your views running.
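The corresponding webhcat-site.xml entries mentioned above would look like this (again assuming the proxyuser name is "admin"):

```xml
<!-- Allow the "admin" principal to impersonate other users via WebHCat (Templeton) -->
<property>
  <name>webhcat.proxyuser.admin.groups</name>
  <value>*</value>
</property>
<property>
  <name>webhcat.proxyuser.admin.hosts</name>
  <value>*</value>
</property>
```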

Regarding the other view settings, as mentioned by others, use custom settings and set all fields referring to the latest documentation. It's also a good idea to switch, if you can, to the latest version of Ambari (2.2 was released yesterday). If your NN is configured for HA, then in the Files and Hive views set:

WebHDFS FileSystem URI = webhdfs://nnhalabel:50070 where nnhalabel is the logical name of your NN.

We found that this setting doesn't work for the Pig view, as @Hemant Kumar said. Finally, to be sure that views support NN HA, you can cause a failover of the NNs using, for example, the "haadmin -failover" command. We haven't tested Pig view support for NN HA in a non-Kerberized cluster.
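A failover test along those lines might look like this. The NameNode IDs nn1/nn2 are assumptions; check what your nameservice actually defines before running anything.

```shell
# Discover the configured NameNode IDs for your nameservice (logical name assumed
# here to be "nnhalabel" as in the example above):
hdfs getconf -confKey dfs.ha.namenodes.nnhalabel

# Trigger a controlled failover from nn1 to nn2, then confirm the new active NN:
hdfs haadmin -failover nn1 nn2
hdfs haadmin -getServiceState nn2
```

If the views keep working after the active NameNode changes, they are honoring the HA nameservice rather than a single host.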

I hope this helps.


Thanks @Predrag Minovic

Indeed this is quite detailed. I have a user ambariserver and principal ambariserver/

I also verified that the following two properties are added in the custom core-site:


For the Pig/Hive views, I've added the following two properties in webhcat-site.xml:


Accessing the Hive view, we see this error:

H020 Could not establish connecton to HiveServer2_HOST:10000:org.apache.thrift.transport.TTransportException