I could access the Files view, but I am still facing issues with Pig and Hive, even though I followed the documentation steps for Pig/Hive as well.
When I try to create a new script in the Pig view, I get the following error.
java.net.UnknownHostException: hahdfs
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS];
Haven't tried setting that up in a NameNode HA environment yet, but it seems that it is trying to resolve the NN service name in DNS and failing.
As for the Hive error, I'd suggest stopping ambari-server, doing a kdestroy for the user as which ambari-server runs and a kinit as the ambari-server user before starting it again.
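The restart cycle described above might look like the following sketch; the keytab path and principal are illustrative assumptions, so substitute the ones from your environment:

```shell
# Stop Ambari, clear the cached ticket, re-authenticate, then restart.
# Keytab path and principal below are examples only -- adjust to your setup.
ambari-server stop
kdestroy                                      # run as the user ambari-server runs as
kinit -kt /etc/security/keytabs/ambari.keytab \
      ambari-user/ambari-Host_name_here@KDCRealm.com
klist                                         # confirm a valid TGT before restarting
ambari-server start
```
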
For Hive, as per your suggestion: I stopped Ambari, ran kdestroy, ran kinit with the ambari-server keytab, and then tried accessing the Hive page. But I still see the same error.
Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "gateway/192.168.1.8"; destination host is: "NameNode1_Host":8020;
H020 Could not establish connection to gateway_Host:10000: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
@Darpan Patel I'm not sure whether you have set your Ambari principal correctly. If you use:
WebHDFS Authorization: auth=KERBEROS;proxyuser=admin
Then you need Ambari principal called admin/ambari-Host_name_here@KDCRealm.com
However, you said that you created: ambari-user/ambari-Host_name_here@KDCRealm.com
Make sure the proxyuser name matches the principal's user name. Then you also need to add hadoop.proxyuser.admin.groups=* and hadoop.proxyuser.admin.hosts=* to your custom core-site.xml (assuming the proxyuser name is "admin") and restart HDFS.
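In XML form, the core-site.xml entries would look like this (again assuming the proxyuser name is "admin"; the wildcards can be narrowed if you want tighter control):

```xml
<!-- custom core-site.xml: let the "admin" proxyuser impersonate view users -->
<property>
  <name>hadoop.proxyuser.admin.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.admin.hosts</name>
  <value>*</value>
</property>
```
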
Also, to run the Pig view you need to add webhcat.proxyuser.admin.groups=* and webhcat.proxyuser.admin.hosts=* to your webhcat-site.xml, and restart Hive. This should be enough to get your views running.
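The corresponding webhcat-site.xml fragment would look like this (same "admin" proxyuser assumption as above):

```xml
<!-- webhcat-site.xml: allow "admin" to proxy through WebHCat -->
<property>
  <name>webhcat.proxyuser.admin.groups</name>
  <value>*</value>
</property>
<property>
  <name>webhcat.proxyuser.admin.hosts</name>
  <value>*</value>
</property>
```
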
Regarding the other view settings, as mentioned by others, use custom settings and fill in all fields per the latest documentation. It's also a good idea to switch, if you can, to the latest version of Ambari, 126.96.36.199 (though 2.2 was released yesterday). If your NN is configured for HA, then in the Files and Hive views set:
WebHDFS FileSystem URI = webhdfs://nnhalabel:50070, where nnhalabel is the logical name of your NN.
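For the logical name to resolve, the host running the view needs the client-side HA settings in hdfs-site.xml, roughly like the following sketch (the nameservice matches the URI above, but the NameNode IDs and hostnames here are illustrative):

```xml
<!-- illustrative hdfs-site.xml HA client config for nameservice "nnhalabel" -->
<property>
  <name>dfs.nameservices</name>
  <value>nnhalabel</value>
</property>
<property>
  <name>dfs.ha.namenodes.nnhalabel</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.http-address.nnhalabel.nn1</name>
  <value>NameNode1_Host:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.nnhalabel.nn2</name>
  <value>NameNode2_Host:50070</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.nnhalabel</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

With this in place, clients resolve webhdfs://nnhalabel by trying the configured NameNodes rather than looking the logical name up in DNS, which also explains the UnknownHostException seen earlier when such config is missing.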
We found that in 188.8.131.52 this setting doesn't work for the Pig view, as @Hemant Kumar said. Finally, to be sure that the views support NN HA, you can trigger a failover of the NNs using, for example, the "haadmin -failover" command. Regarding Pig view support for NN HA in a non-Kerberized cluster, we haven't tested that.
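A failover test could look like this; nn1/nn2 are placeholder NameNode IDs, so check dfs.ha.namenodes.&lt;nameservice&gt; for the real ones in your cluster:

```shell
# Force a graceful failover from nn1 to nn2, then retry the views.
hdfs haadmin -failover nn1 nn2
hdfs haadmin -getServiceState nn1   # expect "standby"
hdfs haadmin -getServiceState nn2   # expect "active"
```
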
I hope this helps.