Member since: 09-24-2015
144 Posts · 72 Kudos Received · 8 Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1311 | 08-15-2017 08:15 AM
 | 6141 | 01-24-2017 06:58 AM
 | 1610 | 08-03-2016 06:45 AM
 | 2912 | 06-01-2016 10:08 PM
 | 2495 | 04-07-2016 10:30 AM
03-23-2017 12:08 AM
[root@node1 ~]# curl -i --negotiate -u : 'http://node1.localdomain:50070/webhdfs/v1/tmp/?op=LISTSTATUS'
HTTP/1.1 401 Authentication required
Cache-Control: must-revalidate,no-cache,no-store
Date: Thu, 23 Mar 2017 00:05:33 GMT
Pragma: no-cache
Date: Thu, 23 Mar 2017 00:05:33 GMT
Pragma: no-cache
Content-Type: text/html; charset=iso-8859-1
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; HttpOnly
Content-Length: 1408
Server: Jetty(6.1.26.hwx)
HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism level: Checksum failed)
Cache-Control: must-revalidate,no-cache,no-store
Date: Thu, 23 Mar 2017 00:05:33 GMT
Pragma: no-cache
Date: Thu, 23 Mar 2017 00:05:33 GMT
Pragma: no-cache
Content-Type: text/html; charset=iso-8859-1
Set-Cookie: hadoop.auth=; Path=/; HttpOnly
Content-Length: 1532
Server: Jetty(6.1.26.hwx)
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<title>Error 403 GSSException: Failure unspecified at GSS-API level (Mechanism level: Checksum failed)</title>
</head>
<body><h2>HTTP ERROR 403</h2>
<p>Problem accessing /webhdfs/v1/tmp/. Reason:
<pre> GSSException: Failure unspecified at GSS-API level (Mechanism level: Checksum failed)</pre></p><hr /><i><small>Powered by Jetty://</small></i><br/>
<br/>
<br/>
...
I'm getting this. Also, if I use a delegation token it works, but a normal user wouldn't know how to get their own delegation token... 😞
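For reference, a delegation token can be fetched over WebHDFS itself. A sketch, reusing the hostnames from the example above and assuming a valid Kerberos TGT (i.e. `kinit` already done):

```
# 1. Ask the NameNode directly (bypassing the proxy) for a delegation token:
curl -s --negotiate -u : \
  'http://node2.localdomain:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&renewer=hdfs'
# The JSON response contains {"Token":{"urlString":"..."}}.

# 2. Reuse that token through the load balancer; no SPNEGO round trip is needed:
curl -s 'http://node1.localdomain:50070/webhdfs/v1/tmp/?op=LISTSTATUS&delegation=<urlString-from-step-1>'
```

The `<urlString-from-step-1>` placeholder is the opaque token string from the first response; the token only stays valid until it expires or is cancelled.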
03-22-2017 10:30 PM
1 Kudo
Has anyone made HAProxy work with Kerberized WebHDFS for HA?
I've been trying, but couldn't make it work.
Now I'm testing with the simplest haproxy.cfg, like below:
frontend main *:50070
    default_backend app

backend app
    server node2 node2.localdomain:50070 check
Also, spnego.service.keytab on the NameNode is:
[root@node2 keytabs]# klist -k spnego.service.keytab
Keytab name: FILE:spnego.service.keytab
KVNO Principal
---- --------------------------------------------------------------------------
1 HTTP/node1.localdomain@HO-UBU02
1 HTTP/node1.localdomain@HO-UBU02
1 HTTP/node1.localdomain@HO-UBU02
1 HTTP/node1.localdomain@HO-UBU02
1 HTTP/node2.localdomain@HO-UBU02
1 HTTP/node2.localdomain@HO-UBU02
1 HTTP/node2.localdomain@HO-UBU02
1 HTTP/node2.localdomain@HO-UBU02
And I'm getting "HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism level: Checksum failed)".
Or, which tool/software would you use for WebHDFS with Kerberos for HA, if not Knox and not hadoop-httpfs?
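One thing that may be worth checking (a sketch only, not verified on this cluster): the merged keytab above already holds both HTTP/node1 and HTTP/node2 entries, but the NameNode's SPNEGO filter must be told to load all of them rather than just HTTP/_HOST. Hadoop's authentication handler accepts `*` as the principal for exactly that, e.g. in hdfs-site.xml:

```
<!-- Sketch: hdfs-site.xml on the NameNode, assuming the merged spnego keytab shown above -->
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <!-- "*" = log in with every HTTP/... principal found in the keytab -->
  <value>*</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/etc/security/keytabs/spnego.service.keytab</value>
</property>
```

With only HTTP/node2 loaded, a client that obtained a ticket for HTTP/node1 (the proxy's hostname) fails decryption on the NameNode, which is consistent with the "Checksum failed" GSSException.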
Labels:
- Apache Hadoop
03-16-2017 09:29 PM
> If you want to retain the history
After taking a backup, how could I restore?
03-16-2017 09:27 PM
Maybe:
su -l <FALCON_USER>
cd /usr/hdp/current/falcon-server
03-06-2017 12:30 AM
Hi @dvillarreal, I'm just wondering: do I need to use a NameNode service ID for the NAMENODE role to use WebHDFS?
02-19-2017 09:31 PM
Do I need to use "*" to increase memlock, or could I use "hdfs" instead?
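For what it's worth, memlock entries in /etc/security/limits.conf are per-user, so a sketch scoped to the hdfs user (assuming the DataNode runs as hdfs, rather than the "*" wildcard) would look like:

```
# /etc/security/limits.conf — sketch, assuming the DataNode process runs as "hdfs"
hdfs    soft    memlock    unlimited
hdfs    hard    memlock    unlimited
```

A "*" entry applies to every user, which is broader than needed; the limit only has to cover whichever account actually locks memory (e.g. for HDFS caching).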
02-14-2017 03:04 AM
One question please:
-----------------------
# HDP QA cluster
kadmin.local: addprinc krbtgt/HDPDQA.QA.COM@HDPDEV.QA.COM
-----------------------
Is the above needed? Or correct?
02-07-2017 09:44 PM
Yes, it was my typo. Thanks!