Member since: 08-15-2016
Posts: 189
Kudos Received: 63
Solutions: 22
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5664 | 01-02-2018 09:11 AM
 | 3003 | 12-04-2017 11:37 AM
 | 2146 | 10-03-2017 11:52 AM
 | 21571 | 09-20-2017 09:35 PM
 | 1603 | 09-12-2017 06:50 PM
01-16-2017
10:05 PM
@Aaron Dossett Hey Aaron, were you using the storm-core HdfsBolt or the Trident API? The Trident one should guarantee the action in the face of failures and crashes. Can you elaborate a bit?
01-12-2017
04:34 PM
@Artem Ervits Thanks Artem, but I actually meant the exact opposite: Storm writing to HDFS, not reading from it.
01-12-2017
04:21 PM
1 Kudo
Hi, I have a question about Storm Trident's exactly-once semantics and how it would behave in the following scenario. Suppose I have a topology that has three outputs to sink to: a Kafka topic, an HBase table and an HdfsBolt. When a Trident batch is written to Kafka and HBase you can have strong guarantees about whether the writes are actually ack'ed or not. But for writes to HDFS you don't have that. So does the HDFS sink offer the very same strong exactly-once guarantee? What would or could be scenarios that result in Trident batches being written twice to HDFS? Or is this a negligible risk? I need to know whether there is any reason to build deduplication logic on the data that lands in HDFS via the Storm bolt.
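For context, wiring the HDFS sink through the Trident API (rather than the plain HdfsBolt) looks roughly like the minimal sketch below, assuming the storm-hdfs Trident classes with their Storm 1.x package names (older releases used different prefixes); the fsUrl, path and field names are placeholders:
import org.apache.storm.hdfs.trident.HdfsState;
import org.apache.storm.hdfs.trident.HdfsStateFactory;
import org.apache.storm.hdfs.trident.HdfsUpdater;
import org.apache.storm.hdfs.trident.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.trident.format.DelimitedRecordFormat;
import org.apache.storm.hdfs.trident.format.FileNameFormat;
import org.apache.storm.hdfs.trident.format.RecordFormat;
import org.apache.storm.hdfs.trident.rotation.FileRotationPolicy;
import org.apache.storm.hdfs.trident.rotation.FileSizeRotationPolicy;
import org.apache.storm.trident.Stream;
import org.apache.storm.trident.TridentState;
import org.apache.storm.trident.state.StateFactory;
import org.apache.storm.tuple.Fields;

public class TridentHdfsSink {
    // Attaches an HDFS sink to a Trident stream via HdfsState.
    // Whether a replayed batch can end up twice in an already-flushed HDFS file
    // is exactly the question raised above.
    public static TridentState attachHdfsSink(Stream stream) {
        Fields hdfsFields = new Fields("key", "value");
        FileNameFormat fileNameFormat = new DefaultFileNameFormat()
                .withPath("/tmp/trident")        // placeholder output path
                .withPrefix("trident")
                .withExtension(".txt");
        RecordFormat recordFormat = new DelimitedRecordFormat().withFields(hdfsFields);
        FileRotationPolicy rotationPolicy =
                new FileSizeRotationPolicy(5.0f, FileSizeRotationPolicy.Units.MB);
        HdfsState.Options options = new HdfsState.HdfsFileOptions()
                .withFileNameFormat(fileNameFormat)
                .withRecordFormat(recordFormat)
                .withRotationPolicy(rotationPolicy)
                .withFsUrl("hdfs://namenode:8020"); // placeholder NameNode URL
        StateFactory factory = new HdfsStateFactory().withOptions(options);
        return stream.partitionPersist(factory, hdfsFields, new HdfsUpdater(), new Fields());
    }
}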
Labels:
- Apache Hadoop
- Apache Storm
01-12-2017
04:06 PM
1 Kudo
Hi, I need to know whether the current file in HDFS that Storm writes to is recognizable as an 'in flight' file. For instance, Flume marks in-flight files with a suffix like <filename>.tmp (or something similar). How does Storm do this? Maybe somebody knows offhand; I hope so, so that I don't have to build a test setup myself. Edit: the final goal is to have a batch-oriented process pick up only completed/closed files.
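For illustration, here is a minimal sketch of the batch-side filter this implies, assuming the writer marks its open files with a recognizable suffix such as '.tmp'; that suffix is an assumption and depends on how the writing bolt actually names in-flight files:
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ClosedFileLister {
    // Suffix assumed to mark files that are still being written;
    // adjust to whatever convention the writer really uses.
    private static final String IN_PROGRESS_SUFFIX = ".tmp";

    // Returns only the files in 'dir' that do not look like in-flight files,
    // so a batch-oriented job can safely pick them up.
    public static List<Path> listClosedFiles(String dir) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        List<Path> closed = new ArrayList<>();
        for (FileStatus status : fs.listStatus(new Path(dir))) {
            if (status.isFile() && !status.getPath().getName().endsWith(IN_PROGRESS_SUFFIX)) {
                closed.add(status.getPath());
            }
        }
        return closed;
    }
}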
Labels:
- Apache Hadoop
- Apache Storm
01-03-2017
10:58 PM
@Ted Yu Thanks, ah yeah, you know, it seems there is always a reason not to upgrade. Maybe this is another reason to contemplate the upgrade.
01-03-2017
10:18 PM
Hi, I read this recent doc on HBase MOB, which is available as of HDP 2.5. It occurred to me that, judging purely by the HBase version shown under Ambari > Stack Versions, there is no difference between the HBase versions shipped with HDP 2.4 and 2.5; both say 1.1.2.
HDP 2.4:
hbase(main):001:0> version
1.1.2.2.4.0.0-169, r61dfb2b344f424a11f93b3f086eab815c1eb0b6a, Wed Feb 10 07:08:51 UTC 2016
HDP 2.5:
hbase(main):001:0> version
1.1.2.2.5.0.0-1245, r53538b8ab6749cbb6fdc0fe448b89aa82495fb3f, Fri Aug 26 01:32:27 UTC 2016
Today I tested that the MOB features are really not shipped with the HDP 2.4 HBase, which is unfortunate since a site wants to use MOB support on an HDP 2.4 cluster. So here is the question: is there much difference between the HBase versions of HDP 2.4 / 2.5? Could the 2.5 HBase jars be transplanted to HDP 2.4 just to use the new MOB features? Or is this 'definitely not recommended' 🙂
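For reference, a minimal sketch of what enabling MOB on a column family looks like through the Java client API, assuming the MOB attribute keys (IS_MOB, MOB_THRESHOLD) described in the MOB doc; on an HDP 2.4 HBase that lacks the MOB code paths these attributes would presumably just be stored without effect:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateMobTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            HTableDescriptor table = new HTableDescriptor(TableName.valueOf("mob_test"));
            HColumnDescriptor family = new HColumnDescriptor("f1");
            // MOB settings as plain column-family attributes; the keys below are
            // an assumption taken from the MOB documentation.
            family.setValue("IS_MOB", "true");
            family.setValue("MOB_THRESHOLD", "102400"); // bytes; larger cells go to MOB files
            table.addFamily(family);
            admin.createTable(table);
        }
    }
}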
Labels:
- Apache HBase
12-30-2016
08:27 PM
Well, eventually I was able to solve all this. I did multiple things, so I don't know exactly which one solved it:
- Installed Mac_OS_X_10.4_10.6_Kerberos_Extras.dmg
- Upgraded Firefox from 49.x to 50.1.0
- Reset the value of 'network.negotiate-auth.trusted-uris' in Firefox about:config to '.field.hortonworks.com'
- Mapped all cluster nodes' short and long FQDNs in the local /etc/hosts, like:
1xx.2x.x3x.220 sg-hdp24-mst6b sg-hdp24-mst6b.field.hortonworks.com
The local Kerberos config at /etc/krb5.conf has to have both REALMS:
[libdefaults]
default_realm = MIT.KDC.COM
[domain_realm]
.field.hortonworks.com = MIT.KDC.COM
field.hortonworks.com = MIT.KDC.COM
[realms]
FIELD.HORTONWORKS.COM = {
admin_server = xxxx.field.hortonworks.com
kdc = ad01.field.hortonworks.com
}
MIT.KDC.COM = {
admin_server = sg-hdp24-mst6b.field.hortonworks.com
kdc = sg-hdp24-mst6b.field.hortonworks.com
}
Both curl and WebHDFS calls from Firefox work now. After such a successful call the local ticket cache looks like this:
$ klist
Credentials cache: API:C1AAF010-41BB-4705-B4FB-239BC06DCF8E
Principal: jk@FIELD.HORTONWORKS.COM
Issued Expires Principal
Dec 30 20:34:42 2016 Dec 31 06:34:42 2016 krbtgt/FIELD.HORTONWORKS.COM@FIELD.HORTONWORKS.COM
Dec 30 20:34:49 2016 Dec 31 06:34:42 2016 krbtgt/MIT.KDC.COM@FIELD.HORTONWORKS.COM
Dec 30 20:34:49 2016 Dec 31 06:34:42 2016 HTTP/sg-hdp24-mst7@MIT.KDC.COM
So now the cross-realm trust MIT --> AD is fully functional for the cluster. One peculiar thing is that SPNEGO auth in Firefox now works just as well for the destination 'http://sg-hdp24-mst7:50070/webhdfs/v1/?op=LISTSTATUS' as for 'http://sg-hdp24-mst7.field.hortonworks.com:50070/webhdfs/v1/?op=LISTSTATUS'. So somehow Firefox figured out it needed to use Kerberos to authenticate even without the domain indicator ('network.negotiate-auth.trusted-uris') matching.
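As an extra programmatic cross-check next to curl and Firefox, here is a minimal sketch using the SPNEGO-aware HTTP client from hadoop-auth (org.apache.hadoop.security.authentication.client.AuthenticatedURL); it assumes a valid ticket is already in the default credential cache and reuses the WebHDFS URL from above:
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.hadoop.security.authentication.client.AuthenticatedURL;

public class WebHdfsSpnegoCheck {
    public static void main(String[] args) throws Exception {
        // WebHDFS LISTSTATUS on the NameNode used in the examples above
        URL url = new URL("http://sg-hdp24-mst7.field.hortonworks.com:50070/webhdfs/v1/?op=LISTSTATUS");
        // AuthenticatedURL performs the SPNEGO round trip with the Kerberos
        // credentials found in the default ticket cache.
        AuthenticatedURL.Token token = new AuthenticatedURL.Token();
        HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);
        System.out.println("HTTP status: " + conn.getResponseCode());
        System.out.println("hadoop.auth token: " + token);
    }
}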
12-29-2016
04:42 PM
3 Kudos
Running a Hadoop client on Mac OS X and connecting to a Kerberized cluster poses some extra challenges.
I suggest using brew, the Mac package manager, to conveniently install the Hadoop package:
$ brew search hadoop
$ brew install hadoop
This will install the latest (Apache) Hadoop distro (2.7.3 at the time of writing). Minor version differences relative to your HDP version will not matter.
You can test the installation by running a quick 'hdfs dfs -ls /'. Without further configuration, a local single-node 'cluster' is assumed.
We now have to point the client to the real HDP cluster. To do so, copy the full contents of the config files below from any HDP node:
Source:
/etc/hadoop/{hdp-version}/0/hadoop-env.sh
/etc/hadoop/{hdp-version}/0/core-site.xml
/etc/hadoop/{hdp-version}/0/hdfs-site.xml
/etc/hadoop/{hdp-version}/0/yarn-site.xml
Target:
/usr/local/Cellar/hadoop/2.7.3/libexec/etc/hadoop/hadoop-env.sh
/usr/local/Cellar/hadoop/2.7.3/libexec/etc/hadoop/core-site.xml
/usr/local/Cellar/hadoop/2.7.3/libexec/etc/hadoop/hdfs-site.xml
/usr/local/Cellar/hadoop/2.7.3/libexec/etc/hadoop/yarn-site.xml
If we now try to access the Kerberized cluster we get an error like below:
Caused by: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:687)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:650)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:737)
at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
at org.apache.hadoop.ipc.Client.call(Client.java:1451)
... 28 more
Sure, we need to kinit first, so we do:
$ kinit test@A.EXAMPLE.COM
test@A.EXAMPLE.COM's password:
$ hdfs dfs -ls /
We still get the same error, so what is going on?
It makes sense to add this extra option (-Dsun.security.krb5.debug=true) to hadoop-env.sh now, to enable Kerberos debug log output:
export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true -Dsun.security.krb5.debug=true ${HADOOP_OPTS}"
Now the debug output provides some clues:
$ hdfs dfs -ls /
Java config name: null
Native config name: /Library/Preferences/edu.mit.Kerberos
Loaded from native config
16/12/29 17:02:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>>>KinitOptions cache name is /tmp/krb5cc_502
>> Acquire default native Credentials
default etypes for default_tkt_enctypes: 23 16.
>>> Found no TGT's in LSA
By default the HDFS client looks for Kerberos tickets at /tmp/krb5cc_502, where '502' is the uid of the relevant user. The other thing to look at is 'Native config name: /Library/Preferences/edu.mit.Kerberos'; this is where your local Kerberos configuration is sourced from. Another valid config source would be '/etc/krb5.conf', depending on your local installation. You can mirror this local config from the /etc/krb5.conf file on any HDP node.
Now, if we look at the default ticket cache on Mac OS X, it seems to point to another location:
$ klist
Credentials cache: API:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXXX
Principal: test@A.EXAMPLE.COM
Issued Expires Principal
Dec 29 17:02:45 2016 Dec 30 03:02:45 2016 krbtgt/A.EXAMPLE.COM@A.EXAMPLE.COM
The pointer 'API:XXXXXX-XXXXX-XXXX-XXXXX' signals Mac OS X's memory-based credential cache for Kerberos. On a *nix distro it would typically say something like 'Ticket cache: FILE:/tmp/krb5cc_502'. The location of the ticket cache can be set via the environment variable KRB5CCNAME (FILE: / DIR: / API: / KCM: / MEMORY:), but that is beyond the scope of this article. This mismatch is why the HDFS client could not find any ticket.
Since the HDFS client looks for the ticket cache at '/tmp/krb5cc_502', we can simply make Mac OS X cache a valid Kerberos ticket there like this:
$ kinit -c FILE:/tmp/krb5cc_502 test@A.EXAMPLE.COM
test@A.EXAMPLE.COM's password:
Or likewise with a keytab:
$ kinit -c FILE:/tmp/krb5cc_502 -kt ~/Downloads/smokeuser.headless.keytab ambari-qa-socgen_shadow@MIT.KDC.COM
Check the ticket cache the same way:
$ klist -c /tmp/krb5cc_502
Credentials cache: FILE:/tmp/krb5cc_502
Principal: test@A.EXAMPLE.COM
Issued Expires Principal
Dec 29 17:31:29 2016 Dec 30 03:31:29 2016 krbtgt/A.EXAMPLE.COM@A.EXAMPLE.COM
If you try to list hdfs again now it should look something like this:
$ hdfs dfs -ls /user
Java config name: null
Native config name: /Library/Preferences/edu.mit.Kerberos
Loaded from native config
16/12/29 17:34:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>>>KinitOptions cache name is /tmp/krb5cc_502
>>>DEBUG <CCacheInputStream> client principal is test@A.EXAMPLE.COM
>>>DEBUG <CCacheInputStream> server principal is krbtgt/A.EXAMPLE.COM@A.EXAMPLE.COM
>>>DEBUG <CCacheInputStream> key type: 18
>>>DEBUG <CCacheInputStream> auth time: Thu Dec 29 17:31:29 CET 2016
>>>DEBUG <CCacheInputStream> start time: Thu Dec 29 17:31:29 CET 2016
>>>DEBUG <CCacheInputStream> end time: Fri Dec 30 03:31:29 CET 2016
>>>DEBUG <CCacheInputStream> renew_till time: Thu Jan 05 17:31:27 CET 2017
>>> CCacheInputStream: readFlags() FORWARDABLE; RENEWABLE; INITIAL; PRE_AUTH;
>>>DEBUG <CCacheInputStream> client principal is test@A.EXAMPLE.COM
>>>DEBUG <CCacheInputStream> server principal is X-CACHECONF:/krb5_ccache_conf_data/fast_avail/krbtgt/A.EXAMPLE.COM@A.EXAMPLE.COM@MIT.KDC.COM
>>>DEBUG <CCacheInputStream> key type: 0
>>>DEBUG <CCacheInputStream> auth time: Thu Dec 29 17:31:21 CET 2016
>>>DEBUG <CCacheInputStream> start time: null
>>>DEBUG <CCacheInputStream> end time: Thu Dec 29 17:31:21 CET 2016
>>>DEBUG <CCacheInputStream> renew_till time: null
>>> CCacheInputStream: readFlags()
>>> KrbCreds found the default ticket granting ticket in credential cache.
>>> Obtained TGT from LSA: Credentials:
client=test@A.EXAMPLE.COM
server=krbtgt/A.EXAMPLE.COM@A.EXAMPLE.COM
authTime=20161229163129Z
startTime=20161229163129Z
endTime=20161230023129Z
renewTill=20170105163127Z
flags=FORWARDABLE;RENEWABLE;INITIAL;PRE-AUTHENT
EType (skey)=18
(tkt key)=18
16/12/29 17:34:30 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
Found ticket for test@A.EXAMPLE.COM to go to krbtgt/A.EXAMPLE.COM@A.EXAMPLE.COM expiring on Fri Dec 30 03:31:29 CET 2016
Entered Krb5Context.initSecContext with state=STATE_NEW
Found ticket for test@A.EXAMPLE.COM to go to krbtgt/A.EXAMPLE.COM@A.EXAMPLE.COM expiring on Fri Dec 30 03:31:29 CET 2016
Service ticket not found in the subject
>>> Credentials acquireServiceCreds: main loop: [0] tempService=krbtgt/MIT.KDC.COM@A.EXAMPLE.COM
default etypes for default_tgs_enctypes: 23 16.
>>> CksumType: sun.security.krb5.internal.crypto.RsaMd5CksumType
>>> EType: sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType
>>> KdcAccessibility: reset
......
....S H O R T E N E D..
......
Found 4 items
drwxrwx--- - ambari-qa hdfs 0 2016-12-19 21:56 /user/ambari-qa
drwxr-xr-x - centos centos 0 2016-11-30 12:07 /user/centos
drwx------ - hdfs hdfs 0 2016-11-29 12:38 /user/hdfs
drwxrwxrwx - j.knulst hdfs 0 2016-12-29 13:40 /user/j.knulst
So directing your Kerberos tickets on Mac OS X to the expected ticket cache with the '-c' switch helps a lot.
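As an alternative to steering the ticket cache, a Java client can also log in straight from a keytab through UserGroupInformation, which sidesteps the cache location entirely. A minimal sketch, assuming a hypothetical principal and keytab path and that the copied core-site.xml/hdfs-site.xml are on the classpath:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabHdfsList {
    public static void main(String[] args) throws Exception {
        // core-site.xml / hdfs-site.xml are picked up from the classpath
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        // Log in from a keytab instead of relying on the ticket cache;
        // principal and keytab path below are placeholders.
        UserGroupInformation.loginUserFromKeytab("test@A.EXAMPLE.COM", "/Users/me/test.keytab");
        FileSystem fs = FileSystem.get(conf);
        for (FileStatus status : fs.listStatus(new Path("/user"))) {
            System.out.println(status.getPath());
        }
    }
}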
12-26-2016
07:22 PM
@Robert Levas I have this version:
$ curl --version
curl 7.51.0 (x86_64-apple-darwin16.0) libcurl/7.51.0 SecureTransport zlib/1.2.8
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL libz UnixSockets
But I think it is not curl, since an earlier test with a keytab exported from HDP and used locally on my laptop just works:
$ kinit -kt ~/Downloads/smokeuser.headless.keytab ambari-qa-socgen_shadow@MIT.KDC.COM
Encryption type arcfour-hmac-md5(23) used for authentication is weak and will be deprecated
$ klist
Credentials cache: API:B0F322D5-5C1F-4EBC-9936-224FF7374B53
Principal: ambari-qa-socgen_shadow@MIT.KDC.COM
Issued Expires Principal
Dec 21 18:39:23 2016 Dec 22 18:39:22 2016 krbtgt/MIT.KDC.COM@MIT.KDC.COM
$ curl -i -v --negotiate -u: sg-hdp24-mst7:50070/webhdfs/v1/?op=LISTSTATUS
* Trying 172.26.233.221...
* TCP_NODELAY set
* Connected to sg-hdp24-mst7 (172.26.233.221) port 50070 (#0)
> GET /webhdfs/v1/?op=LISTSTATUS HTTP/1.1
> Host: sg-hdp24-mst7:50070
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 401 Authentication required
HTTP/1.1 401 Authentication required
< Cache-Control: must-revalidate,no-cache,no-store
Cache-Control: must-revalidate,no-cache,no-store
< Date: Wed, 21 Dec 2016 17:39:39 GMT
Date: Wed, 21 Dec 2016 17:39:39 GMT
< Pragma: no-cache
Pragma: no-cache
< Date: Wed, 21 Dec 2016 17:39:39 GMT
Date: Wed, 21 Dec 2016 17:39:39 GMT
< Pragma: no-cache
Pragma: no-cache
< Content-Type: text/html; charset=iso-8859-1
Content-Type: text/html; charset=iso-8859-1
< WWW-Authenticate: Negotiate
WWW-Authenticate: Negotiate
< Set-Cookie: hadoop.auth=; Path=/; HttpOnly
Set-Cookie: hadoop.auth=; Path=/; HttpOnly
< Content-Length: 1404
Content-Length: 1404
< Server: Jetty(6.1.26.hwx)
Server: Jetty(6.1.26.hwx)
<
* Ignoring the response-body
* Curl_http_done: called premature == 0
* Connection #0 to host sg-hdp24-mst7 left intact
* Issue another request to this URL: 'http://sg-hdp24-mst7:50070/webhdfs/v1/?op=LISTSTATUS'
* Found bundle for host sg-hdp24-mst7: 0x7ff4a8f01430 [can pipeline]
* Re-using existing connection! (#0) with host sg-hdp24-mst7
* Connected to sg-hdp24-mst7 (172.26.233.221) port 50070 (#0)
* Server auth using Negotiate with user ''
> GET /webhdfs/v1/?op=LISTSTATUS HTTP/1.1
> Host: sg-hdp24-mst7:50070
> Authorization: Negotiate YIIDhwYGKwYBBQUCoIIDezCCA3egFTATBgkqhkiG9xIBAgIGBiqFcCsOA6KCA1wE.........
.......
.....WP6BPmSnLg/JXzr+NpYRnOMvrCtXaFrVPKJ2qtiYc2nOAX1hTsEOnJGkL2WHFdKo6/P7OnRLGzYXLrtWHAeL3IbYNM3moXdJRnf23aItsAhk/r6O7H88eSRHtOzd7HFscaAtlmV8Goh8V2JvQ==
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Cache-Control: no-cache
Cache-Control: no-cache
< Expires: Wed, 21 Dec 2016 17:39:39 GMT
Expires: Wed, 21 Dec 2016 17:39:39 GMT
< Date: Wed, 21 Dec 2016 17:39:39 GMT
Date: Wed, 21 Dec 2016 17:39:39 GMT
< Pragma: no-cache
Pragma: no-cache
< Expires: Wed, 21 Dec 2016 17:39:39 GMT
Expires: Wed, 21 Dec 2016 17:39:39 GMT
< Date: Wed, 21 Dec 2016 17:39:39 GMT
Date: Wed, 21 Dec 2016 17:39:39 GMT
< Pragma: no-cache
Pragma: no-cache
< Content-Type: application/json
Content-Type: application/json
< WWW-Authenticate: Negotiate oYH1MIHyoAMKAQChCwYJKoZIhvcSAQICom4EbGBqBgkqhkiG9xIBAgICAG9bMFmgAwIBBaEDAgEPok0wS6ADAgESokQEQtoEqw8cPRBs2EiQdAiNPPzx2wLLLBDzLrwUKneExsT/OopV3GrnqmXPxWeF............
............
...............TRLeDF+OwJKZh2k=
< Set-Cookie: hadoop.auth="u=ambari-qa&p=ambari-qa-socgen_shadow@MIT.KDC.COM&t=kerberos&e=1482377979783&s=MCVcBnMN/AWeNEnqrTk/msgmRrA="; Path=/; HttpOnly
Set-Cookie: hadoop.auth="u=ambari-qa&p=ambari-qa-socgen_shadow@MIT.KDC.COM&t=kerberos&e=1482377979783&s=MCVcBnMN/AWeNEnqrTk/msgmRrA="; Path=/; HttpOnly
< Transfer-Encoding: chunked
Transfer-Encoding: chunked
< Server: Jetty(6.1.26.hwx)
Server: Jetty(6.1.26.hwx)
<
{"FileStatuses":{"FileStatus":[
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16392,"group":"hadoop","length":0,"modificationTime":1480419441661,"owner":"yarn","pathSuffix":"app-logs","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16389,"group":"hadoop","length":0,"modificationTime":1480415609465,"owner":"yarn","pathSuffix":"ats","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16399,"group":"hdfs","length":0,"modificationTime":1480415614093,"owner":"hdfs","pathSuffix":"hdp","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16395,"group":"hdfs","length":0,"modificationTime":1480415613182,"owner":"mapred","pathSuffix":"mapred","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16397,"group":"hadoop","length":0,"modificationTime":1480415620607,"owner":"mapred","pathSuffix":"mr-history","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16606,"group":"hdfs","length":0,"modificationTime":1480428257125,"owner":"hdfs","pathSuffix":"system","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":5,"fileId":16386,"group":"hdfs","length":0,"modificationTime":1482186163272,"owner":"hdfs","pathSuffix":"tmp","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":3,"fileId":16387,"group":"hdfs","length":0,"modificationTime":1480419487982,"owner":"hdfs","pathSuffix":"user","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
]}}
* Curl_http_done: called premature == 0
* Closing connection 0
Could there be a clue in the warning "Encryption type arcfour-hmac-md5(23) used for authentication is weak and will be deprecated"?
12-22-2016
11:06 PM
3 Kudos
Hi, I have set up my Kerberos config on an MIT KDC Kerberized cluster to trust tickets from another AD domain/REALM, following the steps described here. All the steps are implemented. Now almost everything is working as expected:
- I can authenticate (to WebHDFS) on any HDP node with tickets from both REALMS using 'curl --negotiate'
- I can authenticate (to WebHDFS) from my own laptop (which is set up like just one of many computers in the AD network/REALM) with a ticket/keytab from the MIT REALM
But one vital part is not working:
- I can't authenticate to WebHDFS from my own laptop with a ticket from AD. This is arguably the thing you want working in this setup: allowing anyone from the AD REALM to authenticate to HDP via TCP/IP or a browser.
This is what happens when authenticating with a ticket from the AD REALM (outside the cluster):
$ kdestroy
$ klist
klist: krb5_cc_get_principal: No credentials cache file found
$ kinit jk@FIELD.HORTONWORKS.COM
jk@FIELD.HORTONWORKS.COM's password:
$ klist
Credentials cache: API:42960B54-7745-4A95-B397-8FDE981283E4
Principal: jk@FIELD.HORTONWORKS.COM
Issued Expires Principal
Dec 22 00:24:01 2016 Dec 22 10:24:01 2016 krbtgt/FIELD.HORTONWORKS.COM@FIELD.HORTONWORKS.COM
$ curl -i -v --negotiate -u: sg-hdp24-mst7.field.hortonworks.com:50070/webhdfs/v1/?op=LISTSTATUS
* Trying 172.26.233.221...
* TCP_NODELAY set
* Connected to sg-hdp24-mst7.field.hortonworks.com (172.26.233.221) port 50070 (#0)
> GET /webhdfs/v1/?op=LISTSTATUS HTTP/1.1
> Host: sg-hdp24-mst7.field.hortonworks.com:50070
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 401 Authentication required
HTTP/1.1 401 Authentication required
< Cache-Control: must-revalidate,no-cache,no-store
Cache-Control: must-revalidate,no-cache,no-store
< Date: Wed, 21 Dec 2016 23:24:11 GMT
Date: Wed, 21 Dec 2016 23:24:11 GMT
< Pragma: no-cache
Pragma: no-cache
< Date: Wed, 21 Dec 2016 23:24:11 GMT
Date: Wed, 21 Dec 2016 23:24:11 GMT
< Pragma: no-cache
Pragma: no-cache
< Content-Type: text/html; charset=iso-8859-1
Content-Type: text/html; charset=iso-8859-1
* gss_init_sec_context() failed: An unsupported mechanism was requested. unknown mech-code 0 for mech unknown.
< WWW-Authenticate: Negotiate
WWW-Authenticate: Negotiate
< Set-Cookie: hadoop.auth=; Path=/; HttpOnly
Set-Cookie: hadoop.auth=; Path=/; HttpOnly
< Content-Length: 1404
Content-Length: 1404
< Server: Jetty(6.1.26.hwx)
Server: Jetty(6.1.26.hwx)
<
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<title>Error 401 Authentication required</title>
</head>
<body><h2>HTTP ERROR 401</h2>
<p>Problem accessing /webhdfs/v1/. Reason:
<pre> Authentication required</pre></p><hr /><i><small>Powered by Jetty://</small></i><br/>
<br/>
</body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host sg-hdp24-mst7.field.hortonworks.com left intact
So the error seems to be "* gss_init_sec_context() failed: An unsupported mechanism was requested. unknown mech-code 0 for mech unknown."
I am also using Firefox to test the same; it is configured to handle the SPNEGO ticket negotiation (this worked fine before, so that does not seem to be the issue). The authentication error is the same there, but after the Firefox round trip something clearly changed in the ticket cache:
$ klist
Credentials cache: API:4E28B24B-FC22-4FF4-A769-840D5B058C25
Principal: jk@FIELD.HORTONWORKS.COM
Issued Expires Principal
Dec 22 23:51:44 2016 Dec 23 09:51:44 2016 krbtgt/FIELD.HORTONWORKS.COM@FIELD.HORTONWORKS.COM
Dec 22 23:53:37 2016 Dec 23 09:51:44 2016 krbtgt/MIT.KDC.COM@FIELD.HORTONWORKS.COM
$
So some cross-REALM negotiation is apparently being attempted, but it does not lead to successful access to WebHDFS. Anyone have a clue? How can I debug this?
Labels:
- Apache Hadoop