Member since: 08-15-2016
Posts: 189
Kudos Received: 63
Solutions: 22
10-08-2016
04:27 PM
@emaxwell Some extra info about the environment: we are trying to go directly against the KDC here, and port 88 is the default port for that. OpenLDAP is not available. So can Knox be configured to do just that, without OpenLDAP as middleware?
10-08-2016
06:57 AM
@emaxwell We checked the processes and ports listening on the KDC host, but this does appear to be the KDC. Ports 389/636 were not listening there. I think these can be changed to something non-default, but I will check again.
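For reference, the kind of check we ran on the KDC host was along these lines (a minimal sketch; the exact tooling available on the box may differ):
# List listening TCP ports and owning processes; 88 = Kerberos KDC, 389/636 = LDAP/LDAPS
netstat -tlnp | grep -E ':(88|389|636) '
# or, where ss is available:
ss -tlnp | grep -E ':(88|389|636) '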
10-07-2016
02:27 PM
1 Kudo
Hi, Trying to get a Knox gateway up and running. Knox will not connect to the local MIT KDC. When checking with the CLI utils (knoxcli.sh system-user-auth-test & user-auth-test) we get the following error:
/usr/hdp/2.4.2.0-258/knox/bin> ./knoxcli.sh --d system-user-auth-test --cluster default
org.apache.shiro.authc.AuthenticationException: LDAP naming error while attempting to authenticate user. 10.xxx.xxx.x1:88; socket closed
org.apache.shiro.authc.AuthenticationException: LDAP naming error while attempting to authenticate user.
at org.apache.shiro.realm.ldap.JndiLdapRealm.doGetAuthenticationInfo(JndiLdapRealm.java:303)
at org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm.doGetAuthenticationInfo(KnoxLdapRealm.java:177)
at org.apache.shiro.realm.AuthenticatingRealm.getAuthenticationInfo(AuthenticatingRealm.java:568)
at org.apache.shiro.authc.pam.ModularRealmAuthenticator.doSingleRealmAuthentication(ModularRealmAuthenticator.java:180)
at org.apache.shiro.authc.pam.ModularRealmAuthenticator.doAuthenticate(ModularRealmAuthenticator.java:267)
at org.apache.shiro.authc.AbstractAuthenticator.authenticate(AbstractAuthenticator.java:198)
at org.apache.shiro.mgt.AuthenticatingSecurityManager.authenticate(AuthenticatingSecurityManager.java:106)
at org.apache.shiro.mgt.DefaultSecurityManager.login(DefaultSecurityManager.java:270)
at org.apache.shiro.subject.support.DelegatingSubject.login(DelegatingSubject.java:256)
at org.apache.hadoop.gateway.util.KnoxCLI$LDAPCommand.authenticateUser(KnoxCLI.java:1037)
at org.apache.hadoop.gateway.util.KnoxCLI$LDAPCommand.testSysBind(KnoxCLI.java:1139)
at org.apache.hadoop.gateway.util.KnoxCLI$LDAPSysBindCommand.execute(KnoxCLI.java:1446)
at org.apache.hadoop.gateway.util.KnoxCLI.run(KnoxCLI.java:138)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.gateway.util.KnoxCLI.main(KnoxCLI.java:1643)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.gateway.launcher.Invoker.invokeMainMethod(Invoker.java:70)
at org.apache.hadoop.gateway.launcher.Invoker.invoke(Invoker.java:39)
at org.apache.hadoop.gateway.launcher.Command.run(Command.java:101)
at org.apache.hadoop.gateway.launcher.Launcher.run(Launcher.java:69)
at org.apache.hadoop.gateway.launcher.Launcher.main(Launcher.java:46)
Caused by: javax.naming.ServiceUnavailableException: 10.xxx.xxx.x1:88; socket closed
at com.sun.jndi.ldap.Connection.readReply(Connection.java:454)
at com.sun.jndi.ldap.LdapClient.ldapBind(LdapClient.java:365)
at com.sun.jndi.ldap.LdapClient.authenticate(LdapClient.java:214)
at com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2788)
at com.sun.jndi.ldap.LdapCtx.<init>(LdapCtx.java:319)
at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:192)
at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(LdapCtxFactory.java:210)
at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:153)
at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(LdapCtxFactory.java:83)
at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:684)
at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:313)
at javax.naming.InitialContext.init(InitialContext.java:244)
at javax.naming.ldap.InitialLdapContext.<init>(InitialLdapContext.java:154)
at org.apache.shiro.realm.ldap.JndiLdapContextFactory.createLdapContext(JndiLdapContextFactory.java:508)
at org.apache.shiro.realm.ldap.JndiLdapContextFactory.getLdapContext(JndiLdapContextFactory.java:495)
at org.apache.shiro.realm.ldap.JndiLdapRealm.queryForAuthenticationInfo(JndiLdapRealm.java:375)
at org.apache.shiro.realm.ldap.JndiLdapRealm.doGetAuthenticationInfo(JndiLdapRealm.java:295)
... 23 more
Unable to successfully bind to LDAP server with topology credentials. Are your parameters correct?
The socket is definitely open and reachable, since we can get to it with netcat and telnet (a sketch of that check follows after the topology). The Knox system user can log in to kadmin without problems. Any ideas? The topology is below:
<topology>
<gateway>
<provider>
<role>authentication</role>
<name>ShiroProvider</name>
<enabled>true</enabled>
<param>
<name>sessionTimeout</name>
<value>30</value>
</param>
<param>
<name>main.ldapRealm</name>
<value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
</param>
<!-- changes for AD/user sync -->
<param>
<name>main.ldapContextFactory</name>
<value>org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory</value>
</param>
<!-- main.ldapRealm.contextFactory needs to be placed before other main.ldapRealm.contextFactory* entries -->
<param>
<name>main.ldapRealm.contextFactory</name>
<value>$ldapContextFactory</value>
</param>
<!-- AD url -->
<param>
<name>main.ldapRealm.contextFactory.url</name>
<value>ldap://xxxxxxxxxxxxx:88</value>
</param>
<!-- system user -->
<param>
<name>main.ldapRealm.contextFactory.systemUsername</name>
<value>CN=admin,DC=HADOOP,DC=COM</value>
</param>
<!-- pass in the password using the alias created earlier -->
<param>
<name>main.ldapRealm.contextFactory.systemPassword</name>
<value>#####</value>
</param>
<!-- <param>
<name>main.ldapRealm.contextFactory.authenticationMechanism</name>
<value>kerberos</value>
</param> -->
<param>
<name>urls./**</name>
<value>authcBasic</value>
</param>
<!-- AD groups of users to allow -->
<param>
<name>main.ldapRealm.searchBase</name>
<value>DC=HADOOP,DC=COM</value>
</param>
<param>
<name>main.ldapRealm.userObjectClass</name>
<value>person</value>
</param>
<param>
<name>main.ldapRealm.userSearchAttributeName</name>
<value>sAMAccountName</value>
</param>
<!-- changes needed for group sync-->
<param>
<name>main.ldapRealm.authorizationEnabled</name>
<value>true</value>
</param>
<param>
<name>main.ldapRealm.groupSearchBase</name>
<value>DC=HADOOP,DC=COM</value>
</param>
<param>
<name>main.ldapRealm.groupObjectClass</name>
<value>group</value>
</param>
<param>
<name>main.ldapRealm.groupIdAttribute</name>
<value>cn</value>
</param>
</provider>
<provider>
<role>identity-assertion</role>
<name>Default</name>
<enabled>true</enabled>
</provider>
<provider>
<role>authorization</role>
<name>XASecurePDPKnox</name>
<enabled>true</enabled>
</provider>
</gateway>
<service>
<role>NAMENODE</role>
<url>hdfs://{{namenode_host}}:{{namenode_rpc_port}}</url>
</service>
<service>
<role>JOBTRACKER</role>
<url>rpc://{{rm_host}}:{{jt_rpc_port}}</url>
</service>
<service>
<role>WEBHDFS</role>
<url>http://{{namenode_host}}:{{namenode_http_port}}/webhdfs</url>
</service>
<service>
<role>WEBHCAT</role>
<url>http://{{webhcat_server_host}}:{{templeton_port}}/templeton</url>
</service>
<service>
<role>OOZIE</role>
<url>http://{{oozie_server_host}}:{{oozie_server_port}}/oozie</url>
</service>
<service>
<role>WEBHBASE</role>
<url>http://{{hbase_master_host}}:{{hbase_master_port}}</url>
</service>
<service>
<role>HIVE</role>
<url>http://{{hive_server_host}}:{{hive_http_port}}/{{hive_http_path}}</url>
</service>
<service>
<role>RESOURCEMANAGER</role>
<url>http://{{rm_host}}:{{rm_port}}/ws</url>
</service>
<service>
<role>YARNUI</role>
<url>http://{{rm_host}}:{{rm_port}}</url>
</service>
</topology>
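For completeness, the reachability check mentioned above was along these lines (a sketch; the address is masked as in the stack trace, and flags may vary between netcat variants):
nc -vz 10.xxx.xxx.x1 88
telnet 10.xxx.xxx.x1 88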
Labels: Apache Knox
09-28-2016
12:52 PM
That was it. I still had to apply the following to make it work for YARN as well. First generate a secret key and push it to all nodes (instructions here; a rough sketch of this step follows below the property list). Then add to the custom core-site.xml:
hadoop.http.authentication.simple.anonymous.allowed=false
hadoop.http.authentication.signature.secret.file=/etc/security/http_secret
hadoop.http.authentication.type=kerberos
hadoop.http.authentication.kerberos.keytab=/etc/security/keytabs/spnego.service.keytab
hadoop.http.authentication.kerberos.principal=HTTP/_HOST@LAB.HORTONWORKS.NET
hadoop.http.authentication.cookie.domain=lab.hortonworks.net
hadoop.http.filter.initializers=org.apache.hadoop.security.AuthenticationFilterInitializer
Restart ambari-server
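A rough sketch of the 'generate a secret key and push it to all nodes' step (the path matches the core-site.xml values above; ownership, permissions, and the target host are assumptions for your own cluster):
dd if=/dev/urandom of=/etc/security/http_secret bs=1024 count=1
chown hdfs:hadoop /etc/security/http_secret
chmod 440 /etc/security/http_secret
# push to every other node in the cluster (hostname is a placeholder)
scp /etc/security/http_secret <other-node>:/etc/security/http_secret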
09-27-2016
11:12 PM
1 Kudo
Hi, I can access WebHDFS from the CLI just fine:
[root@sandbox ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: root@SANDBOX.HORTONWORKS.COM
Valid starting Expires Service principal
09/28/16 00:25:33 09/28/16 10:25:36 krbtgt/SANDBOX.HORTONWORKS.COM@SANDBOX.HORTONWORKS.COM
renew until 10/05/16 00:25:33
09/28/16 00:25:40 09/28/16 10:25:36 HTTP/sandbox.hortonworks.com@SANDBOX.HORTONWORKS.COM
renew until 10/05/16 00:25:33
[root@sandbox ~]# curl -s -i --negotiate -u:anyUser http://sandbox.hortonworks.com:50070/webhdfs/v1/?op=LISTSTATUS
HTTP/1.1 401 Authentication required
Cache-Control: must-revalidate,no-cache,no-store
Date: Tue, 27 Sep 2016 23:07:01 GMT
Pragma: no-cache
Date: Tue, 27 Sep 2016 23:07:01 GMT
Pragma: no-cache
Content-Type: text/html; charset=iso-8859-1
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; HttpOnly
Content-Length: 1404
Server: Jetty(6.1.26.hwx)
HTTP/1.1 200 OK
Cache-Control: no-cache
Expires: Tue, 27 Sep 2016 23:07:01 GMT
Date: Tue, 27 Sep 2016 23:07:01 GMT
Pragma: no-cache
Expires: Tue, 27 Sep 2016 23:07:01 GMT
Date: Tue, 27 Sep 2016 23:07:01 GMT
Pragma: no-cache
Content-Type: application/json
Set-Cookie: hadoop.auth="u=root&p=root@SANDBOX.HORTONWORKS.COM&t=kerberos&e=1475053621856&s=OmhtWeWb8vfQ2n1eb9GhlOTq/CA="; Path=/; HttpOnly
Transfer-Encoding: chunked
Server: Jetty(6.1.26.hwx)
{"FileStatuses":{"FileStatus":[
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16396,"group":"hadoop","length":0,"modificationTime":1472134778352,"owner":"yarn","pathSuffix":"app-logs","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":4,"fileId":16392,"group":"hdfs","length":0,"modificationTime":1457965550121,"owner":"hdfs","pathSuffix":"apps","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16389,"group":"hadoop","length":0,"modificationTime":1457965143118,"owner":"yarn","pathSuffix":"ats","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":17246,"group":"hdfs","length":0,"modificationTime":1457967047371,"owner":"hdfs","pathSuffix":"demo","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16403,"group":"hdfs","length":0,"modificationTime":1457965151394,"owner":"hdfs","pathSuffix":"hdp","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16399,"group":"hdfs","length":0,"modificationTime":1457965149964,"owner":"mapred","pathSuffix":"mapred","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16401,"group":"hadoop","length":0,"modificationTime":1457965161645,"owner":"mapred","pathSuffix":"mr-history","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":17161,"group":"hdfs","length":0,"modificationTime":1457966562806,"owner":"hdfs","pathSuffix":"ranger","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16437,"group":"hadoop","length":0,"modificationTime":1474960367134,"owner":"spark","pathSuffix":"spark-history","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":8,"fileId":16386,"group":"hdfs","length":0,"modificationTime":1472158956829,"owner":"hdfs","pathSuffix":"tmp","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":9,"fileId":16387,"group":"hdfs","length":0,"modificationTime":1457966006266,"owner":"hdfs","pathSuffix":"user","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
]}}
But when I try the same for the YARN web UI or REST API it fails:
[root@sandbox ~]# curl -s -ikv --negotiate -u:anyUser -X GET http://sandbox.hortonworks.com:8088/ws/v1/cluster/apps
* About to connect() to sandbox.hortonworks.com port 8088 (#0)
* Trying 10.0.3.15... connected
* Connected to sandbox.hortonworks.com (10.0.3.15) port 8088 (#0)
> GET /ws/v1/cluster/apps HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: sandbox.hortonworks.com:8088
> Accept: */*
>
< HTTP/1.1 401 Authentication required
HTTP/1.1 401 Authentication required
< Cache-Control: must-revalidate,no-cache,no-store
Cache-Control: must-revalidate,no-cache,no-store
< Date: Tue, 27 Sep 2016 23:08:45 GMT
Date: Tue, 27 Sep 2016 23:08:45 GMT
< Pragma: no-cache
Pragma: no-cache
< Date: Tue, 27 Sep 2016 23:08:45 GMT
Date: Tue, 27 Sep 2016 23:08:45 GMT
< Pragma: no-cache
Pragma: no-cache
< Content-Type: text/html; charset=iso-8859-1
Content-Type: text/html; charset=iso-8859-1
< WWW-Authenticate: PseudoAuth
WWW-Authenticate: PseudoAuth
< Set-Cookie: hadoop.auth=; Path=/; HttpOnly
Set-Cookie: hadoop.auth=; Path=/; HttpOnly
< Content-Length: 1411
Content-Length: 1411
< Server: Jetty(6.1.26.hwx)
Server: Jetty(6.1.26.hwx)
<
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<title>Error 401 Authentication required</title>
</head>
<body><h2>HTTP ERROR 401</h2>
<p>Problem accessing /ws/v1/cluster/apps. Reason:
<pre> Authentication required</pre></p><hr /><i><small>Powered by Jetty://</small></i><br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
</body>
</html>
* Connection #0 to host sandbox.hortonworks.com left intact
* Closing connection #0
What is the difference between these two calls?
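A quick way to see where the two calls diverge is to compare the challenge each endpoint sends back (same hosts and ports as above): WebHDFS answers with Negotiate, while the ResourceManager answers with PseudoAuth.
curl -s -i --negotiate -u:anyUser http://sandbox.hortonworks.com:50070/webhdfs/v1/?op=LISTSTATUS | grep WWW-Authenticate
curl -s -i --negotiate -u:anyUser http://sandbox.hortonworks.com:8088/ws/v1/cluster/apps | grep WWW-Authenticate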
Labels: Apache Hadoop, Apache YARN
09-25-2016
02:36 PM
OK, got it now. The restriction was at the Docker host service level. Just shift the memory slider and then you should be fine.
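To double-check after moving the slider (a minimal sketch; the exact preferences menu depends on the Docker for Mac/Windows version):
docker info | grep 'Total Memory'
# should now report the value set on the slider rather than ~2 GiB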
09-25-2016
02:32 PM
Here is some extra env info:
jknulst$ docker info
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 1
Server Version: 1.12.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 10
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge null host overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.20-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 1.952 GiB
Name: moby
ID: NWBP:4ERH:CUCP:IF5Y:CY23:M2EQ:O7L7:BBPN:A5IA:HWO7:7T3A:OHFP
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 23
Goroutines: 39
System Time: 2016-09-25T14:27:37.828975604Z
EventsListeners: 1
No Proxy: *.local, 169.254/16
Registry: https://index.docker.io/v1/
Insecure Registries:
127.0.0.0/8
Please note the 'Total Memory: 1.952 GiB'; this tells me the limit is somewhere at the Docker level, not at the container level.
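To rule out a container-level limit (a sketch; the container name is a placeholder), the per-container setting can be inspected like this:
docker inspect --format '{{.HostConfig.Memory}}' <container>
# 0 means no memory limit is set on the container itself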
09-23-2016
11:07 AM
1 Kudo
This statement is missing the 'LOCATION' clause, so it is not an external table.
09-23-2016
10:45 AM
@Ancil McBarnett Are you sure Avro-backed tables can be created as external tables? If I run your statement I get problems on the LOCATION clause; Hive does not seem to expect it there. Edit: never mind, you can, but the order of the clauses matters. This works:
CREATE EXTERNAL TABLE as_avro
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED as INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/user/root/as_avro'
TBLPROPERTIES ('avro.schema.url'='hdfs:///user/root/avro.avsc');
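As a quick sanity check after creating the table (a sketch, assuming the Hive CLI is on the path and the Avro data already sits under that location):
hive -e 'DESCRIBE FORMATTED as_avro;'   # Table Type should show EXTERNAL_TABLE and Location should point at /user/root/as_avro
hive -e 'SELECT * FROM as_avro LIMIT 5;'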
09-23-2016
09:39 AM
From the Docker docs I gather that Docker containers must actually be limited explicitly so they do not take all the memory available on the host OS. This is confusing.
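For reference, the per-container limit the docs refer to looks like this (a sketch; the image name is a placeholder):
docker run -m 4g --memory-swap 4g <image>
# without -m/--memory the container can use as much memory as the Docker host (or its VM) allows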