Member since 04-29-2016 · 5 Posts · 1 Kudos Received · 0 Solutions
10-22-2019 07:50 PM
Hi @Jonas Straub, following your article I created a collection with this curl command and got a 401 error:

curl --negotiate -u : 'http://myhost:8983/solr/admin/collections?action=CREATE&name=col&numShards=1&replicationFactor=1&collection.configName=_default&wt=json'

{
  "responseHeader":{
    "status":0,
    "QTime":31818},
  "failure":{
    "myhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://myhost:8983/solr: Expected mime type application/octet-stream but got text/html. <html> <head> <meta http-equiv=\"Content-Type\" content=\"text/html;charset=utf-8\"/> <title>Error 401 Authentication required</title> </head> <body> <h2>HTTP ERROR 401</h2> <p>Problem accessing /solr/admin/cores. Reason: <pre>Authentication required</pre></p> </body> </html>"}}

When I debugged the Solr source code, I found that this exception is returned by coreContainer.getZkController().getOverseerCollectionQueue().offer(Utils.toJson(m), timeout), so I suspect Solr is not authenticating to ZooKeeper. If I replace the Kerberized ZooKeeper with a non-Kerberized one, the collection is created successfully. How can I get this to work with a Kerberized ZooKeeper?
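For context, my understanding is that the ZooKeeper client inside Solr picks up its Kerberos credentials from the Client section of a JAAS file passed to the Solr JVM via -Djava.security.auth.login.config. A minimal sketch of what I mean (the keytab path, host, and realm below are placeholders, not my real values):

    Client {
      com.sun.security.auth.module.Krb5LoginModule required
      useKeyTab=true
      storeKey=true
      useTicketCache=false
      keyTab="/etc/security/keytabs/solr.service.keytab"
      principal="solr/myhost@EXAMPLE.COM";
    };

with the file referenced from the Solr start-up options, for example in solr.in.sh:

    SOLR_AUTHENTICATION_OPTS="$SOLR_AUTHENTICATION_OPTS -Djava.security.auth.login.config=/etc/solr/conf/jaas-client.conf"

Is something beyond this needed for Solr to authenticate to a Kerberized ZooKeeper?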
06-13-2016 06:38 PM
@Tom Ellis, you mentioned finding the SaslRpcClient class. That's a very important piece. This is the class that handles SASL authentication for any client-server interaction that uses Hadoop's common RPC framework. The core Hadoop daemons in HDFS and YARN, such as the NameNode and ResourceManager, use this RPC framework, as do many other services throughout the Hadoop ecosystem. Clients of those servers use SaslRpcClient as the entry point for SASL negotiation.

This negotiation is typically performed on connection establishment, such as the first time a Hadoop process attempts an RPC to the NameNode or the ResourceManager. The exact service to use is negotiated between client and server at the beginning of connection establishment, in the negotiation code that you mentioned finding. The service value differs per Hadoop daemon and is driven by the shortened principal name, e.g. "nn".

However, you won't find anything in the Hadoop source code that explicitly references the TGS. Instead, Hadoop delegates to the GSS API provided by the JDK for the low-level implementation of the Kerberos protocol, including handling of the TGS. If you're interested in digging into that, the code is visible in the OpenJDK project. Here is a link to the relevant Java package in the OpenJDK 7 tree: http://hg.openjdk.java.net/jdk7u/jdk7u/jdk/file/f51368baecd9/src/share/classes/sun/security/jgss/krb5 Some of the most relevant classes there are Krb5InitCredential and Krb5Context.
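If you want to watch that negotiation happen from the client side, a small sketch like the one below will exercise the same SaslRpcClient path when it talks to a Kerberized NameNode (the host name, principal, and keytab path are made up for illustration; normally the security settings come from core-site.xml rather than being set in code). Running it with -Dsun.security.krb5.debug=true and the org.apache.hadoop.security loggers at DEBUG shows the GSS/Kerberos exchange.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KerberosRpcProbe {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Ask the Hadoop security layer to negotiate SASL/GSSAPI (Kerberos)
            // instead of SIMPLE auth; in a real cluster this comes from core-site.xml.
            conf.set("hadoop.security.authentication", "kerberos");
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020"); // placeholder

            UserGroupInformation.setConfiguration(conf);
            // Obtain a TGT from the KDC. The TGS request for the NameNode's service
            // ticket happens later, inside the JDK's GSS layer, when SaslRpcClient
            // sets up the first RPC connection.
            UserGroupInformation.loginUserFromKeytab(
                    "alice@EXAMPLE.COM",                   // placeholder principal
                    "/etc/security/keytabs/alice.keytab"); // placeholder keytab

            // The first RPC (fetching the status of "/") triggers the SASL
            // negotiation on connection setup.
            FileSystem fs = FileSystem.get(conf);
            System.out.println(fs.getFileStatus(new Path("/")));
        }
    }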