Member since: 06-30-2017
Posts: 8
Kudos Received: 0
Solutions: 0
02-06-2018
09:53 AM
Hi @Robert Levas, we added the rule RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*// and the mapping now works on the terminal:

$ hadoop org.apache.hadoop.security.HadoopKerberosName user@EXAMPLE.COM
Name: user@EXAMPLE.COM to user

But I still can't access the Solr UI. When I open it, a pop-up asks me for authentication; I type my username and password and still get:

HTTP ERROR 500
Problem accessing /solr/. Reason:
Server Error
Caused by:
org.apache.solr.common.SolrException: Error during request authentication,
at org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:319)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to user@EXAMPLE.COM
at org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:389)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:378)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:348)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:348)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
at org.apache.solr.security.KerberosFilter.doFilter(KerberosFilter.java:46)
at org.apache.solr.security.KerberosPlugin.doAuthenticate(KerberosPlugin.java:144)
at org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:311)
... 22 more
Caused by:
org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to user@EXAMPLE.COM
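Since the mapping now works from the command line but Solr still answers with NoMatchingRule, my next step is to confirm that the running Infra Solr process itself was restarted with the new rules; as far as I understand, the Solr Kerberos plugin gets its own copy of the rules from a system property rather than from core-site.xml. A rough check (the property name and the port 8886 are assumptions on my side):

# Rules the running Infra Solr JVM was actually started with
$ ps -ef | grep infra-solr | tr ' ' '\n' | grep 'name.rules'

# Exercise the SPNEGO handshake directly with a ticket for the end user
$ kinit user@EXAMPLE.COM
$ curl --negotiate -u : -v "http://$(hostname -f):8886/solr/"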
01-29-2018
06:15 AM
Thanks for your help. I will try this today and let you know ASAP whether it solves the issue.
01-23-2018
02:33 PM
I don't know about the rules for Solr. The thing is, there were no rules for Solr before I added the two mentioned above. The result of your command is:

$ hadoop org.apache.hadoop.security.HadoopKerberosName user@EXAMPLE.COM
18/01/23 15:28:10 INFO util.KerberosName: No auth_to_local rules applied to user@EXAMPLE.COM
Name: user@EXAMPLE.COM to user@EXAMPLE.COM

I also tried:

$ hadoop org.apache.hadoop.security.HadoopKerberosName user@MY.PROD.EXAMPLE.COM
Name: user@MY.PROD.EXAMPLE.COM to user
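So a principal in MY.PROD.EXAMPLE.COM is mapped but one in EXAMPLE.COM is not, which makes me think the rules only cover the @MY.PROD.EXAMPLE.COM realm. A sketch of a rule for the user realm that simply strips it (the same rule later shown in the 02-06-2018 post above), together with the re-test:

# Candidate rule for the user realm, to be added to hadoop.security.auth_to_local:
#   RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*//
# After adding it and restarting the affected services, re-run the check:
$ hadoop org.apache.hadoop.security.HadoopKerberosName user@EXAMPLE.COM
# expected: Name: user@EXAMPLE.COM to user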
01-23-2018
06:36 AM
RULE:[1:$1@$0](ambari-qa-cluster1@MY.PROD.EXAMPLE.COM)s/.*/ambari-qa/
RULE:[1:$1@$0](hbase-cluster1@MY.PROD.EXAMPLE.COM)s/.*/hbase/
RULE:[1:$1@$0](hdfs-cluster1@MY.PROD.EXAMPLE.COM)s/.*/hdfs/
RULE:[1:$1@$0](spark-cluster1@MY.PROD.EXAMPLE.COM)s/.*/spark/
RULE:[1:$1@$0](zeppelin-cluster1@MY.PROD.EXAMPLE.COM)s/.*/zeppelin/
RULE:[1:$1@$0](.*@MY.PROD.EXAMPLE.COM)s/@.*//
RULE:[2:$1@$0](amshbase@MY.PROD.EXAMPLE.COM)s/.*/ams/
RULE:[2:$1@$0](amszk@MY.PROD.EXAMPLE.COM)s/.*/ams/
RULE:[2:$1@$0](atlas@MY.PROD.EXAMPLE.COM)s/.*/atlas/
RULE:[2:$1@$0](dn@MY.PROD.EXAMPLE.COM)s/.*/hdfs/
RULE:[2:$1@$0](falcon@MY.PROD.EXAMPLE.COM)s/.*/falcon/
RULE:[2:$1@$0](hbase@MY.PROD.EXAMPLE.COM)s/.*/hbase/
RULE:[2:$1@$0](hive@MY.PROD.EXAMPLE.COM)s/.*/hive/
RULE:[2:$1@$0](jhs@MY.PROD.EXAMPLE.COM)s/.*/mapred/
RULE:[2:$1@$0](jn@MY.PROD.EXAMPLE.COM)s/.*/hdfs/
RULE:[2:$1@$0](knox@MY.PROD.EXAMPLE.COM)s/.*/knox/
RULE:[2:$1@$0](livy@MY.PROD.EXAMPLE.COM)s/.*/livy/
RULE:[2:$1@$0](nfs@MY.PROD.EXAMPLE.COM)s/.*/hdfs/
RULE:[2:$1@$0](nm@MY.PROD.EXAMPLE.COM)s/.*/yarn/
RULE:[2:$1@$0](nn@MY.PROD.EXAMPLE.COM)s/.*/hdfs/
RULE:[2:$1@$0](oozie@MY.PROD.EXAMPLE.COM)s/.*/oozie/
RULE:[2:$1@$0](rangeradmin@MY.PROD.EXAMPLE.COM)s/.*/ranger/
RULE:[2:$1@$0](rangertagsync@MY.PROD.EXAMPLE.COM)s/.*/rangertagsync/
RULE:[2:$1@$0](rangerusersync@MY.PROD.EXAMPLE.COM)s/.*/rangerusersync/
RULE:[2:$1@$0](rm@MY.PROD.EXAMPLE.COM)s/.*/yarn/
RULE:[2:$1@$0](yarn@MY.PROD.EXAMPLE.COM)s/.*/yarn/
RULE:[1:$1@$0](infra-solr@MY.PROD.EXAMPLE.COM)s/.*/solr/
RULE:[2:$1@$0](infra-solr@MY.PROD.EXAMPLE.COM)s/.*/solr/
DEFAULT

I added the last two rules just before DEFAULT (which was already there), but it is still not working. The rule you mentioned is already there. Please note that my username is username@EXAMPLE.COM, whereas all the service principals are @MY.PROD.EXAMPLE.COM. When I look into /etc/ambari-infra-solr/conf/security.json, I see:

{
"authentication": {
"class": "org.apache.solr.security.KerberosPlugin"
},
"authorization": {
"class": "org.apache.ambari.infra.security.InfraRuleBasedAuthorizationPlugin",
"user-role": {
"infra-solr@MY.PROD.EXAMPLE.COM": "admin",
"logsearch@MY.PROD.EXAMPLE.COM": ["logsearch_user", "ranger_admin_user", "dev"],
"logfeeder@MY.PROD.EXAMPLE.COM": ["logfeeder_user", "dev"],
"atlas@MY.PROD.EXAMPLE.COM": ["atlas_user", "ranger_audit_user", "dev"],
"nn@MY.PROD.EXAMPLE.COM": ["ranger_audit_user", "dev"],
"hbase@MY.PROD.EXAMPLE.COM": ["ranger_audit_user", "dev"],
"hive@MY.PROD.EXAMPLE.COM": ["ranger_audit_user", "dev"],
"knox@MY.PROD.EXAMPLE.COM": ["ranger_audit_user", "dev"],
"kafka@MY.PROD.EXAMPLE.COM": ["ranger_audit_user", "dev"],
"rangerkms@MY.PROD.EXAMPLE.COM": ["ranger_audit_user", "dev"],
"storm-bdtest1@MY.PROD.EXAMPLE.COM": ["ranger_audit_user", "dev"],
"rm@MY.PROD.EXAMPLE.COM": ["ranger_audit_user", "dev"],
"nifi@MY.PROD.EXAMPLE.COM": ["ranger_audit_user", "dev"],
"rangeradmin@MY.PROD.EXAMPLE.COM": ["ranger_admin_user", "ranger_audit_user", "dev"]
},
"permissions": [
{
"name" : "collection-admin-read",
"role" :null
},
{
"name" : "collection-admin-edit",
"role" : ["admin", "logsearch_user", "logfeeder_user", "atlas_user", "ranger_admin_user"]
},
{
"name":"read",
"role": "dev"
},
{
"collection": ["hadoop_logs", "audit_logs", "history"],
"role": ["admin", "logsearch_user", "logfeeder_user"],
"name": "logsearch-manager",
"path": "/*"
},
{
"collection": ["vertex_index", "edge_index", "fulltext_index"],
"role": ["admin", "atlas_user"],
"name": "atlas-manager",
"path": "/*"
},
{
"collection": "ranger_audits",
"role": ["admin", "ranger_admin_user", "ranger_audit_user"],
"name": "ranger-manager",
"path": "/*"
}]
}
}
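One more thing I want to rule out: in SolrCloud mode the effective security.json is the one stored in ZooKeeper, not necessarily this local file, so I plan to compare the two. The zkcli path and the /infra-solr znode below are assumptions based on a typical Ambari Infra layout:

# Dump the security.json that the running Solr actually reads from ZooKeeper
$ /usr/lib/ambari-infra-solr/server/scripts/cloud-scripts/zkcli.sh \
    -zkhost <zk-host>:2181 -cmd get /infra-solr/security.json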
01-22-2018
05:23 PM
Hi @Shyam Sunder Rai and @Robert Levas, thanks for the answers! I was also thinking it is a problem related to auth_to_local, so I added a new rule for Solr, RULE:[2:$1@$0](infra-solr@EXAMPLE.COM)s/.*/solr/, and restarted. But nothing changed; I still get the 500 error. How can I be sure of the principal and regex to use in the rule? I tried to find an example of a Solr rule but found nothing on the Internet 😮
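In the meantime, this is how I plan to double-check which principal and regex to put in the rule: list the principals actually present in the Infra Solr keytab, then feed one of them through the rules. The keytab path is the usual HDP location, so an assumption on my side:

# Exact principals in the Infra Solr service keytab
$ klist -kt /etc/security/keytabs/ambari-infra-solr.service.keytab

# Feed one of them through the auth_to_local rules to validate the regex
$ hadoop org.apache.hadoop.security.HadoopKerberosName <principal-from-the-keytab>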
01-22-2018
02:55 PM
Hi,
Since I kerberized my cluster, I'm unable to access the Solr UI.
When I go to the Solr UI, I get an "HTTP ERROR 500".
(For confidentiality reasons I replaced the username / DOMAIN / COM, but imagine it exactly as joe@EXAMPLE.COM.)
Problem accessing /solr/. Reason : Server Error
Caused by:
org.apache.solr.common.SolrException: Error during request authentication
[...]
Caused by:
org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to <username>@<DOMAIN>.<COM>

Any hint? Thanks.
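Edit: if it helps, I can also post the output of these two commands (the principal below is the placeholder one from above):

# auth_to_local rules currently picked up by the Hadoop configuration
$ hdfs getconf -confKey hadoop.security.auth_to_local

# How the failing principal is mapped by those rules
$ hadoop org.apache.hadoop.security.HadoopKerberosName <username>@<DOMAIN>.<COM>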
12-12-2017
03:48 PM
I want to use Apache Falcon to clean up my cluster according to some basic retention rules (like "delete all data older than XX days in this HDFS path").
I have been able to create my cluster entity and my feeds: I created one feed for each path covered by a retention policy.
As a result I can see Oozie running the jobs regularly on the cluster, but the retention policy does not seem to be applied: nothing gets deleted in the given paths (feeds).
However, the jobs end with a "SUCCEEDED" status, which suggests everything worked well...
Has anyone experienced this kind of issue? I don't know what's wrong; maybe I did not fully understand the purpose of "retention" in Apache Falcon.
Attached is an example of one of the feeds:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<feed name="TEST-1H" version="0" xmlns="uri:falcon:feed:0.1">
<availabilityFlag>_success</availabilityFlag>
<frequency>minutes(20)</frequency>
<timezone>UTC</timezone>
<late-arrival cut-off="minutes(12)"/>
<clusters>
<cluster name="bdtest" type="source" version="0">
<validity start="2017-11-24T15:17Z" end="2099-12-31T11:59Z"/>
<retention limit="hours(1)" action="delete"/>
<locations>
<location type="data" path="/test/${YEAR}/${MONTH}/${DAY}"/>
<location type="stats" path="/"/>
</locations>
</cluster>
</clusters>
<locations>
<location type="data" path="/test/${YEAR}/${MONTH}/${DAY}"/>
<location type="stats" path="/"/>
</locations>
<ACL owner="admin" group="users" permission="0x755"/>
<schema location="/none" provider="/none"/>
<properties>
<property name="queueName" value="default"/>
<property name="jobPriority" value="NORMAL"/>
</properties>
</feed>
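For reference, this is how I have been checking what the retention (eviction) job actually sees; the date and the Oozie job id below are just placeholders:

# Directories the feed's data location pattern resolves to (example date)
$ hdfs dfs -ls /test/2017/11/24

# Logs of the eviction action, to see which instance paths were evaluated
$ oozie job -info <oozie-job-id>
$ oozie job -log <oozie-job-id>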
Thanks.
06-30-2017
01:33 PM
Hi,
I'm using the sandbox-hdp Docker image to process data locally on my MacBook before moving to the "real cluster".
I've been able to process several datasets and push them to and read them from HDFS, but now I want to join all of this data (around 6-10 GB, so not that big...), and during this step I seem to run out of DFS disk space.
The spark-submit actually runs fine up to a certain point. The log is as follows:

17/06/29 16:21:26 INFO TaskSetManager: Finished task 1.0 in stage 20.0 (TID 5439) in 4595 ms on localhost (3/4)
17/06/29 16:21:28 INFO Executor: Finished task 2.0 in stage 20.0 (TID 5440). 2655 bytes result sent to driver
17/06/29 16:21:28 INFO TaskSetManager: Finished task 2.0 in stage 20.0 (TID 5440) in 7111 ms on localhost (4/4)
17/06/29 16:21:28 INFO TaskSchedulerImpl: Removed TaskSet 20.0, whose tasks have all completed, from pool
17/06/29 16:21:28 INFO DAGScheduler: ShuffleMapStage 20 (saveAsTable at NativeMethodAccessorImpl.java:-2) finished in 278.769 s
17/06/29 16:21:28 INFO DAGScheduler: looking for newly runnable stages
17/06/29 16:21:28 INFO DAGScheduler: running: Set(ShuffleMapStage 15)
17/06/29 16:21:28 INFO DAGScheduler: waiting: Set(ShuffleMapStage 19, ShuffleMapStage 17, ShuffleMapStage 21, ResultStage 22)
17/06/29 16:21:28 INFO DAGScheduler: failed: Set()
17/06/29 16:30:44 WARN LeaseRenewer: Failed to renew lease for [DFSClient_NONMAPREDUCE_1038014536_15] for 30 seconds. Will retry shortly ...
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot renew lease for DFSClient_NONMAPREDUCE_1038014536_15. Name node is in safe mode.
Resources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
This happens during the Spark job that writes the resulting (joined) DataFrame into HDFS or into Hive (both lead to the same error).
During the job, from the moment I see "DAGScheduler: failed: Set()", the HDFS disk usage gradually climbs from 73% (the starting value) to 100%, and I then get the error mentioned above.
I have plenty of space on my laptop, and even on the cluster:
Configured Capacity: 61.74 GB
DFS Used: 20.43 GB (33.09%)
Non DFS Used: 20.92 GB
DFS Remaining: 16.94 GB (27.44%)
Block Pool Used: 20.43 GB (33.09%)
Any ideas on how to solve this issue? I just want to increase the Configured Capacity inside the container!
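For reference, these are the commands I use to watch the DFS usage while the job runs, plus the safe mode command quoted in the NameNode message itself (to be run only after freeing space):

# Per-DataNode capacity and usage (the sandbox has a single DataNode)
$ hdfs dfsadmin -report

# Where the space is going inside HDFS
$ hdfs dfs -du -h /

# Only once space has been freed, as the NameNode message says:
$ hdfs dfsadmin -safemode leave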
Thanks !