Support Questions


Troubleshooting missing access logs in Ranger with SolrCloud

Explorer

I have a Kerberos- and Ranger-secured cluster managed by Ambari.

I had Ranger set up with access audit data being written to both HDFS and a 2-node SolrCloud cluster (using local rather than HDFS storage for its indexes), and it was working well.

However, the local volume I was writing Solr indexes to was getting full, so I decided to move the Solr installations to a volume with more space, one node at a time.

  • My installation was in /opt/lucidworks-hdpsearch; /opt is part of the root volume, which has 300 GB
  • /srv is a 1 TB volume

Here's what I did on each SolrCloud node one after the other:

  1. Stop Solr
  2. mv /opt/lucidworks-hdpsearch /srv/lucidworks-hdpsearch
  3. ln -s /srv/lucidworks-hdpsearch /opt/lucidworks-hdpsearch
  4. Start Solr
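Before starting Solr back up, it's worth sanity-checking that the symlink resolves to the new location. Here's a safe demo of the same move-and-symlink pattern using throwaway paths under /tmp (my real paths were /opt/lucidworks-hdpsearch and /srv/lucidworks-hdpsearch):

```shell
# Throwaway demo of the mv + ln -s pattern; nothing here touches the real install.
mkdir -p /tmp/solrmove/srv/lucidworks-hdpsearch
ln -sfn /tmp/solrmove/srv/lucidworks-hdpsearch /tmp/solrmove/opt-lucidworks
# readlink -f resolves through the link to the real location; it should print
# the /srv-style target directory, not the symlink path.
readlink -f /tmp/solrmove/opt-lucidworks
```

If readlink prints the target directory, the link itself is fine and any remaining problem is more likely on the Solr/ZooKeeper side.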

This appears to have gone smoothly. When I do:

ls -l /opt/lucidworks-hdpsearch/solr/ranger_audit_server/ranger_audits_shard1_replica1/data/index/

I see files in there with current timestamps, so I'm assuming the plugins are sending audit data to SolrCloud correctly.

However, I'm now unable to view access logs in the Ranger web UI. I just get "No Access Audit found!" where I would expect to see a list of access records.

I'm not seeing anything obviously related to this problem in the logs located in /var/log/ranger/admin/ on the Ranger Admin server or /var/log/solr/ranger_audits/ on the SolrCloud nodes.

What can I do to troubleshoot this problem? For example, can I make the Ranger admin server's logs more verbose via Ambari? Or should I be looking elsewhere?
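For reference, the knob I was planning to try is the Ranger admin log4j configuration, which Ambari exposes under the Ranger service's Configs. A sketch of the kind of change I mean (the exact logger names vary by Ranger version, so treat these as placeholders):

```
# Raise Ranger admin logging to DEBUG; logger names are version-dependent.
log4j.logger.org.apache.ranger=DEBUG
log4j.logger.org.apache.ranger.audit=DEBUG
```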

1 ACCEPTED SOLUTION

Explorer

After re-installing SolrCloud with the data on the bigger volume from the outset, everything is working again.

I'm not sure what the original problem was after moving the indexes; I would have expected the symlink to work.


6 REPLIES

Super Collaborator

Hi @Neal Lamont,

Please check the audit status for the plugins (Audit -> Plugins):

(screenshot: 6672-ranger.png, the Ranger Audit -> Plugins page)

If it is not showing updated information, please restart the Ranger services in Ambari and recheck Audit -> Plugins.

Explorer

Thanks for the suggestion @subhash parise. I am able to see updated records when I update access policies, so I don't think there is a problem with the plugins.

Super Collaborator

Are you sure the ambari-server is updating policies through the Ranger plugin?

Explorer

I'm not exactly sure what you mean, @subhash parise, but when I update access policies in Ranger, I see those changes being pushed out to the various plugins in the Audit -> Plugins area (the one you took the screenshot of).

Explorer

I've reinstalled SolrCloud from scratch.

When I query either of the two SolrCloud replicas directly, I do get results, which strongly suggests that writes to the SolrCloud cluster are working well:

$ curl "http://solrcloud1.mycluster.example.com:6083/solr/ranger_audits/select?q=*%3A*&wt=json&rows=1&indent=true"
{
  "responseHeader":{
    "status":0,
    "QTime":1,
    "params":{
      "q":"*:*",
      "indent":"true",
      "rows":"1",
      "wt":"json"}},
  "response":{"numFound":2270723,"start":0,"docs":[
      {
        "id":"8c799b43-8f24-4cb1-96e9-d389e79ac211",
        "access":"WRITE",
        "enforcer":"hadoop-acl",
        "repo":"mycluster_hadoop",
        "reqUser":"joesmith",
        "resource":"/bigstore/lake/shorterm/.hive-staging_hive_2016-08-16_20-04-17_799_1422618716531065881-1/-ext-10000/dt=201608071700/part-00129",
        "cliIP":"10.196.185.51",
        "logType":"RangerAudit",
        "result":1,
        "policy":-1,
        "repoType":1,
        "resType":"path",
        "reason":"/bigstore/lake/shorterm/.hive-staging_hive_2016-08-16_20-04-17_799_1422618716531065881-1/-ext-10000/dt=201608071700",
        "evtTime":"2016-08-17T03:13:11.11Z",
        "seq_num":70028864,
        "event_count":1,
        "event_dur_ms":0,
        "_version_":1542886234399965185}]
  }}
$
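To compare the two replicas' document counts quickly without installing jq, a grep over the JSON response is enough. The payload below is a trimmed copy of the response above so the snippet runs standalone; in practice you would pipe the curl output in:

```shell
# Extract numFound from a Solr select response (sample inlined; normally:
#   curl -s "http://host:6083/solr/ranger_audits/select?q=*:*&rows=0&wt=json" | grep -o ...)
resp='{"responseHeader":{"status":0},"response":{"numFound":2270723,"start":0,"docs":[]}}'
echo "$resp" | grep -o '"numFound":[0-9]*'
# prints: "numFound":2270723
```

If both replicas report sensible counts but the UI still shows nothing, the read path (Ranger admin querying the ranger_audits collection) is the place to dig.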
