Member since
09-20-2017
50
Posts
2
Kudos Received
4
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
| 2251 | 08-09-2018 06:47 PM
| 6457 | 01-05-2018 02:34 PM
| 1662 | 12-05-2017 02:29 PM
| 987 | 10-18-2017 06:10 PM
07-05-2023
09:03 PM
It works for me
02-03-2022
01:09 AM
@prashanthshetty as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
08-09-2018
07:08 PM
I used Ambari to uninstall and reinstall those services.
08-09-2018
04:57 PM
Enabling Kerberos authentication for Ambari resolved the issue. Thank you @Robert Levas
06-15-2018
07:16 PM
@Sudheer Velagapudi What version of HDF/NiFi are you using? As @Shu mentioned, this is a known issue with HDF 3.1.1: the Kerberos ticket is not automatically renewed once it expires, so the connection to Hive cannot be established and your query does not run. If you are on a lower HDF version and this is the cause of your problem, I would recommend upgrading to HDF 3.1.2 to fix it. Please have a look at this document for further details on the HDF 3.1.2 release and the issues it addresses.
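If you want to confirm that an expired ticket is the cause before upgrading, checking the ticket state for the NiFi service account can help. This is only a rough sketch; the service user, keytab path, and principal below are placeholder assumptions for a typical HDF install, and the in-process ticket used by the Hive controller service may differ from the user's cache:

# List tickets for the nifi service user; an expired or missing ticket points to the renewal issue above
sudo -u nifi klist
# Interim workaround until the upgrade (illustrative keytab path and principal)
sudo -u nifi kinit -kt /etc/security/keytabs/nifi.service.keytab nifi/$(hostname -f)@EXAMPLE.COM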
04-26-2018
01:31 PM
This is what I found in the Ranger Admin access log:

[26/Apr/2018:13:28:36 +0000] "GET /service/plugins/secure/policies/download/HDPCLUSTER_hbase?lastKnownVersion=172&lastActivationTime=1524514591769&pluginId=hbaseRegional@hadoop.cluster.com-HDPCLUSTER_hbase&clusterName=HDPCLUSTER HTTP/1.1" 401 - "-" "Java/1.8.0_161"
01-09-2018
04:49 AM
@Sudheer Velagapudi Try setting hive.server2.logging.operation.level=EXECUTION. The supported values for this parameter are:

NONE: Ignore any logging.
EXECUTION: Log completion of tasks.
PERFORMANCE: Execution + Performance logs.
VERBOSE: All logs.

Hope this helps you.
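If it helps, here is one way to apply this for a single session from the command line; a minimal sketch, assuming a Beeline client and a placeholder JDBC URL (the property can equally be set cluster-wide in hive-site.xml through Ambari):

# Session-level override when connecting with Beeline (the JDBC URL is illustrative)
beeline -u "jdbc:hive2://<hiveserver2-host>:10000/default" \
  --hiveconf hive.server2.logging.operation.level=EXECUTION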
12-12-2017
10:30 PM
Hi @Sudheer Velagapudi, to update a policy you need to specify the policy ID at the end of the URL, whereas at creation time the policy ID is assigned automatically. For example: http://hostname:6080/service/public/api/policy/{id} Hope this helps!
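For illustration, an update call could look like the following; a minimal sketch, assuming placeholder admin credentials, a placeholder policy ID of 42, and an updated_policy.json file holding the full policy definition:

# Update an existing Ranger policy by ID via the public REST API (ID and credentials are illustrative)
curl -u admin:admin -X PUT -H "Content-Type: application/json" \
  -d @updated_policy.json \
  "http://hostname:6080/service/public/api/policy/42"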
12-01-2017
05:17 PM
1 Kudo
This was expanded to a blog post with additional details: https://risdenk.github.io/2017/12/18/ambari-infra-solr-ranger.html

There are two things I would check:

1) Check the field caches by navigating to the Ambari Infra Solr UI: http://HOSTNAME:8886/solr/#/ranger_audits_shard1_replica1/plugins/cache?entry=fieldCache,fieldValueCache

2) If you were to take a heap dump, you could open it in Eclipse Memory Analyzer and check the biggest heap offender.

My assumption is that the majority of the heap is being used by uninverting the _version_ field, since it is being used for sorting and other things instead of just indexing. This isn't a misuse on your part but a problem with how Ranger is using Solr. I was able to fix this by turning on DocValues for the _version_ field, and I am currently working on opening a Ranger JIRA and posting to the Ranger user list to try to address this. Note that the change to DocValues will require you to delete the collection and recreate it. We have been running with a 4 GB heap with no issues; previously we needed a heap of almost 20 GB to handle the uninverted cached values (this grew over time).
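For reference, one way the _version_ field change could be applied is through the Solr Schema API; this is a hedged sketch rather than the exact steps from the blog post, and it assumes the collection uses a managed schema, an unsecured endpoint, and a field type name that matches your Solr version. The collection still needs to be deleted and recreated afterwards for the change to take effect:

# Switch the _version_ field to docValues on the ranger_audits collection (illustrative)
curl -X POST -H "Content-Type: application/json" \
  -d '{"replace-field": {"name": "_version_", "type": "long", "indexed": true, "stored": true, "docValues": true}}' \
  "http://HOSTNAME:8886/solr/ranger_audits/schema"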
12-05-2017
02:29 PM
Run the command below to manually add a replica of the ranger_audits collection on another Solr instance (note the quotes around the URL so the shell does not treat the & characters as background operators):

curl -i -k -v --negotiate -u : "http://<from-node>:8886/solr/admin/collections?action=ADDREPLICA&collection=ranger_audits&shard=shard1&node=<target-host>:8886_solr"
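To verify that the new replica has registered, a follow-up Collections API query can help; a minimal sketch, assuming the same Kerberos-negotiate authentication as above:

# Check cluster status for the ranger_audits collection after adding the replica
curl -i -k --negotiate -u : "http://<from-node>:8886/solr/admin/collections?action=CLUSTERSTATUS&collection=ranger_audits"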