Member since: 06-09-2016
Posts: 125
Kudos Received: 9
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5526 | 07-21-2017 11:02 PM
 | 9914 | 11-28-2016 07:59 PM
09-06-2017
06:29 PM
11 Kudos
@Sanaz Janbakhsh, check the maximum-applications and maximum-am-resource-percent properties in your cluster. Try increasing the values of the properties below to allow more applications to run concurrently.

yarn.scheduler.capacity.maximum-applications / yarn.scheduler.capacity.<queue-path>.maximum-applications
Maximum number of applications in the system that can be concurrently active, both running and pending. Limits on each queue are directly proportional to their queue capacities and user limits. This is a hard limit, and any application submitted once the limit is reached will be rejected. The default is 10000. This can be set for all queues with yarn.scheduler.capacity.maximum-applications, and can also be overridden on a per-queue basis by setting yarn.scheduler.capacity.<queue-path>.maximum-applications. An integer value is expected.

yarn.scheduler.capacity.maximum-am-resource-percent / yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent
Maximum percent of resources in the cluster that can be used to run ApplicationMasters; this controls the number of concurrently active applications. Limits on each queue are directly proportional to their queue capacities and user limits. Specified as a float, i.e. 0.5 = 50%. The default is 10%. This can be set for all queues with yarn.scheduler.capacity.maximum-am-resource-percent, and can also be overridden on a per-queue basis by setting yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent.
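For reference, here is a minimal capacity-scheduler.xml sketch showing both the cluster-wide settings and a per-queue override. The queue path (root.default) and the values are illustrative assumptions, not recommendations; tune them to your workload.

```xml
<configuration>
  <!-- Cluster-wide cap on concurrently active (running + pending) applications -->
  <property>
    <name>yarn.scheduler.capacity.maximum-applications</name>
    <value>20000</value>
  </property>

  <!-- Cluster-wide fraction of resources that ApplicationMasters may consume -->
  <property>
    <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
    <value>0.2</value>
  </property>

  <!-- Per-queue override: AMs in root.default may use up to 50% of the queue's resources -->
  <property>
    <name>yarn.scheduler.capacity.root.default.maximum-am-resource-percent</name>
    <value>0.5</value>
  </property>
</configuration>
```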
11-25-2017
05:50 PM
Hi @vsuvagia, I checked the logs. The call is successful, but there is still no data available in the Audit tab's Access log.
[25/Nov/2017:10:46:27 -0700] "GET /service/assets/accessAudit?XXXXX HTTP/1.1" 200 119
I enabled the Ambari Infra service for Ranger. Please find the screenshot attached (untitled.png). SJ
09-06-2017
06:21 PM
1- You can always click the red-pen edit button and manually enter the exact value you want.
2- You can uncheck and ignore the changes recommended by Ambari, or accept them.
To view the mapred, tez, and hive config properties, make sure you are on the corresponding service's tab.
05-07-2018
01:30 PM
My custom processor is pretty easy to customize: https://github.com/tspannhw/nifi-extracttext-processor You can tweak it to extract just the parts you need; Apache Tika is very powerful.
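If it helps, the core of the extraction is just Apache Tika's facade API. Here is a minimal standalone sketch (the file path is a placeholder, and it assumes the tika-parsers dependency is on the classpath; the linked processor wraps similar logic inside NiFi's onTrigger):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.tika.Tika;
import org.apache.tika.exception.TikaException;

public class ExtractTextExample {
    public static void main(String[] args) throws IOException, TikaException {
        // Tika auto-detects the document type (PDF, DOCX, HTML, ...)
        Tika tika = new Tika();
        // "document.pdf" is a placeholder path
        try (InputStream in = Files.newInputStream(Paths.get("document.pdf"))) {
            String text = tika.parseToString(in);
            System.out.println(text);
        }
    }
}
```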
07-24-2017
08:20 PM
Never mind, it took a bit. Now it is working. Thanks for the help!
11-23-2017
04:14 PM
Thanks! Adding the nodes to the "all - nifi-resource" policy fixed the same problem for me.
07-17-2017
11:34 PM
By default, the processor runs the same SQL every 0 seconds. Please set Scheduling > Run Schedule to a higher value, and make sure the SQL is written so that each execution picks up only new/updated records (for example, an incremental filter such as `WHERE updated_at > <last processed value>`); otherwise you will see "already processed" data. Also, if NiFi is clustered, set Scheduling > Execution to Primary node; otherwise all nodes will run the same query and you will see each record processed "n" times, where "n" = number of nodes in the cluster.
07-07-2017
05:34 PM
@Sanaz Janbakhsh Good to hear! Can you mark the original answer I posted as accepted to close out this thread? Thanks, Matt
05-02-2017
10:43 PM
If you mean Solr on HDFS, the answer is "it depends." If you have a high number of frequent updates to your index, I usually recommend local storage. On the other hand, if your updates are more batch-oriented rather than a constant stream, then using HDFS is a convenient option.

If you mean installing Solr on HDF, the only supported option and use case is installing Ambari Infra. The Ambari Infra component is Solr under the covers, but it is only supported for use with HDP and HDF components such as Ranger for user audit records. There is no support for using Ambari Infra to index your own data.
05-11-2017
07:03 PM
Thanks for the helpful link, Matt!