We are currently facing an issue with Ambari Server that causes performance degradation and systematically ends in a JVM crash. Our production cluster is composed of ten nodes running most of the services provided by the Hortonworks Hadoop stack. The performance alerts are related to the Ambari Server REST API.
We can easily reproduce it simply by generating activity in the web UI, lightly spamming the interface (manually, with one or two users). The logs display timeout errors which, after a certain amount of time, end in a Java OOM. After investigating, here is what we have found so far:
We use a PostgreSQL database, which in its current state is still responsive and reactive. We checked some tables such as alert_history (approximately 20k rows) but found nothing suspicious. We also checked the pg_stat_statements view, and it appears that there are no slow queries at the moment (the slowest we could observe has only a one-second average runtime, and is not even related to Ambari's tables).
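For reference, a query like the following can be used against pg_stat_statements to rank statements by average runtime (column names assume PostgreSQL 9.x–12, where the view exposes total_time in milliseconds; on PostgreSQL 13+ it is named total_exec_time):

```sql
-- Top 10 statements by average runtime.
-- Requires pg_stat_statements in shared_preload_libraries and
-- CREATE EXTENSION pg_stat_statements; in the target database.
SELECT query,
       calls,
       total_time / calls AS avg_time_ms,
       total_time
FROM pg_stat_statements
ORDER BY avg_time_ms DESC
LIMIT 10;
```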
We took six thread dumps and one heap dump after generating activity in the UI to make it crash. The following details were observed:
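For anyone wanting to reproduce the capture, the dumps can be taken with the standard JDK tools; the PID lookup below assumes the process matches "AmbariServer", so adjust it to your installation:

```shell
# Find the Ambari Server JVM PID (process name may differ on your setup)
PID=$(pgrep -f AmbariServer)

# Take 6 thread dumps, 10 seconds apart
for i in 1 2 3 4 5 6; do
  jstack -l "$PID" > "/tmp/ambari-threaddump-$i.txt"
  sleep 10
done

# Take one heap dump (the JVM pauses while the dump is written)
jmap -dump:live,format=b,file=/tmp/ambari-heapdump.hprof "$PID"
```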
I am currently going through the Ambari Server source code in its GitHub repository, matching it against the thread stack traces, using one of the heavy memory-consuming threads mentioned earlier as a reference:
This problem is critical, as we need to restart Ambari Server quite often, which hurts our operational efficiency. I am still looking for the root cause, but I would gladly appreciate some hints about where to look :)
Can you please fine-tune Ambari by following https://community.hortonworks.com/articles/80635/optimize-ambari-performance-for-large-clusters.html
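Given the OOM, the first thing worth checking is the Ambari Server heap size, which on most installs is set in /etc/ambari-server/conf/ambari-env.sh. The values below are illustrative only; size them to your cluster:

```shell
# /etc/ambari-server/conf/ambari-env.sh
# Default is usually -Xms512m -Xmx2048m; bump the max heap and
# capture a heap dump automatically on the next OOM.
export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Xms2048m -Xmx4096m \
  -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/ambari-server/"
```

A restart of ambari-server is required for the new JVM arguments to take effect.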
Also, you did not mention which Ambari version you are using. If you think there is a large amount of historical data, then you can attempt to purge it by following https://docs.hortonworks.com/HDPDocuments/Ambari-22.214.171.124/bk_ambari-administration/content/purging-am...
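If you turn out to be on a recent Ambari release (2.7 or later), the purge is also exposed as a server CLI command; the cluster name and cutoff date below are placeholders:

```shell
# Stop the server before purging, then restart it afterwards
ambari-server stop
ambari-server db-purge-history --cluster-name <your-cluster> --from-date 2018-01-01
ambari-server start
```

On older releases, the documentation linked above describes purging the history tables directly in the database instead.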
Let me know if any questions.