
Which services are not needed and can be removed?

Solved

Which services are not needed and can be removed?

Explorer

I have a large cluster installed; currently it is used only for Spark-on-YARN jobs.

Which services on the master can I remove/uninstall to free up some RAM?

The one server that accepts Spark jobs currently has the following services:

  1. Timeline Service V1.5 / YARN Master
  2. History Server / MapReduce2 Master
  3. Hive Metastore / Hive Master
  4. HiveServer2 / Hive Master
  5. Metrics Collector / Ambari Metrics Master
  6. Grafana / Ambari Metrics Master
  7. NameNode / HDFS Master
  8. ResourceManager / YARN Master
  9. SNameNode / HDFS Master
  10. Spark2 History Server / Spark2 Master
  11. Timeline Service V2.0 Reader / YARN Master
  12. YARN Registry DNS / YARN Master
  13. ZooKeeper Server / ZooKeeper Master
  14. DataNode / HDFS Slave
  15. Metrics Monitor / Ambari Metrics Slave
  16. HDFS Client / HDFS Client
  17. Hive Client / Hive Client
  18. MapReduce2 Client / MapReduce2 Client
  19. Spark2 Client / Spark2 Client
  20. Tez Client / Tez Client
  21. YARN Client / YARN Client
  22. ZooKeeper Client / ZooKeeper Client

and multiple worker nodes, each with:

  1. Metrics Monitor / Ambari Metrics Slave
  2. NodeManager / YARN Slave

5 REPLIES

Re: Which services are not needed and can be removed?

Super Mentor

@Ilia K

If you are not using the Hive service (Hive Metastore / HiveServer2 / Hive Client), then you can remove it.

The AMS Collector also internally starts an HBase instance to store cluster- and service-related metrics data, so if you are not particularly interested in metrics you can also delete the AMS service (AMS Collector / Grafana / Metrics Monitors).
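If you do go that route, deleting a service can also be scripted against Ambari's REST API instead of clicking through the UI. A rough sketch (the host, cluster name, and admin credentials are placeholders; Ambari only lets you delete a service that is already stopped, and the commands below are printed rather than executed):

```shell
#!/bin/sh
# Sketch only: host, cluster name and credentials are placeholders.
AMBARI_HOST="ambari.example.com"
CLUSTER="mycluster"
SVC_URL="http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/AMBARI_METRICS"

# Dry-run helper: prints each command instead of executing it.
# Set DRY_RUN=0 to run the calls for real.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

# 1) Stop the service first ("INSTALLED" is Ambari's stopped state).
run curl -u admin -H "X-Requested-By: ambari" -X PUT \
    -d '{"RequestInfo":{"context":"Stop AMS"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
    "$SVC_URL"

# 2) Then delete the stopped service.
run curl -u admin -H "X-Requested-By: ambari" -X DELETE "$SVC_URL"
```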


Re: Which services are not needed and can be removed?

Explorer

Thanks for the quick response!

Can I delete all of Hive?

  1. Hive Metastore
  2. HiveServer2
  3. Hive Client


Metrics are useful; I can see some bottlenecks and improve performance.

I am currently still struggling to improve the time it takes to allocate containers for each job (the containers are the same every time, yet it takes 30-60 seconds per job to allocate them).

By the way, why is Hive a requirement? (In the install stage it was selected because I needed Spark/YARN.)


Re: Which services are not needed and can be removed? (Accepted Solution)

Super Mentor

@Ilia K

A small correction here.

I just checked that Spark2 has a dependency on the Hive service.

# grep -A1 -B1 'HIVE' /var/lib/ambari-server/resources/common-services/SPARK2/2.0.0/metainfo.xml
            <dependency>
              <name>HIVE/HIVE_METASTORE</name>
              <scope>cluster</scope>


Hence you should not delete the HIVE service. If you are not using Hive, just stop the Hive service components (so they do not use RAM) and put the Hive service in Maintenance Mode.
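The stop-and-maintenance-mode step can also be scripted against Ambari's REST API. A minimal sketch, assuming the default port 8080; host, cluster name, and credentials are placeholders, and the commands are printed rather than executed:

```shell
#!/bin/sh
# Sketch only: host, cluster name and credentials are placeholders.
AMBARI_HOST="ambari.example.com"
CLUSTER="mycluster"
HIVE_URL="http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/HIVE"

# Stop Hive: in Ambari's service states, "INSTALLED" means stopped.
STOP='{"RequestInfo":{"context":"Stop HIVE"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}'
# Maintenance mode keeps Ambari from alerting on the stopped service.
MAINT='{"RequestInfo":{"context":"Maintenance ON"},"Body":{"ServiceInfo":{"maintenance_state":"ON"}}}'

# Dry run: commands are printed, not executed; set DRY_RUN=0 to run them.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

run curl -u admin -H "X-Requested-By: ambari" -X PUT -d "$STOP" "$HIVE_URL"
run curl -u admin -H "X-Requested-By: ambari" -X PUT -d "$MAINT" "$HIVE_URL"
```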



Re: Which services are not needed and can be removed?

Explorer

OK, I just stopped the 3 Hive services. It still bothers me a bit that there are red dots in the UI :)

By the way, why is there a dependency on Hive?


Re: Which services are not needed and can be removed?

Super Mentor

@Ilia K

Spark can be used to interact with Hive.

When you install Spark using Ambari, the hive-site.xml file is automatically populated with the Hive metastore location.

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_spark-component-guide/content/spark-conf...
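For reference, the piece Ambari fills in is the Metastore's Thrift endpoint. A minimal sketch of that hive-site.xml entry (the host name is a placeholder; 9083 is the usual Metastore port):

```xml
<configuration>
  <property>
    <!-- Thrift URI of the Hive Metastore that Spark should talk to -->
    <name>hive.metastore.uris</name>
    <value>thrift://metastore-host.example.com:9083</value>
  </property>
</configuration>
```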
