Support Questions

Need a PowerPoint doc to explain Ambari nodes as master/worker/Kafka machines


hi,

I need to give a simple course about the Ambari machines - the master machines in the cluster and the worker/Kafka machines.

So I would be happy to get a simple doc (preferably PowerPoint) that describes the nodes in the Ambari cluster and each node's purpose, including examples, a diagram, and the relationship between the masters and the worker/Kafka nodes.

The target is to give a basic idea to employees who are new to the Hadoop world.

Michael-Bronson
1 ACCEPTED SOLUTION

Super Collaborator

Hi Michael. I trust your ability to make your own PowerPoint with the following information.

Most importantly, Ambari has nothing to do with Kafka. I strongly suggest you explain Kafka on its own, without ever mentioning Ambari.

Moving on, at a high level there is the Ambari Server (the web UI you log in to) and the agents (the hosts you can add services to, manage, and monitor). Ambari has no concept of workers. The Ambari Server requires a running relational database: PostgreSQL, MySQL, or Oracle.
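
If it helps the slides, here is a minimal sketch of how that server/agent split looks from the outside: the Ambari REST API lists every host whose agent has registered with the server. The server address and the admin/admin credentials are placeholders for whatever your environment uses.

import requests

# Hypothetical server address and the default admin account; change for your setup.
AMBARI = "http://ambari.example.com:8080/api/v1"
AUTH = ("admin", "admin")

# The hosts resource lists every machine whose agent has registered with the server.
resp = requests.get(f"{AMBARI}/hosts", auth=AUTH)
resp.raise_for_status()
for item in resp.json()["items"]:
    print(item["Hosts"]["host_name"])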

Perhaps you should start here, but I will try to continue. https://cwiki.apache.org/confluence/display/AMBARI/Ambari+Design

Ambari uses widgets to display dashboards and graphs. Services running on external systems are configured through SSH communication with the Ambari agents. Ambari gives you a central location to define configuration files for any environment. Hadoop is not required for Ambari to work; while Ambari is commonly used for Hadoop, it is fully extensible via what are called "stacks." The HDP stack, for example, includes Hadoop, Hive, HBase, Pig, Spark, Ranger, etc.
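
A quick way to see the "stacks" idea in practice is to ask the same REST API which stacks the server knows about. This sketch reuses the placeholder server and credentials from the previous snippet; the response key names may vary slightly between Ambari versions.

import requests

AMBARI = "http://ambari.example.com:8080/api/v1"   # placeholder server
AUTH = ("admin", "admin")                          # placeholder credentials

# Each item is a stack definition the server can install from, e.g. HDP.
resp = requests.get(f"{AMBARI}/stacks", auth=AUTH)
resp.raise_for_status()
for item in resp.json()["items"]:
    print(item["Stacks"]["stack_name"])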

When you first log in to a fresh Ambari Server, you have a default login account, and you must define a cluster and add hosts before you can do anything useful with Ambari. It is preferable to let Ambari itself set up and manage services on new hosts rather than attempting to add existing hosts with pre-installed services to Ambari. For example, you should not install Hadoop with Puppet/Chef/Ansible and then add that server to Ambari. Instead, use those tools to manage the Ambari Agent installation, then continue with a typical Ambari "Add Host" operation.
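
As a rough sketch of that "Add Host" step done through the API: the cluster name, host name, server, and credentials below are all placeholders, and it assumes the agent on the new host is already installed and has registered with the server.

import requests

AMBARI = "http://ambari.example.com:8080/api/v1"   # placeholder server
AUTH = ("admin", "admin")                          # placeholder credentials
HEADERS = {"X-Requested-By": "ambari"}             # Ambari requires this header on write calls

# Attach an already-registered host to a cluster; both names are placeholders.
resp = requests.post(
    f"{AMBARI}/clusters/MyCluster/hosts/worker01.example.com",
    auth=AUTH,
    headers=HEADERS,
)
resp.raise_for_status()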

The agents communicate with the Ambari Server by periodically sending heartbeats to let it know that they are alive and able to accept requests.
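
You can also read the result of those heartbeats back per host. The field names in this sketch are my assumption from the v1 API, so check the docs for your Ambari version; the host name is a placeholder.

import requests

AMBARI = "http://ambari.example.com:8080/api/v1"   # placeholder server
AUTH = ("admin", "admin")                          # placeholder credentials

# Ask only for the health/heartbeat fields of one (placeholder) host.
params = {"fields": "Hosts/host_status,Hosts/last_heartbeat_time"}
resp = requests.get(f"{AMBARI}/hosts/worker01.example.com", auth=AUTH, params=params)
resp.raise_for_status()
info = resp.json()["Hosts"]
print(info["host_status"], info["last_heartbeat_time"])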

Ambari offers different levels of account access via its login methods. For example, you can selectively allow administrators to change and restart services, while read-only users can only view overall cluster usage or browse the HDFS file system.
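
For completeness, a sketch of creating an extra, non-admin login account through the users API. The body keys follow my reading of the v1 API, the user name and password are placeholders, and you would still grant the account cluster permissions afterwards.

import requests

AMBARI = "http://ambari.example.com:8080/api/v1"   # placeholder server
AUTH = ("admin", "admin")                          # placeholder credentials
HEADERS = {"X-Requested-By": "ambari"}

# Create a non-admin account; user name and password are placeholders.
body = {"Users/user_name": "viewer", "Users/password": "changeme", "Users/admin": False}
resp = requests.post(f"{AMBARI}/users", json=body, auth=AUTH, headers=HEADERS)
resp.raise_for_status()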

Ambari also has "Ambari Views," which allow you to extend and expose your own type of "web portal" to any system running in your environment.
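
The Views installed on a server are visible through the API as well. A small sketch, with the same placeholder server and credentials; the "ViewInfo" key name is my assumption from the v1 API.

import requests

AMBARI = "http://ambari.example.com:8080/api/v1"   # placeholder server
AUTH = ("admin", "admin")                          # placeholder credentials

# Each item describes one installed View, e.g. the Files or Hive view.
resp = requests.get(f"{AMBARI}/views", auth=AUTH)
resp.raise_for_status()
for item in resp.json()["items"]:
    print(item["ViewInfo"]["view_name"])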

Hope this gets you started; the Ambari wiki page is a fine resource for more information.


2 REPLIES


Thank you for the long explanation. Regarding the PPT, I just wanted to save the time of creating this doc, so I hoped that something already existed out there on the network.

Michael-Bronson