What component are you asking about? What are you trying to achieve?
They typically call each other over a mix of separate protocols (rough client sketches for each follow the list):
- HDFS and YARN interact via Hadoop's own RPC/IPC mechanism.
- Ambari Server and Agents communicate over HTTP/REST. Ambari Server also needs a JDBC connection to its backing database.
- Hive, HBase, and Spark can expose Thrift servers. The Hive metastore service itself connects to its backing relational database over JDBC.
- Kafka uses its own binary protocol over TCP.
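
For HDFS, every client call goes over Hadoop RPC to the NameNode. A minimal sketch using the standard `FileSystem` API, assuming a NameNode at the hypothetical `namenode.example.com` on 8020 (a common default RPC port):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsRpcExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // fs.defaultFS points at the NameNode's RPC endpoint (placeholder host)
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
        try (FileSystem fs = FileSystem.get(conf)) {
            // listStatus issues an RPC to the NameNode and prints the results
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
        }
    }
}
```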
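Ambari's REST API can be queried with any HTTP client. A sketch using Java 11's built-in `HttpClient` against the `/api/v1/clusters` endpoint, assuming the default server port 8080; the hostname and credentials are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AmbariRestExample {
    public static void main(String[] args) throws Exception {
        // Basic auth; admin:admin is only Ambari's out-of-the-box default
        String auth = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://ambari.example.com:8080/api/v1/clusters"))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON list of registered clusters
    }
}
```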
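Clients usually reach HiveServer2 through its JDBC driver, which speaks Thrift under the hood. A sketch assuming HiveServer2 at the hypothetical `hiveserver.example.com` on the default port 10000, with `hive-jdbc` on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcExample {
    public static void main(String[] args) throws Exception {
        // The jdbc:hive2:// URL is carried over Thrift to HiveServer2
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://hiveserver.example.com:10000/default", "user", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```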
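For Kafka, you normally use a client library rather than speaking the wire protocol by hand; the library handles the binary protocol over TCP. A producer sketch assuming a broker at the hypothetical `broker.example.com` on the default port 9092 and a topic named `test-topic`:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // bootstrap.servers is the broker's TCP endpoint (placeholder host)
        props.put("bootstrap.servers", "broker.example.com:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() serializes the record and writes it over Kafka's TCP protocol
            producer.send(new ProducerRecord<>("test-topic", "key", "hello"));
        }
    }
}
```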
I would suggest starting with a specific component for the use case(s) you have in mind. Hadoop itself consists only of HDFS, YARN, and MapReduce.