Communication in a Microservice Architecture

Yuval Hazaz
Apr 6, 2023

What Is Microservices Architecture?

Microservices architecture is a software system architecture that is designed as a collection of loosely coupled independent services. A microservice can be developed, tested, deployed, and scaled without being disrupted by other services.

Developers use microservices architecture to build distributed, decentralized systems. In a microservices architecture, each service is responsible for its own data and logical model. The approach essentially breaks an application down into smaller pieces that communicate with each other autonomously.

Even though microservices architecture is complex, it’s still a viable alternative to monolithic and service-oriented architectures. For more information, read our blog post, An Introduction to Microservices.


Advantages of Microservices

High Performance

You can choose high-performance tools for individual microservices, which increases the performance of the architecture as a whole. You can also add new components to a service without causing downtime or having to redeploy the whole system.

Faster Delivery

Microservices architecture allows for faster release cycles. Unlike with a monolithic architecture, applications developed as microservices can release new features and upgrade older versions more quickly. Because the services are independent, developers working on individual microservices aren’t blocked by one another.

Team Autonomy

Microservices architecture allows multiple smaller teams to work independently and cross-functionally. It encourages a service-ownership approach in which each team works on its service autonomously and is responsible for that service’s business logic. Consequently, each team can develop, test, deploy, and scale its services without depending on other teams.

Microservices Scalability

Microservices can scale horizontally and vertically very quickly since they can be implemented using different technologies and deployed to multiple servers. Based on load and processing power, you can autonomously scale a microservice up or down without affecting other services and without inflating the scalability cost.

Why not Amplication?

Good news everyone! If you're looking to dive into microservices architecture and want a great place to start, look no further than Amplication. Amplication is an open-source, easy-to-use development tool that can help you easily create robust and scalable microservices applications. So why wait? Head over to the Amplication GitHub repo, join the fun, and give us a 🌟. Woop woop woop!

Zoidberg suggesting Amplication

Challenges to Building Microservices


Reliability

While building a microservices architecture, you must ensure each component’s reliability.

For example, in a monolithic architecture, the whole application goes down if there’s a server-related issue. So if the system is guaranteed to be up 99.5% of the time, the whole application is operational for that same 99.5%. In a microservices architecture, however, each service has its own uptime assurance. If a system has ten microservices, each with 99.5% guaranteed uptime, the calculated uptime for the entire system is 0.995^10, roughly 95%, which is considerably lower than for the monolithic application.
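The compound-availability arithmetic above is easy to verify; a quick sketch:

```python
# Composite availability of n independent services: the system as a whole is
# up only when all n services are up simultaneously, so the per-service
# uptimes multiply. With 10 services at 99.5% each: 0.995 ** 10 ≈ 0.951.
def composite_availability(per_service_uptime: float, n_services: int) -> float:
    return per_service_uptime ** n_services

print(round(composite_availability(0.995, 10), 3))  # → 0.951
```

This is why adding services without fault tolerance silently erodes the availability your users actually experience.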

Because of this, developers should introduce fault tolerance to guard an application from potential downtime.

High Availability

You must build a highly available distributed system to provide continuous and sustainable service to your end users. Microservices must be resilient to failures and available during unplanned downtimes to avoid service disruptions.

In highly available microservices, if one machine fails, the workload can fail over to another machine without causing any downtime. You can also perform constant monitoring and in-depth testing to catch failures early, or replicate microservice components across multiple server instances to minimize downtime.

So, highly available systems must recover from unforeseen failures in the shortest time possible.
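The failover behavior described above can be sketched in a few lines; the `call_with_failover` helper and the handlers here are illustrative, not a real client library:

```python
# Minimal failover sketch: try each replica in turn and return the first
# successful response, so one failed machine doesn't cause downtime.
def call_with_failover(replicas, request):
    last_error = None
    for replica in replicas:
        try:
            return replica(request)
        except Exception as err:  # in practice, narrow this to network errors
            last_error = err      # remember the failure and try the next replica
    raise RuntimeError("all replicas failed") from last_error

def broken(_):    # simulates a machine that is down
    raise ConnectionError("instance unreachable")

def healthy(request):
    return f"handled: {request}"

print(call_with_failover([broken, healthy], "GET /orders"))  # → handled: GET /orders
```

Real systems put this logic in a load balancer or service mesh rather than in application code, but the principle is the same.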

Improved Scalability

Scalability is a key characteristic of microservices, but it comes at a cost. You can scale up a microservice by increasing the server capacity, the number of running instances of the application, or both.

While running a monolithic application from a single server, you can handle an increase in load by running new instances of the application and spreading the load evenly across them. In a microservices-based system, however, you need to manage many different components and decide whether all of them, or just one, need to be scaled up.


Fault Tolerance

While building a distributed system with microservices, it’s assumed that failures will occur. So microservices should be designed to be resilient to failures and to respond to them without any data loss or downtime.

In a distributed system, multiple microservices communicate with each other. One failed microservice must not cause cascading failures to other microservices and bring down the entire system. A microservice needs to handle failures gracefully, recovering the last state before the failure and restarting successfully.

Foundations of Microservices Architecture


Statelessness

While building a microservices architecture, it’s easy to perform horizontal scaling (i.e., increasing service instances on demand) if microservices are stateless.

According to O’Reilly, “Stateless microservices do not maintain any state within the services across calls. They take in a request, process it, and send a response back without persisting with any state information.” By contrast, stateful microservices store processing state in some form and use it later when handling new requests.
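The distinction can be illustrated with two toy handlers (both hypothetical):

```python
# A stateless handler derives its response entirely from the request, so any
# instance can serve any call and new instances can be added freely.
def stateless_total(request: dict) -> float:
    return sum(item["price"] * item["qty"] for item in request["items"])

# A stateful handler keeps data between calls; scaling it out requires that
# state to be shared or sticky-routed, which complicates horizontal scaling.
class StatefulCounter:
    def __init__(self):
        self.calls = 0

    def handle(self, request: dict) -> int:
        self.calls += 1  # state survives across requests on this one instance
        return self.calls

print(stateless_total({"items": [{"price": 2.0, "qty": 3}]}))  # → 6.0
```

In practice, stateful services usually externalize their state to a database or cache so the service process itself stays stateless.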

Horizontal Scaling

Horizontal scaling refers to adding or removing machine instances to cope with an increase or decrease in demand. Instead of changing the specifications of an existing machine, you spread the load and processing power across multiple machines. In vertical scaling, by contrast, more computing power is added to the existing machine.

Horizontal scaling is generally preferred over vertical scaling because you don’t need to take your system offline to scale it: you keep the existing resource pool online and simply add more computing resources to it.

Load Balancing

Load balancing distributes incoming traffic across the available server instances so that load doesn’t pile up on one or two of them. Requests are routed using algorithms such as round robin.

While working with horizontally scalable microservices, multiple client requests need to be evenly routed to multiple server instances on the server pool; this avoids overloading a single server instance.
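Round-robin routing, mentioned above, fits in a few lines with `itertools.cycle` (the instance names are placeholders):

```python
import itertools

# Rotate through a fixed pool of server instances so each new request goes
# to the next instance in turn, spreading load evenly across the pool.
instances = ["instance-a", "instance-b", "instance-c"]
next_instance = itertools.cycle(instances)

routed = [next(next_instance) for _ in range(6)]
print(routed)
# → ['instance-a', 'instance-b', 'instance-c', 'instance-a', 'instance-b', 'instance-c']
```

Production load balancers layer health checks and weighting on top of this basic rotation.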

Strong Consistency

A distributed system is said to be consistent if all the server instances show the same data at any point in time. Strong consistency means the data is consistent at all times. It is typically achieved by introducing mechanisms like locks, which cause contention overhead and limit the system’s elasticity and resilience.

Traditional monolithic architectures usually implement a strong consistency model because they use a strongly consistent relational (SQL) database.

Eventual Consistency

Eventual consistency is a model used to ensure data consistency and availability in a distributed system. It allows inconsistencies for a short period of time until they are eventually resolved, without having to roll back the whole process. Consistency is eventually achieved through asynchronous, event-driven communication between microservices using message or event brokers.

Version control system tools like Git implement an eventually consistent model since they rely on merge operations later on, to bring things back into alignment.
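The event-driven convergence described above can be simulated in memory; the `events` list here is just a stand-in for a real message broker:

```python
# Two replicas of the same record converge by applying the same ordered
# stream of update events. Until the second replica drains the stream,
# reads from it may be stale; that window is the "eventual" part.
events = [("set", "status", "paid"), ("set", "carrier", "DHL")]  # stand-in broker

replica_a, replica_b = {}, {}

for _, key, value in events:       # replica A applies events immediately
    replica_a[key] = value

stale_read = replica_b.get("status")   # → None: B hasn't caught up yet

for _, key, value in events:       # replica B catches up later
    replica_b[key] = value

print(replica_a == replica_b)  # → True: the replicas have converged
```

The trade-off is exactly the `stale_read` line: for a short window, different services can answer the same question differently.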

Techniques of Communication Between Microservices

Synchronous HTTP Calls

Microservices communicate synchronously using HTTP or HTTPS protocols, where the client sends an HTTP request to an HTTP server and waits for its response.


Pros

  • It is a straightforward solution.
  • It happens in real-time.


Cons

  • The client blocks its thread when it calls the server, resuming its task only when the response from the server arrives.
  • You may have to handle cascading failures.
  • Communicating services are tightly coupled.

Use cases

  • In situations where a response is required from another service
  • When you require real-time responses
  • If you need faster computation and response

Technologies used

  • HTTP
  • gRPC
  • REST
  • GraphQL
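One drawback of synchronous calls is that the client thread blocks; a common mitigation is a timeout with a fallback, so a slow downstream service can’t stall its callers and trigger a cascading failure. A runnable sketch (the slow service is simulated with a sleep):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as CallTimeout

def slow_downstream_service():
    time.sleep(2)              # simulates an overloaded downstream service
    return "real response"

# Bound how long the caller blocks; return a fallback instead of hanging
# and propagating the stall to our own callers.
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(slow_downstream_service)
    try:
        result = future.result(timeout=0.1)  # give up after 100 ms
    except CallTimeout:
        result = "fallback response"

print(result)  # → fallback response
```

Libraries like resilience4j or Polly generalize this pattern into full circuit breakers.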

Message Queue

Unlike with HTTP communication, services do not need to interact with each other directly; instead, each microservice pushes its messages into a message queue via a message broker, communicating with other services asynchronously.


Pros

  • Microservices do not have to wait for the receiving microservice to finish processing a message; they only wait for the message broker to confirm that the message was accepted.
  • A message queue allows communication between microservices regardless of whether the receiving microservice is active; only the message broker needs to be running.
  • Asynchronous communication provides better system performance.
  • The technique offers a redelivery mechanism: in case of failure, a message can be redelivered immediately or after a delay.
  • It provides a dead-letter mechanism that routes undeliverable messages to a separate queue.


Cons

  • Queues don’t guarantee long-term message retention; messages are typically stored only until they are processed, so past messages can’t be replayed.
  • It makes the system more complex, since you need to install, configure, and maintain the message broker.

Use cases

  • For long-running and resource-consuming tasks
  • When the result of the task is not immediately required

Technologies used

  • ActiveMQ
  • RabbitMQ
  • Apache Kafka
  • Cloud-based messaging services like AWS Kinesis or Amazon SQS
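The redelivery and dead-letter mechanisms listed above can be sketched with an in-memory queue; this is a toy stand-in for a real broker such as RabbitMQ, and all names are illustrative:

```python
from collections import deque

# Failed messages are redelivered up to max_retries times, then routed to a
# dead-letter list instead of blocking the queue forever.
def consume(queue, handler, max_retries=3):
    dead_letters = []
    while queue:
        message, attempts = queue.popleft()
        try:
            handler(message)
        except Exception:
            if attempts + 1 < max_retries:
                queue.append((message, attempts + 1))  # redeliver later
            else:
                dead_letters.append(message)           # give up: dead-letter it
    return dead_letters

processed = []

def handler(message):
    if message == "poison":   # simulates a message that always fails
        raise ValueError("cannot process")
    processed.append(message)

queue = deque([("order-1", 0), ("poison", 0), ("order-2", 0)])
print(consume(queue, handler))  # → ['poison']
print(processed)                # → ['order-1', 'order-2']
```

Note how the poison message is retried without blocking the healthy messages behind it, which is the point of the dead-letter mechanism.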

Publish/Subscribe Messaging

Pub/Sub messaging is an asynchronous form of communication where a microservice broadcasts a message asynchronously across multiple other services. In this method, a microservice publishes a message to a topic immediately received by all other microservices subscribed to that topic.


Pros

  • Publish/subscribe messaging provides real-time communication, delivering messages to subscribers near-instantaneously.
  • It provides increased scalability: there is no pre-defined number of publishers and subscribers, and they can be added or removed at any time, according to usage, without affecting the whole system.
  • The technique helps build more loosely coupled systems, since publishers and subscribers are decoupled and there are no data-transfer or delivery dependencies.


Cons

  • Since there is no direct communication between publishers and subscribers, failed deliveries can go unnoticed; publishers do not receive an acknowledgment or reply once messages are delivered.
  • Message ordering isn’t guaranteed when consumers receive messages.

Use cases

  • When real-time computing is needed, like in automation, data centers, networking, etc.
  • For distributed cloud-based systems and serverless computing like Amazon Web Services (AWS) or Google Cloud Platform (GCP)
  • In IoT technologies since you can publish messages and push alerts to the connected IoT devices in real-time

Technologies used

  • Apache Kafka
  • ActiveMQ
  • Redis
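A topic-based broker of the kind described above fits in a few lines in memory; the class and topic names here are illustrative:

```python
from collections import defaultdict

# Minimal in-memory pub/sub: publishers and subscribers share only a topic
# name, never a direct reference to each other; that is the loose coupling.
class PubSubBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:  # fan out to every subscriber
            callback(message)

broker = PubSubBroker()
received_by_billing, received_by_shipping = [], []
broker.subscribe("order.created", received_by_billing.append)
broker.subscribe("order.created", received_by_shipping.append)

broker.publish("order.created", {"order_id": 42})
print(received_by_billing == received_by_shipping == [{"order_id": 42}])  # → True
```

Notice that the publisher has no idea two services are listening; adding a third subscriber requires no change to the publisher.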

Event Streaming

Event streaming is a mode of communication, used in serverless or microservices architectures, in which microservices communicate seamlessly through streams of events and messages.

An event stream is a sequence of events that contains information about state changes; it’s a stream of immutable data in a pub/sub design pattern that is durable and persistent.
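The durability and immutability described above are what distinguish a stream from a plain queue: events are never removed, and each consumer tracks its own read offset, Kafka-style. A minimal sketch (the class is illustrative):

```python
# An event stream is an append-only log. Because events persist, a consumer
# can resume from its last offset after a failure, or replay from zero.
class EventStream:
    def __init__(self):
        self.log = []                  # immutable, ever-growing event log

    def append(self, event):
        self.log.append(event)

    def read_from(self, offset):
        return self.log[offset:]       # replay everything since `offset`

stream = EventStream()
stream.append({"type": "OrderPlaced", "id": 1})
stream.append({"type": "OrderShipped", "id": 1})

consumer_offset = 1                    # this consumer crashed after event 0
replayed = stream.read_from(consumer_offset)
print(replayed)  # → [{'type': 'OrderShipped', 'id': 1}]
```

A new consumer can call `read_from(0)` to rebuild its entire state from history, which is the basis of event sourcing.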


Pros

  • Event streaming provides real-time interactivity for better data awareness.
  • It uses immutable events and message ordering, which support concurrency in distributed systems.
  • Streams keep services loosely coupled: services producing events don’t need to know how the events are consumed, and services consuming events don’t need to know how they are produced.
  • You also get fault tolerance and resiliency: if a consumer fails, the system keeps working because messages queue up in the broker, and the consumer can resume consuming events once it recovers.


Cons

  • Event streaming increases system complexity, requiring a queueing system with producers and consumers at both ends.
  • It increases costs, since the system must read and validate events in addition to running its business logic.
  • Monitoring a distributed and highly decoupled system is challenging; because each service is independent, it can be hard to identify which service passed incorrect data.

Use cases

  • Where different teams are operating and deploying in various regions and/or accounts since event routers allow them to transfer data between systems
  • To monitor and receive notifications on any changes or updates instead of continuously monitoring resources

Technologies used

  • Apache Kafka


Conclusion

Choosing the right mode of communication is an important consideration when building microservices. There are many factors to weigh when designing how your services will communicate, such as how to provide fault tolerance, scalability, and resiliency while keeping the system highly available.

In this article, we provided detailed information on the major communication modes. Every system is different, and your chosen communication model will be based on your unique use cases.