Tuesday, January 31, 2023

Microservices Design Patterns

8 Core Design Patterns for Microservices Architecture

  • Service Discovery:
    • A service registry that helps services locate and communicate with each other.
  • Load Balancer:
    • A component that distributes incoming requests to the appropriate service instances.
  • API Gateway:
    • An entry point for client requests that routes requests to the appropriate microservices, performs authentication and authorization, and handles other tasks such as caching and request-response mapping.
  • Service Registry:
    • A database of all available microservices, their endpoints, and metadata.
  • Circuit Breaker:
    • A mechanism that helps prevent cascading failures by interrupting communication between services when one service is not responding.
  • Service Monitoring:
    • A system that tracks the health and performance of microservices and generates alerts in case of failures or performance degradation.
  • Service Orchestration:
    • A layer that coordinates communication and interactions between microservices to ensure that they work together correctly.
  • Configuration Server:
    • A centralized repository for storing configuration information that is accessible to all microservices.
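Of the patterns above, the circuit breaker is the most mechanical, so here is a minimal sketch of the idea in Python. The class name, thresholds, and timeout are illustrative, not from any particular library:

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: opens after `max_failures` consecutive failures,
    then rejects calls outright until `reset_timeout` seconds have passed."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None          # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: call rejected")
            self.opened_at = None      # half-open: let one trial call through
            self.failures = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

breaker = CircuitBreaker(max_failures=2, reset_timeout=60.0)

def flaky():
    raise ConnectionError("downstream service down")

for _ in range(2):                     # two real failures open the circuit
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass

try:                                   # third call is rejected immediately,
    breaker.call(flaky)                # without touching the failing service
except RuntimeError as exc:
    print(exc)                         # circuit open: call rejected
```

This is exactly the "interrupt communication when one service is not responding" behavior: the caller fails fast instead of piling requests onto a dead dependency.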

Microservice Synchronous Call:

·        API Gateway Pattern

  • Sits between the client and multiple backend services and manages routing to internal MS.
  • The API gateway can handle:

o Authentication/authorization, protocol conversion, rate limiting, monitoring, and load balancing (cross-cutting concerns).

o Centralizes these features in a single place rather than in every MS.

o  It also aggregates calls to multiple MS and compiles them into a single response.

  • Single point of entry.
  • Aggregates multiple MS for the client applications.
  • A single endpoint is exposed to the client.
  • Works as a request router and reverse proxy.
  • There are multiple API gateway patterns:

o    Gateway Routing Pattern

o    Gateway aggregation pattern

o    Gateway offloading Pattern

 

·         Gateway Routing Pattern

  • Routes client calls to the internal MS.
  • Routes requests to multiple MS while exposing a single endpoint.
  • Multiple MS are exposed through a single endpoint, and each request is routed to the appropriate internal backend MS.
  • The gateway service endpoint is configured with the internal MS.
  • The client can call multiple MS through a single endpoint.
  • No impact on clients if any MS behind the gateway changes.
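The routing step above can be sketched as a simple prefix-match table. The paths and service URLs are hypothetical:

```python
# Hypothetical route table for the gateway: public path prefix -> internal MS.
ROUTES = {
    "/orders":   "http://order-service:8080",
    "/payments": "http://payment-service:8080",
    "/users":    "http://user-service:8080",
}

def route(path):
    """Longest-prefix match: return the internal URL for a public path."""
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix] + path
    return None                        # no backend owns this path

print(route("/orders/42"))             # http://order-service:8080/orders/42
print(route("/unknown"))               # None
```

Because clients only ever see the gateway's endpoint, the table can be re-pointed at new backend instances without any client change.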

 

·         Gateway aggregation pattern

  • It aggregates calls to multiple individual (internal) MS into a single request/response for the client.

o Without it, the client needs to perform multiple calls to different MS to build the response.

o The aggregation pattern orchestrates the MS responses into a single output and, based on the client request, sends the response back to the client.

  • Multiple MS are exposed through a single endpoint.
  • The gateway service endpoint is configured with the internal web services.
  • The client can serve a single use case (spanning multiple MS) through a single endpoint.
  • No impact on clients if any MS behind the gateway changes.
  • Avoids extra network calls and latency on the client side.
  • A single point of failure is a drawback.
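A minimal sketch of the fan-out-and-combine behavior, with the three backend calls stubbed out as local functions (service names and payloads are made up):

```python
# Stand-ins for calls to three internal MS; in practice these are HTTP/gRPC calls.
def get_product(pid):
    return {"id": pid, "name": "Widget"}

def get_price(pid):
    return {"amount": 9.99, "currency": "USD"}

def get_reviews(pid):
    return [{"stars": 5}, {"stars": 4}]

def product_page(pid):
    """One gateway endpoint fans out to three MS and returns one payload,
    so the client makes a single call instead of three."""
    return {
        "product": get_product(pid),
        "price":   get_price(pid),
        "reviews": get_reviews(pid),
    }

print(product_page("p1")["price"]["amount"])   # 9.99
```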

 

·        Gateway offloading Pattern

  • It consolidates commonly used functions/features into the proxy service (the API Gateway).

o Cross-cutting concerns:

v  Authentication/authorization

v  Protocol conversion

v  Rate limiting

v  Monitoring

v  Load balancing

v  Logging

v  SSL termination (certificate handling)

  • Single point of implementation instead of per-MS implementations.
  • Individual MS are spared the pain of implementing these features.
  • Because it lives in the API gateway, only the public endpoint is exposed.
  • The gateway authenticates client requests and applies other features as needed.
  • Avoids extra network calls and latency.
  • Make sure the gateway itself is highly resilient and scalable.
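One way to picture offloading is a wrapper the gateway applies around every service handler, so the handler itself stays concern-free. This is a toy sketch: the key store, limit, and handler are invented for illustration:

```python
import functools

API_KEYS = {"secret-123"}      # hypothetical key store held by the gateway
RATE_LIMIT = 100
request_counts = {}            # per-client request counter

def offload(handler):
    """Wrap a service handler with cross-cutting concerns (auth + rate limit)
    at the gateway, so the individual MS never implements them."""
    @functools.wraps(handler)
    def wrapper(request):
        if request.get("api_key") not in API_KEYS:
            return {"status": 401, "body": "unauthorized"}
        client = request.get("client_ip", "unknown")
        request_counts[client] = request_counts.get(client, 0) + 1
        if request_counts[client] > RATE_LIMIT:
            return {"status": 429, "body": "too many requests"}
        return handler(request)
    return wrapper

@offload
def list_orders(request):      # the MS handler itself stays concern-free
    return {"status": 200, "body": ["order-1", "order-2"]}

print(list_orders({"api_key": "secret-123", "client_ip": "1.2.3.4"})["status"])  # 200
print(list_orders({"client_ip": "1.2.3.4"})["status"])                           # 401
```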

 

·        Benefits for API Gateway:

  • The gateway aggregation pattern reduces chattiness in the communication between the client and the backend services.
  • Cross-cutting concerns are handled centrally via gateway offloading.
  • The API Gateway can aggregate multiple internal microservices into a single client request.
  • The routing feature decouples client applications from the internal microservices, separating responsibilities at the network layer.
  • API gateways provide an abstraction over the backend microservices.

 

·        Working of API Gateway

  • The API Gateway first validates the incoming request.
  • It checks whether the source IP address is on the IP allow list; the gateway can provide allow listing and deny listing for IP-based protection.
  • The gateway then passes the request to the identity provider to perform authentication and authorization, and receives an authenticated session from the identity provider with the validated token information.
  • Next, the gateway applies rate limiting based on the IP address and the headers of the authenticated session.
    • Example:
      • The API Gateway can reject requests from an IP address exceeding a certain rate.
  • The gateway then routes the request to the internal microservices using the service discovery feature: it performs route matching and redirects the request to the specific microservice.
  • The API Gateway also transforms the request into the appropriate protocol for the internal microservices (protocol conversion).
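The ordered checks above can be sketched as a single handler that short-circuits at the first failing step. The allow list, token set, and limit are toy values:

```python
ALLOWED_IPS = {"10.0.0.5"}
VALID_TOKENS = {"token-abc"}   # in reality validated against an identity provider
RATE_LIMIT = 2
hits = {}                      # requests seen per IP

def handle(request):
    """Run the gateway checks in order: IP allow list, authentication,
    rate limiting, then routing to an internal MS."""
    ip = request["ip"]
    if ip not in ALLOWED_IPS:
        return (403, "IP not on allow list")
    if request.get("token") not in VALID_TOKENS:
        return (401, "authentication failed")
    hits[ip] = hits.get(ip, 0) + 1
    if hits[ip] > RATE_LIMIT:
        return (429, "rate limit exceeded")
    return (200, "routed " + request["path"] + " to internal service")

print(handle({"ip": "10.0.0.5", "token": "token-abc", "path": "/orders"}))
# (200, 'routed /orders to internal service')
print(handle({"ip": "9.9.9.9", "token": "token-abc", "path": "/orders"}))
# (403, 'IP not on allow list')
```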

 

·        BFF (Backend for frontend) Pattern

  • A separate API gateway per specific frontend application.
    • Authentication/authorization, protocol conversion, rate limiting, monitoring, load balancing, logging, SSL termination (cross-cutting concerns).
  • Multiple API Gateways, one per client application type (mobile, web, etc.).
  • Single point of implementation per frontend.
  • An API Gateway calls multiple MS APIs for a specific frontend, and other API Gateways can call the same MS for different frontend applications.
  • APIs can be tailored to front-end needs, and those APIs serve only that purpose.
  • Avoids a single point of failure.
  • The API Gateway is configured to support MS calls based on the client type.
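A sketch of two BFFs shaping the same backend data differently per client. The user service and fields are invented for illustration:

```python
# Stand-in for the shared backend MS that both gateways call.
def get_user(uid):
    return {"id": uid, "name": "Ada", "email": "ada@example.com",
            "order_history": ["o1", "o2", "o3"]}

def mobile_bff(uid):
    """Mobile gateway: trimmed payload for small screens and bandwidth."""
    u = get_user(uid)
    return {"name": u["name"], "order_count": len(u["order_history"])}

def web_bff(uid):
    """Web gateway: richer payload tailored to the desktop client."""
    u = get_user(uid)
    return {"name": u["name"], "email": u["email"],
            "order_history": u["order_history"]}

print(mobile_bff("u1"))   # {'name': 'Ada', 'order_count': 3}
```

Each BFF exposes only what its own frontend needs, so one client's requirements never bloat another client's API.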

 

 



Benefits:

These are client-focused interfaces: they keep the logic in the front-end code minimal and streamline the data representation, giving each front-end client application a well-focused interface. By applying Backend for Frontend API Gateways, clients become easy to build and manage, and every client gets data-focused retrieval operations from its own Backend for Frontend API Gateway.

 

Drawback:

The main drawback is increased latency: adding an additional layer to the network architecture increases latency. The pattern pays off for large-scale microservice applications that have several client applications.

 

·        Service Aggregator Pattern

  • Gets the request from the client or from the API Gateway.
  • The gateway passes the request to the service aggregator.
  • The service aggregator calls the other MS, aggregates their data, and sends a single response back to the API Gateway or client.
  • The aggregator service requests data from several different MS.
  • Centralizes the aggregation logic in a specialized MS.
  • Increases coupling and latency between microservices.

 

·        Service Registry Pattern / Discovery Pattern

  • MS should register with the service registry.
  • Due to scaling and changing IP addresses, endpoints keep changing.
    • The service location keeps changing.
    • To avoid fixed endpoints, MS should be registered in the registry.
  • Provides registration and discovery of MS in the cluster.
    • The service discovery pattern uses a centralized server as the service registry to maintain a global view of microservice network locations, and microservices update their locations in the service registry at fixed intervals.
  • Gets the request from the client or from the API Gateway.
  • The API Gateway passes the request to service discovery.
  • It gets the MS endpoint with the updated location (IP).
  • The API gateway then calls/routes to the individual MS.
  • Any new MS should be registered in the registry; every MS should be registered before being exposed to the client.
  • Centralizes the lookup logic in the API Gateway.
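The register-at-intervals behavior can be sketched as a registry with a heartbeat TTL, so stale instances drop out of lookups automatically. Class and service names are illustrative:

```python
import time

class ServiceRegistry:
    """In-memory registry: instances register with a heartbeat timestamp and
    must re-register before `ttl` seconds pass, or lookups skip them."""

    def __init__(self, ttl=30.0):
        self.ttl = ttl
        self.entries = {}                      # name -> {endpoint: last_seen}

    def register(self, name, endpoint):        # also serves as the heartbeat
        self.entries.setdefault(name, {})[endpoint] = time.monotonic()

    def lookup(self, name):
        """Return only the endpoints whose heartbeat is still fresh."""
        now = time.monotonic()
        return [ep for ep, seen in self.entries.get(name, {}).items()
                if now - seen < self.ttl]

registry = ServiceRegistry(ttl=30.0)
registry.register("orders", "10.0.1.7:8080")   # instance announces itself
registry.register("orders", "10.0.1.8:8080")   # a second scaled-out instance
print(registry.lookup("orders"))               # ['10.0.1.7:8080', '10.0.1.8:8080']
```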

 

                    There are 2 main service discovery patterns available to implement service discovery for microservices.

o    client-side service registry,

o    server-side service registry.

 

There are 3 participants in the service discovery pattern.

o    Service Registry

o    Client / Consumers,

o    Microservices. 

Notes

  • Netflix provides a service discovery pattern called Netflix Eureka, which is available for Spring Boot Java applications.
  • The container orchestrator systems automatically handle the service discovery operation itself.
    • Example:
      • Kubernetes has a service definition that basically performs all these tasks after the definition of the Kubernetes service component.

  

 

·        Materialized View Pattern 

  • The MVP stores its own local, denormalized copy of data in the MS's DB.
  • The MS should have a table that contains a denormalized copy of data from other MS.
  • It eliminates expensive cross-service calls.
  • These tables are for read purposes only (a read model).
  • It supports the entire operation within a single process.

v  Example:

o An MS named X needs some data from another MS called Y.

o Copy the relevant Y tables into the X MS.

o These Y tables inside X are read-only for X's purposes and receive updates from Y in a timely manner.

  • Uses eventual consistency; the read tables are updated from time to time.
  • This pattern reduces coupling and improves reliability and response time by reducing latency.
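The X/Y example above can be sketched as a local view table that is refreshed by Y's change events, so X's reads never cross the network. The event shape and field names are made up:

```python
# Service X keeps a local, denormalized copy of the fields it needs from
# service Y, refreshed by Y's published change events (eventual consistency).
product_view = {}                      # X's read-only table of Y's data

def on_product_changed(event):
    """Event subscriber in X: apply Y's change to the local view."""
    product_view[event["id"]] = {"name": event["name"], "price": event["price"]}

def order_total(product_id, qty):
    """Read path in X: no cross-service call, just the local view."""
    return product_view[product_id]["price"] * qty

on_product_changed({"id": "p1", "name": "Widget", "price": 4.0})
print(order_total("p1", 3))            # 12.0
```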

 

 

 

·        CQRS (Command Query Responsibility Segregation) Pattern

  • To avoid complex queries and get rid of heavy joins.
  • Separates read and write operations by separating the databases.
  • The read and write databases can have different strategies for handling requests, to support a large volume of data.
  • To avoid locks on the database while updating or adding data to the tables.
  • Separate DBs within a single MS:

o Any write operation goes to the write database.

o Any read operation goes through the read database.

o Keep both databases in sync.

o Publish an event from the write DB, which is subscribed to by the read DB to apply the delta (eventual consistency).

  • NoSQL and SQL databases may serve the different purposes:

o NoSQL as the denormalized (read) DB

o SQL as the normalized (write) DB
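The write-event-project loop above in miniature, with dicts standing in for the two databases and a list standing in for the event broker (all names invented):

```python
# Write model: normalized rows. Read model: a denormalized, query-ready view
# kept in sync by events published from the write side.
write_db = {"orders": [], "customers": {"c1": "Ada"}}
read_db = {}                                   # order_id -> denormalized row
event_log = []                                 # stand-in for a message broker

def place_order(order_id, customer_id, amount):
    """Command side: append the normalized row and publish an event."""
    write_db["orders"].append((order_id, customer_id, amount))
    event_log.append({"id": order_id, "customer_id": customer_id,
                      "amount": amount})

def project(event):
    """Query-side subscriber: pre-join customer data into the read row,
    so reads need no join at query time."""
    read_db[event["id"]] = {
        "customer": write_db["customers"][event["customer_id"]],
        "amount": event["amount"],
    }

place_order("o1", "c1", 25.0)
for e in event_log:                    # in production this is an async subscription
    project(e)
print(read_db["o1"])                   # {'customer': 'Ada', 'amount': 25.0}
```

The gap between `place_order` and `project` is exactly the eventual-consistency window called out in the drawbacks below.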

 

 

  •  Benefits

o Scalability: when we separate the read and write databases, we can also scale them separately and independently. The read database holds denormalized data to avoid complex join queries, while the complex business logic goes into the write database; with this separation, query performance increases.

o Performance: implementing CQRS improves application performance in every aspect.

o Maintainability and flexibility: CQRS gives the system the flexibility to evolve better over time.

 

  • Drawback:

o    Complexity: CQRS makes any system's design more complex.

o Eventual consistency: when we separate the read and write databases, the read data may be stale and not updated for a while.

 

·        SAGA Pattern

  • Manages data consistency across MS in distributed transaction cases.
  • Creates a set of transactions that update the MS sequentially, publishing an event to trigger the next transaction in the next MS.
  • If one of the steps fails, it triggers rollback (compensating) transactions.
  • It involves multiple MS in a series of local transactions that work together to achieve one use case.
  • Either the overall transaction completes, or each individual transaction rolls back to the initial state.
  • Each MS has its own tables and DB, so each step performs a local transaction in a specific MS and then moves to the next, via either a REST call or triggers and events for the successive MS.
  • Two types of SAGA patterns:

·        Choreography SAGA

v  Events are published through a message broker: one MS performs its task and publishes an event, the successive MS subscribes to that event and does its part, and so on until the transaction is completed.

·        Orchestration SAGA

v  A dedicated orchestrator MS communicates with each participating MS, invoking each local transaction in order and triggering compensating transactions if any step fails.
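The orchestration variant can be sketched as a loop over (action, compensation) pairs: on failure, the compensations of the completed steps run in reverse. The step names are invented:

```python
def run_saga(steps):
    """Orchestrator: run each (action, compensation) pair in order; if a step
    fails, run the compensations of the completed steps in reverse order."""
    completed = []
    for action, compensate in steps:
        try:
            action()
        except Exception:
            for comp in reversed(completed):   # compensating transactions
                comp()
            return "rolled back"
        completed.append(compensate)
    return "committed"

log = []

def fail_shipping():
    raise RuntimeError("shipping service unavailable")

steps = [
    (lambda: log.append("reserve stock"), lambda: log.append("release stock")),
    (lambda: log.append("charge card"),   lambda: log.append("refund card")),
    (fail_shipping,                       lambda: log.append("cancel shipment")),
]

print(run_saga(steps))   # rolled back
print(log)               # ['reserve stock', 'charge card', 'refund card', 'release stock']
```

Note the rollback order: the card is refunded before the stock is released, mirroring the reverse of the commit order.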

 

·        GraphQL:

GraphQL is a query and manipulation language for APIs and a runtime for fulfilling queries with existing data. GraphQL was developed internally by Facebook and released in 2015. GraphQL allows clients to define the structure of the data required, and the same structure of data is returned from the server.

GraphQL provides a complete and understandable description of the data in our API and gives clients the power to ask for exactly what they need and nothing more.

GraphQL provides access to many resources in a single request, reducing the number of network calls and the bandwidth requirements. It enables developers to ask for exactly what is needed and get back predictable results.

GraphQL Protocols:

o GraphQL uses the HTTP protocol, the same as REST APIs.

o GraphQL typically uses the HTTP POST method when querying data.


The characteristics of GraphQL:

o The first characteristic is: ask for what is needed, get exactly that. Send a GraphQL query to the API and get back exactly what was requested. GraphQL queries always return predictable results.

o The second is getting many resources in a single request. While a REST API requires loading from multiple URLs, a GraphQL API gets all the data the application needs in a single request.

 GraphQL Core Concepts

o A GraphQL schema is made up of object types that define which kinds of objects can be requested and which fields they expose. Clients send queries, GraphQL validates the queries against the schema, and then executes the validated queries.

o A resolver is a function attached to a field in the schema. During execution, the resolver is called to produce that field's value.

o A mutation is a GraphQL operation that allows you to insert new data or modify existing data on the server side.
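To make the schema/resolver/validation vocabulary concrete, here is a deliberately tiny Python analogue; it is not the real `graphql` library, and the field names and data are made up:

```python
# A toy "schema": each exposed field is backed by a resolver function.
schema = {
    "name":        lambda obj: obj["name"],
    "email":       lambda obj: obj["email"],
    "order_count": lambda obj: len(obj["orders"]),
}

def execute(query_fields, obj):
    """Validate the query against the schema, then run only the resolvers
    for the requested fields -- ask for what you need, get exactly that."""
    unknown = [f for f in query_fields if f not in schema]
    if unknown:
        raise ValueError("fields not in schema: " + ", ".join(unknown))
    return {f: schema[f](obj) for f in query_fields}

user = {"name": "Ada", "email": "ada@example.com", "orders": ["o1", "o2"]}
print(execute(["name", "order_count"], user))  # {'name': 'Ada', 'order_count': 2}
```

The response contains exactly the requested fields and nothing more, which is the predictability property described above.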

 

 

Option 1:

There can be a single GraphQL schema that works as an API Gateway proxy, routing requests to the targeted microservices and coercing their responses. The microservices would still use REST/Thrift protocols for communication, though.

Having one GraphQL schema as the API Gateway has a downside: every time a microservice's contract input/output changes, the GraphQL schema on the API Gateway side must change accordingly.



Option 2:

Have multiple GraphQL schemas, one per microservice, with a smaller API Gateway server that routes each request to the targeted microservice along with all the request information plus the GraphQL query.

Using multiple GraphQL schemas, one per microservice, makes sense because GraphQL enforces a schema definition, and the consumer needs to respect the input/output defined by the microservice.

Advantages:

GraphQL can be faster than other communication APIs such as REST because it reduces the number of calls.

Disadvantage:

GraphQL queries can become very complex. GraphQL itself has a simple query language, but when a query is requested, the server performs database access, and accessing many nested fields in one query may fetch a large amount of data at a single time. That can cause performance problems, so recursive and deeply nested queries should be avoided. Data also cannot be cached easily through GraphQL.


·        gRPC: There are 2 types of APIs to consider when designing synchronous communication in a microservice architecture.
o    Public APIs:
o    Public APIs should be aligned with the client's requests, and the client can be a web browser or a mobile device.
o    Public APIs should use RESTful APIs over the HTTP protocol for application requests.
o    Backend APIs:
o    Backend APIs must consider network performance instead of easily readable JSON payloads.
o    In the case of backend APIs, inter-service communication can result in a lot of network traffic. For that reason, serialization speed and payload size become more important.
o    Backend APIs should use protocols with binary serialization.
The problem with backend API communication is that inter-service communication puts a heavy load on network traffic. Poor network performance during inter-service and backend communication, together with real-time communication needs, can bring down overall application performance. Business teams decide the performance and streaming requirements.
 
For example:
A client makes a request to a backend microservice, but that microservice needs to call other microservices internally. Adding an item to the shopping cart requires the price to be calculated with up-to-date discount information. The shopping cart microservice exposes an API to the client; when the client invokes this API to add an item to the shopping cart, the shopping cart microservice invokes the discount microservice to get the latest discount information before adding the item, so it can calculate the latest price.
 
The best practice for communication:
Use the REST API between the client and the server (the public APIs), and use gRPC for inter-service communication to be performant. gRPC APIs are scalable and fast, and services built with different technologies can interoperate through the RPC framework, so the approach is also technology agnostic.
 
gRPC is an open-source remote procedure call system, initially developed at Google. gRPC is a framework to efficiently connect services and build distributed systems. It is focused on high performance and uses the HTTP/2 protocol to transport binary messages. It relies on the Protocol Buffers language to define service contracts. Protocol Buffers, also known as protobuf, allow you to define the interface to be used in service-to-service communication regardless of the programming language, and generate cross-platform client and server bindings for many languages.
 
The most common usage scenarios include connecting services in microservice-style architectures and connecting mobile devices and browser clients to backend services. The gRPC framework allows developers to create services that can communicate with each other efficiently and independently of their preferred programming language. Once a contract is defined in a protobuf file, it can be used by each service to automatically generate the code that sets up the communication infrastructure. This feature simplifies the creation of service interactions and, together with the high performance, makes gRPC an ideal framework for microservices and inter-service communication.
How it works
With gRPC, a client application can directly call a method on a server application on a different machine as if it were a local object, making it easier to build distributed applications and services. As with many RPC systems, gRPC is based on the idea of defining a service that specifies methods that can be called remotely, with their parameters and return types.
 
On the server side, the server implements this interface and runs a gRPC server to handle client calls. On the client side, the client has a stub that provides the same methods as the server. gRPC clients and servers can run and talk to each other in different environments, from servers to your own desktop applications, and can be written in any language that gRPC supports.

 
 
For example, a gRPC server written as a C++ service can serve 2 gRPC clients written in Ruby and Java, which are the client applications. You can create a gRPC server in Java or C# with clients in Go, Python, or Ruby; all these different applications can communicate over the protocol using the requests defined in the protobuf files:
o    Proto Request
o    Proto Response
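The call-a-remote-method-as-if-local idea can be sketched without any gRPC library at all. This toy stub uses JSON and an in-process hop as stand-ins for protobuf binary over HTTP/2, and the `Greeter` service is invented for illustration:

```python
import json

class GreeterServer:
    """The service implementation the server side would register."""
    def say_hello(self, name):
        return {"message": "Hello, " + name}

class GreeterStub:
    """Client-side stub: the caller uses it like a local object while the
    stub serializes the call and the reply behind the scenes (real gRPC
    uses protobuf binary over HTTP/2, not JSON)."""
    def __init__(self, server):
        self.server = server

    def say_hello(self, name):
        wire = json.dumps({"method": "say_hello", "name": name})  # "send"
        req = json.loads(wire)                                    # "receive"
        reply = getattr(self.server, req["method"])(req["name"])
        return json.loads(json.dumps(reply))                      # reply trip

stub = GreeterStub(GreeterServer())
print(stub.say_hello("Ada"))   # {'message': 'Hello, Ada'}
```

In real gRPC, both the stub and the server interface are generated from the one protobuf contract, which is why the two sides can be written in different languages.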
 
 Tech Stack for Microservices:
Front-end applications,
o    Angular,
o    Vue
o    React JS.
 
API gateway technology choices.
o    Kong Gateway,
o    Tyk API Gateway,
o    Express API Gateway
o    Amazon API Gateway
o    APIGEE
 
Technology choices for service discovery.
o    Netflix Eureka: a service registry for Spring Boot Java applications.
o    Kubernetes: handles the service discovery operation itself if we deploy our microservices on a container orchestrator.
o    Serverless orchestration: handles service discovery by creating containers in the same availability zones.

 

 Microservice Asynchronous Call: