Circuit Breaker Pattern

The Circuit Breaker pattern is one of the most popular design patterns used in microservices architecture. In our previous article, we discussed the Benefits of Microservices Architecture, including scalability, fault tolerance, and resilience. On the flip side, microservices can also make an architecture brittle: each user action invokes multiple remote service calls over the network, which works perfectly only as long as all the services are up and running.

If one or more services go down, suffer high latency, or time out, the result can be cascading failures across the entire application. When the fault is transient, simply retrying the call can often resolve the issue.

However, when the faults are due to unanticipated events and their severity is unknown, too many continuous retries may bring the microservice down entirely and exhaust network resources, degrading performance further. In these circumstances, it is pointless to keep retrying operations that are unlikely to succeed.

What is the purpose of the Circuit Breaker Pattern?

The purpose of the Circuit Breaker pattern is different from that of the Retry pattern. The Retry pattern enables the application to retry an operation with the expectation that it will eventually succeed, as sketched below. The Circuit Breaker pattern, by contrast, is the answer to catastrophic cascading failures across multiple services: it prevents an application from repeatedly performing an operation that is going to fail. Circuit breakers help build a fault-tolerant, resilient system that degrades gracefully when services are unavailable or suffering from high latency.
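
To make the contrast concrete, here is a minimal retry sketch in Python with exponential backoff. It is an illustration only; the function name and parameters are invented for this example and do not belong to any particular library.

    import time

    def call_with_retry(operation, max_attempts=3, base_delay=0.5):
        """Retry an operation a few times, backing off between attempts."""
        for attempt in range(1, max_attempts + 1):
            try:
                return operation()
            except Exception:
                if attempt == max_attempts:
                    raise  # give up after the final attempt
                # Exponential backoff: wait longer before each subsequent retry.
                time.sleep(base_delay * (2 ** (attempt - 1)))

This works well for transient faults, but as noted above, blindly retrying a service that is already struggling only adds load; that is the gap the circuit breaker fills.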

How does the Circuit Breaker Pattern work?

The basic idea behind the circuit breaker software pattern is very straightforward. A circuit breaker acts as a proxy and monitors the number of recent failures that have occurred.

Using this pattern, the client invokes the remote service through a proxy, and that proxy acts as the circuit breaker. When the number of failures crosses the configured threshold, the circuit breaker trips and blocks requests for a specified period. After that timeout elapses, the circuit breaker allows a limited number of calls through to the service and checks whether they succeed; if they do, it resumes normal operation. If the failures continue, it blocks requests again for another timeout period.
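
The behavior described above can be captured in a few dozen lines. The following is a minimal, single-threaded Python sketch, not a production implementation; the class, method, and exception names are our own.

    import time

    class CircuitOpenError(Exception):
        """Raised when the breaker rejects a call without invoking the service."""

    class CircuitBreaker:
        CLOSED, OPEN, HALF_OPEN = "closed", "open", "half_open"

        def __init__(self, failure_threshold=5, reset_timeout=30.0):
            self.failure_threshold = failure_threshold  # failures allowed before tripping
            self.reset_timeout = reset_timeout          # seconds to stay open before probing
            self.failure_count = 0
            self.state = self.CLOSED
            self.opened_at = 0.0

        def call(self, func, *args, **kwargs):
            # Open state: fail fast until the timeout elapses, then allow a probe.
            if self.state == self.OPEN:
                if time.monotonic() - self.opened_at < self.reset_timeout:
                    raise CircuitOpenError("circuit is open; failing fast")
                self.state = self.HALF_OPEN

            try:
                result = func(*args, **kwargs)
            except Exception:
                self._record_failure()
                raise
            else:
                self._record_success()
                return result

        def _record_failure(self):
            self.failure_count += 1
            # A failed probe in half-open, or too many failures in closed, (re)opens the circuit.
            if self.state == self.HALF_OPEN or self.failure_count >= self.failure_threshold:
                self.state = self.OPEN
                self.opened_at = time.monotonic()

        def _record_success(self):
            # A successful call (including a half-open probe) closes the circuit again.
            self.failure_count = 0
            self.state = self.CLOSED

The three states this sketch moves between are described in more detail next.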

Different States of Circuit Breaker

  • Closed
  • Open
  • Half Open

Closed State

When everything is normal, the circuit breaker remains closed, and all requests pass through to the service as shown below. If the number of failures increases beyond the threshold, the circuit breaker trips and goes into the open state.

 Circuit Breaker - Closed State
Courtesy: halodoc

Open State

In this state, the circuit breaker returns an error immediately without even invoking the service. After a timeout period elapses, the circuit breaker moves into the half-open state. Usually, this timeout is specified in a monitoring or configuration system.

 Circuit Breaker - Open State
Courtesy: halodoc

Half Open State

In this state, the circuit breaker allows a limited number of requests from the microservice to pass through and invoke the operation. If the requests are successful, the circuit breaker switches to the closed state. However, if the requests continue to fail, it goes back to the open state.

 Circuit Breaker - Half Open State
Courtesy: halodoc
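
The short demo below drives a breaker through all three states. It assumes the CircuitBreaker sketch shown earlier has been saved as circuit_breaker.py, and flaky_inventory_call is a hypothetical stand-in for a remote service call.

    import time
    from circuit_breaker import CircuitBreaker, CircuitOpenError  # the sketch above

    breaker = CircuitBreaker(failure_threshold=2, reset_timeout=1.0)

    def flaky_inventory_call(should_fail):
        # Stand-in for a remote service call over the network.
        if should_fail:
            raise ConnectionError("inventory service unavailable")
        return "200 OK"

    # Closed: two consecutive failures trip the breaker into the open state.
    for _ in range(2):
        try:
            breaker.call(flaky_inventory_call, True)
        except ConnectionError:
            pass
    print(breaker.state)                               # open

    # Open: calls fail fast without touching the service.
    try:
        breaker.call(flaky_inventory_call, False)
    except CircuitOpenError:
        print("rejected while open")

    # After the timeout, one probe is allowed (half-open); success closes the circuit.
    time.sleep(1.1)
    print(breaker.call(flaky_inventory_call, False))   # 200 OK
    print(breaker.state)                               # closed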

Most technology stacks, such as .NET, Python, Node.js, and Java, have their own circuit breaker libraries or packages.

  • Netflix Hystrix is a popular latency and fault tolerance library designed to isolate access points to remote systems, services, and third-party libraries, stop cascading failure, and enable resilience in complex distributed systems where failure is inevitable.
  • Polly is a .NET resilience and transient-fault-handling library that allows developers to express policies such as Retry, Circuit Breaker, Timeout, Bulkhead Isolation, and Fallback in a fluent and thread-safe manner.
  • Istio is a service mesh, a configurable infrastructure layer for a Microservices application. It makes communication between service instances flexible, reliable, and fast and provides service discovery, load balancing, encryption, authentication and authorization, support for the circuit breaker pattern, and other capabilities.
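
In practice you would rarely hand-roll the state machine. As a Python illustration, the third-party pybreaker package (not covered above, used here only as an example) wraps a function with a circuit breaker in a few lines; the inventory URL and fallback value are made up for this sketch.

    import pybreaker
    import requests

    # Trip after 5 consecutive failures; probe again (half-open) after 30 seconds.
    inventory_breaker = pybreaker.CircuitBreaker(fail_max=5, reset_timeout=30)

    @inventory_breaker
    def get_inventory(item_id):
        # Any exception raised here counts as a failure for the breaker.
        response = requests.get(f"https://inventory.example.com/items/{item_id}", timeout=2)
        response.raise_for_status()
        return response.json()

    try:
        stock = get_inventory("sku-42")
    except pybreaker.CircuitBreakerError:
        # Circuit is open: fail fast or fall back to a cached/default value
        # instead of hammering the unavailable service.
        stock = {"available": False}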

Polly and Hystrix are primarily classified as “Fault Tolerance” tools.

Some of the features offered by Polly are:

  • Bulkhead Isolation
  • Timeout
  • PolicyWrap

On the other hand, Netflix Hystrix provides the following key features:

  • Latency and Fault Tolerance
  • Realtime Operations
  • Concurrency