Reactive Microservices Architecture: How to Keep Your Messages Flowing at Scale 

Author: Justin Cooke, Principal Cloud Architect 

One of the concerns we often hear from our clients is 'how do I guard against locking up my architecture when calls stretch across multiple services?' A common solution is to utilise an asynchronous, event-driven model, where message senders are not required to wait for a response to each request. In these systems, a sender can be returned a link to where the result will be published when it is ready, or alternatively it can expect a response or signal to be published to a topic to which it is subscribed.
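As a minimal sketch of the second pattern, here an in-memory queue stands in for a message topic (the names and the simulated work are illustrative, not tied to any particular broker): the sender fires the request and carries on, and a subscriber picks the result up from the topic once it is ready.

```java
import java.util.concurrent.*;

public class AsyncTopicSketch {
    public static void main(String[] args) throws Exception {
        // In-memory stand-in for a message topic the sender subscribes to.
        BlockingQueue<String> resultsTopic = new LinkedBlockingQueue<>();
        ExecutorService worker = Executors.newSingleThreadExecutor();

        // The sender submits the request and returns immediately; no blocking wait.
        worker.submit(() -> {
            String result = "order-123 processed"; // simulated downstream work
            resultsTopic.offer(result);            // publish the result to the topic
        });

        // Later, the subscriber consumes the result when it arrives.
        String message = resultsTopic.poll(5, TimeUnit.SECONDS);
        System.out.println(message);
        worker.shutdown();
    }
}
```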

These methods do not suit all circumstances. When many systems work together over an asynchronous flow, it can be difficult even for experienced engineers to trace the flow of the application. Furthermore, user interfaces or legacy services and systems may be firmly rooted in the synchronous model. The immediate feedback offered by synchronous calls is very convenient and often far simpler to implement and build around. However, all that waiting and blocking can be expensive in terms of performance and reliability.

Some teams have tried to mitigate these issues by building a synchronous transaction across multiple microservices with asynchronous aspects. One such method is the 'callback', where the caller passes a function which the recipient invokes when it completes its task. This can be extended further using concepts such as futures; however, these methods do not scale well. The code quickly becomes almost impossible to follow, and therefore costly to maintain and extend, and the approach can lead to all kinds of problems which are very hard to guard against.
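To illustrate why this style degrades, consider three dependent lookups (the service methods below are placeholders invented for the example). With callbacks, three steps already force three levels of nesting; futures flatten it somewhat, but composing many of them across services, with branching and error handling, still becomes hard to follow.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

public class CallbackNesting {
    // Each 'service' takes a callback instead of returning a value directly.
    static void lookupCustomer(String id, Consumer<String> cb) { cb.accept("customer:" + id); }
    static void lookupOrder(String customer, Consumer<String> cb) { cb.accept(customer + "/order:42"); }
    static void lookupCarrier(String order, Consumer<String> cb) { cb.accept(order + "/carrier:DHL"); }

    public static void main(String[] args) {
        // Callback style: every dependent call adds another nesting level.
        lookupCustomer("123", customer ->
            lookupOrder(customer, order ->
                lookupCarrier(order, carrier ->
                    System.out.println(carrier))));

        // Future style: flatter, but chains across many services still
        // become tangled once retries, timeouts and branching appear.
        String result = CompletableFuture
            .supplyAsync(() -> "customer:123")
            .thenApply(customer -> customer + "/order:42")
            .thenApply(order -> order + "/carrier:DHL")
            .join(); // blocks here only to print the demo result
        System.out.println(result);
    }
}
```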

Fortunately, a programming paradigm exists called reactive programming which, when utilised correctly, facilitates true non-blocking IO while keeping the codebase readable, testable and maintainable. It allows us to model complex chains of transactions in a format very similar to standard Java 8 streams, which have the advantage of being well established within the Java development community. This allows us to call external APIs, transform data and manage demand in a logical, easy-to-read, step-by-step manner.
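A small sketch of that stream-like style, assuming Project Reactor (reactor-core) is on the classpath; the ids and labels are invented for illustration. Each step is declared once, reads top to bottom like a Java 8 stream, and Reactor schedules the work without tying up threads:

```java
import java.util.List;

import reactor.core.publisher.Flux;

public class ReactivePipeline {
    public static void main(String[] args) {
        List<String> labels = Flux.just(1, 2, 3, 4)
            .filter(id -> id % 2 == 0)       // keep only even ids
            .map(id -> "shipment-" + id)     // transform each element
            .collectList()
            .block();                        // block() only at the edge, e.g. in a demo or test

        System.out.println(labels);          // [shipment-2, shipment-4]
    }
}
```

In a real WebFlux application the final `block()` would not appear; the pipeline would be returned to the framework and subscribed to by it.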

Under the hood, the Reactor engine handles all the complicated asynchronous logic that prevents threads from getting blocked while waiting for other systems and processes to respond, particularly external APIs, cloud storage and databases. Reactor Core is well integrated with Spring's WebFlux model, which replaces Spring's synchronous, blocking RestTemplate with the new non-blocking WebClient. There is excellent unit and integration testing support for working with Reactor and Spring, which increases confidence in the quality of the features and capabilities delivered.
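A hedged sketch of the WebClient style, assuming spring-webflux and reactor-core are on the classpath; the base URL, path and `trackParcel` method are hypothetical. The call returns a `Mono` immediately, and no thread is held while the remote carrier API responds; nothing is sent until the `Mono` is subscribed to.

```java
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class CarrierClient {
    // Illustrative base URL; in practice this would come from configuration.
    private final WebClient client = WebClient.create("https://carrier.example.com");

    // Returns a cold Mono: the HTTP request is only made once subscribed.
    public Mono<String> trackParcel(String parcelId) {
        return client.get()
                .uri("/parcels/{id}/status", parcelId)
                .retrieve()
                .bodyToMono(String.class);
    }
}
```

Compare this with RestTemplate, where the calling thread would sit blocked for the full round trip to the carrier.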

By utilising the reactive paradigm with Spring WebFlux, Docker and Kubernetes, we have had great success with a very large online retailer, delivering an efficient and scalable microservices-based solution for carrier integrations that supports the customer journey and fulfilment processes with a minimum of costly cloud server resources.