Monzo is a digital, mobile-only bank based in the United Kingdom. It is one of the earliest app-based digital banks in the UK. Monzo set a record for the quickest crowdfunding campaign in history, raising £1 million in 96 seconds via the Crowdcube investment platform. This year it announced plans to expand into the United States.

The bank has released its mobile apps on both the Android & iOS platforms. Payments made with the bank's cards trigger push notifications on the customer's phone running the app. The app enables users to view past transactions, freeze their card with a tap, and get an overview of their spending habits. Customers can also see the location of each transaction on a map, along with the logo & details of the merchant they transacted with.

For more insight into modern banking & financial apps, do read Open Banking Architecture – Build Fintech Apps Consuming the Open APIs.

This write-up is an insight into Monzo's backend infrastructure. We'll take a look at the tech stack they use to scale their service to millions of customers online.

So, without any further ado, let's get on with it.

 

Introduction

A fintech service has to be available 24/7, consistent, extensible, and performant enough to handle concurrent transactions and execute daily batch processes. It also has to be fault-tolerant, with no single points of failure.

Keeping all these things in mind, the developers at Monzo chose a microservice architecture over a monolithic one right from the start.


 

Microservices enable businesses to scale, stay loosely coupled and highly available, and move fast; teams can take ownership of individual services and roll out new features in a minimal time span. The dev team at Monzo also learnt from the experience of large-scale internet services like Twitter, Netflix & Facebook that a monolith is hard to scale.

Read How Uber Scaled From A Monolith To A Microservice Architecture

Since the business wanted to operate in multiple segments of the market, a distributed architecture was the best bet. The beta version launched with about 100 services.

 

Key Areas of Focus in System Development & Production Deployment

To ensure a smooth service, there were four primary areas to focus on: cluster management, polyglot services, RPC transport & asynchronous messaging.

 

Cluster management

A large number of servers had to be managed, with efficient work distribution and contingencies for machine failure. The system had to be fault-tolerant and elastic, and multiple services could run on a single host to make the most of the infrastructure.

The traditional approach of manually partitioning services across machines was tedious and didn't scale. Instead, they relied on a cluster scheduler to distribute tasks efficiently across the cluster based on resource availability and other factors.

After running Mesos and Marathon for a year, they switched to Kubernetes with Docker containers. The entire cluster ran on AWS. The switch to Kubernetes cut their deployment costs significantly, by up to 65 to 70%; prior to Kubernetes, they ran Jenkins hosts that were inefficient and expensive.
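
To make the idea concrete, here is a minimal sketch (in Go, using the Kubernetes client-go library) of how a containerised service could be declared to the cluster. The service name, container image and resource figures are hypothetical, not Monzo's actual configuration; the replica count and resource requests are what the scheduler uses to spread pods across the cluster and pack several services onto a single host.

```go
// A minimal sketch (not Monzo's actual config): registering a hypothetical
// "payment-service" Deployment with Kubernetes via client-go. The replica
// count and resource requests drive where the scheduler places each pod.
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Load a local kubeconfig (adjust the path); in-cluster config would be
	// used when running inside the cluster.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	labels := map[string]string{"app": "payment-service"}
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "payment-service"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(3), // three copies, spread by the scheduler
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "payment-service",
						Image: "example.registry/payment-service:1.0", // hypothetical image
						Resources: corev1.ResourceRequirements{
							// The scheduler bin-packs pods onto nodes using these requests.
							Requests: corev1.ResourceList{
								corev1.ResourceCPU:    resource.MustParse("100m"),
								corev1.ResourceMemory: resource.MustParse("128Mi"),
							},
						},
					}},
				},
			},
		},
	}

	created, err := clientset.AppsV1().Deployments("default").
		Create(context.TODO(), deployment, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created deployment:", created.Name)
}
```

In practice such declarations are usually written as YAML manifests and applied through a CI/CD pipeline, but the effect is the same: the scheduler, not a human, decides where each copy of a service runs.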

 

Polyglot Services

The team used Go to write their low-latency, highly concurrent services. Having a microservice architecture also enabled them to leverage other technologies where they fit better.
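
As a flavour of why Go suits this kind of workload, here's a hypothetical, bare-bones service (not Monzo's code): Go's net/http server handles every incoming request on its own goroutine, so even a tiny service is concurrent by default.

```go
// A hypothetical, minimal Go microservice (not Monzo's code). The net/http
// server spawns a goroutine per incoming request, giving cheap concurrency
// without any extra plumbing.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		// Each request runs in its own goroutine.
		json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```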

For sharing data across services, they used etcd, an open-source distributed key-value store written in Go that enables microservices to share data in a distributed environment. etcd handles leader elections during network partitions and tolerates machine failure.
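
Here's a minimal sketch of sharing a value through etcd with its official Go client; the endpoint and key names are assumptions for illustration, not Monzo's setup.

```go
// A minimal sketch (assumed endpoint and key names, not Monzo's schema):
// writing and reading shared data through etcd's official Go client.
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// One service publishes a value...
	if _, err := cli.Put(ctx, "/config/payment-service/rate-limit", "100"); err != nil {
		panic(err)
	}

	// ...and any other service in the cluster can read it.
	resp, err := cli.Get(ctx, "/config/payment-service/rate-limit")
	if err != nil {
		panic(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s = %s\n", kv.Key, kv.Value)
	}
}
```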

 

RPC

Since the services were implemented with varied technologies, to facilitate efficient communication between them, the developers at Monzo wrote an RPC layer using Finagle & used Linkerd as a service mesh.

The layer provides features like load balancing, automatic retries in case of service failure, connection pooling (routing requests over pre-existing connections as opposed to creating new ones), and splitting & regulating the traffic load on a service for testing.
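
To make one of those features concrete, here's a generic, hypothetical sketch of automatic retries with exponential backoff in Go. It is illustrative only, not Monzo's Finagle-based layer, and the internal service URL is made up.

```go
// A generic, hypothetical sketch of automatic retries with backoff, one of
// the behaviours an RPC layer provides. This is illustrative only and is not
// Monzo's Finagle-based implementation.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// callWithRetry retries an idempotent HTTP call a few times, backing off
// between attempts, before giving up.
func callWithRetry(client *http.Client, url string, attempts int) (*http.Response, error) {
	var lastErr error
	backoff := 100 * time.Millisecond
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil
		}
		if err != nil {
			lastErr = err
		} else {
			resp.Body.Close()
			lastErr = fmt.Errorf("server error: %s", resp.Status)
		}
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff between retries
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
	// http.Client reuses connections from its internal pool, a simple form of
	// the connection pooling an RPC layer offers.
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := callWithRetry(client, "http://balance-service.internal/health", 3)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```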

Finagle has been used in production at Twitter for years and is battle-tested. Linkerd is a service mesh for Kubernetes.

 

Asynchronous Messaging

Asynchronous behaviour is commonplace in modern Web 2.0 apps. In the Monzo app, push notifications, the payment processing pipeline and loading the user's feed with transactions all happen asynchronously, powered by Kafka.

Kafka's distributed design enabled the team to scale the async messaging architecture on the fly, keep it highly available, and persist the messaging data to avoid data loss in case of a message queue failure. The messaging implementation also enabled the developers to go back in time and replay events that occurred in the system from a point in time.
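
As a hedged illustration of the pattern (using the segmentio/kafka-go client; the topic, broker address and payload are assumptions, not Monzo's actual setup), one service publishes a transaction event and another consumes it asynchronously to build the user's feed:

```go
// A hypothetical sketch of the async pattern with the segmentio/kafka-go
// client: one service publishes a transaction event, another consumes it to
// build the user's feed. Topic, broker and payload are assumptions, not
// Monzo's actual setup.
package main

import (
	"context"
	"fmt"

	kafka "github.com/segmentio/kafka-go"
)

func main() {
	ctx := context.Background()

	// Producer: the payments pipeline emits an event and moves on.
	writer := &kafka.Writer{
		Addr:  kafka.TCP("localhost:9092"),
		Topic: "transactions",
	}
	defer writer.Close()
	err := writer.WriteMessages(ctx, kafka.Message{
		Key:   []byte("user-42"),
		Value: []byte(`{"amount_pence": 350, "merchant": "Coffee Shop"}`),
	})
	if err != nil {
		panic(err)
	}

	// Consumer: the feed service processes events at its own pace; the
	// consumer group offset is what lets it replay events from a point in time.
	reader := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		Topic:   "transactions",
		GroupID: "feed-service",
	})
	defer reader.Close()

	msg, err := reader.ReadMessage(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Printf("feed update for %s: %s\n", msg.Key, msg.Value)
}
```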

 

Cassandra As A Transactional Database

Monzo uses Apache Cassandra as the transactional database for its 150+ microservices currently running on AWS. Well, this got me thinking. For managing transactional data, two things are vital: ACID & strong consistency.

Apache Cassandra is an eventually consistent, wide-column NoSQL datastore with a distributed design; it is built for scale.

So how exactly does Apache Cassandra handle transactions?

Well, first, picking a technology largely depends on the use case. I searched around a bit, and it appears we can pull off transactions with Cassandra, but there are a lot of ifs and buts. Cassandra transactions are not like regular RDBMS ACID transactions.

This & this StackOverflow thread & this YugaByte DB article are good reads on it.
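
For illustration, Cassandra does offer lightweight transactions (compare-and-set backed by Paxos) for the cases that really need them. Below is a hedged sketch using the gocql driver; the keyspace, table and columns are made up and are not Monzo's schema.

```go
// A hypothetical sketch of a Cassandra lightweight transaction (LWT) using
// the gocql driver. The keyspace, table and columns are invented for
// illustration and are not Monzo's schema. LWTs give compare-and-set
// semantics on a single partition, not full RDBMS-style ACID transactions.
package main

import (
	"fmt"

	"github.com/gocql/gocql"
)

func main() {
	cluster := gocql.NewCluster("127.0.0.1")
	cluster.Keyspace = "bank"
	session, err := cluster.CreateSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// INSERT ... IF NOT EXISTS only applies if no row with this id exists,
	// so a transaction id gets recorded exactly once.
	applied, err := session.Query(
		`INSERT INTO transactions (id, user_id, amount_pence) VALUES (?, ?, ?) IF NOT EXISTS`,
		"txn-123", "user-42", 350,
	).MapScanCAS(map[string]interface{}{})
	if err != nil {
		panic(err)
	}
	fmt.Println("write applied:", applied)
}
```

Lightweight transactions are scoped to a single partition and carry a latency cost, which is part of why Cassandra transactions are not a drop-in replacement for RDBMS ACID semantics.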

To manage Big Data, Monzo uses Google's BigQuery.

Information source

 

Well, Guys!! This is pretty much it. If you liked the article, share it with your folks.
You can follow me on Twitter.
I’ll see you in the next write-up. Cheers!!

 

More On the Blog 

What Database Does Twitter Use? – A Deep Dive

How Does PayPal Process Billions of Messages Per Day with Reactive Streams?

Twitter’s migration to Google Cloud – An Architectural Insight

Facebook Real-time Chat Architecture Scaling With Over Multi-Billion Messages Daily

How Hotstar scaled with 10.3 million concurrent users – An architectural insight