Microservice Architecture - Part 1 (A running microservice architecture)
17 Feb 2019
This series of blog posts aims to help students at the University of Geneva develop their first application following microservice principles.
Besides explaining the concepts and implementation details of microservice architecture, we will also discuss software development practices such as software
factories and innovative deployment options such as containers and container composition. All samples and a complete working application can be found on GitHub.
The following diagram represents the end-state of our microservice architecture. From a business perspective, it delivers RegTech services.
More specifically, it manages counterparties and financial instruments, values portfolios, and finally provides some regulatory reporting.
You do not need deep financial knowledge; suffice it to say that:
A counterparty is an individual or a company participating in a financial transaction.
A financial instrument is an asset that can be traded, such as stocks, loans, and the like.
Portfolio valuation is the action of evaluating the net value of a set of assets.
Financial institutions must comply with a set of regulations, such as delivering monthly reports to state their financial health.
Besides these “business” services, the architecture delivers a set of non-functional services such as:
A central logging mechanism to deal with the distributed nature of the architecture. It relies on a Logspout companion container that sends the logs from all the containers to a concentrator called Logstash, which in turn
sends them to a database optimized for searching called Elasticsearch. Finally, Kibana provides visualization and analysis of the logs.
A Message broker to increase service decoupling and scalability. Kafka in this case.
An API-Gateway that provides routing, load balancing, and SSO to the microservices by integrating an identity manager called Keycloak. Kong delivers the API-Gateway services themselves (e.g., security, API composition, and aggregation).
The API-Gateway also shields the user from the ugly details of the network topology. Furthermore, it protects the backend by establishing a clear front vs. back network separation,
it exposes static resources, and finally, it provides TLS termination.
From a technology perspective, the microservices are implemented using Java EE 8 and the Eclipse MicroProfile specification, more specifically Thorntail.
Furthermore, the microservices are packaged as Docker containers using Maven as the build tool.
This chapter describes step by step how to compile and deploy the microservices themselves.
Part 2 describes how to set up non-functional services such as SSO (Single Sign-On), API concentration, and logging. Because of its
distributed nature, in
a microservice architecture the non-functional infrastructure is as important as the actual services.
Part 3 dives deeper into what a microservice architecture actually is, its benefits and drawbacks, and some details on the related technologies.
Part 4 focuses on the software factory, putting everything together and testing the result.
Finally, Part 5 does the autopsy of a microservice, detailing the associated design patterns.
Note: This series of blog posts leverages many different technologies. Please take the time to install everything properly; it will save time later on.
To execute the samples, you will need to install and to configure the following tools:
a “reasonably” powerful computer with Linux (any recent distribution) or Windows (Windows 10 or later) to support Docker. A Mac works as well, but it requires some additional steps that will not be described here.
a working Docker environment to deploy the services locally.
a bash interpreter (on Windows, you can rely on Git Bash, which is usually installed with Git)
On top of that you need to have:
An intermediate level in Java
Some basic understanding of OS (including bash scripting) and networking (DNS, TCP, HTTP)
a great deal of patience and coffee
Note: We will start a lot of containers; please grant at least 6 GB of RAM and 6 GB of swap to your docker-machine.
Getting the backend components to run
First things first, let’s check out the code and compile everything. Before you start complaining:
yes, this section is tedious, but we have to set up the environment before diving into the wonderful world of microservices.
Let’s start by cloning the code from GitHub.
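The clone command looks like this; the URL below is a placeholder, substitute the repository linked at the top of the post:

```shell
# Placeholder URL: replace with the actual repository linked above
REPO_URL="https://github.com/<user>/<repository>.git"
git clone "$REPO_URL"
cd "$(basename "$REPO_URL" .git)"
```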
Let’s check that Maven and Java are correctly installed.
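Each command prints a version banner if the tool is installed correctly; this series assumes Java 8 and Maven 3.5 or later (the exact minimum versions are an assumption):

```shell
# Both commands should print a version banner; fix your PATH if either fails
java -version
mvn -version
```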
The next step is to compile the project to produce the required artifacts (i.e., binaries). To that end,
we use Apache Maven. Maven is an opinionated build tool:
Opinionated software is a software product that believes a certain way of approaching a business process is inherently
better and provides software crafted around that approach.
Namely, following its opinion makes our life easier and requires less effort. For more information and tutorials, please refer to this Maven Tutorial. The output of the build process is a set of “JAR” files (i.e., Java libraries) that are stored in your local ~/.m2 repository for later use.
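From the repository root, a single Maven command compiles every module and installs the resulting JARs into the local repository:

```shell
# Compile every module and install the JARs into ~/.m2
mvn clean install
```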
Tip: Congratulations, you compiled the microservices.
At this point, you have managed to compile all of the Java code and you have created Maven artifacts for each microservice (Java Archives, a.k.a. JARs). However, as we will see in the next chapters, a microservice architecture is much more than a bunch of microservices: we will need a lot of additional third-party tools and services.
These additional services (e.g., logging, security) are usually provided as container images that run on Docker.
To be able to run the microservices alongside these third-party tools, we need to package the microservices as Docker images.
Simply put, Docker provides lightweight virtualization. It has a smaller footprint compared to the usual virtual machine approaches (VirtualBox, VMware).
The main difference is that the OS layer is not replicated in each container but rather shared.
Docker containers run Docker images, which are merely lightweight Linux systems with additional software. For more about Docker, see the references at the end of this post.
Let’s first check whether Docker is properly installed.
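The following prints both the client and the daemon version; an error here usually means the Docker daemon is not running or not reachable:

```shell
# Prints client and server (daemon) version information
docker version
```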
Before we continue, let’s have a look at a docker survival kit:
docker ps displays all the running containers.
docker ps -a displays all containers, including stopped ones (not running but still using some resources).
Based on that:
we can kill all running containers by running docker kill $(docker ps -q)
remove all stopped containers by running docker rm $(docker ps -a -q)
finally, docker system prune cleans up all dangling data.
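As a copy-paste cheat sheet, the survival kit boils down to:

```shell
docker ps                      # list running containers
docker ps -a                   # list all containers, including stopped ones
docker kill $(docker ps -q)    # kill all running containers
docker rm $(docker ps -a -q)   # remove all stopped containers
docker system prune            # clean up dangling data (add -f to skip the prompt)
```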
So the Docker daemon is up and running. Let’s create the Docker images for the microservices. This step reuses the
JAR files created previously and packages them along with a Linux base system so that every image can run independently.
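The exact build command depends on how the project is laid out; assuming each microservice directory ships a Dockerfile next to its pom.xml (the directory and image names below are illustrative), the images can be built in a loop:

```shell
# Illustrative: build one image per microservice from its Dockerfile
# (service names are assumptions based on the business services described above)
for svc in counterparty-service instrument-service valuation-service regulatory-service; do
  docker build -t "$svc" "./$svc"
done
```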
All the Docker images for the microservices have been created. Let’s double check:
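Listing the local images shows one entry per microservice:

```shell
# Every microservice image should now appear in the local image list
docker images
```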
Tip: You now have Docker images for the microservices and for the API-Gateway.
Let’s start a Docker container with the counterparty microservice and map port 8080 of the container to port 10080 of the host.
In principle, this starts a Linux OS and then starts the microservice as the first process (PID 1). This container provides all the services you would
expect from any Linux system, such as networking, security, and isolation.
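Assuming the image built earlier is called counterparty-service (the actual name depends on your build), the command looks like this:

```shell
SERVICE_NAME=myCounterpartyService
# -d runs in the background; -p maps host port 10080 to the container's port 8080
docker run -d --name "$SERVICE_NAME" -p 10080:8080 counterparty-service
```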
Tip: Open a browser and navigate to http://localhost:10080/counterparties. It will display a long list of counterparties.
This demonstrates that a web service is listening on port 10080 of localhost. More specifically, we started a container with the image of the counterparty microservice. Port 8080 is mapped to port 10080 so that we can test it.
Furthermore, we named the container myCounterpartyService.
As it is a fully running Linux system, you can connect to the container to inspect it. In another console, we can run the docker ps command to list running containers.
As you can see, there is one running container named myCounterpartyService that listens on port 10080 of localhost.
Let’s test it by connecting to http://localhost:10080/counterparties/724500J4K3Q60O9QLF45, either with a browser or the curl command line. counterparties is the context name of the service, and 724500J4K3Q60O9QLF45 is the id of the particular counterparty we want the details of.
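Both checks from the command line:

```shell
# List all counterparties
curl http://localhost:10080/counterparties
# Fetch the details of one counterparty by its id
curl http://localhost:10080/counterparties/724500J4K3Q60O9QLF45
```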
We can stop the service as follows:
And check that nothing is running anymore:
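Stopping and removing the container by the name we gave it, then checking the container list:

```shell
docker stop myCounterpartyService   # stop the container
docker rm myCounterpartyService     # remove it so the name can be reused
docker ps                           # the list should now be empty
```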
So far we have only run one service; to run all the microservices (plus the message broker), we will compose the images using docker-compose.
docker-compose is a way to script a series of complex Docker configurations to provide a coherent ecosystem.
In another console, check the running containers
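Assuming the repository provides a docker-compose.yml at its root (an assumption about the project layout), the whole ecosystem comes up with:

```shell
# Start all the microservices plus the message broker in the background
docker-compose up -d
# Then, in another console, list the running containers
docker ps
```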
Now we are ready to test the microservices. Let’s check again that we can query counterparties.
Then let’s get a specific instrument.
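The counterparty query is the same as before; the instrument query below assumes an instruments context on the same port, and the id is a placeholder:

```shell
# Query all counterparties through the composed stack
curl http://localhost:10080/counterparties
# Fetch one instrument (path and id are illustrative)
curl "http://localhost:10080/instruments/<instrument-id>"
```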
Next, we will propagate all the instruments to the message broker for the valuation service to read them and compute the actual valuation.
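The exact trigger depends on the project; a typical shape for such a step is an HTTP call on the instrument service, with the endpoint below being a placeholder:

```shell
# Placeholder endpoint: ask the instrument service to publish all instruments to Kafka
curl -X POST "http://localhost:10080/instruments/publish"
```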
This is the actual result of the valuation of the portfolio.
Tip: Congrats, you just got all the microservices and the message broker running.
Poulton, N. (2017). Docker Deep Dive.
Turnbull, J. (2014). The Docker Book: Containerization is the New Virtualization.
Vocale, M., & Fugaro, L. (2018). Hands-On Cloud-Native Microservices with Jakarta EE.