Microservice Architecture
Description
This is a microservice architecture created to be used by students as a basis for their own applications. The architecture uses a single-node K3s Kubernetes cluster as its base and can be extended if necessary. A Traefik Ingress Controller provides external access to the deployed services. The architecture comes equipped with a basic authentication service, and a Linkerd service mesh is included for better inter-service communication.
Overview
For a better understanding, the following overview depicts the different components that make up the microservice architecture.
Kubernetes: Container orchestration tool used to deploy and maintain the applications of the architecture in pods.
Traefik: The gateway to the architecture. It reroutes incoming requests to the existing service instances.
Auth-Service: Used to create accounts in the system. Additionally, it enables existing users to log in and returns an auth token that is used to authenticate subsequent requests to the microservice application.
Linkerd: A Service Mesh that provides reliability, monitoring, and security aspects for meshed applications.
Installation
To install the architecture in a new environment, a Linux OS is needed. The following steps show how to set up the architecture on an Ubuntu server.
Cloning the Repository
Before the setup can be done, this repository needs to be cloned to your server so that all files are available for use.
git clone https://gitlab.reutlingen-university.de/poegel/microservice-architecture.git
Installation with the script
There are a few ways to install the architecture on a new server. The easiest is to simply run the install_msa.sh script. On Linux you can do that with the following command:
sudo bash install_msa.sh
This script installs all the tools necessary to run the architecture on your Ubuntu server. After that, the base microservice architecture is ready for use and you can start implementing your own applications.
Manual installation
If the script doesn't work, or you prefer installing everything manually, you can do so with the following steps.
First, K3s needs to be installed on the server. Installing K3s and setting up a single-node cluster can be done with this command:
curl -sfL https://get.k3s.io | sh -
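As a quick sanity check (not strictly required), the single node should report the status "Ready" after a short while:
# list the cluster nodes; the node should become "Ready"
sudo k3s kubectl get nodes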
In the next step, Linkerd has to be downloaded and installed. To download Linkerd, you can use this command:
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
After that, Linkerd can be installed onto the K3s-cluster by executing these commands:
# add the linkerd CLI to your PATH and point kubectl at the K3s config
export PATH=$PATH:/root/.linkerd2/bin
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
# install linkerd onto the cluster
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -
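Before continuing, it is a good idea to verify that the Linkerd control plane came up correctly. The linkerd CLI ships a built-in health check for this:
# run Linkerd's built-in health checks against the cluster
linkerd check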
When Linkerd is installed and ready to use in the cluster, the configurations can be applied to create the needed resources in Kubernetes. One option is to apply each configuration file of the architecture (in /config) manually with kubectl:
kubectl apply -f config_name.yml
When applying the files manually, make sure to follow the correct order in which the files are invoked. (Secret > PersistentVolume > Deployment > Service > ServiceProfile > IngressRoute)
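As a sketch, assuming file names that mirror the resource types (the actual file names in /config may differ), the sequence could look like this:
# hypothetical file names - use the actual files in the /config folder
kubectl apply -f config/secret.yml
kubectl apply -f config/persistent-volume.yml
kubectl apply -f config/deployment.yml
kubectl apply -f config/service.yml
kubectl apply -f config/serviceprofile.yml
kubectl apply -f config/ingressroute.yml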
Alternatively, the configurations can all be applied at once using Helm. For that, Helm first needs to be installed on the server, which on Ubuntu can be achieved with the following commands:
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
After installing Helm, the provided Helm-Chart can be installed onto the cluster from the root folder of the repository with this command:
helm install helm-msa ./helm
This will create all needed resources in the cluster. After that you can implement your own services in the architecture.
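The release can later be inspected, upgraded after changes to the chart, or removed again with the standard Helm commands:
# list installed releases
helm list
# roll out changes made to the chart
helm upgrade helm-msa ./helm
# remove the release and its resources
helm uninstall helm-msa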
Usage
When the base microservice architecture is running on your machine, it can be used to deploy your own applications.
Docker Containerization
In order to deploy an application in the K3s cluster, it needs to be containerized first. If you need help with building container images, the Docker documentation is a good place to find guides and even language-specific templates.
After containerizing your application, it should be pushed to a container registry like DockerHub, from where the image can be retrieved from anywhere.
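As a minimal sketch, assuming a DockerHub account named your-user and a service called your-service (both placeholders), building and pushing an image looks like this:
# build the image from the Dockerfile in the current directory
docker build -t your-user/your-service:1.0 .
# log in to DockerHub and push the image to the registry
docker login
docker push your-user/your-service:1.0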
Kubernetes Resources
In order to run an app in the cluster, Kubernetes uses resources like Pods and Deployments, which can be created by providing YAML configuration files that describe them.
The Pods, in which the instances of your containers run, can then be made accessible to other applications by creating Services for them.
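The following is a minimal sketch of such a Deployment and Service, assuming the placeholder image your-user/your-service:1.0 and a container listening on port 8080 (all names, images, and ports are placeholders, not taken from the repository):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-service-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: your-service
  template:
    metadata:
      labels:
        app: your-service
    spec:
      containers:
        - name: your-service
          image: your-user/your-service:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: your-service
spec:
  selector:
    app: your-service
  ports:
    - port: 80
      targetPort: 8080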
If the functionality of the application also needs to be accessible from outside the cluster for external clients, its APIs can be exposed through an IngressRoute resource. By referencing the 'fw-auth-mw' middleware in the rules of the IngressRoute, you can protect your interface so that only authenticated users can access it.
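A hedged sketch of such an IngressRoute, assuming the Traefik CRDs bundled with K3s (the apiVersion may differ depending on the Traefik version) and the placeholder service from above:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: your-service-ingressroute
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/your-service`)
      kind: Rule
      middlewares:
        - name: fw-auth-mw
      services:
        - name: your-service
          port: 80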
For examples of the named resources, an example-app branch is provided, in which a small demo application is deployed in the architecture.
Interaction with the Cluster
Interaction with the K3s cluster is done with kubectl. Kubectl commands are executed from the command line and are used to manipulate the cluster and to retrieve information about it.
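A few commonly used commands, for orientation (the pod name is a placeholder):
# list all pods in the default namespace
kubectl get pods
# show details and recent events of a specific pod
kubectl describe pod your-pod-name
# print the logs of a pod
kubectl logs your-pod-name
# apply or update resources from a configuration file
kubectl apply -f config_name.yml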
Linkerd Service Mesh
To include a deployed application in the Linkerd Service Mesh, the following annotation has to be added in the Pod/Deployment configuration file:
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
Alternatively, deployments can be injected with the following kubectl command:
kubectl get deploy your-service-deployment -o yaml | linkerd inject - | kubectl apply -f -
To revert the injection, use this command:
kubectl get deploy your-service-deployment -o yaml | linkerd uninject - | kubectl apply -f -
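Whether the injection worked can be checked by looking at the pods (a meshed pod runs an additional linkerd-proxy container) or with the linkerd CLI:
# meshed pods show an extra container (e.g. READY 2/2)
kubectl get pods
# run Linkerd's data-plane checks against the injected proxies
linkerd check --proxy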
Linkerd Dashboard
If you want to use the Linkerd dashboard to get a better overview of the architecture and the applications deployed in it, you first have to download and configure it.
If you're running the cluster locally, you can just execute the following two commands to access it in the browser:
# install the linkerd viz extension
linkerd viz install | kubectl apply -f -
# open the dashboard
linkerd viz dashboard
By default, the dashboard is only accessible from localhost. If you want to deploy the architecture in the cloud and still need to access the dashboard, the installation file has to be edited first.
# download the linkerd viz installation yaml:
linkerd viz install > install_viz.yaml
# edit install_viz.yaml
nano install_viz.yaml
# change the enforced-host argument to "- -enforced-host=.*" and save the file
# apply the edited file
kubectl apply -f install_viz.yaml
# open linkerd viz dashboard (enter the address of your host)
linkerd viz dashboard --address x.x.x.x
Authentication Service
The microservice architecture comes with an authentication service that can be used to create user accounts and authenticate requests.
The functionality of the REST-API for the auth-service is depicted in the following table:
Method | Path | Description | JWT Required | Admin Required | Input |
---|---|---|---|---|---|
POST | /blacklist/cleanup | Deletes unnecessary tokens from the blacklist | yes | yes | - |
GET | /health | Returns a 200-statuscode | no | no | - |
POST | /login | Returns a JWT that is used to authenticate subsequent requests | no | no | user, password |
POST | /logout | Invalidates a JWT by putting it on the blacklist | yes | no | - |
POST | /user | Creates a new user | yes* | yes* | user, password, roles ("r1, r2") |
DELETE | /user | Deletes an existing user | yes | no** | user |
GET | /verify | Verifies a JWT | yes | no | - |
After logging in, the service returns a JWT in the "jwt" response header, which is used to authenticate subsequent requests to the application. The JWT is sent in the Authorization header of each request as a Bearer token. All inputs are submitted as form-data. The roles submitted when creating a new user are separated by commas (e.g. "r1, r2").
*: The request to create the first user doesn't require a JWT and the new user is automatically declared administrator. Every creation after the first user requires an admin token.
**: Every user can delete only his/her own account, while admin-accounts can delete any user.
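To illustrate the flow, here is a hedged example using curl, assuming the auth-service is reachable at http://<your-host> under the paths from the table (depending on your IngressRoute, an additional path prefix may be required; all values are placeholders):
# create the first user (becomes admin, no JWT required)
curl -X POST http://<your-host>/user -F "user=alice" -F "password=secret" -F "roles=r1, r2"
# log in; the JWT is returned in the "jwt" response header (-i prints the headers)
curl -i -X POST http://<your-host>/login -F "user=alice" -F "password=secret"
# use the token as a Bearer token for authenticated requests
curl -X GET http://<your-host>/verify -H "Authorization: Bearer <your-jwt>"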
Requests that require authentication are verified by the auth-service, which then adds three headers to its response; these are passed on to the receiving application. The headers contain information about the user who sent the request and can be accessed in the applications.
The Headers are:
Header | Description |
---|---|
"userid" | The username of the account that sent the request |
"isadmin" | The admin status of that user, either true or false |
"userroles" | The roles of the user |
These headers can be read in the applications and can be used to implement authorization.