# Microservice Architecture
## Description
This is a microservice architecture created to be used by students as a basis for their own applications. The architecture uses a single-node K3s Kubernetes cluster as a base and can be extended if necessary. External access to the deployed services is handled by a Traefik Ingress Controller. The architecture comes equipped with a basic authentication service, and a Linkerd service mesh is included for better inter-service communication.
## Overview
For a better understanding, the following overview depicts the different components that make up the microservice architecture.
![msa_concept.png](./assets/msa_concept.png)
- **Kubernetes:** Container orchestration tool used to deploy and maintain the applications of the architecture in pods.
- **Traefik:** The gateway to the architecture. It routes incoming requests to the existing service instances.
- **Auth-Service:** Used to create accounts in the system. It also enables existing users to log in and returns an auth token that is used to authenticate subsequent requests to the microservice application.
- **Linkerd:** A service mesh that provides reliability, monitoring, and security for meshed applications.
- **CORS-Handler:** Can be configured to allow access and answer preflight requests from external web applications.
## Installation
To install the architecture in a new environment, a Linux OS is needed. The following steps show how to set up the architecture on an Ubuntu server. For development purposes, the architecture can also be run locally on a Windows machine by using WSL (Windows Subsystem for Linux). For more information, see the [WSL-Setup](#wsl-setup) section before installing the architecture.
### Cloning the Repository
Before the setup can be done, this repository needs to be cloned to your server, so all the files are available for use.
```
git clone https://gitlab.reutlingen-university.de/poegel/microservice-architecture.git
```
### Installation with the script
There are a few ways to install the architecture on a new server. The easiest is to simply run the [install_msa.sh](install_msa.sh) script. On Linux you can do that with the following command:
```
sudo bash install_msa.sh
```
This script installs all the tools necessary to run the architecture on your Ubuntu server. After that, the base microservice architecture is ready for use and you can start implementing your own applications.
### Manual installation
If the script doesn't work, or you prefer installing everything manually, you can do so with the following steps.
First, K3s needs to be installed on the server. Installing K3s and setting up a single-node cluster can be done with this command:
```
curl -sfL https://get.k3s.io | sh -
```
When not logged in as root on Ubuntu, it may be necessary to change the permissions of k3s.yaml in order to interact with the cluster without needing administrator rights for every single command; this step isn't necessary when the root user is used. To change the access permissions persistently, so they survive a restart of K3s, `K3S_KUBECONFIG_MODE=644` has to be added to the k3s.service.env file. This can be achieved, for example, by executing:
```
# tee -a is used because with `sudo echo ... >>` the redirection would run without root privileges
echo 'K3S_KUBECONFIG_MODE="644"' | sudo tee -a /etc/systemd/system/k3s.service.env
```
After adding the line to the configuration file, k3s needs to be restarted for the changes to take effect.
```
sudo systemctl restart k3s
```
In the next step, Linkerd has to be downloaded and installed. To download the Linkerd CLI, you can use this command:
```
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
```
After that, Linkerd can be installed onto the K3s-cluster by executing these commands:
```
# add the linkerd CLI to your PATH (the installer places it in ~/.linkerd2/bin)
export PATH=$PATH:$HOME/.linkerd2/bin
# point kubectl at the K3s kubeconfig
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
# install linkerd onto the cluster
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -
```
When Linkerd is installed and ready to use in the cluster, the provided configurations can be applied to create the needed resources in Kubernetes. This can be done either by applying all configuration files (in `/config`) of the architecture manually with kubectl:
```
kubectl apply -f config_name.yml
```
When applying the files manually, make sure to follow the correct order in which the files are applied (Secret > PersistentVolume > Deployment > Service > ServiceProfile > IngressRoute), for example as sketched below.
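A minimal sketch of applying the files in that order, with hypothetical file names (substitute the actual files found in `/config`):
```
# hypothetical file names -- replace with the actual files in /config
for f in secret.yml persistent-volume.yml deployment.yml service.yml serviceprofile.yml ingressroute.yml; do
  kubectl apply -f "config/$f"
done
```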
Alternatively, the configurations can all be applied at once by using Helm. For that, Helm first needs to be installed on the server, which on Ubuntu can be achieved with the following commands:
```
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
```
After installing Helm, the provided Helm chart can be installed onto the cluster from the root folder of the repository with this command:
```
helm install helm-msa ./helm
```
This will create all needed resources in the cluster. After that, you can implement your own services in the architecture.
Additionally, should you want to deploy your application in its own namespace, the Traefik deployment configuration has to be edited to allow cross-namespace access.
To achieve this, the following command can be issued:
```
kubectl get deploy traefik -n kube-system -o yaml > traefik.yaml
sed -i '/providers.kubernetescrd/ a\ \ \ \ \ \ \ \ - --providers.kubernetescrd.allowCrossNamespace=true' traefik.yaml
kubectl replace -f traefik.yaml
rm traefik.yaml
```
### WSL-Setup
When developing on a machine running Windows, the fastest and easiest way to get the architecture running locally is by using WSL (Windows Subsystem for Linux). WSL allows running a Linux environment without the need for a separate virtual machine or dual booting. WSL 2, which comes with a full Linux kernel and is therefore compatible with many Linux binaries, can either be installed automatically together with [Docker Desktop](https://www.docker.com/products/docker-desktop/), or manually on its own.
To install WSL manually, the following command can be executed from PowerShell or another CLI running as administrator.
```
wsl --install
```
#### Prerequisites
In order for the installation script to work correctly, an Ubuntu distribution is needed. Depending on when and how WSL was installed on a system, it may already come with Ubuntu by default. To check if Ubuntu is already installed, the following command can be run.
```
wsl -l -v
```
If Ubuntu isn't already installed on the machine, it can either be installed through the Microsoft Store or by executing the following command.
```
wsl --install -d <Distribution Name>
```
The `<Distribution Name>` has to be replaced with the distribution of your choice. To see a list of Linux distributions available for download, run:
```
wsl -l -o
```
#### Troubleshooting
When using WSL to run the architecture, it has to be ensured that systemd is active on the machine, as it is required for the K3s cluster to install correctly. While newer versions of WSL already come with systemd enabled by default, on older versions this may need to be done manually.
To check if systemd is already running in WSL, the following command can be executed inside the WSL console.
```
systemctl list-unit-files --type=service
```
Should the systemd services not be listed (for example, because the command fails with `System has not been booted with systemd`), systemd has to be enabled inside the WSL configuration file. Before doing this, make sure WSL is on the latest version by running:
```
wsl --update
```
To enable systemd for WSL, the wsl.conf file has to be edited. It can be found at `/etc/wsl.conf`. This can, for example, be done by opening it with an editor with sudo privileges, e.g. `sudo nano /etc/wsl.conf`. The following lines need to be added to the file:
```
[boot]
systemd=true
```
Should the wsl.conf file not exist yet, it can simply be created. After adding the lines, the file needs to be saved and WSL needs to be restarted. After closing all WSL windows, run the following command inside a CLI on Windows:
```
wsl.exe --shutdown
```
After this, when reopening a WSL console, systemd should be running and the installation of the architecture can be resumed.
## Usage
When the base microservice architecture is running on your machine, it can be used to deploy your own applications.
### Docker Containerization
In order to deploy an application in the K3s cluster, it needs to be containerized first. If you need help with building container images, the [Docker documentation](https://docs.docker.com/get-started/) is a good spot for guides and even language-specific templates.
After containerizing your application, push the image to a container registry like [DockerHub](https://hub.docker.com/), from where it can be universally retrieved.
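A minimal sketch of that workflow, with hypothetical image and account names:
```
# build the image from your application's Dockerfile (names are placeholders)
docker build -t your-dockerhub-user/my-service:1.0 .
# log in and push the image so the cluster can pull it
docker login
docker push your-dockerhub-user/my-service:1.0
```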
### Kubernetes Resources
In order to run an app in the cluster, Kubernetes uses resources like [Pods](https://kubernetes.io/docs/concepts/workloads/pods/) and [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), which can be created by providing YAML configuration files that describe them.
The Pods, in which the instances of your containers run, can then be made accessible to other applications by creating [Services](https://kubernetes.io/docs/concepts/services-networking/service/) for them.
If the functionality of the application also needs to be accessible from outside the cluster for external clients, its APIs can be exposed through an [IngressRoute](https://doc.traefik.io/traefik/routing/providers/kubernetes-crd/#kind-ingressroute) resource. By referencing the `fw-auth-mw` middleware in the routes of the IngressRoute, you can protect your interface so that only authenticated users can access it.
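A minimal sketch of such an IngressRoute, assuming a hypothetical service `my-service` listening on port 8080 (the apiVersion may differ depending on your Traefik version):
```
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: my-service-route
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/myservice`)
      kind: Rule
      # reference the middleware so only authenticated users get through
      middlewares:
        - name: fw-auth-mw
      services:
        - name: my-service
          port: 8080
```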
For examples of the named resources, an example-app branch is provided, in which a small demo application is deployed in the architecture.
#### Deploying a frontend
While deploying a frontend inside the cluster isn't a problem at all, a few things have to be considered to avoid running into issues.
Technically, a frontend application can be exposed through the Ingress just like a backend service. However, be sure not to serve your content on the default root path (`http://domain.com/`) but rather on an altered root path like `http://domain.com/myfrontend/`. This way it's easier to define routing rules in the Ingress without needing tricks to avoid collisions.
Additionally, note that when using a Service resource to expose your frontend, it's technically possible to use any of the available Service types, but anything other than ClusterIP will change the origin of the requests from that frontend in a way that makes them count as external, which triggers CORS. More about this can be found in the section about the [CORS-Handler](#cors-handler). To avoid running into these issues, it's advised to use the ClusterIP Service type, as sketched below.
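A minimal sketch of a ClusterIP Service for a hypothetical frontend labelled `app: myfrontend`:
```
apiVersion: v1
kind: Service
metadata:
  name: myfrontend
spec:
  type: ClusterIP     # the default type; traffic stays inside the cluster
  selector:
    app: myfrontend   # must match the labels of the frontend's pods
  ports:
    - port: 80          # port exposed inside the cluster
      targetPort: 8080  # port the frontend container listens on
```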
### Interaction with the Cluster
For the interaction with the K3s cluster, [kubectl](https://kubernetes.io/docs/reference/kubectl/) is used.
Kubectl commands are executed via the CLI and are used to manipulate the cluster and to get information about it.
### Linkerd Service Mesh
To include a deployed application in the Linkerd Service Mesh, the following annotation has to be added in the Pod/Deployment configuration file:
```
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
```
Alternatively, deployments can be injected with the following kubectl command:
```
kubectl get deploy your-service-deployment -o yaml | linkerd inject - | kubectl apply -f -
```
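To verify that the injected pods are correctly meshed, the data plane can be checked (shown here for the default namespace):
```
linkerd check --proxy -n default
```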
To revert the injection, use this command:
```
kubectl get deploy your-service-deployment -o yaml | linkerd uninject - | kubectl apply -f -
```
### Linkerd Dashboard
If you want to use the Linkerd dashboard to get a better overview of the architecture and the applications deployed in it, you first have to install and configure it.
If you're running the cluster locally, you can just execute the following two commands to access it in the browser:
```
# install the linkerd viz extension
linkerd viz install | kubectl apply -f -
# open the dashboard
linkerd viz dashboard
```
By default, the dashboard is only accessible from localhost. If you want to deploy the architecture in the cloud and still need to access the dashboard, the installation file has to be edited first.
```
# download the linkerd viz installation yaml:
linkerd viz install > install_viz.yaml
# edit install_viz.yaml
nano install_viz.yaml
# change the enforced-host line to "- -enforced-host=.*" and save the file
# apply the edited file
kubectl apply -f install_viz.yaml
# open linkerd viz dashboard (enter the address of your host)
linkerd viz dashboard --address x.x.x.x
```
### Authentication Service
The microservice architecture comes with an authentication service that can be used to create user accounts and authenticate requests.
The functionality of the REST API of the auth-service is depicted in the following table:
| Method | Path | Description | JWT Required | Admin Required | Input |
|-|-|-|-|-|-|
| POST | /blacklist/cleanup | Deletes unnecessary tokens from the blacklist | yes | yes | - |
| GET | /health | Returns a 200 status code | no | no | - |
| POST | /login | Returns a JWT that is used to authenticate subsequent requests | no | no | user, password |
| POST | /logout | Invalidates a JWT by putting it on the blacklist | yes | no | - |
| POST | /user | Creates a new user | yes* | yes* | user, password, roles ("r1, r2") |
| DELETE | /user | Deletes an existing user | yes | no** | user |
| GET | /verify | Verifies a JWT | yes | no | - |
After logging in, the service returns a JWT in the "jwt" response header, which is used to authenticate subsequent requests to the application. The JWT is sent in the Authorization header of each request in the form of a Bearer token.
All inputs are submitted as form-data. The roles submitted when creating a new user are separated by commas.
*: The request to create the first user doesn't require a JWT, and the new user is automatically declared administrator. Every creation after the first user requires an admin token.
**: Every user can only delete his/her own account, while admin accounts can delete any user.
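A sketch of the login flow with curl, where `<host>` is a placeholder for your server and the paths assume the auth-service routes are exposed as shown in the table (adjust them to your IngressRoute configuration):
```
# create the very first user (no JWT needed; the account automatically becomes admin)
curl -X POST -F "user=admin" -F "password=secret" http://<host>/user
# log in; the JWT is returned in the "jwt" response header
curl -i -X POST -F "user=admin" -F "password=secret" http://<host>/login
# authenticate a subsequent request by sending the token as a Bearer token
curl -H "Authorization: Bearer <jwt>" http://<host>/your-service
```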
Requests that require authentication are verified by the auth-service, which then adds three headers to its response; these are passed on with the request to the receiving application. The headers contain information about the user who sent the request and can be accessed in the applications.
The Headers are:
| Header | Description |
|-|-|
| "userid" | The username of the account that sent the request |
| "isadmin" | The admin status of that user, either true or false |
| "userroles" | The roles of the user |
These headers can be read in the applications and used to implement authorization.
### CORS-Handler
When trying to access the deployed services inside the architecture from external web applications, CORS (Cross-Origin Resource Sharing) comes into play to ruin the fun. While it exists for security reasons, it requires users to manually configure their services to allow requests from origins outside of their own domain.
The CORS-Handler is used inside the architecture to answer all preflight requests from external sources to services using the Traefik Ingress Controller and the authentication service.
Requests from specific origins can be permitted by configuring them inside the cors-config-map.yaml. By adding the origin address of the external web application under the allow.orgin data, access can be granted to an application. This may look like this:
```
allow.orgin: "http://exampledomain.com"
```
In the same file, allowed methods and headers can be configured in the same way.
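For example, with hypothetical key names (check cors-config-map.yaml for the exact keys used by the handler):
```
allow.methods: "GET, POST, PUT, DELETE, OPTIONS"
allow.headers: "Authorization, Content-Type"
```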
Note that CORS may still have to be enabled in the individual deployed service itself, and that this handler only responds to the preflight requests created when sending authenticated requests containing credentials, such as an Authorization header.
Configuring this CORS-Handler is only necessary when using an external frontend that isn't deployed inside the architecture itself. But beware that using anything other than the ClusterIP Service type for your deployed frontends will alter the actual origin and end up triggering CORS.
### Special use-case: WSL
When the architecture is deployed locally on WSL, there are a few things worth noting when interacting with the cluster.
Since WSL isn't exposed on localhost by default, the IP address of WSL is needed in order to make calls to the services deployed inside the architecture. To get the IP address of WSL, the following command can be executed from inside Ubuntu:
```
ip addr show dev eth0
```
The shown IP address can then be used to make calls to the cluster from the Windows host machine.
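To extract just the IPv4 address, a one-liner like the following can be used (a sketch; eth0 is the default WSL network interface):
```
ip -4 addr show dev eth0 | grep -oP '(?<=inet\s)[\d.]+'
```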
Another thing worth mentioning when developing with WSL is that closing the WSL window stops the cluster, so a WSL console has to stay open in order to interact with the cluster.
After closing WSL and reopening it, the services deployed inside the pods might have stopped running. Kubernetes will detect this after a short while and restart the pods. If Deployment resources exist for the pods, the wait for the automatic restart can be skipped by manually restarting all deployments of the cluster with the following command:
```
kubectl rollout restart deployments -n default
```
The `-n` flag specifies the namespace of the deployments that should be restarted. When not using the default namespace for your own applications, simply change the parameter.
### Uninstalling the architecture
Should you want to uninstall the architecture, the easiest way to do so is by uninstalling K3s from the system altogether. This can be achieved with the following command:
```
/usr/local/bin/k3s-uninstall.sh
```
This command can also be useful should you run into any issues with the architecture along the way. By simply uninstalling K3s and executing the install script again, the architecture will be reset to its default settings.
### Troubleshooting
When trying to interact with the cluster, you might encounter the following error: `The connection to the server localhost:8080 was refused - did you specify the right host or port?`. This happens because the `KUBECONFIG` environment variable pointing to the K3s configuration doesn't persist when the session is closed. If this error occurs, you'll have to set the variable again. This can be done by issuing the following command:
```
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
```
If you want the variable to be set persistently, you can do so by editing the `~/.profile` file and adding the same line as above, for example as shown below.
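A one-step way to append the line to your user's `~/.profile`:
```
echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> ~/.profile
```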