Refactor Authelia,Longhorn,Traefik; Enable ingress middlewares; Update docs
README.md
@@ -1,10 +1,5 @@
# Kubernetes.K3s.installLog

*3 VMs provisioned with Ubuntu Server 22.04*
<details><summary>additional lvm configuration</summary>

```shell
@@ -117,14 +112,10 @@ kubectl apply -f storage/flexVolSMB/sealedSecret-flexVolSMB.yml
#### 2.3) `storageClass` for distributed block storage:
See [Longhorn Helm Chart](https://longhorn.io/):
```shell
helm repo add longhorn https://charts.longhorn.io && helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace --values=storage/Longhorn/chart-values.yml
```

Expose Longhorn's dashboard through `IngressRoute`:
```shell
kubectl apply -f storage/Longhorn/ingressRoute-Longhorn.yml
```
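Before touching the disks it can help to confirm the chart actually rolled out; a quick sketch (the resource names below are the chart's defaults and may vary by chart version):

```shell
# Wait for the Longhorn manager daemonset to become ready,
# then list the namespace's pods for a visual check
kubectl --namespace longhorn-system rollout status daemonset/longhorn-manager
kubectl --namespace longhorn-system get pods
```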

Log on to the web interface, delete the default disks on each node (mounted at `/var/lib/longhorn`), and replace them with new disks mounted at `/mnt/blockstorage`.

Add additional `storageClass` with backup schedule:
@@ -149,32 +140,10 @@ kubectl patch storageclass longhorn-dailybackup -p '{"metadata": {"annotations":
```
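After patching the default-class annotation, the result can be verified at a glance (the default class is marked `(default)` next to its name):

```shell
# Shows all storage classes; exactly one should carry "(default)"
kubectl get storageclass
```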

### 3) Ingress Controller

Reconfigure the default Traefik configuration; see the [Traefik 2.x Helm Chart](https://github.com/traefik/traefik-helm-chart) and [HelmChartConfig](https://docs.k3s.io/helm):

```shell
kubectl delete ingressroute traefik-dashboard --namespace kube-system
kubectl apply -f ingress/Traefik2.x/helmchartconfig-traefik.yaml
```
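For orientation, a `HelmChartConfig` that overrides the values of K3s' packaged Traefik chart might look roughly like the following. This is an illustrative sketch, not the repo's actual `helmchartconfig-traefik.yaml`; the keys under `valuesContent` are example overrides only:

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  # Must match the packaged HelmChart's name/namespace for K3s to pick it up
  name: traefik
  namespace: kube-system
spec:
  # Inline Helm values merged over the chart's defaults
  valuesContent: |-
    additionalArguments:
      - "--log.level=INFO"
```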
### 4) GitOps
@@ -292,31 +261,11 @@ kubectl apply -f services/PVR/deploy-Sonarr.yml
```shell
kubectl apply -f services/Shaarli/deploy-Shaarli.yml
```

##### 5.11) [Traefik-Certs-Dumper](https://github.com/ldez/traefik-certs-dumper) <small>(certificate tooling)</small>
```shell
kubectl apply -f services/TraefikCertsDumper/deploy-TraefikCertsDumper.yml
```

##### 5.12) [Unifi-Controller]() <small>(network infrastructure management)</small>
```shell
kubectl apply -f services/Unifi/deploy-Unifi.yml
```
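The `unifi-<uuid>` pod name needed for the `kubectl exec` step below carries a generated suffix; it can be looked up first (a convenience sketch):

```shell
# List pod names in the unifi namespace (pod/unifi-<uuid>)
kubectl get pods --namespace unifi -o name
# or capture the first pod's name directly into a variable
UNIFI_POD=$(kubectl get pods --namespace unifi -o jsonpath='{.items[0].metadata.name}')
```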

*Change STUN port to non-default:*
```shell
kubectl exec --namespace unifi -it unifi-<uuid> -- /bin/bash
sed -e 's/# unifi.stun.port=3478/unifi.stun.port=3479/' -i /data/system.properties
exit
kubectl rollout restart deployment --namespace unifi unifi
```
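The `sed` substitution can be dry-run locally before exec'ing into the pod; a sketch, with `/tmp` standing in for the pod's `/data`:

```shell
# Sample of the commented-out default as it ships in system.properties
printf '# unifi.stun.port=3478\n' > /tmp/system.properties
# Same substitution as above: uncomment the key and bump the port
sed -e 's/# unifi.stun.port=3478/unifi.stun.port=3479/' -i /tmp/system.properties
cat /tmp/system.properties   # → unifi.stun.port=3479
```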

*Update STUN URL on devices:* <small>doesn't seem to work</small>
```shell
ssh <username>@<ipaddress>
sed -e 's|stun://<ipaddress>|stun://<ipaddress>:3479|' -i /etc/persistent/cfg/mgmt
```

### 6) Miscellaneous

*Various notes/useful links*

@@ -336,14 +285,14 @@ sed -e 's|stun://<ipaddress>|stun://<ipaddress>:3479|' -i /etc/persistent/cfg/mgmt
or

```shell
kubectl run -it --rm busybox --restart=Never --image=busybox:1.28 -- nslookup api.github.com [-debug] [fqdn]
```

* Delete namespaces stuck in `Terminating` state:

*First*, check whether any resources are still present, preventing the namespace from being deleted:

```shell
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace>
```

Any resources returned should be deleted first (worth mentioning: if you get the error `error: unable to retrieve the complete list of server APIs`, check `kubectl get apiservice` for any apiservice with a status of `False`).

If there are no resources left in the namespace and it is still stuck *terminating*, the following commands remove the blocking finalizer (this is a last resort; it bypasses protections put in place to prevent orphaned resources):

```shell
kubectl get namespace <namespace> -o json | jq -j '.spec.finalizers=null' > tmp.json
kubectl replace --raw "/api/v1/namespaces/<namespace>/finalize" -f ./tmp.json
```
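What the `jq` filter does can be checked offline against a sample namespace object; a sketch assuming `jq` is installed locally (`demo` is a made-up namespace name):

```shell
# A minimal stand-in for `kubectl get namespace <namespace> -o json`
cat > /tmp/ns.json <<'EOF'
{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"demo"},"spec":{"finalizers":["kubernetes"]}}
EOF
# Null out the finalizer list, exactly as in the command above
jq -j '.spec.finalizers=null' /tmp/ns.json > /tmp/tmp.json
# The pretty-printed result should now contain a nulled finalizers field
grep -c '"finalizers": null' /tmp/tmp.json   # prints 1
```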