*3 VMs provisioned with Ubuntu Server 18.04*

<details><summary>additional lvm configuration</summary>

```shell
pvdisplay
pvcreate /dev/sdb
vgdisplay
...
mount -a
```
</details>

## K3s cluster

On the first node:
```shell
curl -sfL https://get.k3s.io | sh -s - --disable local-path,traefik
cat /var/lib/rancher/k3s/server/token
kubectl config view --raw
```
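To manage the cluster from a workstation, the kubeconfig printed by `kubectl config view --raw` (also stored at `/etc/rancher/k3s/k3s.yaml`) can be copied over; a rough sketch, in which the destination path and the placeholder address are assumptions:
```shell
# copy the kubeconfig from the first node and point it at that node instead of localhost
scp <user>@<fqdn or ip>:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -i 's/127.0.0.1/<fqdn or ip>/' ~/.kube/config
```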

On subsequent nodes:
```shell
curl -sfL https://get.k3s.io | K3S_URL=https://<fqdn or ip>:6443 K3S_TOKEN=<value from master> sh -
```
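Once the agents have joined, a quick sanity check should show all nodes as `Ready`:
```shell
kubectl get nodes -o wide
```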

### 0) Configure automatic updates

Install Rancher's [System Upgrade Controller](https://rancher.com/docs/k3s/latest/en/upgrades/automated/):
```shell
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/download/v0.6.2/system-upgrade-controller.yaml
```
Apply a [server (master node)](https://code.spamasaurus.com/djpbessems/Kubernetes.K3s.installLog/src/branch/master/system/UpgradeController/plan-Server.yml) and [agent (worker node)](https://code.spamasaurus.com/djpbessems/Kubernetes.K3s.installLog/src/branch/master/system/UpgradeController/plan-Agent.yml) plan:
```shell
kubectl apply -f system/UpgradeController/plan-Server.yml -f system/UpgradeController/plan-Agent.yml
```
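For reference, an upgrade `Plan` roughly follows the shape below; this is only a sketch (channel, node selector and concurrency are assumptions), the actual plans live under `system/UpgradeController/`:
```yaml
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  channel: https://update.k3s.io/v1-release/channels/stable
  serviceAccountName: system-upgrade
  nodeSelector:
    matchExpressions:
      - {key: node-role.kubernetes.io/master, operator: In, values: ["true"]}
  upgrade:
    image: rancher/k3s-upgrade
```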

#### 1.1) `storageClass` for SMB (CIFS):
See https://github.com/kubernetes-csi/csi-driver-smb:
```shell
curl -skSL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/deploy/install-driver.sh | bash -s master --
```
Store credentials in `secret`:
```shell
kubectl create secret generic smb-credentials --from-literal username=<<omitted>> --from-literal domain=<<omitted>> --from-literal password=<<omitted>>
```
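A `storageClass` consuming this driver and secret could then look roughly as follows; a sketch based on the documented `smb.csi.k8s.io` parameters, with server, share and mount options as placeholders:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb
provisioner: smb.csi.k8s.io
parameters:
  source: //<fileserver>/<share>
  csi.storage.k8s.io/node-stage-secret-name: smb-credentials
  csi.storage.k8s.io/node-stage-secret-namespace: default
mountOptions:
  - dir_mode=0777
  - file_mode=0777
```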

#### 1.2) `flexVolume` for SMB (CIFS):
```shell
curl -Ls https://raw.githubusercontent.com/juliohm1978/kubernetes-cifs-volumedriver/master/install.yaml -o storage/flexVolSMB/daemonSet-flexVolSMB.yml
```
Override the drivername to something more sensible (see [storage/flexVolSMB/daemonSet-flexVolSMB.yml](https://code.spamasaurus.com/djpbessems/Kubernetes.K3s.installLog/src/branch/master/storage/flexVolSMB/daemonSet-flexVolSMB.yml)):
```yaml
spec:
  template:
    spec:
      ...
```
Perform installation:
```shell
kubectl apply -f storage/flexVolSMB/daemonSet-flexVolSMB.yml
```
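The installer runs as one pod per node; a quick way to follow their progress before removing the `daemonSet` (the pod name below is a placeholder):
```shell
# list the installer pods spawned by the daemonSet, then inspect their logs
kubectl get pods -o wide
kubectl logs <installer-pod-name>
```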
Wait for the installation to complete (check the logs of all installer pods), then delete the `daemonSet`:
```shell
kubectl delete -f storage/flexVolSMB/daemonSet-flexVolSMB.yml
```
Store credentials in `secret`:
```shell
kubectl create secret generic --type=mount/smb smb-secret --from-literal=username=<<omitted>> --from-literal=password=<<omitted>>
```
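A `persistentVolume` referencing this driver and secret might then look roughly like the sketch below; the overridden driver name, server, share and mount options are assumptions — the actual definitions in this repo (e.g. `services/Vault/persistentVolume-Vault.yml`) are authoritative:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: flexvolsmb-example
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  flexVolume:
    driver: <vendor>/smb   # the overridden drivername (assumed)
    secretRef:
      name: smb-secret
    options:
      server: <fileserver>
      share: /<share>
      opts: uid=1000,gid=1000
```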

#### 1.3) `storageClass` for distributed block storage:
See [Longhorn Helm Chart](https://longhorn.io/):
```shell
kubectl create namespace longhorn-system
helm repo add longhorn https://charts.longhorn.io
helm install longhorn longhorn/longhorn --namespace longhorn-system --values=storage/Longhorn/chart-values.yml
```
Expose Longhorn's dashboard through an `IngressRoute`:
```shell
kubectl apply -f storage/Longhorn/ingressRoute-Longhorn.yml
```
Add an additional `storageClass` with a backup schedule:

***After** specifying an NFS backup target (syntax: `nfs://servername:/path/to/share`) through Longhorn's dashboard*
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-dailybackup
...
parameters:
  ...
  recurringJobs: '[{"name":"backup", "task":"backup", "cron":"0 0 * * *", "retain":14}]'
```
Then make this the new default `storageClass`:
```shell
kubectl patch storageclass longhorn-dailybackup -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
#kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
#kubectl delete storageclass longhorn
```
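`kubectl` can confirm which class is now marked as default:
```shell
kubectl get storageclass
```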

### 2) Ingress Controller

##### 2.1) Create `configMap`, `secret` and `persistentVolumeClaim`
The `configMap` contains Traefik's static and dynamic config:
```shell
kubectl apply -f ingress/Traefik2.x/configMap-Traefik.yml
```

The `secret` contains credentials for Cloudflare's API:
```shell
kubectl create secret generic traefik-cloudflare --from-literal=CF_API_EMAIL=<<omitted>> --from-literal=CF_API_KEY=<<omitted>> --namespace kube-system
```

The `persistentVolumeClaim` will contain `/data/acme.json` (referenced as `existingClaim`):
```shell
kubectl apply -f ingress/Traefik2.x/persistentVolumeClaim-Traefik.yml
```

##### 2.2) Install Helm Chart
See [Traefik 2.x Helm Chart](https://github.com/containous/traefik-helm-chart):
```shell
helm repo add traefik https://containous.github.io/traefik-helm-chart
helm repo update
helm install traefik traefik/traefik --namespace kube-system --values=ingress/Traefik2.x/chart-values.yml
```
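Once the chart has deployed, the release should show a running pod and a service; the label selector below is assumed from the chart's labelling conventions and the release name `traefik`:
```shell
kubectl get pods,svc --namespace kube-system -l app.kubernetes.io/name=traefik
```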

##### 2.3) Replace `IngressRoute` for Traefik's dashboard:
```shell
kubectl apply -f ingress/Traefik2.x/ingressRoute-Traefik.yaml
kubectl delete ingressroute traefik-dashboard --namespace kube-system
```

*Perform these steps **after** configuring persistent storage **and** ingress*

##### 3.1) Create `persistentVolume` and `ingressRoute`
*Requires specifying a `uid` & `gid` in the flexvolSMB-`persistentVolume`*
```shell
kubectl create namespace vault
kubectl apply -f services/Vault/persistentVolume-Vault.yml
kubectl apply -f services/Vault/ingressRoute-Vault.yml
```

##### 3.2) Install Helm Chart
See [HashiCorp Vault](https://www.vaultproject.io/docs/platform/k8s/helm/run):
```shell
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install vault hashicorp/vault --namespace vault --values=services/Vault/chart-values.yml
...
vault secrets enable -path=secret -version=2 kv
```
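Before any secrets engines can be enabled, Vault has to be initialised and unsealed (those steps are not shown here); the usual procedure looks roughly like this, where the pod name `vault-0` is the Helm chart's default and an assumption:
```shell
kubectl exec -ti vault-0 --namespace vault -- vault operator init
kubectl exec -ti vault-0 --namespace vault -- vault operator unseal
```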

### 4) Services
##### 4.1) [Adminer](https://www.adminer.org/) <small>(SQL management)</small>
```shell
kubectl apply -f services/Adminer/configMap-Adminer.yml
kubectl apply -f services/Adminer/deploy-Adminer.yml
```
Vault configuration:
```shell
...
vault policy write adminer /home/vault/app-policy.hcl
```
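The elided `vault write` commands typically follow Vault's standard Kubernetes-auth role pattern; a rough sketch, in which the service-account name, namespace, policy name and TTL are assumptions:
```shell
vault write auth/kubernetes/role/adminer \
  bound_service_account_names=adminer \
  bound_service_account_namespaces=default \
  policies=adminer \
  ttl=24h
```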

##### 4.2) [Bitwarden_rs](https://github.com/dani-garcia/bitwarden_rs) <small>(password manager)</small>
*Requires [mount.cifs](https://linux.die.net/man/8/mount.cifs)' option `nobrl`*
```shell
kubectl apply -f services/Bitwarden/deploy-Bitwarden.yml
```
Vault configuration:
```shell
...
vault write auth/kubernetes/role/bitwarden \
  ...
vault policy write bitwarden /home/vault/app-policy.hcl
```

##### 4.3) [DroneCI](https://drone.io/) <small>(continuous delivery)</small>
```shell
kubectl apply -f services/DroneCI/deploy-DroneCI.yml
```
Vault configuration:
```shell
...
vault write auth/kubernetes/role/drone \
  ...
vault policy write drone /home/vault/app-policy.hcl
```

##### 4.4) [Gitea](https://gitea.io/) <small>(git repository)</small>
```shell
kubectl apply -f services/Gitea/deploy-Gitea.yml
```

##### 4.5) [Gotify](https://gotify.net/) <small>(notifications)</small>
```shell
kubectl apply -f services/Gotify/deploy-Gotify.yml
```

##### 4.6) [Guacamole](https://guacamole.apache.org/doc/gug/guacamole-docker.html) <small>(remote desktop gateway)</small>
*Requires specifying a `uid` & `gid` in both the `securityContext` of the MySQL container and the `persistentVolume`*
```shell
kubectl apply -f services/Guacamole/configMap-Guacamole.yml
kubectl apply -f services/Guacamole/deploy-Guacamole.yml
```
Wait for the included containers to start, then perform the following commands to initialize the database:
```shell
kubectl exec -i guacamole-<pod-id> --container guacamole -- /opt/guacamole/bin/initdb.sh --mysql > initdb.sql
kubectl exec -i guacamole-<pod-id> --container mysql -- mysql -uguacamole -pguacamole guacamole < initdb.sql
kubectl rollout restart deployment guacamole
```

##### 4.7) [Lighttpd](https://www.lighttpd.net/) <small>(webserver)</small>
*Serves various semi-containerized websites; the respective web content is stored on a fileshare*
```shell
kubectl apply -f services/Lighttpd/configMap-Lighttpd.yml
kubectl apply -f services/Lighttpd/deploy-Lighttpd.yml
kubectl apply -f services/Lighttpd/cronJob-Spotweb.yml
```

##### 4.8) [Matrix](https://matrix.org/) <small>(federated chat)</small>
*WIP*
```shell
kubectl apply -f services/Matrix/configMap-Matrix.yml
kubectl apply -f services/Matrix/middleware-Matrix.yml
kubectl apply -f services/Matrix/deploy-Matrix.yml
```

##### 4.9) PVR `namespace` <small>(automated media management)</small>
*Containers use shared resources to be able to interact with downloaded files*
```shell
kubectl create secret generic --type=mount/smb smb-secret --from-literal=username=<<omitted>> --from-literal=password=<<omitted>> -n pvr
kubectl apply -f services/PVR/persistentVolumeClaim-PVR.yml
kubectl apply -f services/PVR/storageClass-PVR.yml
```

###### 4.9.1) [NZBHydra](https://github.com/theotherp/nzbhydra2) <small>(index aggregator)</small>
```shell
kubectl apply -f services/PVR/deploy-NZBHydra.yml
```

###### 4.9.2) [Plex](https://www.plex.tv/) <small>(media library)</small>
*Due to usage of symlinks, partially incompatible with SMB-share-backed storage*
```shell
kubectl apply -f services/PVR/deploy-Plex.yml
```
After deploying, the Plex server needs to be *claimed* (i.e. assigned to a Plex account):
```shell
kubectl get endpoints plex -n pvr
```
Browse to the respective IP address (`http://<nodeipaddress>:32400/web`) and follow the instructions.

###### 4.9.3) [Radarr](https://radarr.video/) <small>(movie management)</small>
```shell
kubectl apply -f services/PVR/deploy-Radarr.yml
```

###### 4.9.4) [Readarr](https://readarr.com/) <small>(book management)</small>
```shell
kubectl apply -f services/PVR/deploy-Readarr.yml
```

###### 4.9.5) [SABnzbd](https://sabnzbd.org/) <small>(download client)</small>
```shell
kubectl apply -f services/PVR/deploy-SABnzbd.yml
```

###### 4.9.6) [Sonarr](https://sonarr.tv/) <small>(tv management)</small>
```shell
kubectl apply -f services/PVR/deploy-Sonarr.yml
```

##### 4.10) [Shaarli](https://github.com/shaarli/Shaarli) <small>(bookmarks/notes)</small>
```shell
kubectl apply -f services/Shaarli/deploy-Shaarli.yml
```

##### 4.11) [Theia](https://theia-ide.org/) <small>(web IDE)</small>
```shell
kubectl apply -f services/Theia/deploy-Theia.yml
```

##### 4.12) [Traefik-Certs-Dumper](https://github.com/ldez/traefik-certs-dumper) <small>(certificate tooling)</small>
```shell
kubectl apply -f services/TraefikCertsDumper/deploy-TraefikCertsDumper.yml
```

##### 4.13) [Unifi-Controller](https://www.ui.com/) <small>(wlan AP management)</small>
```shell
kubectl apply -f services/Unifi/deploy-Unifi.yml
```
*Change STUN port to non-default:*
```shell
kubectl exec --namespace unifi -it unifi-<uuid> -- /bin/bash
sed -e 's/# unifi.stun.port=3478/unifi.stun.port=3479/' -i /data/system.properties
exit
kubectl rollout restart deployment --namespace unifi unifi
```
*Update STUN URL on devices:* <small>doesn't seem to work</small>
```shell
ssh <username>@<ipaddress>
sed -e 's|stun://<ipaddress>|stun://<ipaddress>:3479|' -i /etc/persistent/cfg/mgmt
```

Traefik Helm chart values excerpt (presumably `ingress/Traefik2.x/chart-values.yml`), pinning the proxied image:
```yaml
image:
  name: bv11-cr01.bessems.eu/proxy/library/traefik
  # tag: 2.3.2
  tag: 2.3.7

ports:
  rtmp:
```