
TODO: Files with sensitive data; migrate to SealedSecret

# line ??: services/TfState/deploy-TfState.yml
# line ??: services/Mastodon/deploy-Mastodon.yml
# line ??: services/PVR/deploy-SpotWeb.yml

Kubernetes.K3s.installLog

3 VMs provisioned with Ubuntu Server 18.04

Additional LVM configuration:
pvdisplay
pvcreate /dev/sdb
vgdisplay
vgcreate longhorn-vg /dev/sdb
lvdisplay
lvcreate -l 100%FREE -n longhorn-lv longhorn-vg
ls /dev/mapper
mkfs.ext4 /dev/mapper/longhorn--vg-longhorn--lv
#! add "UUID=<uuid> /mnt/blockstorage ext4 defaults 0 0" to /etc/fstab
mkdir /mnt/blockstorage
mount -a
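
The UUID referenced in the fstab entry can be read from the freshly created filesystem:

blkid /dev/mapper/longhorn--vg-longhorn--lv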

K3s cluster

On the first node:

curl -sfL https://get.k3s.io | sh -s - --disable local-path,traefik
cat /var/lib/rancher/k3s/server/token
kubectl config view --raw
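
To manage the cluster from a workstation, copy the kubeconfig over and point it at the node's address (a sketch; the user and target path are assumptions, /etc/rancher/k3s/k3s.yaml is k3s' default location):

scp <user>@<fqdn or ip>:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -e 's/127.0.0.1/<fqdn or ip>/' -i ~/.kube/config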

On subsequent nodes:

curl -sfL https://get.k3s.io | K3S_URL=https://<fqdn or ip>:6443 K3S_TOKEN=<value from master> sh -
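
Once the agents have joined, all nodes should report Ready:

kubectl get nodes -o wide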

0) Configure automatic updates

Install Rancher's System Upgrade Controller:

kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml

Apply a server (master node) and agent (worker node) plan:

kubectl apply -f system/UpgradeController/plan-Server.yml -f system/UpgradeController/plan-Agent.yml
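
For reference, a minimal sketch of what the server plan can look like, based on the upstream k3s-upgrade examples (the actual files in system/UpgradeController/ may differ):

apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/master
        operator: In
        values: ["true"]
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable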

1) Persistent storage

1.1) storageClass for SMB (CIFS):

See https://github.com/kubernetes-csi/csi-driver-smb:

curl -skSL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/deploy/install-driver.sh | bash -s master --

Store credentials in secret:

kubectl create secret generic smb-credentials --from-literal username=<<omitted>> --from-literal domain=<<omitted>> --from-literal password=<<omitted>>
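
A storageClass referencing that secret looks roughly as follows; the share path and mount options are placeholders (see the csi-driver-smb docs for all parameters):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb
provisioner: smb.csi.k8s.io
parameters:
  source: //<fileserver>/<share>
  csi.storage.k8s.io/node-stage-secret-name: smb-credentials
  csi.storage.k8s.io/node-stage-secret-namespace: default
mountOptions:
  - dir_mode=0777
  - file_mode=0777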

1.2) flexVolume for SMB (CIFS):

curl -Ls https://raw.githubusercontent.com/juliohm1978/kubernetes-cifs-volumedriver/master/install.yaml -o storage/flexVolSMB/daemonSet-flexVolSMB.yml

Override the driver name to something more sensible (see storage/flexVolSMB/daemonSet-flexVolSMB.yml):

spec:
  template:
    spec:
      containers:
        - image: juliohm/kubernetes-cifs-volumedriver-installer:2.0
          ...
          env:
            - name: VENDOR
              value: mount
            - name: DRIVER
              value: smb
          ...

Perform installation:

kubectl apply -f storage/flexVolSMB/daemonSet-flexVolSMB.yml

Wait for the installation to complete (check the logs of all installer pods), then delete the daemonSet:

kubectl delete -f storage/flexVolSMB/daemonSet-flexVolSMB.yml

Store credentials in secret:

kubectl create secret generic --type=mount/smb smb-secret --from-literal=username=<<omitted>> --from-literal=password=<<omitted>>
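
A persistentVolume consuming this driver then references the secret roughly as follows (server, share and mount options are placeholders; field names follow the upstream example):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: smb-share
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  flexVolume:
    driver: mount/smb
    secretRef:
      name: smb-secret
    options:
      server: <fileserver>
      share: /<share>
      opts: uid=1000,gid=1000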

1.3) storageClass for distributed block storage:

See Longhorn Helm Chart:

kubectl create namespace longhorn-system
helm repo add longhorn https://charts.longhorn.io
helm install longhorn longhorn/longhorn --namespace longhorn-system --values=storage/Longhorn/chart-values.yml

Expose Longhorn's dashboard through IngressRoute:

kubectl apply -f storage/Longhorn/ingressRoute-Longhorn.yml
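
A sketch of what such an IngressRoute contains (the hostname is a placeholder; longhorn-frontend:80 is the dashboard service deployed by the chart):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: longhorn
  namespace: longhorn-system
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`longhorn.example.com`)
      kind: Rule
      services:
        - name: longhorn-frontend
          port: 80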

Log on to the web interface and delete the default disks on each node (mounted at /var/lib/longhorn) and replace them with new disks mounted at /mnt/blockstorage.

Add an additional storageClass with a backup schedule. After specifying an NFS backup target (syntax: nfs://servername:/path/to/share) through Longhorn's dashboard, apply:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-dailybackup
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
  fromBackup: ""
  recurringJobs: '[{"name":"backup", "task":"backup", "cron":"0 0 * * *", "retain":14}]'

Then make this the new default storageClass:

kubectl patch storageclass longhorn-dailybackup -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
#kubectl delete storageclass longhorn
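
Verify that longhorn-dailybackup is now the only storageClass marked as (default):

kubectl get storageclass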

2) Ingress Controller

2.1) Create configMap, secret and persistentVolumeClaim

The configMap contains Traefik's static and dynamic config:

kubectl apply -f ingress/Traefik2.x/configMap-Traefik.yml

The secret contains credentials for Cloudflare's API:

kubectl create secret generic traefik-cloudflare --from-literal=CF_API_EMAIL=<<omitted>> --from-literal=CF_API_KEY=<<omitted>> --namespace kube-system

The persistentVolumeClaim will contain /data/acme.json (referenced as existingClaim):

kubectl apply -f ingress/Traefik2.x/persistentVolumeClaim-Traefik.yml
2.2) Install Helm Chart

See Traefik 2.x Helm Chart:

helm repo add traefik https://containous.github.io/traefik-helm-chart
helm repo update
helm install traefik traefik/traefik --namespace kube-system --values=ingress/Traefik2.x/chart-values.yml
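
The referenced chart-values.yml wires the secret and persistentVolumeClaim from 2.1 into the chart; roughly as follows (exact keys depend on the chart version, and the claim and resolver names are assumptions):

persistence:
  enabled: true
  existingClaim: traefik
  path: /data
envFrom:
  - secretRef:
      name: traefik-cloudflare
additionalArguments:
  - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
  - --certificatesresolvers.letsencrypt.acme.storage=/data/acme.json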
2.3) Replace IngressRoute for Traefik's dashboard:
kubectl apply -f ingress/Traefik2.x/ingressRoute-Traefik.yaml
kubectl delete ingressroute traefik-dashboard --namespace kube-system
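
The replacement routes to Traefik's internal API service; a sketch of what it might contain (the hostname is a placeholder):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: kube-system
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik.example.com`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService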

3) Secret management

Prerequisite: the latest kubeseal release

3.1) Install Helm Chart

See Bitnami Sealed Secrets:

helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm repo update
helm install sealed-secrets-controller -n kube-system sealed-secrets/sealed-secrets

Fix the service name (remove name: http; see #502):

kubectl edit service -n kube-system sealed-secrets-controller
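
New sealedSecrets (e.g. for the files listed in the TODO above) can then be generated by piping a regular secret through kubeseal; a sketch with placeholder names:

kubectl create secret generic <name> --from-literal=<key>=<value> --dry-run=client -o yaml \
  | kubeseal --controller-name sealed-secrets-controller --controller-namespace kube-system --format yaml \
  > sealedSecret-<Name>.yml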

4) Services

4.1) Adminer (SQL management)
kubectl apply -f services/Adminer/configMap-Adminer.yml
kubectl apply -f services/Adminer/deploy-Adminer.yml
kubectl apply -f services/Adminer/sealedSecret-Adminer.yml
4.2) Vaultwarden (password manager)

Requires the mount.cifs option nobrl

kubectl apply -f services/Bitwarden/deploy-Bitwarden.yml
kubectl apply -f services/Bitwarden/sealedSecret-Bitwarden.yml
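
Assuming the volume uses the flexVolume driver from 1.2, nobrl is added alongside the other mount options in the volume's opts field (values are illustrative):

options:
  opts: nobrl,uid=1000,gid=1000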
4.3) DDclient (dynamic dns)
kubectl apply -f services/DDclient/deploy-DDclient.yml
kubectl apply -f services/DDclient/sealedSecret-DDclient.yml
4.4) DroneCI (continuous delivery)
kubectl apply -f services/DroneCI/deploy-DroneCI.yml
kubectl apply -f services/DroneCI/sealedSecret-DroneCI.yml
4.5) Gitea (git repository)
kubectl apply -f services/Gitea/deploy-Gitea.yml
4.6) Gotify (notifications)
kubectl apply -f services/Gotify/deploy-Gotify.yml
4.7) Guacamole (remote desktop gateway)

Requires specifying a uid & gid in both the securityContext of the MySQL container and the persistentVolume

kubectl apply -f services/Guacamole/deploy-Guacamole.yml
kubectl apply -f services/Guacamole/sealedSecret-Guacamole.yml
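
The uid & gid requirement boils down to a fragment like the following in deploy-Guacamole.yml, with matching uid/gid mount options on the persistentVolume (999 is illustrative):

containers:
  - name: mysql
    securityContext:
      runAsUser: 999
      runAsGroup: 999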

Wait for the included containers to start, then perform the following commands to initialize the database:

kubectl exec -i guacamole-<pod-id> --container guacamole -- /opt/guacamole/bin/initdb.sh --mysql > initdb.sql
kubectl exec -i guacamole-<pod-id> --container mysql -- mysql -uguacamole -pguacamole guacamole < initdb.sql
kubectl rollout restart deployment guacamole
4.8) Lighttpd (webserver)

Serves various semi-containerized websites; the respective web content is stored on a file share

kubectl apply -f services/Lighttpd/configMap-Lighttpd.yml
kubectl apply -f services/Lighttpd/deploy-Lighttpd.yml
kubectl apply -f services/Lighttpd/cronJob-Spotweb.yml
4.9) PVR namespace (automated media management)

Containers use shared resources to be able to interact with downloaded files
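
The commands below assume the pvr namespace already exists; if it does not, create it first:

kubectl create namespace pvr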

kubectl create secret generic --type=mount/smb smb-secret --from-literal=username=<<omitted>> --from-literal=password=<<omitted>> -n pvr
kubectl apply -f services/PVR/persistentVolumeClaim-PVR.yml
kubectl apply -f services/PVR/storageClass-PVR.yml
4.9.1) Overseerr (request management)
kubectl apply -f services/PVR/deploy-Overseerr.yml
4.9.2) Plex (media library)

Due to usage of symlinks, partially incompatible with SMB-share-backed storage

kubectl apply -f services/PVR/deploy-Plex.yml

After deploying, the Plex server needs to be claimed (i.e. assigned to a Plex account):

kubectl get endpoints plex -n pvr

Browse to the respective IP address (http://<ip address>:32400/web) and follow the instructions.

4.9.3) Prowlarr (indexer management)
kubectl apply -f services/PVR/deploy-Prowlarr.yml
4.9.4) Radarr (movie management)
kubectl apply -f services/PVR/deploy-Radarr.yml
4.9.5) Readarr (book management)
kubectl apply -f services/PVR/deploy-Readarr.yml
4.9.6) SABnzbd (download client)
kubectl apply -f services/PVR/deploy-SABnzbd.yml
4.9.7) Sonarr (tv management)
kubectl apply -f services/PVR/deploy-Sonarr.yml
4.10) Shaarli (bookmarks/notes)
kubectl apply -f services/Shaarli/deploy-Shaarli.yml
4.11) Traefik-Certs-Dumper (certificate tooling)
kubectl apply -f services/TraefikCertsDumper/deploy-TraefikCertsDumper.yml
4.12) Unifi-Controller (wlan AP management)
kubectl apply -f services/Unifi/deploy-Unifi.yml

Change STUN port to non-default:

kubectl exec --namespace unifi -it unifi-<uuid> -- /bin/bash
sed -e 's/# unifi.stun.port=3478/unifi.stun.port=3479/' -i /data/system.properties
exit
kubectl rollout restart deployment --namespace unifi unifi

Update the STUN URL on the devices (doesn't seem to work):

ssh <username>@<ipaddress>
sed -e 's|stun://<ipaddress>|stun://<ipaddress>:3479|' -i /etc/persistent/cfg/mgmt

5) Miscellaneous

Various notes/useful links

  • Replacement for not-yet-deprecated kubectl get all -A:

    kubectl get $(kubectl api-resources --verbs=list -o name | paste -sd, -) --ignore-not-found --all-namespaces
    
  • DaemonSet to configure nodes' sysctl fs.inotify.max-user-watches:

    kubectl apply -f system/InotifyMaxWatchers/daemonSet-InotifyMaxWatchers.yml
    
  • Debug DNS lookups within the cluster:

    kubectl run -it --rm dnsutils --restart=Never --image=gcr.io/kubernetes-e2e-test-images/dnsutils -- nslookup [-debug] [fqdn]
    

    or

    kubectl run -it --rm busybox --restart=Never --image=busybox:1.28 -- nslookup api.github.com [-debug] [fqdn]
    
  • Delete namespaces stuck in Terminating state:

    kubectl get namespace <name> -o json | jq -j '.spec.finalizers=null' > tmp.json
    kubectl replace --raw "/api/v1/namespaces/<name>/finalize" -f ./tmp.json
    rm ./tmp.json