# Kubernetes.K3s.installLog

*3 VMs provisioned with Ubuntu Server 22.04*
Additional LVM configuration:

```shell
pvdisplay
pvcreate /dev/sdb
vgdisplay
vgcreate longhorn-vg /dev/sdb
lvdisplay
lvcreate -l 100%FREE -n longhorn-lv longhorn-vg
ls /dev/mapper
mkfs.ext4 /dev/mapper/longhorn--vg-longhorn--lv
# add "UUID=<uuid> /mnt/blockstorage ext4 defaults 0 0" to /etc/fstab
mkdir /mnt/blockstorage
mount -a
```
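Before handing the new volume to Longhorn (section 2.3), it can be worth verifying the result; a minimal check, assuming the extra disk is `/dev/sdb` as above:

```shell
# Show the LVM stack and filesystem on the new disk
lsblk -f /dev/sdb
# Print the filesystem UUID to copy into /etc/fstab
blkid /dev/mapper/longhorn--vg-longhorn--lv
# Confirm the volume is actually mounted at /mnt/blockstorage
findmnt /mnt/blockstorage
```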
## K3s cluster

On the first node (replace `<vip address>` with the correct value):

```shell
curl -sfL https://get.k3s.io | sh -s - server --cluster-init --disable local-storage,servicelb --tls-san <vip address>
cat /var/lib/rancher/k3s/server/token
kubectl config view --raw
```

Install kube-vip (replace `<interface>` and `<vip address>` with the correct values):

```shell
ctr image pull ghcr.io/kube-vip/kube-vip:latest
cat << EOF > /var/lib/rancher/k3s/server/manifests/kube-vip.yml
$(curl https://kube-vip.io/manifests/rbac.yaml)
---
$(ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:latest vip /kube-vip manifest daemonset --interface <interface> --address <vip address> --inCluster --taint --controlplane --services --arp --leaderElection)
EOF
```

On subsequent nodes (replace `<vip address>` and `<token>` with the correct values):

```shell
curl -sfL https://get.k3s.io | K3S_URL=https://<vip address>:6443 K3S_TOKEN=<token> sh -s - server --disable local-storage,servicelb
```

### 0) Configure automatic updates

Install Rancher's [System Upgrade Controller](https://rancher.com/docs/k3s/latest/en/upgrades/automated/):

```shell
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
```

Apply a [server (master node)](https://code.spamasaurus.com/djpbessems/Kubernetes.K3s.installLog/src/branch/master/system/UpgradeController/plan-Server.yml) ~~and [agent (worker node)](https://code.spamasaurus.com/djpbessems/Kubernetes.K3s.installLog/src/branch/master/system/UpgradeController/plan-Agent.yml)~~ plan:

```shell
kubectl apply -f system/UpgradeController/plan-Server.yml # -f system/UpgradeController/plan-Agent.yml
```

### 1) Secret management

*Prereq*: latest `kubeseal` [release](https://github.com/bitnami-labs/sealed-secrets/releases)

##### 1.1) Install Helm Chart

See [Bitnami Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets#helm-chart):

```shell
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm repo update
helm install sealed-secrets-controller -n kube-system sealed-secrets/sealed-secrets
```

Retrieve the public/private keys (*store these in a **secure** location!*):

```shell
kubectl get secret -n kube-system -l sealedsecrets.bitnami.com/sealed-secrets-key -o yaml > BitnamiSealedSecrets.masterkey.yml
```
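The various `sealedSecret-*.yml` manifests referenced throughout this log are generated with `kubeseal`; a minimal sketch (the secret name and literal below are hypothetical):

```shell
# Render a secret locally (--dry-run=client: nothing is applied to the cluster) and
# encrypt it with the controller's public key; only the sealed output is safe to commit
kubectl create secret generic example --from-literal=password=changeme --dry-run=client -o yaml \
  | kubeseal --controller-name sealed-secrets-controller --controller-namespace kube-system --format yaml \
  > sealedSecret-Example.yml
```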
### 2) Persistent storage

#### 2.1) `storageClass` for SMB (CIFS)

See https://github.com/kubernetes-csi/csi-driver-smb:

```shell
curl -skSL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/deploy/install-driver.sh | bash -s master --
```

Store credentials in a `secret`:

```shell
kubectl apply -f storage/csi-driver-smb/sealedSecret-CSIdriverSMB.yml
```

#### 2.2) `flexVolume` for SMB (CIFS)

```shell
curl -Ls https://raw.githubusercontent.com/juliohm1978/kubernetes-cifs-volumedriver/master/install.yaml -o storage/flexVolSMB/daemonSet-flexVolSMB.yml
```

Override the drivername to something more sensible (see [storage/flexVolSMB/daemonSet-flexVolSMB.yml](https://code.spamasaurus.com/djpbessems/Kubernetes.K3s.installLog/src/branch/master/storage/flexVolSMB/daemonSet-flexVolSMB.yml)):

```yaml
spec:
  template:
    spec:
      containers:
        - image: juliohm/kubernetes-cifs-volumedriver-installer:2.0
          ...
          env:
            - name: VENDOR
              value: mount
            - name: DRIVER
              value: smb
          ...
```

Perform the installation:

```shell
kubectl apply -f storage/flexVolSMB/daemonSet-flexVolSMB.yml
```

Wait for the installation to complete (check the logs of all installer pods), then pause the `daemonSet`:

```shell
kubectl patch daemonset juliohm-cifs-volumedriver-installer -p '{"spec": {"template": {"spec": {"nodeSelector": {"intentionally-paused": ""}}}}}'
```

Store credentials in a `secret`:

```shell
kubectl apply -f storage/flexVolSMB/sealedSecret-flexVolSMB.yml
```

#### 2.3) `storageClass` for distributed block storage

See [Longhorn Helm Chart](https://longhorn.io/):

```shell
helm repo add longhorn https://charts.longhorn.io && helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace --values=storage/Longhorn/chart-values.yml
```

Log on to the web interface, delete the default disks on each node (mounted at `/var/lib/longhorn`) and replace them with new disks mounted at `/mnt/blockstorage`.

Add an additional `storageClass` with a backup schedule:

***After** specifying an NFS backup target (syntax: `nfs://servername:/path/to/share`) through Longhorn's dashboard*

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-dailybackup
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
  fromBackup: ""
  recurringJobs: '[{"name":"backup", "task":"backup", "cron":"0 0 * * *", "retain":14}]'
```

Then make this the new default `storageClass`:

```shell
kubectl patch storageclass longhorn-dailybackup -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
#kubectl delete storageclass longhorn
```

### 3) Ingress Controller

Reconfigure the default Traefik configuration; see [Traefik 2.x Helm Chart](https://github.com/traefik/traefik-helm-chart) and [HelmChartConfig](https://docs.k3s.io/helm):

```shell
kubectl apply -f ingress/Traefik2.x/helmchartconfig-traefik.yaml
```

### 4) GitOps

##### 4.1) Install Helm Chart

See [ArgoCD](https://argo-cd.readthedocs.io/en/stable/getting_started/#getting-started):

```shell
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argo-cd -n argo-cd --create-namespace argo/argo-cd --values system/ArgoCD/chart-values.yml
```

Retrieve the initial password:

```shell
kubectl get secret -n argo-cd argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d; echo
```

Log in with username `admin` and the initial password, then browse to `User Info` and `Update Password`.
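Alternatively, the password can be changed from the command line; an optional sketch, assuming the `argocd` CLI is installed and the API server is reachable (the hostname below is a placeholder):

```shell
# Log in with the initial credentials (prompts for the password), then set a new one
argocd login argo-cd.example.com --username admin
argocd account update-password
```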
Create the ArgoCD `ApplicationSet`:

```shell
kubectl apply -f system/ArgoCD/applicationset-homelab.yml
```

### 5) Services

##### 5.1) [Adminer](https://www.adminer.org/) (SQL management)

```shell
kubectl apply -f services/Adminer/configMap-Adminer.yml
kubectl apply -f services/Adminer/deploy-Adminer.yml
kubectl apply -f services/Adminer/sealedSecret-Adminer.yml
```

##### 5.2) [Vaultwarden](https://github.com/dani-garcia/vaultwarden) (password manager)

*Requires [mount.cifs](https://linux.die.net/man/8/mount.cifs)'s option `nobrl`*

```shell
kubectl apply -f services/Bitwarden/deploy-Bitwarden.yml
kubectl apply -f services/Bitwarden/sealedSecret-Bitwarden.yml
```

##### 5.3) [DDclient](https://github.com/linuxserver/docker-ddclient) (dynamic DNS)

```shell
kubectl apply -f services/DDclient/deploy-DDclient.yml
kubectl apply -f services/DDclient/sealedSecret-DDclient.yml
```

##### 5.4) [DroneCI](https://drone.io/) (continuous delivery)

```shell
kubectl apply -f services/DroneCI/deploy-DroneCI.yml
kubectl apply -f services/DroneCI/sealedSecret-DroneCI.yml
```

##### 5.5) [Gitea](https://gitea.io/) (git repository)

```shell
kubectl apply -f services/Gitea/deploy-Gitea.yml
```

##### 5.6) [Gotify](https://gotify.net/) (notifications)

```shell
kubectl apply -f services/Gotify/deploy-Gotify.yml
```

##### 5.7) [Guacamole](https://guacamole.apache.org/doc/gug/guacamole-docker.html) (remote desktop gateway)

*Requires specifying a `uid` & `gid` in both the `securityContext` of the MySQL container and the `persistentVolume`*

```shell
kubectl apply -f services/Guacamole/deploy-Guacamole.yml
kubectl apply -f services/Guacamole/sealedSecret-Guacamole.yml
```

Wait for the included containers to start, then perform the following commands to initialize the database (replace `<pod-id>` with the correct value):

```shell
kubectl exec -i guacamole-<pod-id> --container guacamole -- /opt/guacamole/bin/initdb.sh --mysql > initdb.sql
kubectl exec -i guacamole-<pod-id> --container mysql -- mysql -uguacamole -pguacamole guacamole < initdb.sql
kubectl rollout restart deployment guacamole
```
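Rather than copying the pod-id by hand, the pod name can be resolved first; a sketch, assuming the pods carry an `app=guacamole` label (verify against the actual deployment):

```shell
# Look up the current Guacamole pod name once and reuse it for both exec commands
POD=$(kubectl get pods -l app=guacamole -o jsonpath='{.items[0].metadata.name}')
kubectl exec -i "$POD" --container guacamole -- /opt/guacamole/bin/initdb.sh --mysql > initdb.sql
kubectl exec -i "$POD" --container mysql -- mysql -uguacamole -pguacamole guacamole < initdb.sql
```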
##### 5.8) [Lighttpd](https://www.lighttpd.net/) (webserver)

*Serves various semi-containerized websites; the respective web content is stored on a fileshare*

```shell
kubectl apply -f services/Lighttpd/configMap-Lighttpd.yml
kubectl apply -f services/Lighttpd/deploy-Lighttpd.yml
```

##### 5.9) PVR `namespace` (automated media management)

*Containers use shared resources to be able to interact with downloaded files*

```shell
kubectl create secret generic --type=mount/smb smb-secret --from-literal=username=<username> --from-literal=password=<password> -n pvr
kubectl apply -f services/PVR/persistentVolumeClaim-PVR.yml
kubectl apply -f services/PVR/storageClass-PVR.yml
```

###### 5.9.1) [Overseerr](https://overseerr.dev/) (request management)

```shell
kubectl apply -f services/PVR/deploy-Overseerr.yml
```

###### 5.9.2) [Plex](https://www.plex.tv/) (media library)

*Due to its use of symlinks, partially incompatible with SMB-share-backed storage*

```shell
kubectl apply -f services/PVR/deploy-Plex.yml
```

After deploying, the Plex server needs to be *claimed* (i.e. assigned to a Plex account):

```shell
kubectl get endpoints plex -n pvr
```

Browse to the respective IP address (`http://<ip address>:32440/web`) and follow the instructions.

###### 5.9.3) [Prowlarr](https://github.com/Prowlarr/Prowlarr) (indexer management)

```shell
kubectl apply -f services/PVR/deploy-Prowlarr.yml
```

###### 5.9.4) [Radarr](https://radarr.video/) (movie management)

```shell
kubectl apply -f services/PVR/deploy-Radarr.yml
```

###### 5.9.5) [Readarr](https://readarr.com/) (book management)

```shell
kubectl apply -f services/PVR/deploy-Readarr.yml
```

###### 5.9.6) [SABnzbd](https://sabnzbd.org/) (download client)

```shell
kubectl apply -f services/PVR/deploy-SABnzbd.yml
```

###### 5.9.7) [Sonarr](https://sonarr.tv/) (tv management)

```shell
kubectl apply -f services/PVR/deploy-Sonarr.yml
```

##### 5.10) [Shaarli](https://github.com/shaarli/Shaarli) (bookmarks/notes)

```shell
kubectl apply -f services/Shaarli/deploy-Shaarli.yml
```

##### 5.11) [Traefik-Certs-Dumper](https://github.com/ldez/traefik-certs-dumper) (certificate tooling)

```shell
kubectl apply -f services/TraefikCertsDumper/deploy-TraefikCertsDumper.yml
```

### 6) Miscellaneous

*Various notes/useful links*

* Replacement for the [not-yet-deprecated](https://github.com/kubernetes/kubectl/issues/151) `kubectl get all -A`:

  ```shell
  kubectl get $(kubectl api-resources --verbs=list -o name | paste -sd, -) --ignore-not-found --all-namespaces
  ```

* `DaemonSet` to configure the nodes' **sysctl** `fs.inotify.max_user_watches`:

  ```shell
  kubectl apply -f system/InotifyMaxWatchers/daemonSet-InotifyMaxWatchers.yml
  ```

* Debug DNS lookups within the cluster:

  ```shell
  kubectl run -it --rm dnsutils --restart=Never --image=gcr.io/kubernetes-e2e-test-images/dnsutils -- nslookup [-debug] [fqdn]
  ```

  or

  ```shell
  kubectl run -it --rm busybox --restart=Never --image=busybox:1.28 -- nslookup api.github.com
  ```

* Delete namespaces stuck in the `Terminating` state.

  *First*, check whether there are any resources still present that prevent the namespace from being deleted (replace `<namespace>` with the correct value):

  ```shell
  kubectl api-resources --verbs=list --namespaced -o name \
    | xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace>
  ```

  Any resources returned should be deleted first. Worth mentioning: if you get the error `error: unable to retrieve the complete list of server APIs`, check `kubectl get apiservice` for any apiservice with a status of `False`.

  If there are no resources left in the namespace and it is still stuck *terminating*, the following commands remove the blocking finalizer (this is a last resort; you are bypassing protections put in place to prevent zombie processes):

  ```shell
  kubectl get namespace <namespace> -o json | jq -j '.spec.finalizers=null' > tmp.json
  kubectl replace --raw "/api/v1/namespaces/<namespace>/finalize" -f ./tmp.json
  rm ./tmp.json
  ```
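The temporary file can also be avoided by piping the patched JSON straight back into the API; a sketch of the same last-resort operation, with `stuck-namespace` as a hypothetical name:

```shell
# One-shot variant: strip the finalizers and submit the result directly via stdin
NS=stuck-namespace
kubectl get namespace "$NS" -o json \
  | jq '.spec.finalizers=null' \
  | kubectl replace --raw "/api/v1/namespaces/$NS/finalize" -f -
```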