# Kubernetes.K3s.installLog

*3 VMs provisioned with Ubuntu Server 22.04*

<details><summary>additional lvm configuration</summary>

```shell
pvdisplay
pvcreate /dev/sdb
vgdisplay
vgcreate longhorn-vg /dev/sdb
lvdisplay
lvcreate -l 100%FREE -n longhorn-lv longhorn-vg
ls /dev/mapper
mkfs.ext4 /dev/mapper/longhorn--vg-longhorn--lv
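# the new filesystem's UUID (needed for the fstab entry below) can be retrieved with e.g.:
blkid /dev/mapper/longhorn--vg-longhorn--lv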
# add "UUID=<uuid> /mnt/blockstorage ext4 defaults 0 0" to /etc/fstab
mkdir /mnt/blockstorage
mount -a
```

</details>

## K3s cluster
On the first node (replace `<floating ip>` with the correct value):
```shell
curl -sfL https://get.k3s.io | sh -s - server --cluster-init --disable local-storage,servicelb --tls-san <floating ip>
cat /var/lib/rancher/k3s/server/token
kubectl config view --raw
```
Install kube-vip (replace `<interface name>` and `<floating ip>` with the correct values):
```shell
ctr image pull ghcr.io/kube-vip/kube-vip:latest
cat << EOF > /var/lib/rancher/k3s/server/manifests/kube-vip.yml
$(curl https://kube-vip.io/manifests/rbac.yaml)
---
$(ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:latest vip /kube-vip manifest daemonset --interface <interface name> --address <floating ip> --inCluster --taint --controlplane --services --arp --leaderElection)
EOF
```
On the subsequent nodes (replace `<floating ip>` and `<value from master>` with the correct values):
```shell
curl -sfL https://get.k3s.io | K3S_URL=https://<floating ip>:6443 K3S_TOKEN=<value from master> sh -s - server --disable local-storage,servicelb
```
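
Once all nodes have joined, a quick sanity check confirms the cluster and kube-vip are up (a minimal sketch; `kube-vip-ds` is the name kube-vip's manifest generator uses by default and may differ):
```shell
# all control-plane nodes should report Ready
kubectl get nodes -o wide
# kube-vip runs as a daemonset in kube-system
kubectl get daemonset -n kube-system kube-vip-ds
```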

### 0) Configure automatic updates
Install Rancher's [System Upgrade Controller](https://rancher.com/docs/k3s/latest/en/upgrades/automated/):
```shell
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
```
Apply a [server (master node)](https://code.spamasaurus.com/djpbessems/Kubernetes.K3s.installLog/src/branch/master/system/UpgradeController/plan-Server.yml) ~~and [agent (worker node)](https://code.spamasaurus.com/djpbessems/Kubernetes.K3s.installLog/src/branch/master/system/UpgradeController/plan-Agent.yml)~~ plan:
```shell
kubectl apply -f system/UpgradeController/plan-Server.yml # -f system/UpgradeController/plan-Agent.yml
```
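
For context, such a server plan roughly follows the upstream k3s example sketched below; the actual `plan-Server.yml` in this repository may differ:
```yaml
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - {key: node-role.kubernetes.io/control-plane, operator: Exists}
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable
```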

### 1) Secret management
*Prereq*: latest `kubeseal` [release](https://github.com/bitnami-labs/sealed-secrets/releases)

##### 1.1) Install Helm Chart
See [Bitnami Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets#helm-chart):
```shell
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm repo update
helm install sealed-secrets-controller -n kube-system sealed-secrets/sealed-secrets
```

Retrieve the public/private keys (*store these in a **secure** location!*):
```shell
kubectl get secret -n kube-system -l sealedsecrets.bitnami.com/sealed-secrets-key -o yaml > BitnamiSealedSecrets.masterkey.yml
```
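
The `sealedSecret-*.yml` manifests referenced in later steps can be (re)generated from plain secrets with `kubeseal`; a minimal sketch (secret name, namespace and literals are placeholders):
```shell
kubectl create secret generic example-secret -n default \
  --from-literal=username=<omitted> --dry-run=client -o yaml \
  | kubeseal --controller-name sealed-secrets-controller --controller-namespace kube-system --format yaml \
  > sealedSecret-example.yml
```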

### 2) Persistent storage

#### 2.1) `storageClass` for SMB (CIFS):
See https://github.com/kubernetes-csi/csi-driver-smb:
```shell
curl -skSL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/deploy/install-driver.sh | bash -s master --
```
Store credentials in a `secret`:
```shell
kubectl apply -f storage/csi-driver-smb/sealedSecret-CSIdriverSMB.yml
```
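
A `storageClass` for this driver roughly follows the upstream example sketched below; server, share and secret names are placeholders (the credentials `secret` being the one sealed above):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb
provisioner: smb.csi.k8s.io
parameters:
  source: //<servername>/<share>
  csi.storage.k8s.io/node-stage-secret-name: <secret name>
  csi.storage.k8s.io/node-stage-secret-namespace: <namespace>
mountOptions:
  - dir_mode=0777
  - file_mode=0777
```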

#### 2.2) `flexVolume` for SMB (CIFS):
```shell
curl -Ls https://raw.githubusercontent.com/juliohm1978/kubernetes-cifs-volumedriver/master/install.yaml -o storage/flexVolSMB/daemonSet-flexVolSMB.yml
```
Override the driver name with something more sensible (see [storage/flexVolSMB/daemonSet-flexVolSMB.yml](https://code.spamasaurus.com/djpbessems/Kubernetes.K3s.installLog/src/branch/master/storage/flexVolSMB/daemonSet-flexVolSMB.yml)):
```yaml
spec:
  template:
    spec:
      containers:
        - image: juliohm/kubernetes-cifs-volumedriver-installer:2.0
          ...
          env:
            - name: VENDOR
              value: mount
            - name: DRIVER
              value: smb
          ...
```
Perform the installation:
```shell
kubectl apply -f storage/flexVolSMB/daemonSet-flexVolSMB.yml
```
Wait for the installation to complete (check the logs of all installer pods), then pause the `daemonSet`:
```shell
kubectl patch daemonset juliohm-cifs-volumedriver-installer -p '{"spec": {"template": {"spec": {"nodeSelector": {"intentionally-paused": ""}}}}}'
```
Store credentials in a `secret`:
```shell
kubectl apply -f storage/flexVolSMB/sealedSecret-flexVolSMB.yml
```

#### 2.3) `storageClass` for distributed block storage:
See [Longhorn Helm Chart](https://longhorn.io/):
```shell
helm repo add longhorn https://charts.longhorn.io && helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace --values=storage/Longhorn/chart-values.yml
```
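
Before continuing, it is worth verifying that all Longhorn components have come up and that its `storageClass` is registered:
```shell
kubectl get pods -n longhorn-system
kubectl get storageclass
```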

Log on to the web interface and delete the default disks on each node (mounted at `/var/lib/longhorn`) and replace them with new disks mounted at `/mnt/blockstorage`.

Add an additional `storageClass` with a backup schedule:

***After** specifying an NFS backup target (syntax: `nfs://servername:/path/to/share`) through Longhorn's dashboard*
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-dailybackup
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
  fromBackup: ""
  recurringJobs: '[{"name":"backup", "task":"backup", "cron":"0 0 * * *", "retain":14}]'
```
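
Assuming the manifest above is saved in the repository (the path below is hypothetical), apply it:
```shell
kubectl apply -f storage/Longhorn/storageClass-longhorn-dailybackup.yml
```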

Then make this the new default `storageClass`:
```shell
kubectl patch storageclass longhorn-dailybackup -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
#kubectl delete storageclass longhorn
```

### 3) Ingress Controller
Reconfigure the default Traefik configuration:
See the [Traefik 2.x Helm Chart](https://github.com/traefik/traefik-helm-chart) and [HelmChartConfig](https://docs.k3s.io/helm):
```shell
kubectl apply -f ingress/Traefik2.x/helmchartconfig-traefik.yaml
```
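
Such a `HelmChartConfig` typically looks like the sketch below; the `valuesContent` shown is illustrative, not this repository's actual configuration:
```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--log.level=INFO"
```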

### 4) GitOps
##### 4.1) Install Helm Chart
See [ArgoCD](https://argo-cd.readthedocs.io/en/stable/getting_started/#getting-started):
```shell
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argo-cd -n argo-cd --create-namespace argo/argo-cd --values system/ArgoCD/chart-values.yml
```

Retrieve the initial password:
```shell
kubectl get secret -n argo-cd argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d; echo
```
Log in with username `admin` and the initial password, then browse to `User Info` and `Update Password`.

Create the ArgoCD `applicationSet`:
```shell
kubectl apply -f system/ArgoCD/applicationset-homelab.yml
```
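
An `applicationSet` of this kind typically uses a git directory generator; a hedged sketch (repository URL, paths and namespaces are placeholders, not the contents of `applicationset-homelab.yml`):
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: homelab
  namespace: argo-cd
spec:
  generators:
    - git:
        repoURL: https://<git server>/<repository>.git
        revision: HEAD
        directories:
          - path: services/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://<git server>/<repository>.git
        targetRevision: HEAD
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
```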

### 5) Services
##### 5.1) [Argus]() <small>(release management)</small>
```shell
kubectl apply -f services/Argus
```
##### 5.2) [Authelia](https://www.authelia.com/) <small>(single sign-on)</small>
```shell
kubectl apply -f services/Authelia
```
##### 5.3) [Vaultwarden](https://github.com/dani-garcia/vaultwarden) <small>(password manager)</small>
*Requires [mount.cifs](https://linux.die.net/man/8/mount.cifs)'s option `nobrl`*
```shell
kubectl apply -f services/Vaultwarden
```
##### 5.4) [DDclient](https://github.com/linuxserver/docker-ddclient) <small>(dynamic dns)</small>
```shell
kubectl apply -f services/DDclient
```
##### 5.5) [Gitea](https://gitea.io/) <small>(git repository)</small>
```shell
kubectl apply -f services/Gitea
```
##### 5.6) [Gotify](https://gotify.net/) <small>(notifications)</small>
```shell
kubectl apply -f services/Gotify
```
##### 5.7) [Guacamole](https://guacamole.apache.org/doc/gug/guacamole-docker.html) <small>(remote desktop gateway)</small>
*Requires specifying a `uid` & `gid` in both the `securityContext` of the db container and the `persistentVolume`*
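
A hedged illustration of that kind of override (the `uid`/`gid` values below are placeholders, not the values used in `services/Guacamole`); on the `persistentVolume` side the same ids would typically be passed as `uid=`/`gid=` mount options:
```yaml
spec:
  template:
    spec:
      containers:
        - name: db
          securityContext:
            runAsUser: 999
            runAsGroup: 999
```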

```shell
kubectl apply -f services/Guacamole
```
Wait for the included containers to start, then run the following commands to initialize the database:
```shell
kubectl exec -n guacamole -i guacamole-<pod-id> --container guacamole -- /opt/guacamole/bin/initdb.sh --postgresql > initdb.sql
kubectl exec -n guacamole -i guacamole-<pod-id> --container db -- psql -Uguacamole -f - < initdb.sql
kubectl rollout restart deployment -n guacamole guacamole
```

##### 5.8) [Lighttpd](https://www.lighttpd.net/) <small>(webserver)</small>
*Serves various semi-containerized websites; their respective webcontent is stored on a fileshare*
```shell
kubectl apply -f services/Lighttpd/configMap-Lighttpd.yml
kubectl apply -f services/Lighttpd/deploy-Lighttpd.yml
```
##### 5.9) PVR `namespace` <small>(automated media management)</small>
*The containers share resources in order to interact with the downloaded files*
```shell
kubectl create secret generic --type=mount/smb smb-secret --from-literal=username=<<omitted>> --from-literal=password=<<omitted>> -n pvr
kubectl apply -f services/PVR/persistentVolumeClaim-PVR.yml
kubectl apply -f services/PVR/storageClass-PVR.yml
```

###### 5.9.1) [Plex](https://www.plex.tv/) <small>(media library)</small>
*Due to its use of symlinks, partially incompatible with SMB-share-backed storage*
```shell
kubectl apply -f services/PVR/deploy-Plex.yml
```
After deploying, the Plex server needs to be *claimed* (i.e. assigned to a Plex account):
```shell
kubectl get endpoints plex -n pvr
```
Browse to the respective IP address (`http://<nodeipaddress>:32440/web`) and follow the instructions.
###### 5.9.2) [Prowlarr](https://github.com/Prowlarr/Prowlarr) <small>(indexer management)</small>
```shell
kubectl apply -f services/PVR/deploy-Prowlarr.yml
```
###### 5.9.3) [Radarr](https://radarr.video/) <small>(movie management)</small>
```shell
kubectl apply -f services/PVR/deploy-Radarr.yml
```
###### 5.9.4) [SABnzbd](https://sabnzbd.org/) <small>(download client)</small>
```shell
kubectl apply -f services/PVR/deploy-SABnzbd.yml
```
###### 5.9.5) [Sonarr](https://sonarr.tv/) <small>(tv management)</small>
```shell
kubectl apply -f services/PVR/deploy-Sonarr.yml
```

### 6) Miscellaneous
*Various notes/useful links*

* Replacement for the [not-yet-deprecated](https://github.com/kubernetes/kubectl/issues/151) `kubectl get all -A` (which does not actually list *all* resources):

      kubectl get $(kubectl api-resources --verbs=list -o name | paste -sd, -) --ignore-not-found --all-namespaces

* `DaemonSet` to configure the nodes' **sysctl** `fs.inotify.max_user_watches`:

      kubectl apply -f system/InotifyMaxWatchers/daemonSet-InotifyMaxWatchers.yml

* Debug DNS lookups within the cluster:

      kubectl run -it --rm dnsutils --restart=Never --image=gcr.io/kubernetes-e2e-test-images/dnsutils -- nslookup [-debug] [fqdn]

  or

      kubectl run -it --rm busybox --restart=Never --image=busybox:1.28 -- nslookup api.github.com [-debug] [fqdn]

* Delete namespaces stuck in the `Terminating` state:

  *First* check whether there are any resources still present that are preventing the namespace from being deleted:

      kubectl api-resources --verbs=list --namespaced -o name \
        | xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace>

  Any resources returned should be deleted first (worth mentioning: if you get the error `error: unable to retrieve the complete list of server APIs`, check `kubectl get apiservice` for any apiservice with a status of `False`).

  If there are no resources left in the namespace and it is still stuck *terminating*, the following commands remove the blocking finalizer (this is a last resort; you are bypassing protections put in place to prevent orphaned resources):

      kubectl get namespace <namespace> -o json | jq -j '.spec.finalizers=null' > tmp.json
      kubectl replace --raw "/api/v1/namespaces/<namespace>/finalize" -f ./tmp.json
      rm ./tmp.json