Compare commits


No commits in common. "dc280c06ea5ef22c64cfe8ed487b9203501406d0" and "94ec6be3ac8d1f74ac5fba0ef12d6547b54faee2" have entirely different histories.

2 changed files with 36 additions and 2 deletions


@@ -118,6 +118,26 @@ helm install longhorn longhorn/longhorn --namespace longhorn-system --create-nam
Log on to the web interface and delete the default disks on each node (mounted at `/var/lib/longhorn`) and replace them with new disks mounted at `/mnt/blockstorage`.
Add an additional `storageClass` with a backup schedule:
***After** specifying an NFS backup target (syntax: `nfs://servername:/path/to/share`) through Longhorn's dashboard*
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-dailybackup
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
  fromBackup: ""
  recurringJobs: '[{"name":"backup", "task":"backup", "cron":"0 0 * * *", "retain":14}]'
```
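The `recurringJobs` parameter embeds a JSON array inside a YAML string, which is easy to get wrong; a quick sanity check (a sketch, assuming `jq` is installed) confirms the value parses and carries the expected daily cron expression before you apply the manifest:

```shell
# Validate the recurringJobs value: it must be parseable JSON,
# and jq -e exits non-zero if the cron field is missing or null.
echo '[{"name":"backup", "task":"backup", "cron":"0 0 * * *", "retain":14}]' \
  | jq -er '.[0].cron'
```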
Then make this the new default `storageClass`:
```shell
kubectl patch storageclass longhorn-dailybackup -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
#kubectl delete storageclass longhorn
```
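If the patch is rejected, the inline JSON payload is a common culprit; a quick check (a sketch, assuming `jq` is installed) verifies the annotation key and value are well-formed before retrying:

```shell
# Extract the is-default-class annotation from the patch payload;
# jq -e exits non-zero if the key is missing or null.
echo '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' \
  | jq -er '.metadata.annotations["storageclass.kubernetes.io/is-default-class"]'
```

Afterwards, `kubectl get storageclass` should list `longhorn-dailybackup` with `(default)` next to its name.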
### 3) Ingress Controller
Reconfigure default Traefik configuration:
@@ -141,6 +161,10 @@ kubectl get secret -n argocd argocd-initial-admin-secret -o jsonpath='{.data.pas
```
Login with username `admin` and the initial password, browse to `User Info` and `Update Password`.
Create the ArgoCD `ApplicationSet`:
```shell
kubectl apply -f system/ArgoCD/applicationset-homelab.yml
```
### 5) Services
##### 5.1) [Argus]() <small>(release management)</small>
```shell
@@ -237,4 +261,15 @@ kubectl apply -f services/PVR/deploy-Sonarr.yml
or
kubectl run -it --rm busybox --restart=Never --image=busybox:1.28 -- nslookup api.github.com [-debug] [fqdn]
* Delete namespaces stuck in `Terminating` state:
*First* check whether there are any resources still present that prevent the namespace from being deleted:
kubectl api-resources --verbs=list --namespaced -o name \
| xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace>
Any resources returned should be deleted first (worth mentioning: if you get the error `error: unable to retrieve the complete list of server APIs`, check `kubectl get apiservice` for any APIService whose status is `False`)
If there are no resources left in the namespace and it is still stuck *terminating*, the following commands remove the blocking finalizer (this is a last resort: you are bypassing protections put in place to prevent zombie resources):
kubectl get namespace <namespace> -o json | jq -j '.spec.finalizers=null' > tmp.json
kubectl replace --raw "/api/v1/namespaces/<namespace>/finalize" -f ./tmp.json
rm ./tmp.json
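The `jq` filter above simply nulls out `.spec.finalizers`; its effect can be previewed safely on a made-up namespace object (no cluster required, `jq` assumed installed) before running it against the real one:

```shell
# Preview the finalizer-stripping filter on a minimal, hypothetical
# Namespace object; the real command pipes `kubectl get namespace`
# output through the same filter.
cat <<'EOF' | jq '.spec.finalizers=null'
{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"stuck-ns"},"spec":{"finalizers":["kubernetes"]}}
EOF
```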


@@ -11,8 +11,7 @@ spec:
sources:
- repoURL: https://dl.gitea.com/charts/
chart: gitea
-# targetRevision: 10.6.0
-targetRevision: 11.0.0
+targetRevision: 10.6.0
helm:
valueFiles:
- $values/services/Gitea/values.yaml