Compare commits: `94ec6be3ac...dc280c06ea` (2 commits: `dc280c06ea`, `0cf244959d`)

Changed file: `README.md` (35 lines changed)
@@ -118,26 +118,6 @@ helm install longhorn longhorn/longhorn --namespace longhorn-system --create-nam

Log on to the web interface and delete the default disks on each node (mounted at `/var/lib/longhorn`), then replace them with new disks mounted at `/mnt/blockstorage`.

Add an additional `storageClass` with a backup schedule:

***After** specifying an NFS backup target (syntax: `nfs://servername:/path/to/share`) through Longhorn's dashboard*

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-dailybackup
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
  fromBackup: ""
  recurringJobs: '[{"name":"backup", "task":"backup", "cron":"0 0 * * *", "retain":14}]'
```

Then make this the new default `storageClass`:

```shell
kubectl patch storageclass longhorn-dailybackup -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
#kubectl delete storageclass longhorn
```
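To confirm the new default class actually provisions volumes, a throwaway PVC can be created; this is a minimal sketch (the name `test-pvc` is an arbitrary example, not part of the repo):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc  # hypothetical name, anything works
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # no storageClassName set on purpose: the claim should bind through the
  # new default class, longhorn-dailybackup
```

If `kubectl get pvc test-pvc` shows the claim `Bound` with storage class `longhorn-dailybackup`, the default-class annotation took effect.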
### 3) Ingress Controller

Reconfigure the default Traefik configuration:

@@ -161,10 +141,6 @@ kubectl get secret -n argocd argocd-initial-admin-secret -o jsonpath='{.data.pas
```

Log in with username `admin` and the initial password, browse to `User Info` and `Update Password`.

Create the ArgoCD applicationset:

```shell
kubectl apply -f system/ArgoCD/applicationset-homelab.yml
```
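The contents of `applicationset-homelab.yml` are repo-specific and not shown here; purely as an illustration of the resource kind being applied, a minimal ApplicationSet with a Git directory generator could look like this (every name, URL, and path below is an assumption, not the actual file):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: homelab          # assumed name
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://example.com/homelab.git   # placeholder repo URL
        revision: HEAD
        directories:
          - path: services/*                       # assumed layout
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://example.com/homelab.git   # placeholder repo URL
        targetRevision: HEAD
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
```

The directory generator stamps out one Argo CD `Application` per matching directory, which is the usual way a single manifest manages many services.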
### 5) Services

##### 5.1) [Argus]() <small>(release management)</small>

```shell
@@ -261,15 +237,4 @@ kubectl apply -f services/PVR/deploy-Sonarr.yml

or

```shell
kubectl run -it --rm busybox --restart=Never --image=busybox:1.28 -- nslookup api.github.com [-debug] [fqdn]
```

* Delete namespaces stuck in the `Terminating` state:

  *First* check whether there are any resources still present, preventing the namespace from being deleted:

  ```shell
  kubectl api-resources --verbs=list --namespaced -o name \
    | xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace>
  ```

  Any resources returned should be deleted first. Worth mentioning: if you get the error `error: unable to retrieve the complete list of server APIs`, check `kubectl get apiservice` for any apiservice with a status of `False`.
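  A quick way to surface only the broken apiservices is to filter out the healthy rows; this is a sketch (the pattern simply drops lines whose `AVAILABLE` column reads `True`):

  ```shell
  # List apiservices whose AVAILABLE column is not True
  kubectl get apiservice | grep -v ' True '
  ```

  Deleting or repairing the backing service of any `False` entry usually unblocks resource listing again.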
  If there are no resources left in the namespace and it is still stuck *terminating*, the following commands remove the blocking finalizer (this is a last resort: you are bypassing protections put in place to prevent orphaned resources):

  ```shell
  kubectl get namespace <namespace> -o json | jq -j '.spec.finalizers=null' > tmp.json
  kubectl replace --raw "/api/v1/namespaces/<namespace>/finalize" -f ./tmp.json
  rm ./tmp.json
  ```
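  The same can be done without the temporary file by piping straight into the finalize subresource (`kubectl replace -f -` reads from stdin); the same last-resort caveats apply:

  ```shell
  kubectl get namespace <namespace> -o json \
    | jq '.spec.finalizers = null' \
    | kubectl replace --raw "/api/v1/namespaces/<namespace>/finalize" -f -
  ```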
@@ -11,7 +11,8 @@ spec:
   sources:
     - repoURL: https://dl.gitea.com/charts/
       chart: gitea
-      targetRevision: 10.6.0
+      # targetRevision: 10.6.0
+      targetRevision: 11.0.0
       helm:
         valueFiles:
           - $values/services/Gitea/values.yaml
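Before bumping `targetRevision` like this, the published chart versions can be checked with Helm; the local repo alias `gitea-charts` below is an assumption, any alias works:

```shell
# Add the Gitea chart repo under a local alias and list published versions
helm repo add gitea-charts https://dl.gitea.com/charts/
helm repo update
helm search repo gitea-charts/gitea --versions
```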