295 Commits

Author SHA1 Message Date
5740faeb9d feat: Add cli binary
All checks were successful
Container & Helm chart / Linting (push) Successful in 7s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 1m9s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 34m49s
2024-06-15 19:48:17 +10:00
e057f313ea chore: Ensure api availability 2024-06-15 19:47:44 +10:00
ac38731dcf chore: Configure argo workflows permissions
All checks were successful
Container & Helm chart / Linting (push) Successful in 1m35s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 2m15s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 32m49s
2024-06-14 12:32:06 +10:00
9cbb84a0f3 chore: Remove redundant node template injection task
All checks were successful
Container & Helm chart / Linting (push) Successful in 6s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 1m12s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 31m55s
2024-06-12 22:10:38 +10:00
066ec9a967 chore: Remove redundant kustomize patch 2024-06-12 22:10:16 +10:00
dda14af238 chore: Refactor jq keys according to govc output
All checks were successful
Container & Helm chart / Linting (push) Successful in 6s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 58s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 38m38s
2024-06-12 17:01:12 +10:00
2db1c4d623 chore: Add deployment playbook
All checks were successful
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 49s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 40m27s
2024-06-12 12:28:58 +10:00
1451e8f105 chore: Create target namespaces proactively 2024-06-12 12:27:17 +10:00
baf809159b chore: Fix incorrect variable reference
All checks were successful
Container & Helm chart / Linting (push) Successful in 6s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 58s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 41m3s
2024-06-12 10:30:01 +10:00
066a21b1d2 chore: Align controller details with latest chart versions
All checks were successful
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 1m16s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 37m13s
2024-06-11 21:58:14 +10:00
46fe962e77 chore: Duplicate certificate provisioner w/ custom claims 2024-06-11 21:57:38 +10:00
74070f266c feat: Include new component argo workflows 2024-06-11 21:57:00 +10:00
20f28f7d8a chore: Correctly inject chart values
All checks were successful
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 51s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 35m41s
2024-06-11 12:00:45 +10:00
2802b49d02 chore: Fix incorrect task module 2024-06-11 12:00:08 +10:00
594e62cf71 feat: Remove node-template hypervisor upload logic (treat as prerequisite instead)
Some checks failed
Container & Helm chart / Linting (push) Successful in 6s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 50s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 6m31s
2024-06-11 11:25:35 +10:00
544f98a8fb chore: Add Traefik persistent volume permissions workaround
All checks were successful
Container & Helm chart / Linting (push) Successful in 6s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 1m4s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 36m45s
2024-06-10 22:19:29 +10:00
562e0b8167 build: Cleanup virtual machine after builds 2024-06-10 15:59:19 +10:00
88e37bb706 chore: Fix outdated helm chart value syntax
All checks were successful
Container & Helm chart / Linting (push) Successful in 6s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 1m20s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 39m50s
2024-06-10 15:48:30 +10:00
8323668aeb chore: Revert OS version
All checks were successful
Container & Helm chart / Linting (push) Successful in 6s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 1m5s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 41m57s
2024-06-09 22:09:03 +10:00
afe29e3407 chore: Define gitea token scopes
Some checks failed
Container & Helm chart / Linting (push) Successful in 1m8s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 2m11s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Has been cancelled
2024-06-09 17:09:37 +10:00
2473aa05dc feat: Upgrade metacluster OS and K3s versions 2024-06-09 17:09:17 +10:00
7870ef8cf0 chore: Rebase node-template
All checks were successful
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 1m23s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 41m11s
2024-06-08 21:57:45 +10:00
e42479f214 chore: Align metacluster/workloadcluster components
All checks were successful
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 2m32s
Container & Helm chart / Linting (push) Successful in 1m5s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 37m27s
2024-06-07 13:08:43 +10:00
d459c98045 chore: Align export/working directory 2024-06-07 13:08:18 +10:00
bf7ccc8962 chore: Upgrade workflow job container image
Some checks failed
Container & Helm chart / Linting (push) Successful in 6s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 28s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 37m12s
2024-06-07 12:26:44 +10:00
75309bdf11 feat: Upgrade components
Some checks failed
Container & Helm chart / Linting (push) Successful in 1m31s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 2m19s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 1s
2024-06-07 12:22:53 +10:00
1469ba08d8 build: Change export target directory 2024-06-07 12:22:22 +10:00
8764634ea0 fix: Upgrade chart and override image repository
Some checks failed
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 55s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 28m32s
2024-06-07 11:26:40 +10:00
1b4c4b5d64 build: Cleanup workflow action steps 2024-06-06 16:54:05 +10:00
8c1016a231 build: Prepare dependencies / inject configuration
Some checks failed
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 1m2s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 35m9s
2024-06-06 16:34:45 +10:00
663804e1b6 build: Switch to transfer through hypervisor host
Some checks failed
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 44s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 10m3s
2024-06-06 15:56:27 +10:00
33ba3771cc chore: Refactor step syntax
Some checks failed
Container & Helm chart / Linting (push) Successful in 6s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 31s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 24s
2024-06-06 13:03:47 +10:00
c05b58e93a chore: Debug packer build error
Some checks failed
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 1m2s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 23s
2024-06-06 13:00:44 +10:00
29c89e16ee build: Escape characters in hypervisor configuration
Some checks failed
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 1m2s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 23s
2024-06-06 12:36:37 +10:00
201c6f8bca build: Update action container image #2
Some checks failed
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 59s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 57s
2024-06-06 12:26:59 +10:00
67f91f2962 build: Update action container image
Some checks failed
Container & Helm chart / Linting (push) Successful in 6s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 41s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 47s
2024-06-06 11:48:28 +10:00
be02f884ac build: Switch container image
Some checks failed
Container & Helm chart / Linting (push) Successful in 19s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 58s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 43s
2024-06-05 22:25:59 +10:00
8037d4d1c7 build: Revert packer parameter syntax
Some checks failed
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 32s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 22s
2024-06-05 13:09:29 +10:00
d99fca654f chore: Refactor string substitutions
Some checks failed
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 49s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 21s
2024-06-05 13:05:30 +10:00
7d431d88c3 build: Remove redundant packer parameters
Some checks failed
Container & Helm chart / Linting (push) Successful in 6s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 55s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 20s
2024-06-05 12:58:22 +10:00
691dcee21f chore: Debug packer validate workflow step #10
Some checks failed
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 35s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 21s
2024-06-05 12:44:18 +10:00
95c14fc385 chore: Debug packer validate workflow step #9
Some checks failed
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 47s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 19s
2024-06-05 12:39:33 +10:00
33cd272d53 chore: Debug packer validate workflow step #8
Some checks failed
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 58s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 21s
2024-06-05 12:27:01 +10:00
e46e5c0802 chore: Debug packer validate workflow step #7
Some checks failed
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 27s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 18s
2024-06-05 12:23:16 +10:00
2d468d8b83 chore: Debug packer validate workflow step #6
All checks were successful
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 53s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 20s
2024-06-05 12:21:14 +10:00
f45b96f42b chore: Debug packer validate workflow step #5
Some checks failed
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 1m1s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 20s
2024-06-05 12:18:27 +10:00
4dc5a6ed39 chore: Debug packer validate workflow step #4
Some checks failed
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 39s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 19s
2024-06-05 12:14:46 +10:00
34af03ca99 chore: Debug packer validate workflow step #3
All checks were successful
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 51s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 19s
2024-06-05 12:12:15 +10:00
55c594d242 chore: Debug packer validate workflow step #2
Some checks failed
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 57s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 21s
2024-06-05 12:03:46 +10:00
8555e677b3 chore: Debug packer validate workflow step
Some checks failed
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 53s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Failing after 21s
2024-06-05 11:58:40 +10:00
3cefd67458 chore: Attempt more verbose console output #2
All checks were successful
Container & Helm chart / Linting (push) Successful in 6s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 35s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 19s
2024-06-05 11:56:12 +10:00
1a0e674aa8 chore: Attempt more verbose console output
All checks were successful
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 1m2s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 21s
2024-06-05 11:52:37 +10:00
6e37fd756b build: Enable packer build step
All checks were successful
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 1m6s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 20s
2024-06-05 11:28:17 +10:00
6568acf541 build: Populate/Reference packer parameters
All checks were successful
Container & Helm chart / Linting (push) Successful in 6s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 49s
Container & Helm chart / Kubernetes Bootstrap Appliance (push) Successful in 30s
2024-06-05 11:16:12 +10:00
092ce5eabc build: Refactor/Update packer configuration 2024-06-05 10:57:05 +10:00
a785e57126 build: Align steps between jobs 2024-06-05 10:43:39 +10:00
71e9957122 build: Add repository checkout workflow step
All checks were successful
Container & Helm chart / Linting (push) Successful in 5s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 59s
Container & Helm chart / Container image (push) Successful in 1m20s
2024-06-05 10:40:51 +10:00
877fc24235 build: Attempt initial actions workflow
Some checks failed
Container & Helm chart / Linting (push) Failing after 1m6s
Container & Helm chart / Semantic Release (Dry-run) (push) Successful in 2m25s
Container & Helm chart / Container image (push) Has been skipped
2024-06-05 10:33:01 +10:00
778a7581c0 chore: Remove stray colon
All checks were successful
continuous-integration/drone/push Build is passing
2023-10-24 16:33:12 +02:00
d1bce54a2d chore: Refactor dictionary structure
Some checks failed
continuous-integration/drone/push Build is failing
2023-10-24 16:29:06 +02:00
a8cb53429d fix: Incorrect escape sequence
All checks were successful
continuous-integration/drone/push Build is passing
2023-10-24 12:59:01 +02:00
e1f83f2245 fix: Incorrect module parameter name
All checks were successful
continuous-integration/drone/push Build is passing
2023-10-24 09:56:31 +02:00
1cfca1fa4a chore: Temporarily disable 'no_log'
Some checks failed
continuous-integration/drone/push Build is failing
2023-10-24 09:38:11 +02:00
27cb500b8c chore: Revert debugging 2023-10-24 09:31:33 +02:00
720bc43546 fix: Switch ansible module
Some checks failed
continuous-integration/drone/push Build is failing
2023-10-24 09:28:24 +02:00
49b8b80db0 fix: Update cloud-init file contents
Some checks failed
continuous-integration/drone/push Build is failing
2023-10-24 09:13:33 +02:00
3bc3da54be chore: Retain VM to allow debugging
Some checks failed
continuous-integration/drone/push Build is failing
2023-10-23 14:06:32 +02:00
8e7e23c8bc feat: Upgrade base OS version
Some checks failed
continuous-integration/drone/push Build is failing
2023-10-22 20:49:53 +02:00
e4cfc26e2c fix: Incorrect dictionary key reference 2023-10-22 20:49:35 +02:00
3b89aed52b fix: Prevent parsing of list keys
Some checks failed
continuous-integration/drone/push Build is failing
2023-10-22 20:28:54 +02:00
5cdd6ef052 feat: Include pinniped local-user-authenticator
Some checks failed
continuous-integration/drone/push Build is failing
2023-10-22 15:20:34 +02:00
ef8766b5ca feat: Upgrade pinniped to v0.27.0
All checks were successful
continuous-integration/drone/push Build is passing
2023-10-21 15:37:34 +02:00
ab14a966e0 fix: Incorrect path to cluster api provider manifests
All checks were successful
continuous-integration/drone/push Build is passing
2023-10-20 14:49:29 +02:00
f6961b5e3a fix: Update kustomization template with correct paths
All checks were successful
continuous-integration/drone/push Build is passing
2023-10-18 14:47:36 +02:00
c1a8a35494 fix: Add missing path to cluster api provider manifests
All checks were successful
continuous-integration/drone/push Build is passing
2023-10-12 15:44:39 +02:00
ba7e233c27 feat: Store cluster API provider manifests
All checks were successful
continuous-integration/drone/push Build is passing
2023-10-12 11:24:56 +02:00
8c6a9f38ba fix: Update IPPool template to new CRD
All checks were successful
continuous-integration/drone/push Build is passing
2023-10-06 13:16:38 +02:00
bf3d7ed239 feat: Upgrade CAPI/CAPV/CAIP and dependencies
All checks were successful
continuous-integration/drone/push Build is passing
2023-10-05 11:29:31 +02:00
0509a7cb8a feat: Upgrade Pinniped
All checks were successful
continuous-integration/drone/push Build is passing
2023-09-29 16:22:21 +02:00
01601de897 fix: Upgrade Pinniped chart
All checks were successful
continuous-integration/drone/push Build is passing
2023-09-23 12:23:56 +02:00
a2198f1109 fix: Force playbook colour output 2023-09-23 12:23:39 +02:00
7cc8fbbccb fix: Clean untracked files in git repo through git_acp module
All checks were successful
continuous-integration/drone/push Build is passing
2023-09-22 12:10:02 +02:00
da0558711c fix: Fix volume secret name reference
All checks were successful
continuous-integration/drone/push Build is passing
2023-08-25 16:16:26 +02:00
90082ca36a fix: Inject ca-bundle into gitea container
All checks were successful
continuous-integration/drone/push Build is passing
2023-08-25 14:13:01 +02:00
b2ae56e54b build: Split nodepool manifest in separate documents 2023-08-25 11:39:02 +02:00
b21b8b5376 fix: Missing filename
Some checks failed
continuous-integration/drone/push Build is failing
2023-08-25 09:30:20 +02:00
931eaf366c fix: Add retries to wait for pinniped-concierge to come online 2023-08-25 09:29:58 +02:00
32dda728cb fix: Generate and store kubeconfig in repository
All checks were successful
continuous-integration/drone/push Build is passing
2023-08-24 18:24:24 +02:00
4c1f1fce5e fix: Add playbook scoped variable 2023-08-24 17:41:35 +02:00
bb58e287b7 fix: Change CIDR subnet block #2
All checks were successful
continuous-integration/drone/push Build is passing
2023-08-24 15:28:09 +02:00
ef58b823c2 fix: Change CIDR subnet block
All checks were successful
continuous-integration/drone/push Build is passing
2023-08-24 12:27:49 +02:00
5000c324e1 fix: Register correct redirect/callback url
All checks were successful
continuous-integration/drone/push Build is passing
2023-08-24 10:05:08 +02:00
87e89cfa27 fix: Incorrect linebreak in ca-bundle 2023-08-24 10:04:38 +02:00
ac5d3e3668 fix: Create folderstructure after cloning git repository 2023-08-24 09:47:58 +02:00
616f8b9a53 fix: Incorrect path 2023-08-24 09:47:16 +02:00
2c5e8e10b5 fix: Inject line break in ca-bundle through variable
All checks were successful
continuous-integration/drone/push Build is passing
2023-08-23 17:17:54 +02:00
17ad64013a build: Move !unsafe data type declaration
All checks were successful
continuous-integration/drone/push Build is passing
2023-08-23 16:43:25 +02:00
eb2ada2164 chore: Refactor git commands to git_acp module 2023-08-23 14:31:09 +02:00
3e3a92c344 fix: Rename duplicate keys
All checks were successful
continuous-integration/drone/push Build is passing
2023-08-23 14:26:49 +02:00
d86f70a458 fix: Remove redundant dictionary key
Some checks failed
continuous-integration/drone/push Build is failing
2023-08-23 14:04:39 +02:00
436995accc chore: Refactor playbook for idempotency 2023-08-23 14:03:25 +02:00
0310bb9d1a fix: Incorrect indentation
Some checks failed
continuous-integration/drone/push Build is failing
2023-08-23 13:46:55 +02:00
21f03ba048 fix: Incorrect secret types;Missing newline in ca-bundle 2023-08-23 13:46:44 +02:00
b009395f62 chore: Fix/Remove incorrect/redundant key references
All checks were successful
continuous-integration/drone/push Build is passing
2023-08-22 21:17:18 +02:00
2110eb9e2c build: Quote jinja templating delimiters
Some checks failed
continuous-integration/drone/push Build is failing
2023-08-22 13:08:06 +02:00
423ecc2f95 fix: Rebase pinniped-concierge on workload-cluster to bitnami chart
Some checks failed
continuous-integration/drone/push Build is failing
2023-08-22 12:54:07 +02:00
1a1440f751 build: Rebase pinniped to bitnami helm chart
Some checks failed
continuous-integration/drone/push Build is failing
2023-08-22 12:02:13 +02:00
b17501ee1d fix: Trim digest when applying pinniped manifest; Add ingressroute
Some checks failed
continuous-integration/drone/push Build is failing
2023-08-21 11:59:41 +02:00
87eb5e0dd7 build: Fix typo in manifest parser
All checks were successful
continuous-integration/drone/push Build is passing
2023-08-21 09:44:11 +02:00
f5ed60fa38 build: Move to external packer plugins 2023-08-21 09:39:05 +02:00
eab5cfc688 fix: Trim digest when parsing pinniped manifest
Some checks failed
continuous-integration/drone/push Build is failing
2023-08-21 09:31:11 +02:00
05b271214c feat: Switch authentication provider to pinniped
Some checks failed
continuous-integration/drone/push Build is failing
2023-08-21 09:02:33 +02:00
455a2e14be fix: Avoid regex_replace pattern duplication
All checks were successful
continuous-integration/drone/push Build is passing
2023-07-28 13:23:59 +02:00
f5154f6961 fix: Incorrect nesting of dictionary and filters 2023-07-28 13:22:23 +02:00
4bf5121086 fix: Remove redundant combine filter
All checks were successful
continuous-integration/drone/push Build is passing
2023-07-19 22:46:51 +02:00
393b1092e5 fix: Refactor version endpoint json creation to guarantee variable substitution
All checks were successful
continuous-integration/drone/push Build is passing
2023-07-19 16:46:23 +02:00
36c30ca646 fix: Aggregate dictionary content within respective component task list
All checks were successful
continuous-integration/drone/push Build is passing
2023-07-18 16:05:54 +02:00
8005b172a5 feat: Include manifests in version endpoint
All checks were successful
continuous-integration/drone/push Build is passing
2023-07-16 14:52:41 +02:00
13f4965278 fix: Upgrade version endpoint component
All checks were successful
continuous-integration/drone/push Build is passing
2023-07-16 11:48:28 +02:00
05f085aee7 feat: Preconfigure root profile for cli tools
All checks were successful
continuous-integration/drone/push Build is passing
2023-07-15 19:09:44 +02:00
072fc56050 fix: Refactor to make step-ca initialization idempotent 2023-07-15 19:08:33 +02:00
5363eba1a3 fix: Upgrade version endpoint component
All checks were successful
continuous-integration/drone/push Build is passing
2023-07-14 15:38:49 +02:00
a245cc3d48 feat: Dynamically fill version endpoint database
All checks were successful
continuous-integration/drone/push Build is passing
2023-07-14 13:49:02 +02:00
51c477fb07 feat: Upgrade version endpoint component
All checks were successful
continuous-integration/drone/push Build is passing
2023-07-14 10:48:18 +02:00
1446cba537 fix: Change API call encoding
All checks were successful
continuous-integration/drone/push Build is passing
2023-07-13 12:18:33 +02:00
0501a035f2 fix: Upgrade component version
All checks were successful
continuous-integration/drone/push Build is passing
2023-07-12 11:49:48 +02:00
6e942af974 fix: Revert skopeo transport when storing container images
All checks were successful
continuous-integration/drone/push Build is passing
2023-07-11 14:54:57 +02:00
89874d57ce fix: Explicitly convert child dictionary to json
All checks were successful
continuous-integration/drone/push Build is passing
2023-07-11 11:53:37 +02:00
2b497d4653 fix: Fix playbook tasklist order 2023-07-11 11:42:12 +02:00
cfa4a5379a feat: Switch to OCI-archive for container storage 2023-07-11 11:41:33 +02:00
a2c2766ff7 fix: Remove non-functional variable references
All checks were successful
continuous-integration/drone/push Build is passing
2023-07-08 18:09:48 +02:00
76d3b6c742 build: Include semantic release dry-run logic
All checks were successful
continuous-integration/drone/push Build is passing
2023-07-07 17:31:32 +02:00
a5248bd54c build: Add missing variables
Some checks failed
continuous-integration/drone/push Build is failing
2023-07-07 17:12:20 +02:00
cbedc9679f feat: Add version/metadata API endpoint
Some checks failed
continuous-integration/drone/push Build is failing
2023-07-07 16:48:32 +02:00
740b6b3dc9 build: Disable parallel builds entirely
All checks were successful
continuous-integration/drone/push Build is passing
2023-07-07 14:33:47 +02:00
0ba87988bc fix: Incorrect indentation causing malformed PEM file
Some checks failed
continuous-integration/drone/push Build is failing
2023-07-07 10:30:20 +02:00
aa14a8a3a8 fix: Refactor kustomize templates
All checks were successful
continuous-integration/drone/push Build is passing
2023-07-06 13:01:35 +02:00
1f55ff7cfa build: Revert to semi-working syntax
Some checks failed
continuous-integration/drone/push Build is failing
2023-06-20 16:30:49 +02:00
ba4a0148ff build: Try different syntax (remove quotes)
Some checks failed
continuous-integration/drone/push Build is failing
2023-06-20 15:44:45 +02:00
c177dbd03b build: Test different syntax for character escape
Some checks failed
continuous-integration/drone/push Build is failing
2023-06-20 15:40:39 +02:00
2e8ce6cc00 build: Escape escape sequence characters
Some checks failed
continuous-integration/drone/push Build is failing
2023-06-20 15:37:36 +02:00
7fd1cf73db build: Fix linebreak
Some checks failed
continuous-integration/drone/push Build is failing
2023-06-20 15:35:09 +02:00
cf001cd0ce build: Test explicit tag format
Some checks failed
continuous-integration/drone/push Build is failing
2023-06-20 15:32:53 +02:00
438b944011 build: Add missing variable export
All checks were successful
continuous-integration/drone/push Build is passing
2023-05-22 13:43:53 +02:00
679a9457b5 build: Fix variable name
All checks were successful
continuous-integration/drone/push Build is passing
2023-05-22 12:44:13 +02:00
8b4a1e380c build: Test semantic-release + build flow
Some checks failed
continuous-integration/drone/push Build is failing
2023-05-22 12:19:10 +02:00
0468cd6269 build: Debug echo to file
All checks were successful
continuous-integration/drone/push Build is passing
2023-05-22 12:11:14 +02:00
b808397d47 build: Fix var substitution
Some checks failed
continuous-integration/drone/push Build is failing
2023-05-22 12:08:01 +02:00
8fd0136bb7 build: Debug brace mismatch #2
Some checks failed
continuous-integration/drone/push Build is failing
2023-05-22 12:06:14 +02:00
479d896599 build: Debug brace mismatch
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-05-22 12:05:32 +02:00
263f156eb1 build: Try different syntax
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-05-22 11:55:28 +02:00
f1dfc83d7c build: Revert back to cli arguments while specifying custom command
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-05-22 09:23:44 +02:00
5b950a3834 build: Test with configuration in .releaserc.json
All checks were successful
continuous-integration/drone/push Build is passing
2023-05-22 09:09:50 +02:00
978f39d45b build: Test different semantic-release plugins
All checks were successful
continuous-integration/drone/push Build is passing
2023-05-21 18:27:34 +02:00
9b9ab6b784 build: Skip build on tag
All checks were successful
continuous-integration/drone/push Build is passing
2023-05-21 18:09:22 +02:00
24dca2755a fix: Run semantic-release with different drone variable as input
All checks were successful
continuous-integration/drone/push Build is passing
2023-05-21 18:02:00 +02:00
0d1db2f29f feat: Test semantic-release dry-run #2
All checks were successful
continuous-integration/drone/push Build is passing
2023-05-21 17:51:04 +02:00
cce39a5bb7 fix: Test semantic release dry-run
Some checks failed
continuous-integration/drone/push Build is passing
continuous-integration/drone/tag Build is failing
2023-05-20 15:18:23 +02:00
823cc467fa Explicitly install semantic-release plugins #2
All checks were successful
continuous-integration/drone/push Build is passing
2023-05-20 14:32:47 +02:00
9cb89bf055 Try different syntax
Some checks failed
continuous-integration/drone/push Build is failing
2023-05-20 14:30:38 +02:00
358cbe39ea Fix quote
Some checks failed
continuous-integration/drone/push Build is failing
2023-05-20 14:28:31 +02:00
0fee2df2a6 Explicitly install semantic-release plugins
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-05-20 13:44:54 +02:00
e4e58e4789 Disable npm plugin
Some checks failed
continuous-integration/drone/push Build is failing
2023-05-20 13:31:48 +02:00
75158a8a5b Fix variable substitution
Some checks failed
continuous-integration/drone/push Build is failing
2023-05-20 13:26:49 +02:00
c83d541a0d Remove redundant parameter
Some checks failed
continuous-integration/drone/push Build is failing
2023-05-20 13:24:20 +02:00
a46610f828 Add git credentials
Some checks failed
continuous-integration/drone/push Build is failing
2023-05-20 13:22:20 +02:00
fe5147bd2e Override branch during semantic-release dry-run
Some checks failed
continuous-integration/drone/push Build is failing
2023-05-20 12:58:44 +02:00
6d168f0517 Add semantic-release prerequisites
All checks were successful
continuous-integration/drone/push Build is passing
2023-05-20 12:48:40 +02:00
68445ee13f Testing semantic-release
Some checks failed
continuous-integration/drone/push Build is failing
2023-05-20 12:44:32 +02:00
48c14afd0f New major version branch
All checks were successful
continuous-integration/drone/push Build is passing
2023-05-19 13:43:23 +02:00
31b21c9b7a Upgrade node template OS version
All checks were successful
continuous-integration/drone/push Build is passing
2023-05-19 12:19:42 +02:00
2addda3f06 Upgrade node template OS version;Upgrade K8s minor version
All checks were successful
continuous-integration/drone/push Build is passing
2023-05-19 12:19:06 +02:00
e03cd20d65 Replay upstream changes;Upgrade to latest minor K8s version
Some checks failed
continuous-integration/drone/push Build is failing
2023-05-19 11:38:53 +02:00
fd1c306061 Add workload-cluster worker-node size property
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-26 09:17:41 +02:00
ca8044b4ab Workaround to support self-signed vCenter certificate
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-25 09:01:10 +02:00
3c98e16e74 Update longhorn settings
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-19 08:43:34 +02:00
1860d8e2dd Configure longhorn through node label;Update version
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-18 21:50:26 +02:00
16fdd66328 Hide redundant parameter;Configure oidc provider
Some checks failed
continuous-integration/drone/push Build is failing
2023-04-14 09:51:59 +02:00
d73320da32 Add quotes
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-13 10:47:52 +02:00
572b7df74c Switch OIDC provider
Some checks failed
continuous-integration/drone/push Build is failing
2023-04-13 10:11:13 +02:00
ee08fd47b5 Configure keycloakx;Convert output to yaml
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-11 10:40:33 +02:00
75277e285a Switch oidc provider
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-09 20:04:11 +02:00
debe80a2a1 Fix url
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-07 13:19:57 +02:00
2534cea4a0 Pin k3s install.sh version
Some checks reported errors
continuous-integration/drone/push Build was killed
2023-04-07 13:14:34 +02:00
05c3a09ab3 Upgrade k3s version
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-07 11:59:29 +02:00
2f91c0f7c3 Move kustomize pattern to strategic merge;Fix regex patterns;Update description
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-07 10:29:41 +02:00
c385baf630 Housekeeping;Add separate storage nodepool
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-06 13:29:29 +02:00
5c18869d60 Fix missing namespaces;Add default empty value
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-06 08:47:37 +02:00
1941e02d94 Fix post-processor paths
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-05 17:30:24 +02:00
610495e424 Add random vm name postfix
Some checks failed
continuous-integration/drone/push Build is failing
2023-04-05 17:01:36 +02:00
4e6a0549b5 Remove redundant task;Refactor packer vm name
Some checks failed
continuous-integration/drone/push Build is failing
2023-04-05 16:51:42 +02:00
db090ac564 Add missing kustomize patch;Switch to query filter
Some checks failed
continuous-integration/drone/push Build is failing
2023-04-05 13:37:31 +02:00
2b56677e9a Remove regex_replace filter;Refactor dict key names;Make chart values optional
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-05 10:47:17 +02:00
641ee2d9a7 Rename nodepools
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-05 09:46:05 +02:00
979ac38794 Aggregate/store workload cluster chart values
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-04 22:44:56 +02:00
86a0b684e2 Add missing key
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-04 20:59:54 +02:00
56a33134a0 Housekeeping;Move inclusterippool to gitops;Delete temporary manifests;Align resource naming;Remove redundant config;Add helm configuration
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-04 17:22:39 +02:00
915660f618 Housekeeping;Populate all registry mirrors;Disable manifest image reference workaround;Add missing key;Remove redundant filter
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-04 10:47:28 +02:00
d0c4251e06 Configure registry mirrors on workload-cluster nodes;Test ansible collection paths #2
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-03 21:37:09 +02:00
9ff0e09625 Test ansible collection paths
Some checks failed
continuous-integration/drone/push Build is failing
2023-04-03 21:20:10 +02:00
8e76617794 Fix repository path;Add chart values;Fix ansible galaxy cli syntax
Some checks failed
continuous-integration/drone/push Build is failing
2023-04-03 17:25:45 +02:00
5a82c9e122 Fix key name;Fix task dependencies
Some checks failed
continuous-integration/drone/push Build is failing
2023-04-03 16:32:05 +02:00
cde92b4514 Fix indentation
Some checks failed
continuous-integration/drone/push Build is failing
2023-04-03 16:29:16 +02:00
6942c33ae8 Fix Ansible templating;DRY
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-04-03 16:26:38 +02:00
7ac4cc0914 Avoid parallel build issues #3
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-03 15:43:10 +02:00
c054c76b60 Avoid parallel build issues #2
Some checks failed
continuous-integration/drone/push Build is failing
2023-04-03 15:00:06 +02:00
25230fdda2 Avoid parallel build issues
Some checks failed
continuous-integration/drone/push Build is failing
2023-04-03 14:50:58 +02:00
89cf69adc7 Refactor cluster registration
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-02 22:36:23 +02:00
3f9fc4b7aa Fix git repository organization;Move cluster api manifests to gitops;Rename gitrepo's
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-01 16:10:38 +02:00
570047df3b Fix target paths;Add git repositories
All checks were successful
continuous-integration/drone/push Build is passing
2023-04-01 13:43:36 +02:00
d187f60091 Remove redundant key
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-31 18:27:45 +02:00
933615adeb Refactor gitops repositories;Move capi manifests to subfolder;Sort components in tty console message;Generalize templates
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-31 18:19:13 +02:00
1c60214f5a Add repositories;Push manifests;Change protocol
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-31 14:25:03 +02:00
414b72bcb8 Fix path
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-29 22:53:56 +02:00
29396de154 Inject CPI image tag into manifest
Some checks failed
continuous-integration/drone/push Build is failing
2023-03-29 22:01:43 +02:00
5effe00c19 Upgrade version
Some checks failed
continuous-integration/drone/push Build is failing
2023-03-29 13:53:56 +02:00
767be3b8f5 Upgrade CAPI/CAPV
Some checks failed
continuous-integration/drone/push Build is failing
2023-03-28 22:36:21 +02:00
eb2f491f72 Refactor git repo creation;Housekeeping
Some checks failed
continuous-integration/drone/push Build is failing
2023-03-28 22:31:11 +02:00
cd5fa89a0d Fix key reference
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-28 16:41:48 +02:00
d7e8685225 Download workloadcluster helm-charts;Revert foldernames;Setup git repositories
Some checks failed
continuous-integration/drone/push Build is failing
2023-03-28 13:49:18 +02:00
5113dd5b6c Set default values to optional vapp properties
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-25 23:01:23 +01:00
89fd23f66a Reference node template by inventory path
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-25 19:13:11 +01:00
fa0b72a903 Remove git repo logic; Debug ova templates
Some checks failed
continuous-integration/drone/push Build is failing
2023-03-25 16:58:23 +01:00
ec6f712427 Add healthcheck;Improve console healthchecks;Increase default retries
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-23 16:55:11 +01:00
1c19708855 Increase default retries;Add healthcheck
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-23 16:51:17 +01:00
942c13dde7 Improve console healthchecks
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-23 11:42:34 +01:00
439223c56e Build n-1 version
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-22 08:54:45 +01:00
b644dc1a04 Fix indentation;Add partition/filesystem/mount specification;Fix disk unit number
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-20 12:40:39 +01:00
2de2259c76 Fix linting faults;Add missing keys
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-19 17:43:29 +01:00
214a3d189a Update K8s version
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-19 13:17:14 +01:00
df91de5516 Fix missing keys;Add core deployment option
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-18 20:08:34 +01:00
68f0524bda Housekeeping;Disable redundant hook;Add configurable data-disk
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-18 18:25:09 +01:00
ff555ce0de Add missing keys
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-16 09:15:05 +01:00
ad0c511651 Fix var reference 2023-03-15 15:02:00 +01:00
bae9696023 Fix variable reference
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-15 10:56:55 +01:00
23e1ec1e71 Add missing linebreak
Some checks failed
continuous-integration/drone/push Build is failing
2023-03-15 10:26:42 +01:00
6bd49750a4 Add missing key/parameter;Fix dependency type;Add k8s version to filename
Some checks failed
continuous-integration/drone/push Build is failing
2023-03-15 10:24:45 +01:00
daa7a240cc Switch from Network Protocol Profiles to in-cluster IPAM
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-14 15:43:11 +01:00
c0b2857be1 Split CAPI cluster manifest;Remove debugging;Add dependency
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-14 14:28:24 +01:00
925bc5be39 Add missing comma
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-09 12:47:28 +01:00
b6a03484e1 Add container registry login
Some checks failed
continuous-integration/drone/push Build is failing
2023-03-09 12:45:51 +01:00
462aebdf17 Fix ephemeral storage out of disk space
Some checks reported errors
continuous-integration/drone/push Build was killed
2023-03-09 11:13:36 +01:00
230dc5e0cd Force variable type;Ensure minimum default value for storage_benchmark;Fix type mismatch
Some checks failed
continuous-integration/drone/push Build is failing
2023-03-09 09:59:45 +01:00
f47777763a Add serviceaccount token creation;Base delays on storage benchmark
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-08 17:07:44 +01:00
cabf813daa Add crude storage benchmark
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-08 13:35:56 +01:00
5aa2141f84 Latest K8s version
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-04 16:06:11 +01:00
70a4962afa Upgrade versions
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-03 14:17:45 +01:00
e8da87afd8 Revert OvfTransport changes;Build with new CAPV template
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-03 10:25:05 +01:00
90efda336a Test without ovfenvironment transport enabled
Some checks failed
continuous-integration/drone/push Build is failing
2023-03-02 17:04:06 +01:00
8f17551d50 Downgrade node-template OS version
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-02 12:28:46 +01:00
75e2250a50 Test custom-built nodetemplate
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-02 10:23:38 +01:00
6c9a8e4abd Upgrade version
All checks were successful
continuous-integration/drone/push Build is passing
2023-03-02 09:16:51 +01:00
6940bdb1d3 Testing multiline description #2;Improve parameter UX;Upgrade versions
Some checks failed
continuous-integration/drone/push Build is failing
2023-02-28 11:16:14 +01:00
64644d7eff Add node sizing parameters;Add description
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-27 10:00:07 +01:00
c9c8b79891 Fix key reference
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-22 21:12:23 +01:00
8aa42ffb67 Fix chart version
Some checks failed
continuous-integration/drone/push Build is failing
2023-02-22 20:49:02 +01:00
0f335d6841 Fix missing key in dict
Some checks reported errors
continuous-integration/drone/push Build encountered an error
2023-02-22 20:47:41 +01:00
0170ee7944 Include kube-prometheus-stack
Some checks failed
continuous-integration/drone/push Build is failing
2023-02-22 17:33:37 +01:00
e0726f858c Housekeeping;Upgrade versions
Some checks failed
continuous-integration/drone/push Build is failing
2023-02-17 17:43:41 +01:00
b7a3669681 Upgrade version;Disable git rebase/push
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-13 22:13:22 +01:00
438b40dd53 Add quotes
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-13 13:05:44 +01:00
9e7eaf2ff7 Fix var reference;Upgrade version;Add LDAP configuration
Some checks failed
continuous-integration/drone/push Build is failing
2023-02-13 12:04:32 +01:00
7931b1ed44 Upgrade version;Housekeeping;Reduce verbosity;Sanitize input;Fix url reference;Test Dex
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-10 15:12:54 +01:00
b8cb76e7ac Reduce loop list length;Test vApp properties
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-07 22:11:09 +01:00
4d2513c1a5 Adjust retries/delay;Upgrade version
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-07 17:47:00 +01:00
df069672f3 Upgrade versions
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-07 11:58:44 +01:00
27106b1f34 Add upgrade tasks;Housekeeping
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-06 14:17:05 +01:00
abcf530b49 Decrease provisioner pause;Apply bug workaround;Housekeeping
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-06 10:59:41 +01:00
1c950086fa Remove redundant conditional;Remove debugging;Fix module parameters;Fix var reference
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-05 10:38:12 +01:00
ede82ea7e7 Fix conditional
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-04 15:54:36 +01:00
6d5b8e2d96 Fix label syntax;Add retries;Sort list;Housekeeping
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-04 12:00:10 +01:00
a020ac0e15 Fix marker collision;Add missing key
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-03 15:49:38 +01:00
f74d94a5e0 Update hypervisor details;Upgrade components;Housekeeping;Add decom tasks;Prevent configuration reset #2;Add morefid label
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-03 13:11:54 +01:00
d874da0cb3 Prevent configuration reset;Fix query
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-02 21:52:57 +01:00
07e95d82a2 Remove debugging
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-02 13:22:45 +01:00
5aecf61a01 Reorder ingress configuration tasks;Housekeeping
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-01 20:07:04 +01:00
be4b6177f9 Revert CAPV image
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-01 12:17:53 +01:00
aacfbfc2fa Upgrade versions;Add delay;Housekeeping;Fix indentation
All checks were successful
continuous-integration/drone/push Build is passing
2023-02-01 10:54:47 +01:00
0c44f1fd54 Add missing key
All checks were successful
continuous-integration/drone/push Build is passing
2023-01-31 15:23:48 +01:00
e5908fde1c Add retries to preflight check;Move/refactor tasks to front
All checks were successful
continuous-integration/drone/push Build is passing
2023-01-31 13:42:26 +01:00
c793ced9f3 Remove redundant tasks;Add readycheck;Housekeeping;Add further upgrade tasks
All checks were successful
continuous-integration/drone/push Build is passing
2023-01-30 16:24:37 +01:00
2870041530 Fix typo;Add debugging;Add missing key
All checks were successful
continuous-integration/drone/push Build is passing
2023-01-30 14:26:28 +01:00
9887faa7c4 Refactor helm chart values
All checks were successful
continuous-integration/drone/push Build is passing
2023-01-30 10:55:47 +01:00
51cabfa8d2 Add base64 filter
All checks were successful
continuous-integration/drone/push Build is passing
2023-01-30 08:40:23 +01:00
95f5750291 Fix var reference;Fix port;Merge chart values
All checks were successful
continuous-integration/drone/push Build is passing
2023-01-29 19:02:06 +01:00
e3ce60bcb4 Housekeeping;Generate root ca preemptively
All checks were successful
continuous-integration/drone/push Build is passing
2023-01-29 12:43:55 +01:00
79b794dba2 Configure inotify limits;Filter updating image references
All checks were successful
continuous-integration/drone/push Build is passing
2023-01-26 16:58:15 +01:00
907ec8bf3b Increase default garbage collection threshold
All checks were successful
continuous-integration/drone/push Build is passing
2023-01-26 09:30:36 +01:00
a4db841a7a Attempt to properly match build w/ source #3
All checks were successful
continuous-integration/drone/push Build is passing
2023-01-25 14:40:28 +01:00
3f2f19b36a Attempt to properly match build w/ source #2
Some checks failed
continuous-integration/drone/push Build is failing
2023-01-25 14:35:54 +01:00
8a753daed7 Attempt to fix syntax
Some checks failed
continuous-integration/drone/push Build is failing
2023-01-25 14:30:58 +01:00
fdfe5f100b Attempt to properly match build w/ source
Some checks failed
continuous-integration/drone/push Build is failing
2023-01-25 14:16:50 +01:00
4bcb1198f3 Add replica rebuild wait;Upgrade longhorn&harbor
All checks were successful
continuous-integration/drone/push Build is passing
2023-01-25 12:36:21 +01:00
64c2c35383 Fix update strategy 2023-01-25 11:21:08 +01:00
e21b11a37a Fix var reference;Housekeeping;Improve UX 2023-01-25 10:28:28 +01:00
100 changed files with 2783 additions and 857 deletions


@@ -1,87 +0,0 @@
kind: pipeline
type: kubernetes
name: 'Packer Build'

volumes:
- name: output
  claim:
    name: flexvolsmb-drone-output
- name: scratch
  claim:
    name: flexvolsmb-drone-scratch

steps:
- name: Debugging information
  image: bv11-cr01.bessems.eu/library/packer-extended
  commands:
  - ansible --version
  - ovftool --version
  - packer --version
  - yamllint --version

- name: Kubernetes Bootstrap Appliance
  image: bv11-cr01.bessems.eu/library/packer-extended
  pull: always
  commands:
  - |
    sed -i -e "s/<<img-password>>/$${SSH_PASSWORD}/g" \
      packer/preseed/UbuntuServer22.04/user-data
  - |
    yamllint -d "{extends: relaxed, rules: {line-length: disable}}" \
      ansible \
      packer/preseed/UbuntuServer22.04/user-data \
      scripts
  - |
    ansible-galaxy install \
      -r ansible/requirements.yml
  - |
    packer init -upgrade \
      ./packer
  - |
    packer validate \
      -var vm_name=$DRONE_BUILD_NUMBER-${DRONE_COMMIT_SHA:0:10} \
      -var repo_username=$${REPO_USERNAME} \
      -var repo_password=$${REPO_PASSWORD} \
      -var vsphere_password=$${VSPHERE_PASSWORD} \
      -var ssh_password=$${SSH_PASSWORD} \
      ./packer
  - |
    packer build \
      -on-error=cleanup -timestamp-ui \
      -var vm_name=$DRONE_BUILD_NUMBER-${DRONE_COMMIT_SHA:0:10} \
      -var repo_username=$${REPO_USERNAME} \
      -var repo_password=$${REPO_PASSWORD} \
      -var vsphere_password=$${VSPHERE_PASSWORD} \
      -var ssh_password=$${SSH_PASSWORD} \
      ./packer
  environment:
    VSPHERE_PASSWORD:
      from_secret: vsphere_password
    SSH_PASSWORD:
      from_secret: ssh_password
    REPO_USERNAME:
      from_secret: repo_username
    REPO_PASSWORD:
      from_secret: repo_password
    # PACKER_LOG: 1
  volumes:
  - name: output
    path: /output

- name: Remove temporary resources
  image: bv11-cr01.bessems.eu/library/packer-extended
  commands:
  - |
    pwsh -file scripts/Remove-Resources.ps1 \
      -VMName $DRONE_BUILD_NUMBER-${DRONE_COMMIT_SHA:0:10} \
      -VSphereFQDN 'bv11-vc.bessems.lan' \
      -VSphereUsername 'administrator@vsphere.local' \
      -VSpherePassword $${VSPHERE_PASSWORD}
  environment:
    VSPHERE_PASSWORD:
      from_secret: vsphere_password
  volumes:
  - name: scratch
    path: /scratch
  when:
    status:
    - success
    - failure


@@ -0,0 +1,145 @@
name: Container & Helm chart
on: [push]

jobs:

  linting:
    name: Linting
    runs-on: dind-rootless
    steps:
      - name: Check out repository code
        uses: actions/checkout@v4
      - name: yamllint
        uses: bewuethr/yamllint-action@v1
        with:
          config-file: .yamllint.yaml

  semrel_dryrun:
    name: Semantic Release (Dry-run)
    runs-on: dind-rootless
    outputs:
      version: ${{ steps.sem_rel.outputs.version }}
    steps:
      - name: Check out repository code
        uses: actions/checkout@v4
      - name: Setup Node
        uses: actions/setup-node@v3
        with:
          node-version: 20
      - name: Install dependencies
        run: |
          npm install \
            semantic-release \
            @semantic-release/commit-analyzer \
            @semantic-release/exec
      - name: Semantic Release (dry-run)
        id: sem_rel
        run: |
          npx semantic-release \
            --package @semantic-release/exec \
            --package semantic-release \
            --branches ${{ gitea.refname }} \
            --tag-format 'v${version}' \
            --dry-run \
            --plugins @semantic-release/commit-analyzer,@semantic-release/exec \
            --analyzeCommits @semantic-release/commit-analyzer \
            --verifyRelease @semantic-release/exec \
            --verifyReleaseCmd 'echo "version=${nextRelease.version}" >> $GITHUB_OUTPUT'
        env:
          GIT_CREDENTIALS: ${{ secrets.GIT_USERNAME }}:${{ secrets.GIT_APIKEY }}
      - name: Assert semantic release output
        run: |
          [[ -z "${{ steps.sem_rel.outputs.version }}" ]] && {
            echo 'No release tag - exiting'; exit 1
          } || {
            echo 'Release tag set correctly: ${{ steps.sem_rel.outputs.version }}'; exit 0
          }

  build_image:
    name: Kubernetes Bootstrap Appliance
    container: code.spamasaurus.com/djpbessems/packer-extended:1.3.0
    runs-on: dind-rootless
    needs: [semrel_dryrun, linting]
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Parse Kubernetes version
        uses: mikefarah/yq@master
        id: get_k8sversion
        with:
          cmd: yq '.components.clusterapi.workload.version.k8s' ansible/vars/metacluster.yml
      - name: Set up packer
        uses: hashicorp/setup-packer@main
        id: setup
        with:
          version: "latest"
      - name: Prepare build environment
        id: init
        run: |
          packer init -upgrade ./packer
          ansible-galaxy collection install \
            -r ansible/requirements.yml \
            -p ./ansible/collections
          echo "BUILD_COMMIT=$(echo ${{ gitea.sha }} | cut -c 1-10)" >> $GITHUB_ENV
          echo "BUILD_SUFFIX=$(openssl rand -hex 3)" >> $GITHUB_ENV
      - name: Validate packer template files
        id: validate
        run: |
          packer validate \
            -only=vsphere-iso.bootstrap \
            -var vm_name=${{ gitea.run_number }}-${BUILD_COMMIT}-${BUILD_SUFFIX} \
            -var docker_username=${{ secrets.DOCKER_USERNAME }} \
            -var docker_password=${{ secrets.DOCKER_PASSWORD }} \
            -var repo_username=${{ secrets.REPO_USERNAME }} \
            -var repo_password=${{ secrets.REPO_PASSWORD }} \
            -var ssh_password=${{ secrets.SSH_PASSWORD }} \
            -var hv_password=${{ secrets.HV_PASSWORD }} \
            -var k8s_version=${{ steps.get_k8sversion.outputs.result }} \
            -var appliance_version=${{ needs.semrel_dryrun.outputs.version }} \
            ./packer
      - name: Build packer template
        run: |
          packer build \
            -on-error=cleanup -timestamp-ui \
            -only=vsphere-iso.bootstrap \
            -var vm_name=${{ gitea.run_number }}-${BUILD_COMMIT}-${BUILD_SUFFIX} \
            -var docker_username=${{ secrets.DOCKER_USERNAME }} \
            -var docker_password=${{ secrets.DOCKER_PASSWORD }} \
            -var repo_username=${{ secrets.REPO_USERNAME }} \
            -var repo_password=${{ secrets.REPO_PASSWORD }} \
            -var ssh_password=${{ secrets.SSH_PASSWORD }} \
            -var hv_password=${{ secrets.HV_PASSWORD }} \
            -var k8s_version=${{ steps.get_k8sversion.outputs.result }} \
            -var appliance_version=${{ needs.semrel_dryrun.outputs.version }} \
            ./packer
        # env:
        #   PACKER_LOG: 1

  # semrel:
  #   name: Semantic Release
  #   runs-on: dind-rootless
  #   needs: [build_container, build_chart]
  #   steps:
  #     - name: Check out repository code
  #       uses: actions/checkout@v3
  #     - name: Setup Node
  #       uses: actions/setup-node@v3
  #       with:
  #         node-version: 20
  #     - name: Install dependencies
  #       run: |
  #         npm install \
  #           semantic-release \
  #           @semantic-release/changelog \
  #           @semantic-release/commit-analyzer \
  #           @semantic-release/git \
  #           @semantic-release/release-notes-generator
  #     - name: Semantic Release
  #       run: |
  #         npx semantic-release \
  #           --branches ${{ gitea.refname }} \
  #           --tag-format 'v${version}' \
  #           --plugins @semantic-release/commit-analyzer,@semantic-release/release-notes-generator,@semantic-release/changelog,@semantic-release/git
  #       env:
  #         GIT_CREDENTIALS: ${{ secrets.GIT_USERNAME }}:${{ secrets.GIT_APIKEY }}

.gitignore vendored Normal file
View File

@@ -0,0 +1,4 @@
**/hv.vcenter.yaml
**/ova.bootstrap.yaml
**/pb.secrets.yaml
**/pwdfile

.yamllint.yaml Normal file
View File

@@ -0,0 +1,4 @@
extends: relaxed
rules:
line-length: disable

View File

@@ -3,6 +3,7 @@
gather_facts: false
vars_files:
- metacluster.yml
- workloadcluster.yml
become: true
roles:
- os

View File

@@ -1,4 +1,6 @@
collections:
- name: https://github.com/ansible-collections/ansible.posix
type: git
- name: https://github.com/ansible-collections/ansible.utils
type: git
- name: https://github.com/ansible-collections/community.general

View File

@@ -1,4 +1,4 @@
- name: Parse manifests for container images
- name: Parse Cluster-API manifests for container images
ansible.builtin.shell:
# This set of commands is necessary to deal with multi-line scalar values (see the standalone sketch after this task)
# eg.:
@@ -9,18 +9,33 @@
cat {{ item.dest }} | yq --no-doc eval '.. | .image? | select(.)' | awk '!/ /';
cat {{ item.dest }} | yq eval '.data.data' | yq --no-doc eval '.. | .image? | select(.)';
cat {{ item.dest }} | yq --no-doc eval '.. | .files? | with_entries(select(.value.path == "*.yaml")).[0].content' | awk '!/null/' | yq eval '.. | .image? | select(.)'
register: parsedmanifests
register: clusterapi_parsedmanifests
loop: "{{ clusterapi_manifests.results }}"
loop_control:
label: "{{ item.dest | basename }}"
- name: Parse helm charts for container images
- name: Parse pinniped manifest for container images
ansible.builtin.shell:
cmd: >-
cat {{ pinniped_manifest.dest }} | yq --no-doc eval '.. | .image? | select(.)' | awk '!/ /';
register: pinniped_parsedmanifest
- name: Parse metacluster helm charts for container images
ansible.builtin.shell:
cmd: "{{ item.value.helm.parse_logic }}"
chdir: /opt/metacluster/helm-charts/{{ item.key }}
register: chartimages
register: chartimages_metacluster
when: item.value.helm is defined
loop: "{{ lookup('ansible.builtin.dict', components) }}"
loop: "{{ query('ansible.builtin.dict', components) }}"
loop_control:
label: "{{ item.key }}"
- name: Parse workloadcluster helm charts for container images
ansible.builtin.shell:
cmd: "{{ item.value.parse_logic }}"
chdir: /opt/workloadcluster/helm-charts/{{ item.value.namespace }}/{{ item.key }}
register: chartimages_workloadcluster
loop: "{{ query('ansible.builtin.dict', downstream.helm_charts) }}"
loop_control:
label: "{{ item.key }}"
@@ -29,14 +44,25 @@
containerimages_{{ item.source }}: "{{ item.results }}"
loop:
- source: charts
results: "{{ chartimages | json_query('results[*].stdout_lines') | select() | flatten | list }}"
results: "{{ (chartimages_metacluster | json_query('results[*].stdout_lines')) + (chartimages_workloadcluster | json_query('results[*].stdout_lines')) | select() | flatten | list }}"
- source: kubeadm
results: "{{ kubeadmimages.stdout_lines }}"
- source: manifests
results: "{{ parsedmanifests | json_query('results[*].stdout_lines') | select() | flatten | list }}"
- source: clusterapi
results: "{{ clusterapi_parsedmanifests | json_query('results[*].stdout_lines') | select() | flatten | list }}"
- source: pinniped
results: "{{ pinniped_parsedmanifest.stdout_lines }}"
loop_control:
label: "{{ item.source }}"
- name: Log in to container registry
ansible.builtin.shell:
cmd: >-
skopeo login \
docker.io \
--username={{ docker_username }} \
--password={{ docker_password }}
no_log: true
- name: Pull and store containerimages
ansible.builtin.shell:
cmd: >-
@@ -46,4 +72,4 @@
docker://{{ item }} \
docker-archive:./{{ ( item | regex_findall('[^/:]+'))[-2] }}_{{ lookup('ansible.builtin.password', '/dev/null length=5 chars=ascii_lowercase,digits seed={{ item }}') }}.tar:{{ item }}
chdir: /opt/metacluster/container-images
loop: "{{ (containerimages_charts + containerimages_kubeadm + containerimages_manifests + dependencies.container_images) | flatten | unique | sort }}"
loop: "{{ (containerimages_charts + containerimages_kubeadm + containerimages_clusterapi + containerimages_pinniped + dependencies.container_images) | flatten | unique | sort }}"

View File

@@ -1,5 +0,0 @@
- name: Clone git repository
ansible.builtin.git:
repo: "{{ platform.gitops.repository.uri }}"
version: "{{ platform.gitops.repository.revision }}"
dest: /opt/metacluster/git-repositories/gitops

View File

@@ -3,17 +3,29 @@
name: "{{ item.name }}"
repo_url: "{{ item.url }}"
state: present
loop: "{{ platform.helm_repositories }}"
loop: "{{ platform.helm_repositories + downstream.helm_repositories }}"
- name: Fetch helm charts
- name: Fetch helm charts for metacluster
ansible.builtin.command:
cmd: helm fetch {{ item.value.helm.chart }} --untar --version {{ item.value.helm.version }}
chdir: /opt/metacluster/helm-charts
when: item.value.helm is defined
register: helmcharts
loop: "{{ lookup('ansible.builtin.dict', components) }}"
register: helmcharts_metacluster
loop: "{{ query('ansible.builtin.dict', components) }}"
loop_control:
label: "{{ item.key }}"
retries: 5
delay: 5
until: helmcharts is not failed
until: helmcharts_metacluster is not failed
- name: Fetch helm charts for workloadcluster
ansible.builtin.command:
cmd: helm fetch {{ item.value.chart }} --untardir ./{{ item.value.namespace }} --untar --version {{ item.value.version }}
chdir: /opt/workloadcluster/helm-charts
register: helmcharts_workloadcluster
loop: "{{ query('ansible.builtin.dict', downstream.helm_charts) }}"
loop_control:
label: "{{ item.key }}"
retries: 5
delay: 5
until: helmcharts_workloadcluster is not failed

View File

@@ -21,7 +21,7 @@
- name: Download K3s install script
ansible.builtin.get_url:
url: https://get.k3s.io
url: https://raw.githubusercontent.com/k3s-io/k3s/{{ platform.k3s.version | urlencode }}/install.sh
dest: /opt/metacluster/k3s/install.sh
owner: root
group: root

View File

@@ -12,10 +12,12 @@
- /opt/metacluster/cluster-api/infrastructure-vsphere/{{ components.clusterapi.management.version.infrastructure_vsphere }}
- /opt/metacluster/cluster-api/ipam-in-cluster/{{ components.clusterapi.management.version.ipam_incluster }}
- /opt/metacluster/container-images
- /opt/metacluster/git-repositories/gitops
- /opt/metacluster/git-repositories
- /opt/metacluster/helm-charts
- /opt/metacluster/k3s
- /opt/metacluster/kube-vip
- /opt/metacluster/pinniped
- /opt/workloadcluster/helm-charts
- /opt/workloadcluster/node-templates
- /var/lib/rancher/k3s/agent/images
- /var/lib/rancher/k3s/server/manifests
@@ -23,8 +25,7 @@
- import_tasks: dependencies.yml
- import_tasks: k3s.yml
- import_tasks: helm.yml
- import_tasks: git.yml
# - import_tasks: git.yml
- import_tasks: manifests.yml
- import_tasks: kubeadm.yml
- import_tasks: containerimages.yml
- import_tasks: nodetemplates.yml

View File

@@ -1,26 +1,55 @@
- block:
- name: Aggregate chart_values into dict
- name: Aggregate meta-cluster chart_values into dict
ansible.builtin.set_fact:
chart_values: "{{ chart_values | default({}) | combine({ (item.key | regex_replace('[^A-Za-z0-9]', '')): { 'chart_values': (item.value.helm.chart_values | from_yaml) } }) }}"
metacluster_chartvalues: "{{ metacluster_chartvalues | default({}) | combine({ item.key: { 'chart_values': (item.value.helm.chart_values | from_yaml) } }) }}"
when: item.value.helm.chart_values is defined
loop: "{{ lookup('ansible.builtin.dict', components) }}"
loop: "{{ query('ansible.builtin.dict', components) }}"
loop_control:
label: "{{ item.key }}"
- name: Write dict to vars_file
- name: Combine and write dict to vars_file
ansible.builtin.copy:
dest: /opt/firstboot/ansible/vars/metacluster.yml
content: >-
{{
{ 'components': (
chart_values |
combine({ 'clusterapi': components.clusterapi }) |
combine({ 'kubevip' : components.kubevip }) )
metacluster_chartvalues |
combine({ 'clusterapi' : components['clusterapi'] }) |
combine({ 'kubevip' : components['kubevip'] }) |
combine({ 'localuserauthenticator': components['pinniped']['local-user-authenticator'] })),
'appliance': {
'version': (applianceversion)
}
} | to_nice_yaml(indent=2, width=4096)
}}
- name: Download ClusterAPI manifests
- name: Aggregate workload-cluster chart_values into dict
ansible.builtin.set_fact:
workloadcluster_chartvalues: |
{{
workloadcluster_chartvalues | default({}) | combine({
item.key: {
'chart_values': (item.value.chart_values | default('') | from_yaml),
'extra_manifests': (item.value.extra_manifests | default([])),
'namespace': (item.value.namespace)
}
})
}}
loop: "{{ query('ansible.builtin.dict', downstream.helm_charts) }}"
loop_control:
label: "{{ item.key }}"
- name: Write dict to vars_file
ansible.builtin.copy:
dest: /opt/firstboot/ansible/vars/workloadcluster.yml
content: >-
{{
{ 'downstream_components': ( workloadcluster_chartvalues )
} | to_nice_yaml(indent=2, width=4096)
}}
- name: Download Cluster-API manifests
ansible.builtin.get_url:
url: "{{ item.url }}"
dest: /opt/metacluster/cluster-api/{{ item.dest }}
@@ -65,6 +94,12 @@
delay: 5
until: clusterapi_manifests is not failed
- name: Update cluster-template with image tags
ansible.builtin.replace:
dest: /opt/metacluster/cluster-api/infrastructure-vsphere/{{ components.clusterapi.management.version.infrastructure_vsphere }}/cluster-template.yaml
regexp: ':\${CPI_IMAGE_K8S_VERSION}'
replace: ":{{ components.clusterapi.management.version.cpi_vsphere }}"
- name: Download kube-vip RBAC manifest
ansible.builtin.get_url:
url: https://kube-vip.io/manifests/rbac.yaml
@@ -74,6 +109,22 @@
delay: 5
until: kubevip_manifest is not failed
- name: Download pinniped local-user-authenticator manifest
ansible.builtin.get_url:
url: https://get.pinniped.dev/{{ components.pinniped['local-user-authenticator'].version }}/install-local-user-authenticator.yaml
dest: /opt/metacluster/pinniped/local-user-authenticator.yaml
register: pinniped_manifest
retries: 5
delay: 5
until: pinniped_manifest is not failed
- name: Trim image hash from manifest
ansible.builtin.replace:
path: /opt/metacluster/pinniped/local-user-authenticator.yaml
regexp: '([ ]*image: .*)@.*'
replace: '\1'
no_log: true
# - name: Inject manifests
# ansible.builtin.template:
# src: "{{ item.type }}.j2"
@@ -81,6 +132,6 @@
# owner: root
# group: root
# mode: 0600
# loop: "{{ lookup('ansible.builtin.dict', components) | map(attribute='value.manifests') | list | select('defined') | flatten }}"
# loop: "{{ query('ansible.builtin.dict', components) | map(attribute='value.manifests') | list | select('defined') | flatten }}"
# loop_control:
# label: "{{ item.type + '/' + item.name }}"
# label: "{{ item.type ~ '/' ~ item.name }}"

View File

@@ -1,4 +0,0 @@
- name: Download node-template image
ansible.builtin.uri:
url: "{{ components.clusterapi.workload.node_template.url }}"
dest: /opt/workloadcluster/node-templates/{{ components.clusterapi.workload.node_template.url | basename}}

View File

@@ -2,9 +2,13 @@
- hosts: 127.0.0.1
connection: local
gather_facts: true
vars:
# Needed by some templating in various tasks
_newline: "\n"
vars_files:
- defaults.yml
- metacluster.yml
- workloadcluster.yml
# become: true
roles:
- vapp

View File

@@ -1,14 +0,0 @@
import netaddr
def netaddr_iter_iprange(ip_start, ip_end):
return [str(ip) for ip in netaddr.iter_iprange(ip_start, ip_end)]
class FilterModule(object):
''' Ansible filter. Interface to netaddr methods.
https://pypi.org/project/netaddr/
'''
def filters(self):
return {
'netaddr_iter_iprange': netaddr_iter_iprange
}

View File

@@ -0,0 +1,176 @@
- block:
- name: Install dex
kubernetes.core.helm:
name: dex
chart_ref: /opt/metacluster/helm-charts/dex
release_namespace: dex
create_namespace: true
wait: false
kubeconfig: "{{ kubeconfig.path }}"
values: "{{ components['dex'].chart_values }}"
- block:
- name: Install pinniped local-user-authenticator
kubernetes.core.k8s:
src: /opt/metacluster/pinniped/local-user-authenticator.yaml
state: present
kubeconfig: "{{ kubeconfig.path }}"
- name: Create local-user-authenticator accounts
kubernetes.core.k8s:
template: secret.j2
state: present
kubeconfig: "{{ kubeconfig.path }}"
vars:
_template:
name: "{{ item.username }}"
namespace: local-user-authenticator
type: ''
data:
- key: groups
value: "{{ 'group1,group2' | b64encode }}"
- key: passwordHash
value: "{{ item.password | b64encode }}"
loop: "{{ components['localuserauthenticator'].users }}"
- block:
- name: Install pinniped chart
kubernetes.core.helm:
name: pinniped
chart_ref: /opt/metacluster/helm-charts/pinniped
release_namespace: pinniped-supervisor
create_namespace: true
wait: false
kubeconfig: "{{ kubeconfig.path }}"
values: "{{ components['pinniped'].chart_values }}"
- name: Add ingress for supervisor
kubernetes.core.k8s:
template: "{{ item.kind }}.j2"
state: present
kubeconfig: "{{ kubeconfig.path }}"
vars:
_template:
name: "{{ item.name }}"
namespace: "{{ item.namespace }}"
spec: "{{ item.spec }}"
loop:
- kind: ingressroute
name: pinniped-supervisor
namespace: pinniped-supervisor
spec: |2
entryPoints:
- web
- websecure
routes:
- kind: Rule
match: Host(`auth.{{ vapp['metacluster.fqdn'] }}`)
services:
- kind: Service
name: pinniped-supervisor
namespace: pinniped-supervisor
port: 443
scheme: https
serversTransport: pinniped-supervisor
- kind: serverstransport
name: pinniped-supervisor
namespace: pinniped-supervisor
spec: |2
insecureSkipVerify: true
serverName: auth.{{ vapp['metacluster.fqdn'] }}
loop_control:
label: "{{ item.kind ~ '/' ~ item.name ~ ' (' ~ item.namespace ~ ')' }}"
- name: Ensure pinniped API availability
ansible.builtin.uri:
url: https://auth.{{ vapp['metacluster.fqdn'] }}/healthz
method: GET
register: api_readycheck
until:
- api_readycheck.status == 200
- api_readycheck.msg is search("OK")
retries: "{{ playbook.retries }}"
delay: "{{ ((storage_benchmark | float) * playbook.delay.short) | int }}"
# TODO: Migrate to step-ca
- name: Initialize tempfile
ansible.builtin.tempfile:
state: directory
register: certificate
- name: Create private key (RSA, 4096 bits)
community.crypto.openssl_privatekey:
path: "{{ certificate.path }}/certificate.key"
- name: Create self-signed certificate
community.crypto.x509_certificate:
path: "{{ certificate.path }}/certificate.crt"
privatekey_path: "{{ certificate.path }}/certificate.key"
provider: selfsigned
- name: Store self-signed certificate for use by pinniped supervisor
kubernetes.core.k8s:
template: secret.j2
state: present
kubeconfig: "{{ kubeconfig.path }}"
vars:
_template:
name: pinniped-supervisor-tls
namespace: pinniped-supervisor
type: kubernetes.io/tls
data:
- key: tls.crt
value: "{{ lookup('ansible.builtin.file', certificate.path ~ '/certificate.crt') | b64encode }}"
- key: tls.key
value: "{{ lookup('ansible.builtin.file', certificate.path ~ '/certificate.key') | b64encode }}"
# TODO: Migrate to step-ca
- name: Create pinniped resources
kubernetes.core.k8s:
template: "{{ item.kind }}.j2"
state: present
kubeconfig: "{{ kubeconfig.path }}"
vars:
_template:
name: "{{ item.name }}"
namespace: "{{ item.namespace }}"
type: "{{ item.type | default('') }}"
data: "{{ item.data | default(omit) }}"
spec: "{{ item.spec | default(omit) }}"
loop:
- kind: oidcidentityprovider
name: dex-staticpasswords
namespace: pinniped-supervisor
spec: |2
issuer: https://idps.{{ vapp['metacluster.fqdn'] }}
tls:
certificateAuthorityData: "{{ (stepca_cm_certs.resources[0].data['intermediate_ca.crt'] ~ _newline ~ stepca_cm_certs.resources[0].data['root_ca.crt']) | b64encode }}"
authorizationConfig:
additionalScopes: [offline_access, groups, email]
allowPasswordGrant: false
claims:
username: email
groups: groups
client:
secretName: dex-clientcredentials
- kind: secret
name: dex-clientcredentials
namespace: pinniped-supervisor
type: secrets.pinniped.dev/oidc-client
data:
- key: clientID
value: "{{ 'pinniped-supervisor' | b64encode }}"
- key: clientSecret
value: "{{ lookup('ansible.builtin.password', '/dev/null length=64 chars=ascii_lowercase,digits seed=' ~ vapp['metacluster.fqdn']) | b64encode }}"
- kind: federationdomain
name: metacluster-sso
namespace: pinniped-supervisor
spec: |2
issuer: https://auth.{{ vapp['metacluster.fqdn'] }}/sso
tls:
secretName: pinniped-supervisor-tls
loop_control:
label: "{{ item.kind ~ '/' ~ item.name }}"

View File

@@ -1,15 +1,52 @@
- block:
- name: Import generated values file into dictionary and combine with custom values
ansible.builtin.set_fact:
values_initial: |
{{
lookup('ansible.builtin.file', stepconfig.path) | from_yaml |
combine( components['step-certificates'].chart_values | from_yaml, recursive=True, list_merge='append')
}}
- name: Duplicate default provisioner with modified claims
ansible.builtin.set_fact:
values_new: |
{{
values_initial |
combine({'inject':{'config':{'files':{'ca.json':{'authority': {'provisioners': [
values_initial.inject.config.files['ca.json'].authority.provisioners[0] | combine({'name':'long-lived', 'claims':{'maxTLSCertDuration':'87660h'}})
]}}}}}}, list_merge='append_rp', recursive=true)
}}
# We're facing several bugs or niche cases that, despite being behaviour by design, result in incorrect output:
# - Ansible's `to_yaml` filter sees `\n` escape sequences in PEM certificate strings and correctly converts them to actual newlines - without any way to prevent this
# So we cannot rely on Ansible to (re)create the helm chart values file
# - Python's yaml interpreter sees strings with a value of `y` as short for `yes` or `true`, even when that string is a key name.
# So we cannot use a straightforward yaml document as input for the Ansible helm module (which is written in Python)
#
# Let's explain the following workaround steps (see the standalone sketch after the next task):
# - First we convert the dictionary to a json-object (through Ansible), so that yq can read it
# - Second we convert the json-object in its entirety to yaml (through yq), so that yq can actually manipulate it.
# - Finally, we take one specific subkey's contents (list of dictionaries) and iterate over each with the following steps (with `map`):
# - Convert the dictionary to json with `tojson`
# - Remove newlines (and spaces) with `sub`
# - Remove outer quotes (') with `sed`
- name: Save updated values file
ansible.builtin.shell:
cmd: |
echo '{{ values_new | to_nice_json }}' | yq -p json -o yaml | yq e '.inject.config.files["ca.json"].authority.provisioners |= map(tojson | sub("[\n ]";""))' | sed -e "s/- '/- /;s/'$//" > {{ stepconfig.path }}
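Outside of the playbook, the same pipeline can be sanity-checked against a minimal stand-in for the generated values file; the provisioner entries below are purely illustrative and do not come from an actual `step ca init` run:
echo '{"inject":{"config":{"files":{"ca.json":{"authority":{"provisioners":[{"type":"JWK","name":"admin"},{"type":"JWK","name":"long-lived","claims":{"maxTLSCertDuration":"87660h"}}]}}}}}}' \
  | yq -p json -o yaml \
  | yq e '.inject.config.files["ca.json"].authority.provisioners |= map(tojson | sub("[\n ]";""))' \
  | sed -e "s/- '/- /;s/'$//"
# Expected result: the surrounding structure stays as yaml, while each provisioner is collapsed into a single-line JSON string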
- name: Install step-ca chart
kubernetes.core.helm:
name: step-certificates
chart_ref: /opt/metacluster/helm-charts/step-certificates
release_namespace: step-ca
create_namespace: yes
# Unable to use REST api based readycheck due to missing ingress
wait: yes
create_namespace: true
# Unable to use REST api based readycheck due to lack of ingress
wait: true
kubeconfig: "{{ kubeconfig.path }}"
values: "{{ components.stepcertificates.chart_values }}"
values_files:
- "{{ stepconfig.path }}"
- name: Retrieve configmap w/ root certificate
kubernetes.core.k8s_info:
@@ -27,6 +64,7 @@
kubeconfig: "{{ kubeconfig.path }}"
loop:
- argo-cd
- gitea
# - kube-system
- name: Store root certificate in namespaced configmaps/secrets
@@ -40,6 +78,7 @@
namespace: "{{ item.namespace }}"
annotations: "{{ item.annotations | default('{}') | indent(width=4, first=True) }}"
labels: "{{ item.labels | default('{}') | indent(width=4, first=True) }}"
type: "{{ item.type | default('') }}"
data: "{{ item.data }}"
loop:
- name: argocd-tls-certs-cm
@@ -55,6 +94,12 @@
data:
- key: git.{{ vapp['metacluster.fqdn'] }}
value: "{{ stepca_cm_certs.resources[0].data['root_ca.crt'] }}"
- name: step-certificates-certs
namespace: gitea
kind: secret
data:
- key: ca_chain.crt
value: "{{ (stepca_cm_certs.resources[0].data['intermediate_ca.crt'] ~ _newline ~ stepca_cm_certs.resources[0].data['root_ca.crt']) | b64encode }}"
- name: step-certificates-certs
namespace: kube-system
kind: secret
@@ -62,7 +107,7 @@
- key: root_ca.crt
value: "{{ stepca_cm_certs.resources[0].data['root_ca.crt'] | b64encode }}"
loop_control:
label: "{{ item.kind + '/' + item.name + ' (' + item.namespace + ')' }}"
label: "{{ item.kind ~ '/' ~ item.name ~ ' (' ~ item.namespace ~ ')' }}"
- name: Configure step-ca passthrough ingress
ansible.builtin.template:
@@ -75,7 +120,7 @@
_template:
name: step-ca
namespace: step-ca
config: |2
spec: |2
entryPoints:
- websecure
routes:
@@ -99,30 +144,23 @@
env:
- name: LEGO_CA_CERTIFICATES
value: /step-ca/root_ca.crt
marker: ' # {mark} ANSIBLE MANAGED BLOCK'
marker: ' # {mark} ANSIBLE MANAGED BLOCK [rootca]'
notify:
- Apply manifests
- name: Trigger handlers
ansible.builtin.meta: flush_handlers
- name: Retrieve step-ca configuration
kubernetes.core.k8s_info:
kind: ConfigMap
name: step-certificates-config
namespace: step-ca
kubeconfig: "{{ kubeconfig.path }}"
register: stepca_cm_config
- name: Install root CA in system truststore
ansible.builtin.shell:
cmd: >-
step ca bootstrap \
--ca-url=https://ca.{{ vapp['metacluster.fqdn'] }} \
--fingerprint={{ stepca_cm_config.resources[0].data['defaults.json'] | from_json | json_query('fingerprint') }} \
--install \
--force
update-ca-certificates
- name: Ensure step-ca API availability
ansible.builtin.uri:
url: https://ca.{{ vapp['metacluster.fqdn'] }}/health
method: GET
register: api_readycheck
until:
- api_readycheck.json.status is defined
- api_readycheck.json.status == 'ok'
retries: "{{ playbook.retries }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.long | int) }}"
module_defaults:
ansible.builtin.uri:

View File

@@ -5,10 +5,10 @@
name: gitea
chart_ref: /opt/metacluster/helm-charts/gitea
release_namespace: gitea
create_namespace: yes
wait: no
create_namespace: true
wait: false
kubeconfig: "{{ kubeconfig.path }}"
values: "{{ components.gitea.chart_values }}"
values: "{{ components['gitea'].chart_values }}"
- name: Ensure gitea API availability
ansible.builtin.uri:
@@ -19,7 +19,7 @@
- api_readycheck.json.status is defined
- api_readycheck.json.status == 'pass'
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.long }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.long | int) }}"
- name: Configure additional SSH ingress
ansible.builtin.template:
@@ -32,7 +32,7 @@
_template:
name: gitea-ssh
namespace: gitea
config: |2
spec: |2
entryPoints:
- ssh
routes:
@@ -55,6 +55,7 @@
force_basic_auth: yes
body:
name: token_init_{{ lookup('password', '/dev/null length=5 chars=ascii_letters,digits') }}
scopes: ["write:user","write:organization"]
register: gitea_api_token
- name: Retrieve existing gitea configuration
@@ -107,6 +108,12 @@
Authorization: token {{ gitea_api_token.json.sha1 }}
body: "{{ item.body }}"
loop:
- organization: mc
body:
name: GitOps.ClusterAPI
auto_init: true
default_branch: main
description: ClusterAPI manifests
- organization: mc
body:
name: GitOps.Config
@@ -115,20 +122,26 @@
description: GitOps manifests
- organization: wl
body:
name: Template.GitOps.Config
# auto_init: true
# default_branch: main
name: GitOps.Config
auto_init: true
default_branch: main
description: GitOps manifests
- organization: wl
body:
name: ClusterAccess.Store
auto_init: true
default_branch: main
description: Kubeconfig files
loop_control:
label: "{{ item.organization + '/' + item.body.name }}"
label: "{{ item.organization ~ '/' ~ item.body.name }}"
- name: Rebase/Push source gitops repository
ansible.builtin.shell:
cmd: |
git config --local http.sslVerify false
git remote set-url origin https://administrator:{{ vapp['metacluster.password'] | urlencode }}@git.{{ vapp['metacluster.fqdn'] }}/mc/GitOps.Config.git
git push
chdir: /opt/metacluster/git-repositories/gitops
# - name: Rebase/Push source gitops repository
# ansible.builtin.shell:
# cmd: |
# git config --local http.sslVerify false
# git remote set-url origin https://administrator:{{ vapp['metacluster.password'] | urlencode }}@git.{{ vapp['metacluster.fqdn'] }}/mc/GitOps.Config.git
# git push
# chdir: /opt/metacluster/git-repositories/gitops
when: (gitea_existing_config.json is undefined) or (gitea_existing_config.json.data | length == 0)

View File

@@ -5,10 +5,10 @@
name: argo-cd
chart_ref: /opt/metacluster/helm-charts/argo-cd
release_namespace: argo-cd
create_namespace: yes
wait: no
create_namespace: true
wait: false
kubeconfig: "{{ kubeconfig.path }}"
values: "{{ components.argocd.chart_values }}"
values: "{{ components['argo-cd'].chart_values }}"
- name: Ensure argo-cd API availability
ansible.builtin.uri:
@@ -18,7 +18,7 @@
until:
- api_readycheck.json.Version is defined
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.long }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.long | int) }}"
- name: Generate argo-cd API token
ansible.builtin.uri:
@@ -39,24 +39,29 @@
mode: 0600
vars:
_template:
name: argocd-gitrepo-metacluster
name: gitrepo-mc-gitopsconfig
namespace: argo-cd
uid: "{{ lookup('ansible.builtin.password', '/dev/null length=5 chars=ascii_lowercase,digits seed=inventory_hostname') }}"
privatekey: "{{ lookup('ansible.builtin.file', '~/.ssh/git_rsa_id') | indent(4, true) }}"
url: https://git.{{ vapp['metacluster.fqdn'] }}/mc/GitOps.Config.git
notify:
- Apply manifests
- name: Create applicationset
ansible.builtin.template:
src: applicationset.j2
dest: /var/lib/rancher/k3s/server/manifests/{{ _template.name }}-manifest.yaml
dest: /var/lib/rancher/k3s/server/manifests/{{ _template.application.name }}-manifest.yaml
owner: root
group: root
mode: 0600
vars:
_template:
name: argocd-applicationset-metacluster
namespace: argo-cd
application:
name: applicationset-metacluster
namespace: argo-cd
cluster:
url: https://kubernetes.default.svc
repository:
url: https://git.{{ vapp['metacluster.fqdn'] }}/mc/GitOps.Config.git
revision: main
notify:
- Apply manifests

View File

@@ -1,3 +1,25 @@
- name: Reconfigure traefik container for persistence
ansible.builtin.blockinfile:
path: /var/lib/rancher/k3s/server/manifests/traefik-config.yaml
block: |2
deployment:
initContainers:
- name: volume-permissions
image: busybox:1
command: ["sh", "-c", "touch /data/acme.json; chown 65532 /data/acme.json; chmod -v 600 /data/acme.json"]
securityContext:
runAsNonRoot: false
runAsGroup: 0
runAsUser: 0
volumeMounts:
- name: data
mountPath: /data
persistence:
enabled: true
marker: ' # {mark} ANSIBLE MANAGED BLOCK [persistence]'
notify:
- Apply manifests
- name: Configure traefik dashboard ingress
ansible.builtin.template:
src: ingressroute.j2
@@ -9,7 +31,7 @@
_template:
name: traefik-dashboard
namespace: kube-system
config: |2
spec: |2
entryPoints:
- web
- websecure

View File

@@ -1,7 +1,7 @@
- name: Configure fallback name resolution
ansible.builtin.lineinfile:
path: /etc/hosts
line: "{{ vapp['guestinfo.ipaddress'] }} {{ item + '.' + vapp['metacluster.fqdn'] }}"
line: "{{ vapp['guestinfo.ipaddress'] }} {{ item ~ '.' ~ vapp['metacluster.fqdn'] }}"
state: present
loop:
# TODO: Make this list dynamic
@@ -11,3 +11,90 @@
- ingress
- registry
- storage
- name: Create step-ca config dictionary
ansible.builtin.set_fact:
stepconfig: "{{ { 'path': ansible_env.HOME ~ '/.step/config/values.yaml' } }}"
- name: Create step-ca target folder
ansible.builtin.file:
path: "{{ stepconfig.path | dirname }}"
state: directory
- name: Initialize tempfile
ansible.builtin.tempfile:
state: file
register: stepca_password
- name: Store password in tempfile
ansible.builtin.copy:
dest: "{{ stepca_password.path }}"
content: "{{ vapp['metacluster.password'] }}"
no_log: true
- name: Generate step-ca helm chart values (including root CA certificate)
ansible.builtin.shell:
cmd: >-
step ca init \
--helm \
--deployment-type=standalone \
--name=ca.{{ vapp['metacluster.fqdn'] }} \
--dns=ca.{{ vapp['metacluster.fqdn'] }} \
--dns=step-certificates.step-ca.svc.cluster.local \
--dns=127.0.0.1 \
--address=:9000 \
--provisioner=admin \
--acme \
--password-file={{ stepca_password.path }} | tee {{ stepconfig.path }}
creates: "{{ stepconfig.path }}"
- name: Cleanup tempfile
ansible.builtin.file:
path: "{{ stepca_password.path }}"
state: absent
when: stepca_password.path is defined
- name: Store root CA certificate
ansible.builtin.copy:
dest: /usr/local/share/ca-certificates/root_ca.crt
content: "{{ (lookup('ansible.builtin.file', stepconfig.path) | from_yaml).inject.certificates.root_ca }}"
- name: Update certificate truststore
ansible.builtin.command:
cmd: update-ca-certificates
- name: Extract container images (for idempotency purposes)
ansible.builtin.unarchive:
src: /opt/metacluster/container-images/image-tarballs.tgz
dest: /opt/metacluster/container-images
remote_src: no
when:
- lookup('ansible.builtin.fileglob', '/opt/metacluster/container-images/*.tgz') is match('.*image-tarballs.tgz')
- name: Get all stored fully qualified container image names
ansible.builtin.shell:
cmd: >-
skopeo list-tags \
--insecure-policy \
docker-archive:./{{ item | basename }} | \
jq -r '.Tags[0]'
chdir: /opt/metacluster/container-images
register: registry_artifacts
loop: "{{ query('ansible.builtin.fileglob', '/opt/metacluster/container-images/*.tar') | sort }}"
loop_control:
label: "{{ item | basename }}"
- name: Get source registries of all artifacts
ansible.builtin.set_fact:
source_registries: "{{ (source_registries | default([]) + [(item | split('/'))[0]]) | unique | sort }}"
loop: "{{ registry_artifacts | json_query('results[*].stdout') | select | sort }}"
- name: Configure K3s node for private registry
ansible.builtin.template:
dest: /etc/rancher/k3s/registries.yaml
src: registries.j2
vars:
_template:
registries: "{{ source_registries }}"
hv:
fqdn: "{{ vapp['metacluster.fqdn'] }}"

View File

@@ -7,6 +7,7 @@
content: |
kubelet-arg:
- "config=/etc/rancher/k3s/kubelet.config"
- "image-gc-high-threshold=95"
- filename: /etc/rancher/k3s/kubelet.config
content: |
apiVersion: kubelet.config.k8s.io/v1beta1
@@ -30,17 +31,6 @@
INSTALL_K3S_EXEC: "server --cluster-init --token {{ vapp['metacluster.token'] | trim }} --tls-san {{ vapp['metacluster.vip'] }} --disable local-storage --config /etc/rancher/k3s/config.yaml"
when: ansible_facts.services['k3s.service'] is undefined
- name: Debug possible taints on k3s node
ansible.builtin.shell:
cmd: >-
while true;
do
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints --no-headers | awk '{print strftime("%H:%M:%S"),$0;fflush();}' >> /var/log/taintlog
sleep 1
done
async: 1800
poll: 0
- name: Ensure API availability
ansible.builtin.uri:
url: https://{{ vapp['guestinfo.ipaddress'] }}:6443/livez?verbose
@@ -50,21 +40,32 @@
register: api_readycheck
until: api_readycheck.json.apiVersion is defined
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.medium }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.medium | int) }}"
- name: Install kubectl tab-completion
- name: Install tab-completion
ansible.builtin.shell:
cmd: kubectl completion bash | tee /etc/bash_completion.d/kubectl
cmd: |-
{{ item }} completion bash > /etc/bash_completion.d/{{ item }}
creates: /etc/bash_completion.d/{{ item }}
loop:
- kubectl
- helm
- step
- name: Initialize tempfile
ansible.builtin.tempfile:
state: file
register: kubeconfig
- name: Create kubeconfig dictionary
ansible.builtin.set_fact:
kubeconfig: "{{ { 'path': ansible_env.HOME ~ '/.kube/config' } }}"
- name: Create kubeconfig target folder
ansible.builtin.file:
path: "{{ kubeconfig.path | dirname }}"
state: directory
- name: Retrieve kubeconfig
ansible.builtin.command:
cmd: kubectl config view --raw
register: kubectl_config
no_log: true
- name: Store kubeconfig in tempfile
ansible.builtin.copy:
@@ -72,3 +73,19 @@
content: "{{ kubectl_config.stdout }}"
mode: 0600
no_log: true
- name: Add label to node object
kubernetes.core.k8s:
name: "{{ ansible_facts.nodename | lower }}"
kind: Node
state: patched
definition:
metadata:
labels:
ova.airgappedk8s/moref_id: "{{ moref_id }}"
kubeconfig: "{{ kubeconfig.path }}"
register: k8snode_patch
until:
- k8snode_patch.result.metadata.labels['ova.airgappedk8s/moref_id'] is defined
retries: "{{ playbook.retries }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.medium | int) }}"

View File

@@ -1,10 +1,13 @@
- import_tasks: init.yml
- import_tasks: k3s.yml
- import_tasks: assets.yml
- import_tasks: kube-vip.yml
- import_tasks: workflow.yml
- import_tasks: virtualip.yml
- import_tasks: metadata.yml
- import_tasks: storage.yml
- import_tasks: ingress.yml
- import_tasks: certauthority.yml
- import_tasks: registry.yml
- import_tasks: git.yml
- import_tasks: gitops.yml
- import_tasks: authentication.yml

View File

@@ -0,0 +1,57 @@
- block:
- name: Aggregate manifest-component versions into dictionary
ansible.builtin.set_fact:
manifest_versions: "{{ manifest_versions | default([]) + [ item | combine( {'type': 'manifest', 'id': index } ) ] }}"
loop:
- name: cluster-api
versions:
management:
base: "{{ components.clusterapi.management.version.base }}"
cert_manager: "{{ components.clusterapi.management.version.cert_manager }}"
infrastructure_vsphere: "{{ components.clusterapi.management.version.infrastructure_vsphere }}"
ipam_incluster: "{{ components.clusterapi.management.version.ipam_incluster }}"
cpi_vsphere: "{{ components.clusterapi.management.version.cpi_vsphere }}"
workload:
calico: "{{ components.clusterapi.workload.version.calico }}"
k8s: "{{ components.clusterapi.workload.version.k8s }}"
- name: kube-vip
version: "{{ components.kubevip.version }}"
loop_control:
label: "{{ item.name }}"
index_var: index
- name: Install json-server chart
kubernetes.core.helm:
name: json-server
chart_ref: /opt/metacluster/helm-charts/json-server
release_namespace: json-server
create_namespace: true
wait: false
kubeconfig: "{{ kubeconfig.path }}"
values: |
{{
components['json-server'].chart_values |
combine(
{ 'jsonServer': { 'seedData': { 'configInline': (
{ 'appliance': { "version": appliance.version }, 'components': manifest_versions, 'healthz': { 'status': 'running' } }
) | to_json } } }
)
}}
- name: Ensure json-server API availability
ansible.builtin.uri:
url: https://version.{{ vapp['metacluster.fqdn'] }}/healthz
method: GET
# This mock REST API, ironically, does not support a json-encoded body argument
body_format: raw
register: api_readycheck
until:
- api_readycheck.json.status is defined
- api_readycheck.json.status == 'running'
retries: "{{ playbook.retries }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.long | int) }}"
module_defaults:
ansible.builtin.uri:
validate_certs: no
status_code: [200, 201]

View File

@@ -5,10 +5,10 @@
name: harbor
chart_ref: /opt/metacluster/helm-charts/harbor
release_namespace: harbor
create_namespace: yes
wait: no
create_namespace: true
wait: false
kubeconfig: "{{ kubeconfig.path }}"
values: "{{ components.harbor.chart_values }}"
values: "{{ components['harbor'].chart_values }}"
- name: Ensure harbor API availability
ansible.builtin.uri:
@@ -19,7 +19,7 @@
- api_readycheck.json.status is defined
- api_readycheck.json.status == 'healthy'
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.long }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.long | int) }}"
- name: Push images to registry
ansible.builtin.shell:
@@ -40,30 +40,9 @@
loop_control:
label: "{{ item | basename }}"
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.short }}"
delay: "{{ ((storage_benchmark | float) * playbook.delay.short) | int }}"
until: push_result is not failed
- name: Get all stored container images (=artifacts)
ansible.builtin.uri:
url: https://registry.{{ vapp['metacluster.fqdn'] }}/api/v2.0/search?q=library
method: GET
register: registry_artifacts
- name: Get source registries of all artifacts
ansible.builtin.set_fact:
source_registries: "{{ (source_registries | default([]) + [(item | split('/'))[1]]) | unique | sort }}"
loop: "{{ registry_artifacts.json.repository | json_query('[*].repository_name') }}"
- name: Configure K3s node for private registry
ansible.builtin.template:
dest: /etc/rancher/k3s/registries.yaml
src: registries.j2
vars:
_template:
data: "{{ source_registries }}"
hv:
fqdn: "{{ vapp['metacluster.fqdn'] }}"
module_defaults:
ansible.builtin.uri:
validate_certs: no

View File

@@ -4,10 +4,10 @@
name: longhorn
chart_ref: /opt/metacluster/helm-charts/longhorn
release_namespace: longhorn-system
create_namespace: yes
wait: no
create_namespace: true
wait: false
kubeconfig: "{{ kubeconfig.path }}"
values: "{{ components.longhorn.chart_values }}"
values: "{{ components['longhorn'].chart_values }}"
- name: Ensure longhorn API availability
ansible.builtin.uri:
@@ -17,7 +17,7 @@
until:
- api_readycheck is not failed
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.long }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.long | int) }}"
module_defaults:
ansible.builtin.uri:

View File

@@ -0,0 +1,54 @@
- block:
- name: Create target namespace(s)
kubernetes.core.k8s:
name: "{{ item }}"
kind: Namespace
state: present
kubeconfig: "{{ kubeconfig.path }}"
loop:
# - argo-workflows
- firstboot
- name: Create ClusterRoleBinding for default serviceaccount
kubernetes.core.k8s:
state: present
kubeconfig: "{{ kubeconfig.path }}"
definition: |
kind: ClusterRoleBinding
metadata:
name: argo-workflows-firstboot-clusteradmin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: default
namespace: firstboot
- name: Install argo-workflows chart
kubernetes.core.helm:
name: argo-workflows
chart_ref: /opt/metacluster/helm-charts/argo-workflows
release_namespace: argo-workflows
create_namespace: true
wait: false
kubeconfig: "{{ kubeconfig.path }}"
values: "{{ components['argo-workflows'].chart_values }}"
- name: Ensure argo workflows API availability
ansible.builtin.uri:
url: https://workflow.{{ vapp['metacluster.fqdn'] }}/api/v1/version
method: GET
register: api_readycheck
until:
- api_readycheck.json.version is defined
retries: "{{ playbook.retries }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.long | int) }}"
module_defaults:
ansible.builtin.uri:
validate_certs: no
status_code: [200, 201]
body_format: json

View File

@@ -0,0 +1,25 @@
- name: Check for expected vApp properties
ansible.builtin.assert:
that:
- vapp[item] is defined
- (vapp[item] | length) > 0
quiet: true
loop:
- deployment.type
- guestinfo.dnsserver
- guestinfo.gateway
- guestinfo.hostname
- guestinfo.ipaddress
- guestinfo.prefixlength
- guestinfo.rootsshkey
- hv.fqdn
- hv.password
- hv.username
- ippool.endip
- ippool.startip
- metacluster.fqdn
- metacluster.password
- metacluster.token
- metacluster.vip
- workloadcluster.name
- workloadcluster.vip

View File

@@ -0,0 +1,40 @@
- name: Initialize tempfolder
ansible.builtin.tempfile:
state: directory
register: pinniped_kubeconfig
- name: Pull existing repository
ansible.builtin.git:
repo: https://git.{{ vapp['metacluster.fqdn'] }}/wl/ClusterAccess.Store.git
dest: "{{ pinniped_kubeconfig.path }}"
version: main
- name: Generate kubeconfig
ansible.builtin.shell:
cmd: pinniped get kubeconfig --kubeconfig {{ capi_kubeconfig.path }}
register: pinniped_config
until:
- pinniped_config is not failed
retries: "{{ playbook.retries }}"
delay: "{{ ((storage_benchmark | float) * playbook.delay.short) | int }}"
- name: Store kubeconfig in tempfile
ansible.builtin.copy:
dest: "{{ pinniped_kubeconfig.path }}/kubeconfig"
content: "{{ pinniped_config.stdout }}"
mode: 0600
no_log: true
- name: Push git repository
lvrfrc87.git_acp.git_acp:
path: "{{ pinniped_kubeconfig.path }}"
branch: main
comment: "Upload kubeconfig files"
add:
- .
url: https://administrator:{{ vapp['metacluster.password'] | urlencode }}@git.{{ vapp['metacluster.fqdn'] }}/wl/ClusterAccess.Store.git
environment:
GIT_AUTHOR_NAME: administrator
GIT_AUTHOR_EMAIL: administrator@{{ vapp['metacluster.fqdn'] }}
GIT_COMMITTER_NAME: administrator
GIT_COMMITTER_EMAIL: administrator@{{ vapp['metacluster.fqdn'] }}

View File

@@ -1,4 +1,7 @@
- block:
# The tasks below circumvent usernames in `<domain>\<username>` format, which causes CAPV to
# incorrectly interpret the backslash (despite automatic escaping) as an escape sequence.
# `vcenter_session.user` will instead contain the username in `<username>@<domain>` format.
- name: Generate vCenter API token
ansible.builtin.uri:
@@ -13,7 +16,7 @@
url: https://{{ vapp['hv.fqdn'] }}/api/session
method: GET
headers:
vmware-api-session-id: "{{ vcenter_api_token.json }}"
vmware-api-session-id: "{{ vcenterapi_token.json }}"
register: vcenter_session
module_defaults:
@@ -44,45 +47,32 @@
resourcepool: "{{ vcenter_info.resourcepool }}"
folder: "{{ vcenter_info.folder }}"
cluster:
nodetemplate: "{{ (components.clusterapi.workload.node_template.url | basename | split('.'))[:-1] | join('.') }}"
nodetemplate: "{{ nodetemplate_inventorypath }}"
publickey: "{{ vapp['guestinfo.rootsshkey'] }}"
version: "{{ components.clusterapi.workload.version.k8s }}"
vip: "{{ vapp['workloadcluster.vip'] }}"
- name: Update image references to use local registry
ansible.builtin.replace:
dest: "{{ item.root + '/' + item.path }}"
regexp: '([ ]+image:[ "]+)(?!({{ _template.pattern }}|"{{ _template.pattern }}))'
replace: '\1{{ _template.pattern }}'
vars:
_template:
pattern: registry.{{ vapp['metacluster.fqdn'] }}/library/
loop: "{{ lookup('community.general.filetree', '/opt/metacluster/cluster-api') }}"
loop_control:
label: "{{ item.path }}"
when:
- item.path is search('.yaml')
- item.path is not search("clusterctl.yaml|metadata.yaml")
- name: Generate kustomization template
- name: Generate cluster-template kustomization manifest
ansible.builtin.template:
src: kustomization.cluster-template.j2
dest: /opt/metacluster/cluster-api/infrastructure-vsphere/{{ components.clusterapi.management.version.infrastructure_vsphere }}/kustomization.yaml
vars:
_template:
fqdn: "{{ vapp['metacluster.fqdn'] }}"
network:
fqdn: "{{ vapp['metacluster.fqdn'] }}"
dnsserver: "{{ vapp['guestinfo.dnsserver'] }}"
nodesize:
cpu: "{{ config.clusterapi.size_matrix[ vapp['workloadcluster.nodesize'] ].cpu }}"
memory: "{{ config.clusterapi.size_matrix[ vapp['workloadcluster.nodesize'] ].memory }}"
rootca: "{{ stepca_cm_certs.resources[0].data['root_ca.crt'] }}"
script:
# Base64 encoded; to avoid variable substitution when clusterctl parses the cluster-template.yml (see the decode sketch after this task)
encoded: IyEvYmluL2Jhc2gKdm10b29sc2QgLS1jbWQgJ2luZm8tZ2V0IGd1ZXN0aW5mby5vdmZFbnYnID4gL3RtcC9vdmZlbnYKCklQQWRkcmVzcz0kKHNlZCAtbiAncy8uKlByb3BlcnR5IG9lOmtleT0iZ3Vlc3RpbmZvLmludGVyZmFjZS4wLmlwLjAuYWRkcmVzcyIgb2U6dmFsdWU9IlwoW14iXSpcKS4qL1wxL3AnIC90bXAvb3ZmZW52KQpTdWJuZXRNYXNrPSQoc2VkIC1uICdzLy4qUHJvcGVydHkgb2U6a2V5PSJndWVzdGluZm8uaW50ZXJmYWNlLjAuaXAuMC5uZXRtYXNrIiBvZTp2YWx1ZT0iXChbXiJdKlwpLiovXDEvcCcgL3RtcC9vdmZlbnYpCkdhdGV3YXk9JChzZWQgLW4gJ3MvLipQcm9wZXJ0eSBvZTprZXk9Imd1ZXN0aW5mby5pbnRlcmZhY2UuMC5yb3V0ZS4wLmdhdGV3YXkiIG9lOnZhbHVlPSJcKFteIl0qXCkuKi9cMS9wJyAvdG1wL292ZmVudikKRE5TPSQoc2VkIC1uICdzLy4qUHJvcGVydHkgb2U6a2V5PSJndWVzdGluZm8uZG5zLnNlcnZlcnMiIG9lOnZhbHVlPSJcKFteIl0qXCkuKi9cMS9wJyAvdG1wL292ZmVudikKTUFDQWRkcmVzcz0kKHNlZCAtbiAncy8uKnZlOkFkYXB0ZXIgdmU6bWFjPSJcKFteIl0qXCkuKi9cMS9wJyAvdG1wL292ZmVudikKCm1hc2syY2lkcigpIHsKICBjPTAKICB4PTAkKCBwcmludGYgJyVvJyAkezEvLy4vIH0gKQoKICB3aGlsZSBbICR4IC1ndCAwIF07IGRvCiAgICBsZXQgYys9JCgoeCUyKSkgJ3g+Pj0xJwogIGRvbmUKCiAgZWNobyAkYwp9CgpQcmVmaXg9JChtYXNrMmNpZHIgJFN1Ym5ldE1hc2spCgpjYXQgPiAvZXRjL25ldHBsYW4vMDEtbmV0Y2ZnLnlhbWwgPDxFT0YKbmV0d29yazoKICB2ZXJzaW9uOiAyCiAgcmVuZGVyZXI6IG5ldHdvcmtkCiAgZXRoZXJuZXRzOgogICAgaWQwOgogICAgICBzZXQtbmFtZTogZXRoMAogICAgICBtYXRjaDoKICAgICAgICBtYWNhZGRyZXNzOiAkTUFDQWRkcmVzcwogICAgICBhZGRyZXNzZXM6CiAgICAgICAgLSAkSVBBZGRyZXNzLyRQcmVmaXgKICAgICAgZ2F0ZXdheTQ6ICRHYXRld2F5CiAgICAgIG5hbWVzZXJ2ZXJzOgogICAgICAgIGFkZHJlc3NlcyA6IFskRE5TXQpFT0YKcm0gL2V0Yy9uZXRwbGFuLzUwKi55YW1sIC1mCgpzdWRvIG5ldHBsYW4gYXBwbHk=
runcmds:
- update-ca-certificates
- bash /root/network.sh
registries: "{{ source_registries }}"
- name: Store custom cluster-template
ansible.builtin.copy:
dest: /opt/metacluster/cluster-api/custom-cluster-template.yaml
content: "{{ lookup('kubernetes.core.kustomize', dir='/opt/metacluster/cluster-api/infrastructure-vsphere/' + components.clusterapi.management.version.infrastructure_vsphere ) }}"
content: "{{ lookup('kubernetes.core.kustomize', dir='/opt/metacluster/cluster-api/infrastructure-vsphere/' ~ components.clusterapi.management.version.infrastructure_vsphere ) }}"
- name: Initialize Cluster API management cluster
ansible.builtin.shell:
@@ -95,7 +85,41 @@
--kubeconfig {{ kubeconfig.path }}
chdir: /opt/metacluster/cluster-api
- name: Ensure CAPI/CAPV controller availability
- name: Initialize tempfolder
ansible.builtin.tempfile:
state: directory
register: capi_clustermanifest
- name: Pull existing repository
ansible.builtin.git:
repo: https://git.{{ vapp['metacluster.fqdn'] }}/mc/GitOps.ClusterAPI.git
dest: "{{ capi_clustermanifest.path }}"
version: main
- name: Generate Cluster API provider manifests
ansible.builtin.shell:
cmd: >-
clusterctl generate provider \
-v5 \
--{{ item.type }} {{ item.name }}:{{ item.version }} \
--config ./clusterctl.yaml > {{ capi_clustermanifest.path }}/provider-{{ item.name }}.yaml
chdir: /opt/metacluster/cluster-api
loop:
- type: infrastructure
name: vsphere
version: "{{ components.clusterapi.management.version.infrastructure_vsphere }}"
- type: ipam
name: in-cluster
version: "{{ components.clusterapi.management.version.ipam_incluster }}"
- name: Split cluster API provider manifests into separate files
ansible.builtin.shell:
cmd: >-
awk 'BEGINFILE {print "---"}{print}' {{ capi_clustermanifest.path }}/provider-*.yaml |
kubectl slice \
-o {{ capi_clustermanifest.path }}/providers
- name: Ensure controller availability
kubernetes.core.k8s_info:
kind: Deployment
name: "{{ item.name }}"
@@ -103,6 +127,8 @@
wait: true
kubeconfig: "{{ kubeconfig.path }}"
loop:
- name: capi-ipam-in-cluster-controller-manager
namespace: capi-ipam-in-cluster-system
- name: capi-controller-manager
namespace: capi-system
- name: capv-controller-manager
@@ -115,7 +141,8 @@
clustersize: >-
{{ {
'controlplane': vapp['deployment.type'] | regex_findall('^cp(\d)+') | first,
'workers': vapp['deployment.type'] | regex_findall('w(\d)+$') | first
'worker': vapp['deployment.type'] | regex_findall('w(\d)+') | first,
'workerstorage': vapp['deployment.type'] | regex_findall('ws(\d)+$') | first
} }}
- name: Generate workload cluster manifest
@@ -124,25 +151,136 @@
clusterctl generate cluster \
{{ vapp['workloadcluster.name'] | lower }} \
--control-plane-machine-count {{ clustersize.controlplane }} \
--worker-machine-count {{ clustersize.workers }} \
--worker-machine-count {{ clustersize.worker }} \
--from ./custom-cluster-template.yaml \
--config ./clusterctl.yaml \
--kubeconfig {{ kubeconfig.path }}
chdir: /opt/metacluster/cluster-api
register: clusterctl_newcluster
# TODO: move to git repo
- name: Save workload cluster manifest
ansible.builtin.copy:
dest: /opt/metacluster/cluster-api/new-cluster.yaml
dest: "{{ capi_clustermanifest.path }}/new-cluster.yaml"
content: "{{ clusterctl_newcluster.stdout }}"
- name: Apply workload cluster manifest
kubernetes.core.k8s:
definition: >-
{{ clusterctl_newcluster.stdout }}
wait: yes
kubeconfig: "{{ kubeconfig.path }}"
# TODO: move to git repo
- name: Split workload cluster manifest into separate files
ansible.builtin.shell:
cmd: >-
kubectl slice \
-f {{ capi_clustermanifest.path }}/new-cluster.yaml \
-o {{ capi_clustermanifest.path }}/downstream-cluster
- name: Generate nodepool kustomization manifest
ansible.builtin.template:
src: kustomization.longhorn-storage.j2
dest: "{{ capi_clustermanifest.path }}/kustomization.yaml"
vars:
_template:
cluster:
name: "{{ vapp['workloadcluster.name'] }}"
nodepool:
size: "{{ clustersize.workerstorage }}"
additionaldisk: "{{ vapp['workloadcluster.additionaldisk'] }}"
- name: Store nodepool manifest
ansible.builtin.copy:
dest: "{{ capi_clustermanifest.path }}/nodepool-worker-storage.yaml"
content: "{{ lookup('kubernetes.core.kustomize', dir=capi_clustermanifest.path) }}"
- name: Split nodepool manifest into separate files
ansible.builtin.shell:
cmd: >-
kubectl slice \
-f {{ capi_clustermanifest.path }}/nodepool-worker-storage.yaml \
-o {{ capi_clustermanifest.path }}/downstream-cluster
- name: Create in-cluster IpPool
ansible.builtin.template:
src: ippool.j2
dest: "{{ capi_clustermanifest.path }}/downstream-cluster/inclusterippool-{{ _template.cluster.name }}.yml"
vars:
_template:
cluster:
name: "{{ vapp['workloadcluster.name'] | lower }}"
namespace: default
network:
startip: "{{ vapp['ippool.startip'] }}"
endip: "{{ vapp['ippool.endip'] }}"
prefix: "{{ vapp['guestinfo.prefixlength'] }}"
gateway: "{{ vapp['guestinfo.gateway'] }}"
- name: Push git repository
lvrfrc87.git_acp.git_acp:
path: "{{ capi_clustermanifest.path }}"
branch: main
comment: "Upload manifests"
add:
- ./downstream-cluster
- ./providers
clean: untracked
url: https://administrator:{{ vapp['metacluster.password'] | urlencode }}@git.{{ vapp['metacluster.fqdn'] }}/mc/GitOps.ClusterAPI.git
environment:
GIT_AUTHOR_NAME: administrator
GIT_AUTHOR_EMAIL: administrator@{{ vapp['metacluster.fqdn'] }}
GIT_COMMITTER_NAME: administrator
GIT_COMMITTER_EMAIL: administrator@{{ vapp['metacluster.fqdn'] }}
# - name: Cleanup tempfolder
# ansible.builtin.file:
# path: "{{ capi_clustermanifest.path }}"
# state: absent
# when: capi_clustermanifest.path is defined
- name: Configure Cluster API repository
ansible.builtin.template:
src: gitrepo.j2
dest: /var/lib/rancher/k3s/server/manifests/{{ _template.name }}-manifest.yaml
owner: root
group: root
mode: 0600
vars:
_template:
name: gitrepo-mc-gitopsclusterapi
namespace: argo-cd
url: https://git.{{ vapp['metacluster.fqdn'] }}/mc/GitOps.ClusterAPI.git
notify:
- Apply manifests
- name: WORKAROUND - Wait for ingress ACME requests to complete
ansible.builtin.shell:
cmd: >-
openssl s_client -connect registry.{{ vapp['metacluster.fqdn'] }}:443 -servername registry.{{ vapp['metacluster.fqdn'] }} 2>/dev/null </dev/null | \
openssl x509 -noout -subject | \
grep 'subject=CN = registry.{{ vapp['metacluster.fqdn'] }}'
register: certificate_subject
until: certificate_subject is not failed
retries: "{{ playbook.retries }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.medium | int) }}"
- name: Create application
ansible.builtin.template:
src: application.j2
dest: /var/lib/rancher/k3s/server/manifests/{{ _template.application.name }}-manifest.yaml
owner: root
group: root
mode: 0600
vars:
_template:
application:
name: application-clusterapi-workloadcluster
namespace: argo-cd
cluster:
name: https://kubernetes.default.svc
namespace: default
repository:
url: https://git.{{ vapp['metacluster.fqdn'] }}/mc/GitOps.ClusterAPI.git
path: downstream-cluster
revision: main
notify:
- Apply manifests
- name: Trigger handlers
ansible.builtin.meta: flush_handlers
- name: Wait for cluster to be available
ansible.builtin.shell:
@@ -153,7 +291,7 @@
register: cluster_readycheck
until: cluster_readycheck is succeeded
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.long }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.long | int) }}"
- name: Initialize tempfile
ansible.builtin.tempfile:
@@ -178,8 +316,13 @@
# TODO: move to git repo
- name: Apply cni plugin manifest
kubernetes.core.k8s:
src: /opt/metacluster/cluster-api/cni-calico/{{ components.clusterapi.workload.version.calico }}/calico.yaml
definition: |
{{
lookup('ansible.builtin.file', '/opt/metacluster/cluster-api/cni-calico/' ~ components.clusterapi.workload.version.calico ~ '/calico.yaml') |
regex_replace('# - name: CALICO_IPV4POOL_CIDR', '- name: CALICO_IPV4POOL_CIDR') |
regex_replace('# value: "192.168.0.0/16"', ' value: "172.30.0.0/16"')
}}
state: present
wait: yes
wait: true
kubeconfig: "{{ capi_kubeconfig.path }}"
# TODO: move to git repo

View File

@@ -1,44 +1,132 @@
- block:
- name: Aggregate helm charts from filesystem
ansible.builtin.find:
path: /opt/workloadcluster/helm-charts
file_type: directory
recurse: false
register: helm_charts
- name: Generate service account in workload cluster
kubernetes.core.k8s:
template: serviceaccount.j2
state: present
- name: Pull existing repository
ansible.builtin.git:
repo: https://git.{{ vapp['metacluster.fqdn'] }}/wl/GitOps.Config.git
dest: /opt/workloadcluster/git-repositories/gitops
version: main
- name: Retrieve service account bearer token
kubernetes.core.k8s_info:
kind: ServiceAccount
name: "{{ _template.account.name }}"
namespace: "{{ _template.account.namespace }}"
register: workloadcluster_serviceaccount
- name: Create folder structure within new git-repository
ansible.builtin.file:
path: "{{ item }}"
state: directory
loop:
- /opt/workloadcluster/git-repositories/gitops/charts
- /opt/workloadcluster/git-repositories/gitops/values
- name: Retrieve service account bearer token
kubernetes.core.k8s_info:
kind: Secret
name: "{{ workloadcluster_serviceaccount.resources | json_query('[].secrets[].name') | first }}"
namespace: "{{ _template.account.namespace }}"
register: workloadcluster_bearertoken
- name: Create hard-links to populate new git-repository
ansible.builtin.shell:
cmd: >-
cp -lr {{ item.path }}/ /opt/workloadcluster/git-repositories/gitops/charts
loop: "{{ helm_charts.files }}"
loop_control:
label: "{{ item.path | basename }}"
- name: Register workload cluster in argo-cd
kubernetes.core.k8s:
template: cluster.j2
state: present
kubeconfig: "{{ kubeconfig.path }}"
vars:
_template:
cluster:
name: "{{ vapp['workloadcluster.name'] | lower }}"
secret: argocd-cluster-{{ vapp['workloadcluster.name'] | lower }}
url: https://{{ vapp['workloadcluster.vip'] }}:6443
token: "{{ workloadcluster_bearertoken.resources | json_query('[].data.token') }}"
- name: Write custom manifests to respective chart templates store
ansible.builtin.template:
src: "{{ src }}"
dest: /opt/workloadcluster/git-repositories/gitops/charts/{{ manifest.value.namespace }}/{{ manifest.key }}/templates/{{ (src | split('.'))[0] ~ '-' ~ _template.name ~ '.yaml' }}
vars:
manifest: "{{ item.0 }}"
src: "{{ item.1.src }}"
_template: "{{ item.1._template }}"
loop: "{{ query('ansible.builtin.subelements', query('ansible.builtin.dict', downstream_components), 'value.extra_manifests') }}"
loop_control:
label: "{{ (src | split('.'))[0] ~ '-' ~ _template.name }}"
- name: Create subfolders
ansible.builtin.file:
path: /opt/workloadcluster/git-repositories/gitops/values/{{ item.key }}
state: directory
loop: "{{ query('ansible.builtin.dict', downstream_components) }}"
loop_control:
label: "{{ item.key }}"
- name: Write chart values to file
ansible.builtin.copy:
dest: /opt/workloadcluster/git-repositories/gitops/values/{{ item.key }}/values.yaml
content: "{{ item.value.chart_values | default('# Empty') | to_nice_yaml(indent=2, width=4096) }}"
loop: "{{ query('ansible.builtin.dict', downstream_components) }}"
loop_control:
label: "{{ item.key }}"
- name: Push git repository
lvrfrc87.git_acp.git_acp:
path: /opt/workloadcluster/git-repositories/gitops
branch: main
comment: "Upload charts"
add:
- .
url: https://administrator:{{ vapp['metacluster.password'] | urlencode }}@git.{{ vapp['metacluster.fqdn'] }}/wl/GitOps.Config.git
environment:
GIT_AUTHOR_NAME: administrator
GIT_AUTHOR_EMAIL: administrator@{{ vapp['metacluster.fqdn'] }}
GIT_COMMITTER_NAME: administrator
GIT_COMMITTER_EMAIL: administrator@{{ vapp['metacluster.fqdn'] }}
- name: Retrieve workload-cluster kubeconfig
kubernetes.core.k8s_info:
kind: Secret
name: "{{ vapp['workloadcluster.name'] }}-kubeconfig"
namespace: default
kubeconfig: "{{ kubeconfig.path }}"
register: secret_workloadcluster_kubeconfig
- name: Register workload-cluster in argo-cd
kubernetes.core.k8s:
template: cluster.j2
state: present
kubeconfig: "{{ kubeconfig.path }}"
vars:
_template:
account:
name: argocd-sa
namespace: default
clusterrolebinding:
name: argocd-crb
module_defaults:
group/k8s:
kubeconfig: "{{ capi_kubeconfig.path }}"
cluster:
name: "{{ vapp['workloadcluster.name'] | lower }}"
secret: argocd-cluster-{{ vapp['workloadcluster.name'] | lower }}
url: https://{{ vapp['workloadcluster.vip'] }}:6443
kubeconfig:
ca: "{{ (secret_workloadcluster_kubeconfig.resources[0].data.value | b64decode | from_yaml).clusters[0].cluster['certificate-authority-data'] }}"
certificate: "{{ (secret_workloadcluster_kubeconfig.resources[0].data.value | b64decode | from_yaml).users[0].user['client-certificate-data'] }}"
key: "{{ (secret_workloadcluster_kubeconfig.resources[0].data.value | b64decode | from_yaml).users[0].user['client-key-data'] }}"
- name: Configure workload-cluster GitOps repository
ansible.builtin.template:
src: gitrepo.j2
dest: /var/lib/rancher/k3s/server/manifests/{{ _template.name }}-manifest.yaml
owner: root
group: root
mode: 0600
vars:
_template:
name: gitrepo-wl-gitopsconfig
namespace: argo-cd
url: https://git.{{ vapp['metacluster.fqdn'] }}/wl/GitOps.Config.git
notify:
- Apply manifests
- name: Create applicationset
ansible.builtin.template:
src: applicationset.j2
dest: /var/lib/rancher/k3s/server/manifests/{{ _template.application.name }}-manifest.yaml
owner: root
group: root
mode: 0600
vars:
_template:
application:
name: applicationset-workloadcluster
namespace: argo-cd
cluster:
url: https://{{ vapp['workloadcluster.vip'] }}:6443
repository:
url: https://git.{{ vapp['metacluster.fqdn'] }}/wl/GitOps.Config.git
revision: main
notify:
- Apply manifests
- name: Trigger handlers
ansible.builtin.meta: flush_handlers
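# For illustration, a minimal sketch (hypothetical values, mirroring the `downstream.helm_charts`
# structure in the vars file) of the `downstream_components` dict the tasks above iterate over:
#
#   downstream_components:
#     pinniped:
#       namespace: pinniped-concierge
#       chart_values: |
#         supervisor:
#           enabled: false
#       extra_manifests:
#         - src: jwtauthenticator.j2
#           _template:
#             name: metacluster-sso
#             spec: '...'
#
# With this input, the subelements loop writes the rendered template to
# charts/pinniped-concierge/pinniped/templates/jwtauthenticator-metacluster-sso.yaml and the
# chart values end up in values/pinniped/values.yaml before the repository is pushed to Gitea.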

View File

@@ -1,5 +1,12 @@
- import_tasks: hypervisor.yml
- import_tasks: registry.yml
- import_tasks: nodetemplates.yml
- import_tasks: clusterapi.yml
- import_tasks: gitops.yml
- block:
- import_tasks: clusterapi.yml
- import_tasks: gitops.yml
- import_tasks: authentication.yml
when:
- vapp['deployment.type'] != 'core'

View File

@@ -1,85 +0,0 @@
- block:
- name: Check for existing templates on hypervisor
community.vmware.vmware_guest_info:
name: "{{ (item | basename | split('.'))[:-1] | join('.') }}"
register: existing_ova
loop: "{{ query('ansible.builtin.fileglob', '/opt/workloadcluster/node-templates/*.ova') | sort }}"
ignore_errors: yes
- name: Parse OVA files for network mappings
ansible.builtin.shell:
cmd: govc import.spec -json {{ item }}
environment:
GOVC_INSECURE: '1'
GOVC_URL: "{{ vapp['hv.fqdn'] }}"
GOVC_USERNAME: "{{ vapp['hv.username'] }}"
GOVC_PASSWORD: "{{ vapp['hv.password'] }}"
register: ova_spec
when: existing_ova.results[index] is failed
loop: "{{ query('ansible.builtin.fileglob', '/opt/workloadcluster/node-templates/*.ova') | sort }}"
loop_control:
index_var: index
- name: Deploy OVA templates on hypervisor
community.vmware.vmware_deploy_ovf:
cluster: "{{ vcenter_info.cluster }}"
datastore: "{{ vcenter_info.datastore }}"
folder: "{{ vcenter_info.folder }}"
name: "{{ (item | basename | split('.'))[:-1] | join('.') }}"
networks: "{u'{{ ova_spec.results[index].stdout | from_json | json_query('NetworkMapping[0].Name') }}':u'{{ vcenter_info.network }}'}"
allow_duplicates: no
power_on: false
ovf: "{{ item }}"
register: ova_deploy
when: existing_ova.results[index] is failed
loop: "{{ query('ansible.builtin.fileglob', '/opt/workloadcluster/node-templates/*.ova') | sort }}"
loop_control:
index_var: index
- name: Add vApp properties on deployed VM's
ansible.builtin.shell:
cmd: >-
npp-prepper \
--server "{{ vapp['hv.fqdn'] }}" \
--username "{{ vapp['hv.username'] }}" \
--password "{{ vapp['hv.password'] }}" \
vm \
--datacenter "{{ vcenter_info.datacenter }}" \
--portgroup "{{ vcenter_info.network }}" \
--name "{{ item.instance.hw_name }}"
when: existing_ova.results[index] is failed
loop: "{{ ova_deploy.results }}"
loop_control:
index_var: index
label: "{{ item.item }}"
- name: Create snapshot on deployed VM's
community.vmware.vmware_guest_snapshot:
folder: "{{ vcenter_info.folder }}"
name: "{{ item.instance.hw_name }}"
state: present
snapshot_name: "{{ ansible_date_time.iso8601_basic_short }}-base"
when: ova_deploy.results[index] is not skipped
loop: "{{ ova_deploy.results }}"
loop_control:
index_var: index
label: "{{ item.item }}"
- name: Mark deployed VM's as templates
community.vmware.vmware_guest:
name: "{{ item.instance.hw_name }}"
is_template: yes
when: ova_deploy.results[index] is not skipped
loop: "{{ ova_deploy.results }}"
loop_control:
index_var: index
label: "{{ item.item }}"
module_defaults:
group/vmware:
hostname: "{{ vapp['hv.fqdn'] }}"
validate_certs: no
username: "{{ vapp['hv.username'] }}"
password: "{{ vapp['hv.password'] }}"
datacenter: "{{ vcenter_info.datacenter }}"

View File

@@ -5,7 +5,7 @@
url: https://registry.{{ vapp['metacluster.fqdn'] }}/api/v2.0/projects
method: POST
headers:
Authorization: "Basic {{ ('admin:' + vapp['metacluster.password']) | b64encode }}"
Authorization: "Basic {{ ('admin:' ~ vapp['metacluster.password']) | b64encode }}"
body:
project_name: kubeadm
public: true
@@ -28,7 +28,7 @@
url: https://registry.{{ vapp['metacluster.fqdn'] }}/api/v2.0/projects/kubeadm/repositories/{{ ( item | regex_findall('([^:/]+)') )[-2] }}/artifacts?from=library/{{ item | replace('/', '%2F') | replace(':', '%3A') }}
method: POST
headers:
Authorization: "Basic {{ ('admin:' + vapp['metacluster.password']) | b64encode }}"
Authorization: "Basic {{ ('admin:' ~ vapp['metacluster.password']) | b64encode }}"
body:
from: "{{ item }}"
loop: "{{ kubeadm_images }}"

View File

@@ -0,0 +1,16 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: {{ _template.application.name }}
namespace: {{ _template.application.namespace }}
spec:
destination:
namespace: {{ _template.cluster.namespace }}
server: {{ _template.cluster.name }}
project: default
source:
repoURL: {{ _template.repository.url }}
path: {{ _template.repository.path }}
targetRevision: {{ _template.repository.revision }}
syncPolicy:
automated: {}

View File

@@ -1,28 +1,33 @@
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: {{ _template.name }}
namespace: {{ _template.namespace }}
name: {{ _template.application.name }}
namespace: {{ _template.application.namespace }}
spec:
generators:
- git:
repoURL: ssh://git@gitea-ssh.gitea.svc.cluster.local/mc/GitOps.Config.git
revision: HEAD
repoURL: {{ _template.repository.url }}
revision: {{ _template.repository.revision }}
directories:
- path: metacluster-applicationset/*
- path: charts/*/*
template:
metadata:
name: {% raw %}'{{ path.basename }}'{% endraw +%}
name: application-{% raw %}{{ path.basename }}{% endraw +%}
spec:
project: default
syncPolicy:
automated:
prune: true
selfHeal: true
source:
repoURL: ssh://git@gitea-ssh.gitea.svc.cluster.local/mc/GitOps.Config.git
targetRevision: HEAD
syncOptions:
- CreateNamespace=true
sources:
- repoURL: {{ _template.repository.url }}
targetRevision: {{ _template.repository.revision }}
path: {% raw %}'{{ path }}'{% endraw +%}
helm:
valueFiles:
- /values/{% raw %}{{ path.basename }}{% endraw %}/values.yaml
destination:
server: https://kubernetes.default.svc
namespace: default
server: {{ _template.cluster.url }}
namespace: {% raw %}'{{ path[1] }}'{% endraw +%}
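{# Worked example (hypothetical chart, following the charts/<namespace>/<name> layout pushed by
   the gitops tasks): for a repository directory `charts/longhorn-system/longhorn` the git
   generator sets path='charts/longhorn-system/longhorn', path.basename='longhorn' and
   path[1]='longhorn-system', so the rendered Application is named `application-longhorn`,
   pulls its values from `/values/longhorn/values.yaml` and deploys into the
   `longhorn-system` namespace on the workload cluster. #}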

View File

@@ -11,8 +11,10 @@ stringData:
server: {{ _template.cluster.url }}
config: |
{
"bearerToken": "{{ _template.cluster.token }}",
"tlsClientConfig": {
"insecure": true
"insecure": false,
"caData": "{{ _template.kubeconfig.ca }}",
"certData": "{{ _template.kubeconfig.certificate }}",
"keyData": "{{ _template.kubeconfig.key }}"
}
}

View File

@@ -0,0 +1,7 @@
apiVersion: config.supervisor.pinniped.dev/v1alpha1
kind: FederationDomain
metadata:
name: {{ _template.name }}
namespace: {{ _template.namespace }}
spec:
{{ _template.spec }}

View File

@@ -1,13 +1,9 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ _template.name }}-{{ _template.uid }}
name: {{ _template.name }}
namespace: {{ _template.namespace }}
labels:
argocd.argoproj.io/secret-type: repository
stringData:
url: ssh://git@gitea-ssh.gitea.svc.cluster.local/mc/GitOps.Config.git
name: {{ _template.name }}
insecure: 'true'
sshPrivateKey: |
{{ _template.privatekey }}
url: {{ _template.url }}

View File

@@ -4,4 +4,4 @@ metadata:
name: {{ _template.name }}
namespace: {{ _template.namespace }}
spec:
{{ _template.config }}
{{ _template.spec }}

View File

@@ -4,4 +4,4 @@ metadata:
name: {{ _template.name }}
namespace: {{ _template.namespace }}
spec:
{{ _template.config }}
{{ _template.spec }}

View File

@@ -0,0 +1,10 @@
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
name: inclusterippool-{{ _template.cluster.name }}
namespace: {{ _template.cluster.namespace }}
spec:
addresses:
- {{ _template.cluster.network.startip }}-{{ _template.cluster.network.endip }}
prefix: {{ _template.cluster.network.prefix }}
gateway: {{ _template.cluster.network.gateway }}
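{# Rendered sketch with hypothetical values (the real ones are derived from the vApp properties):
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: inclusterippool-workload01
  namespace: default
spec:
  addresses:
    - 192.168.10.100-192.168.10.120
  prefix: 24
  gateway: 192.168.10.1
#}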

View File

@@ -0,0 +1,6 @@
apiVersion: authentication.concierge.pinniped.dev/v1alpha1
kind: JWTAuthenticator
metadata:
name: {{ _template.name }}
spec:
{{ _template.spec }}

View File

@@ -3,8 +3,8 @@ kind: Kustomization
resources:
- cluster-template.yaml
patchesStrategicMerge:
- |-
patches:
- patch: |-
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
@@ -13,8 +13,8 @@ patchesStrategicMerge:
spec:
kubeadmConfigSpec:
clusterConfiguration:
imageRepository: registry.{{ _template.fqdn }}/kubeadm
- |-
imageRepository: registry.{{ _template.network.fqdn }}/kubeadm
- patch: |-
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
@@ -24,8 +24,8 @@ patchesStrategicMerge:
template:
spec:
clusterConfiguration:
imageRepository: registry.{{ _template.fqdn }}/kubeadm
- |-
imageRepository: registry.{{ _template.network.fqdn }}/kubeadm
- patch: |-
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
@@ -35,12 +35,21 @@ patchesStrategicMerge:
template:
spec:
files:
- encoding: base64
content: |
{{ _template.script.encoded }}
permissions: '0744'
- content: |
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = "/etc/containerd/certs.d"
append: true
path: /etc/containerd/config.toml
{% for registry in _template.registries %}
- content: |
server = "https://{{ registry }}"
[host."https://registry.{{ _template.network.fqdn }}/v2/library/{{ registry }}"]
capabilities = ["pull", "resolve"]
override_path = true
owner: root:root
path: /root/network.sh
path: /etc/containerd/certs.d/{{ registry }}/hosts.toml
{% endfor %}
- content: |
network: {config: disabled}
owner: root:root
@@ -49,56 +58,209 @@ patchesStrategicMerge:
{{ _template.rootca | indent(width=14, first=False) | trim }}
owner: root:root
path: /usr/local/share/ca-certificates/root_ca.crt
- patch: |-
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
metadata:
name: ${CLUSTER_NAME}
namespace: '${NAMESPACE}'
spec:
template:
spec:
diskGiB: 60
network:
devices:
- dhcp4: false
addressesFromPools:
- apiGroup: ipam.cluster.x-k8s.io
kind: InClusterIPPool
name: inclusterippool-${CLUSTER_NAME}
nameservers:
- {{ _template.network.dnsserver }}
networkName: '${VSPHERE_NETWORK}'
- patch: |-
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
metadata:
name: ${CLUSTER_NAME}-worker
namespace: '${NAMESPACE}'
spec:
template:
spec:
diskGiB: 60
network:
devices:
- dhcp4: false
addressesFromPools:
- apiGroup: ipam.cluster.x-k8s.io
kind: InClusterIPPool
name: inclusterippool-${CLUSTER_NAME}
nameservers:
- {{ _template.network.dnsserver }}
networkName: '${VSPHERE_NETWORK}'
patchesJson6902:
- target:
group: controlplane.cluster.x-k8s.io
version: v1beta1
kind: KubeadmControlPlane
name: .*
patch: |-
- op: add
path: /spec/kubeadmConfigSpec/files/-
value:
encoding: base64
content: |
{{ _template.script.encoded }}
owner: root:root
path: /root/network.sh
permissions: '0744'
- op: add
path: /spec/kubeadmConfigSpec/files/-
value:
content: |
network: {config: disabled}
owner: root:root
path: /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
- op: add
path: /spec/kubeadmConfigSpec/files/-
value:
content: |
{{ _template.rootca | indent(width=12, first=False) | trim }}
owner: root:root
path: /usr/local/share/ca-certificates/root_ca.crt
- target:
group: bootstrap.cluster.x-k8s.io
version: v1beta1
kind: KubeadmConfigTemplate
name: .*
patch: |-
{% for cmd in _template.runcmds %}
- op: add
path: /spec/template/spec/preKubeadmCommands/-
value: {{ cmd }}
- target:
group: addons.cluster.x-k8s.io
version: v1beta1
kind: ClusterResourceSet
name: \${CLUSTER_NAME}-crs-0
patch: |-
- op: replace
path: /spec/resources
value:
- kind: Secret
name: cloud-controller-manager
- kind: Secret
name: cloud-provider-vsphere-credentials
- kind: ConfigMap
name: cpi-manifests
- op: add
path: /spec/strategy
value: Reconcile
- target:
group: controlplane.cluster.x-k8s.io
version: v1beta1
kind: KubeadmControlPlane
name: .*
patch: |-
- op: add
path: /spec/kubeadmConfigSpec/files/-
value:
content: |
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = "/etc/containerd/certs.d"
append: true
path: /etc/containerd/config.toml
{% for registry in _template.registries %}
- op: add
path: /spec/kubeadmConfigSpec/files/-
value:
content: |
server = "https://{{ registry }}"
[host."https://registry.{{ _template.network.fqdn }}/v2/library/{{ registry }}"]
capabilities = ["pull", "resolve"]
override_path = true
owner: root:root
path: /etc/containerd/certs.d/{{ registry }}/hosts.toml
{% endfor %}
- target:
group: controlplane.cluster.x-k8s.io
version: v1beta1
kind: KubeadmControlPlane
name: .*
patch: |-
- op: add
path: /spec/kubeadmConfigSpec/files/-
value:
content: |
network: {config: disabled}
owner: root:root
path: /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
- op: add
path: /spec/kubeadmConfigSpec/files/-
value:
content: |
{{ _template.rootca | indent(width=10, first=False) | trim }}
owner: root:root
path: /usr/local/share/ca-certificates/root_ca.crt
- target:
group: bootstrap.cluster.x-k8s.io
version: v1beta1
kind: KubeadmConfigTemplate
name: .*
patch: |-
{% for cmd in _template.runcmds %}
- op: add
path: /spec/kubeadmConfigSpec/preKubeadmCommands/-
value: {{ cmd }}
- op: add
path: /spec/template/spec/preKubeadmCommands/-
value: {{ cmd }}
{% endfor %}
- target:
group: controlplane.cluster.x-k8s.io
version: v1beta1
kind: KubeadmControlPlane
name: .*
patch: |-
{% for cmd in _template.runcmds %}
- op: add
path: /spec/kubeadmConfigSpec/preKubeadmCommands/-
value: {{ cmd }}
{% endfor %}
- target:
group: infrastructure.cluster.x-k8s.io
version: v1beta1
kind: VSphereMachineTemplate
name: \${CLUSTER_NAME}
patch: |-
- op: replace
path: /metadata/name
value: ${CLUSTER_NAME}-master
- op: remove
path: /spec/template/spec/thumbprint
- target:
group: controlplane.cluster.x-k8s.io
version: v1beta1
kind: KubeadmControlPlane
name: \${CLUSTER_NAME}
patch: |-
- op: replace
path: /metadata/name
value: ${CLUSTER_NAME}-master
- op: replace
path: /spec/machineTemplate/infrastructureRef/name
value: ${CLUSTER_NAME}-master
- target:
group: cluster.x-k8s.io
version: v1beta1
kind: Cluster
name: \${CLUSTER_NAME}
patch: |-
- op: replace
path: /spec/clusterNetwork/pods
value:
cidrBlocks:
- 172.30.0.0/16
- op: replace
path: /spec/controlPlaneRef/name
value: ${CLUSTER_NAME}-master
- target:
group: infrastructure.cluster.x-k8s.io
version: v1beta1
kind: VSphereMachineTemplate
name: \${CLUSTER_NAME}-worker
patch: |-
- op: replace
path: /spec/template/spec/numCPUs
value: {{ _template.nodesize.cpu }}
- op: replace
path: /spec/template/spec/memoryMiB
value: {{ _template.nodesize.memory }}
- op: remove
path: /spec/template/spec/thumbprint
- target:
group: cluster.x-k8s.io
version: v1beta1
kind: MachineDeployment
name: \${CLUSTER_NAME}-md-0
patch: |-
- op: replace
path: /metadata/name
value: ${CLUSTER_NAME}-worker
- op: replace
path: /spec/template/spec/bootstrap/configRef/name
value: ${CLUSTER_NAME}-worker
- target:
group: bootstrap.cluster.x-k8s.io
version: v1beta1
kind: KubeadmConfigTemplate
name: \${CLUSTER_NAME}-md-0
patch: |-
- op: replace
path: /metadata/name
value: ${CLUSTER_NAME}-worker
- target:
group: infrastructure.cluster.x-k8s.io
version: v1beta1
kind: VSphereCluster
name: .*
patch: |-
- op: remove
path: /spec/thumbprint
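# A hedged sketch of how a kustomization like this is typically consumed (the actual task lives
# in clusterapi.yml, which is not part of this diff; the path and task name are illustrative):
# `clusterctl generate cluster <name> [...]` produces the cluster-template.yaml referenced under
# `resources`, after which a kustomize build yields the fully patched workload-cluster manifest:
#
#   - name: Render patched workload-cluster manifest
#     ansible.builtin.shell:
#       cmd: kubectl kustomize . > cluster-manifest.yaml
#       chdir: /opt/workloadcluster/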

View File

@@ -0,0 +1,83 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- downstream-cluster/kubeadmconfigtemplate-{{ _template.cluster.name }}-worker.yaml
- downstream-cluster/machinedeployment-{{ _template.cluster.name }}-worker.yaml
- downstream-cluster/vspheremachinetemplate-{{ _template.cluster.name }}-worker.yaml
patches:
- patch: |-
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
name: {{ _template.cluster.name }}-worker
namespace: default
spec:
template:
spec:
diskSetup:
filesystems:
- device: /dev/sdb1
filesystem: ext4
label: blockstorage
partitions:
- device: /dev/sdb
layout: true
tableType: gpt
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
node-labels: "node.longhorn.io/create-default-disk=true"
mounts:
- - LABEL=blockstorage
- /mnt/blockstorage
- patch: |-
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
metadata:
name: {{ _template.cluster.name }}-worker
namespace: default
spec:
template:
spec:
additionalDisksGiB:
- {{ _template.nodepool.additionaldisk }}
- target:
group: bootstrap.cluster.x-k8s.io
version: v1beta1
kind: KubeadmConfigTemplate
name: {{ _template.cluster.name }}-worker
patch: |-
- op: replace
path: /metadata/name
value: {{ _template.cluster.name }}-worker-storage
- target:
group: cluster.x-k8s.io
version: v1beta1
kind: MachineDeployment
name: {{ _template.cluster.name }}-worker
patch: |-
- op: replace
path: /metadata/name
value: {{ _template.cluster.name }}-worker-storage
- op: replace
path: /spec/template/spec/bootstrap/configRef/name
value: {{ _template.cluster.name }}-worker-storage
- op: replace
path: /spec/template/spec/infrastructureRef/name
value: {{ _template.cluster.name }}-worker-storage
- op: replace
path: /spec/replicas
value: {{ _template.nodepool.size }}
- target:
group: infrastructure.cluster.x-k8s.io
version: v1beta1
kind: VSphereMachineTemplate
name: {{ _template.cluster.name }}-worker
patch: |-
- op: replace
path: /metadata/name
value: {{ _template.cluster.name }}-worker-storage

View File

@@ -0,0 +1,7 @@
apiVersion: idp.supervisor.pinniped.dev/v1alpha1
kind: OIDCIdentityProvider
metadata:
name: {{ _template.name }}
namespace: {{ _template.namespace }}
spec:
{{ _template.spec }}

View File

@@ -3,6 +3,7 @@ kind: Secret
metadata:
name: {{ _template.name }}
namespace: {{ _template.namespace }}
type: {{ _template.type }}
data:
{% for kv_pair in _template.data %}
"{{ kv_pair.key }}": {{ kv_pair.value }}

View File

@@ -0,0 +1,7 @@
apiVersion: traefik.containo.us/v1alpha1
kind: ServersTransport
metadata:
name: {{ _template.name }}
namespace: {{ _template.namespace }}
spec:
{{ _template.spec }}

View File

@@ -1,18 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ _template.account.name }}
namespace: {{ _template.account.namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ _template.clusterrolebinding.name }}
subjects:
- kind: ServiceAccount
name: {{ _template.account.name }}
namespace: {{ _template.account.namespace }}
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io

View File

@@ -1,12 +1,6 @@
- import_tasks: service.yml
- import_tasks: cron.yml
- name: Cleanup tempfile
ansible.builtin.file:
path: "{{ kubeconfig.path }}"
state: absent
when: kubeconfig.path is defined
# - name: Reboot host
# ansible.builtin.shell:
# cmd: systemctl reboot

View File

@@ -11,11 +11,27 @@
lv: longhorn_lv
size: 100%VG
- name: Store begin timestamp
ansible.builtin.set_fact:
start_time: "{{ lookup('pipe', 'date +%s') }}"
- name: Create filesystem
community.general.filesystem:
dev: /dev/mapper/longhorn_vg-longhorn_lv
fstype: ext4
- name: Store end timestamp
ansible.builtin.set_fact:
end_time: "{{ lookup('pipe', 'date +%s') }}"
- name: Calculate crude storage benchmark
ansible.builtin.set_fact:
storage_benchmark: "{{ [storage_benchmark, (end_time | int - start_time | int)] | max }}"
- name: Log benchmark actual duration
ansible.builtin.debug:
msg: "Benchmark actual duration: {{ (end_time | int - start_time | int) }} second(s)"
- name: Mount dynamic disk
ansible.posix.mount:
path: /mnt/blockstorage

View File

@@ -8,5 +8,5 @@
label: "{{ item | basename }}"
# TODO: add a preceding task that waits until the K3s node is fully initialized before starting these imports; currently K3s briefly becomes unavailable during this loop
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.short }}"
delay: "{{ ((storage_benchmark | float) * playbook.delay.short) | int }}"
until: import_result is not failed
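# One possible answer to the TODO above -- a minimal sketch (reusing the readiness pattern from
# the k3s role; URL and task name are illustrative) that waits for the K3s API before the import
# loop starts:
#
#   - name: Ensure K3s API availability before importing
#     ansible.builtin.uri:
#       url: https://{{ vapp['metacluster.vip'] }}:6443/version
#       validate_certs: no
#       status_code: [200, 401]
#     register: k3s_readycheck
#     until: k3s_readycheck is not failed
#     retries: "{{ playbook.retries }}"
#     delay: "{{ ((storage_benchmark | float) * playbook.delay.short) | int }}"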

View File

@@ -1 +1,2 @@
- import_tasks: vapp.yml
- import_tasks: vcenter.yml

View File

@@ -5,7 +5,7 @@
schema: vsphere
register: vcenter_info
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.short }}"
delay: "{{ ((storage_benchmark | float) * playbook.delay.short) | int }}"
until: vcenter_info is not failed
module_defaults:

View File

@@ -19,6 +19,25 @@
executable: /opt/firstboot/tty.sh
workingdir: /tmp/
metacluster:
components:
- name: ArgoCD
url: https://gitops.${FQDN}
healthcheck: https://gitops.${FQDN}
- name: Gitea
url: https://git.${FQDN}
healthcheck: https://git.${FQDN}
- name: Harbor
url: https://registry.${FQDN}
healthcheck: https://registry.${FQDN}
- name: Longhorn
url: https://storage.${FQDN}
healthcheck: https://storage.${FQDN}
- name: StepCA
url: ''
healthcheck: https://ca.${FQDN}/health
- name: Traefik
url: https://ingress.${FQDN}
healthcheck: https://ingress.${FQDN}
fqdn: "{{ vapp['metacluster.fqdn'] }}"
vip: "{{ vapp['metacluster.vip'] }}"
loop:

View File

@@ -25,7 +25,7 @@
line: 'PasswordAuthentication yes'
state: absent
loop_control:
label: "{{ '[' + item.regex + '] ' + item.state }}"
label: "{{ '[' ~ item.regex ~ '] ' ~ item.state }}"
- name: Create dedicated SSH keypair
community.crypto.openssh_keypair:

View File

@@ -11,7 +11,7 @@
- attribute: cluster
moref: >-
$(govc object.collect -json VirtualMachine:{{ moref_id }} | \
jq -r '.[] | select(.Name == "runtime").Val.Host | .Type + ":" + .Value')
jq -r '.[] | select(.name == "runtime").val.host | .type + ":" + .value')
part: (NF-1)
- attribute: datacenter
moref: VirtualMachine:{{ moref_id }}
@@ -19,27 +19,27 @@
- attribute: datastore
moref: >-
$(govc object.collect -json VirtualMachine:{{ moref_id }} | \
jq -r '.[] | select(.Name == "datastore").Val.ManagedObjectReference | .[].Type + ":" + .[].Value')
jq -r '.[] | select(.name == "datastore").val._value | .[].type + ":" + .[].value')
part: NF
- attribute: folder
moref: >-
$(govc object.collect -json VirtualMachine:{{ moref_id }} | \
jq -r '.[] | select(.Name == "parent").Val | .Type + ":" + .Value')
jq -r '.[] | select(.name == "parent").val | .type + ":" + .value')
part: 0
# - attribute: host
# moref: >-
# $(govc object.collect -json VirtualMachine:{{ moref_id }} | \
# jq -r '.[] | select(.Name == "runtime").Val.Host | .Type + ":" + .Value')
# jq -r '.[] | select(.name == "runtime").val.host | .type + ":" + .value')
# part: NF
- attribute: network
moref: >-
$(govc object.collect -json VirtualMachine:{{ moref_id }} | \
jq -r '.[] | select(.Name == "network").Val.ManagedObjectReference | .[].Type + ":" + .[].Value')
jq -r '.[] | select(.name == "network").val._value | .[].type + ":" + .[].value')
part: NF
- attribute: resourcepool
moref: >-
$(govc object.collect -json VirtualMachine:{{ moref_id }} | \
jq -r '.[] | select(.Name == "resourcePool").Val | .Type + ":" + .Value')
jq -r '.[] | select(.name == "resourcePool").val | .type + ":" + .value')
part: 0
loop_control:
label: "{{ item.attribute }}"
@@ -55,21 +55,3 @@
loop: "{{ govc_inventory.results }}"
loop_control:
label: "{{ item.item.attribute }}"
- name: Configure network protocol profile on hypervisor
ansible.builtin.shell:
cmd: >-
npp-prepper \
--server "{{ vapp['hv.fqdn'] }}" \
--username "{{ vapp['hv.username'] }}" \
--password "{{ vapp['hv.password'] }}" \
dc \
--name "{{ vcenter_info.datacenter }}" \
--portgroup "{{ vcenter_info.network }}" \
--startaddress {{ vapp['ippool.startip'] }} \
--endaddress {{ vapp['ippool.endip'] }} \
--netmask {{ (vapp['guestinfo.ipaddress'] + '/' + vapp['guestinfo.prefixlength']) | ansible.utils.ipaddr('netmask') }} \
{{ vapp['guestinfo.dnsserver'] | split(',') | map('trim') | map('regex_replace', '^', '--dnsserver ') | join(' ') }} \
--dnsdomain {{ vapp['metacluster.fqdn'] }} \
--gateway {{ vapp['guestinfo.gateway'] }} \
--force

View File

@@ -0,0 +1,33 @@
- block:
- name: Check for existing template
community.vmware.vmware_guest_info:
name: "{{ vapp['workloadcluster.nodetemplate'] }}"
hostname: "{{ vapp['hv.fqdn'] }}"
validate_certs: false
username: "{{ vapp['hv.username'] }}"
password: "{{ vapp['hv.password'] }}"
datacenter: "{{ vcenter_info.datacenter }}"
folder: "{{ vcenter_info.folder }}"
register: nodetemplate
until:
- nodetemplate is not failed
retries: 600
delay: 30
# 600 retries x 30 seconds delay = wait for up to 5 hours
vars:
color_reset: "\e[0m"
ansible_callback_diy_runner_retry_msg: >-
{%- set result = ansible_callback_diy.result.output -%}
{%- set retries_left = result.retries - result.attempts -%}
TEMPLATE '{{ vapp['workloadcluster.nodetemplate'] }}' NOT FOUND; PLEASE UPLOAD MANUALLY -- ({{ retries_left }} retries left)
ansible_callback_diy_runner_retry_msg_color: bright yellow
- name: Store inventory path of existing template
ansible.builtin.set_fact:
nodetemplate_inventorypath: "{{ nodetemplate.instance.hw_folder ~ '/' ~ nodetemplate.instance.hw_name }}"
rescue:
- name: CRITICAL ERROR
ansible.builtin.fail:
msg: Required node-template is not available; cannot continue

View File

@@ -1,8 +1,8 @@
mirrors:
{% for entry in _template.data %}
{{ entry }}:
{% for registry in _template.registries %}
{{ registry }}:
endpoint:
- https://registry.{{ _template.hv.fqdn }}
rewrite:
"(.*)": "library/{{ entry }}/$1"
"(.*)": "library/{{ registry }}/$1"
{% endfor %}

View File

@@ -12,12 +12,15 @@ DFLT='\033[0m' # Reset colour
LCLR='\033[K' # Clear to end of line
PRST='\033[0;0H' # Reset cursor position
# COMPONENTS=('ca' 'ingress' 'storage' 'registry' 'git' 'gitops')
COMPONENTS=('storage' 'registry' 'git' 'gitops')
FQDN='{{ _template.metacluster.fqdn }}'
IPADDRESS='{{ _template.metacluster.vip }}'
I=60
declare -A COMPONENTS
{% for component in _template.metacluster.components %}
COMPONENTS["{{ component.name }}\t({{ component.url }})"]="{{ component.healthcheck }}"
{% endfor %}
I=0
while /bin/true; do
if [[ $I -gt 59 ]]; then
@@ -30,13 +33,13 @@ while /bin/true; do
echo -e "${PRST}" > /dev/tty1
echo -e "\n\n\t${DFLT}To manage this appliance, please connect to one of the following:${LCLR}\n" > /dev/tty1
for c in "${COMPONENTS[@]}"; do
STATUS=$(curl -ks "https://${c}.${FQDN}" -o /dev/null -w '%{http_code}')
for c in $( echo "${!COMPONENTS[@]}" | tr ' ' $'\n' | sort); do
STATUS=$(curl -kLs "${COMPONENTS[${c}]}" -o /dev/null -w '%{http_code}')
if [[ "${STATUS}" -eq "200" ]]; then
echo -e "\t [${BGRN}+${DFLT}] ${BBLU}https://${c}.${FQDN}${DFLT}${LCLR}" > /dev/tty1
echo -e "\t [${BGRN}+${DFLT}] ${BBLU}${c}${DFLT}${LCLR}" > /dev/tty1
else
echo -e "\t [${BRED}-${DFLT}] ${BBLU}https://${c}.${FQDN}${DFLT}${LCLR}" > /dev/tty1
echo -e "\t [${BRED}-${DFLT}] ${BBLU}${c}${DFLT}${LCLR}" > /dev/tty1
fi
done

View File

@@ -1,6 +1,23 @@
playbook:
retries: 5
delays:
long: 60
medium: 30
short: 10
retries: 10
delay:
# These values are multiplied with the value of `storage_benchmark`
long: 2
medium: 1
short: 0.5
# This default value is updated during the playbook, based on an I/O-intensive operation
storage_benchmark: 30
config:
clusterapi:
size_matrix:
small:
cpu: 2
memory: 6144
medium:
cpu: 4
memory: 8192
large:
cpu: 8
memory: 16384
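# Worked example of the delay scaling referenced above: tasks compute their retry delay as
# `((storage_benchmark | float) * playbook.delay.<interval>) | int`, so with the default
# storage_benchmark of 30 this yields 15s (short), 30s (medium) and 60s (long); an appliance
# that measures a benchmark of 90 seconds would instead wait 45s, 90s and 180s respectively.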

View File

@@ -13,7 +13,8 @@
- users
- disks
- metacluster
# - workloadcluster
- workloadcluster
- decommission
- tty
- cleanup
handlers:

View File

@@ -0,0 +1,35 @@
- name: Cordon node
kubernetes.core.k8s_drain:
name: "{{ decom_nodename }}"
state: cordon
kubeconfig: "{{ kubeconfig.path }}"
- name: Drain node
# Currently does not work; fails with the error "Failed to delete pod [...] due to: Too Many Requests"
# See also: https://github.com/ansible-collections/kubernetes.core/issues/474
# kubernetes.core.k8s_drain:
# name: "{{ decom_nodename }}"
# state: drain
# delete_options:
# ignore_daemonsets: true
# delete_emptydir_data: true
# kubeconfig: "{{ kubeconfig.path }}"
ansible.builtin.shell:
cmd: >-
kubectl drain {{ decom_nodename }} \
--delete-emptydir-data \
--ignore-daemonsets
register: nodedrain_results
until:
- nodedrain_results is not failed
- (nodedrain_results.stdout_lines | last) is match('node/.* drained')
retries: "{{ playbook.retries }}"
delay: "{{ ((storage_benchmark | float) * playbook.delay.short) | int }}"
- name: Delete node
kubernetes.core.k8s:
name: "{{ decom_nodename }}"
kind: node
state: absent
wait: true
kubeconfig: "{{ kubeconfig.path }}"

View File

@@ -0,0 +1,18 @@
- name: Lookup node name and moref-id for decommissioning
ansible.builtin.set_fact:
decom_nodename: >-
{{
lookup('kubernetes.core.k8s', kind='Node', kubeconfig=(kubeconfig.path)) |
json_query('[? metadata.name != `' ~ ansible_facts.nodename ~ '`].metadata.name') |
first
}}
decom_morefid: >-
{{
lookup('kubernetes.core.k8s', kind='Node', kubeconfig=(kubeconfig.path)) |
json_query('[? metadata.name != `' ~ ansible_facts.nodename ~ '`].metadata.labels."ova.airgappedk8s/moref_id"') |
first
}}
- import_tasks: storage.yml
- import_tasks: k3s.yml
- import_tasks: virtualmachine.yml
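# Worked example of the lookups above (hypothetical node names): with nodes `meta-01` and
# `meta-02` and the playbook running on `meta-01`, the json_query filter
# "[? metadata.name != `meta-01`].metadata.name" selects `meta-02` as decom_nodename, and the
# second lookup returns that node's `ova.airgappedk8s/moref_id` label (e.g. `vm-4242`) so the
# imported task files can evict its storage replicas, drain it and power off the matching VM.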

View File

@@ -0,0 +1,27 @@
- name: Disable disk scheduling and evict replicas
kubernetes.core.k8s:
api_version: longhorn.io/v1beta2
kind: lhn
name: "{{ decom_nodename }}"
namespace: longhorn-system
state: patched
definition: |
spec:
allowScheduling: false
evictionRequested: true
kubeconfig: "{{ kubeconfig.path }}"
- name: Reduce replica amount for each volume
kubernetes.core.k8s:
api_version: longhorn.io/v1beta2
kind: volume
name: "{{ item.metadata.name }}"
namespace: longhorn-system
state: patched
definition: |
spec:
numberOfReplicas: {{ (lookup('kubernetes.core.k8s', kind='node', kubeconfig=(kubeconfig.path)) | length | int) - 1 }}
kubeconfig: "{{ kubeconfig.path }}"
loop: "{{ query('kubernetes.core.k8s', api_version='longhorn.io/v1beta2', kind='volume', namespace='longhorn-system', kubeconfig=(kubeconfig.path)) }}"
loop_control:
label: "{{ item.metadata.name }}"

View File

@@ -0,0 +1,26 @@
- block:
- name: Lookup VM name
community.vmware.vmware_guest_info:
moid: "{{ decom_morefid }}"
register: virtualmachine_details
- name: Power off VM
community.vmware.vmware_guest:
name: "{{ virtualmachine_details.instance.hw_name }}"
folder: "{{ virtualmachine_details.instance.hw_folder }}"
state: poweredoff
# - name: Delete VM
# community.vmware.vmware_guest:
# name: "{{ virtualmachine_details.hw_name }}"
# folder: "{{ virtualmachine_details.hw_folder }}"
# state: absent
module_defaults:
group/vmware:
hostname: "{{ vapp['hv.fqdn'] }}"
validate_certs: no
username: "{{ vapp['hv.username'] }}"
password: "{{ vapp['hv.password'] }}"
datacenter: "{{ vcenter_info.datacenter }}"

View File

@@ -0,0 +1,52 @@
- block:
- name: Initialize tempfile
ansible.builtin.tempfile:
state: file
register: values_file
- name: Lookup current chart values
kubernetes.core.helm_info:
name: step-certificates
namespace: step-ca
kubeconfig: "{{ kubeconfig.path }}"
register: stepca_values
- name: Write chart values w/ password to tempfile
ansible.builtin.copy:
dest: "{{ values_file.path }}"
content: "{{ stepca_values.status | json_query('values') | to_yaml }}"
no_log: true
- name: Upgrade step-ca chart
kubernetes.core.helm:
name: step-certificates
chart_ref: /opt/metacluster/helm-charts/step-certificates
release_namespace: step-ca
wait: false
kubeconfig: "{{ kubeconfig.path }}"
values_files:
- "{{ values_file.path }}"
- name: Cleanup tempfile
ansible.builtin.file:
path: "{{ values_file.path }}"
state: absent
when: values_file.path is defined
- name: Ensure step-ca API availability
ansible.builtin.uri:
url: https://ca.{{ vapp['metacluster.fqdn'] }}/health
method: GET
register: api_readycheck
until:
- api_readycheck.json.status is defined
- api_readycheck.json.status == 'ok'
retries: "{{ playbook.retries }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.long | int) }}"
module_defaults:
ansible.builtin.uri:
validate_certs: no
status_code: [200, 201]
body_format: json

View File

@@ -0,0 +1,50 @@
- block:
- name: Push images to registry
ansible.builtin.shell:
cmd: >-
skopeo copy \
--insecure-policy \
--dest-tls-verify=false \
--dest-creds admin:{{ vapp['metacluster.password'] }} \
docker-archive:./{{ item | basename }} \
docker://registry.{{ vapp['metacluster.fqdn'] }}/library/$( \
skopeo list-tags \
--insecure-policy \
docker-archive:./{{ item | basename }} | \
jq -r '.Tags[0]')
chdir: /opt/metacluster/container-images/
register: push_result
loop: "{{ query('ansible.builtin.fileglob', '/opt/metacluster/container-images/*.tar') | sort }}"
loop_control:
label: "{{ item | basename }}"
retries: "{{ playbook.retries }}"
delay: "{{ ((storage_benchmark | float) * playbook.delay.short) | int }}"
until: push_result is not failed
- name: Get all stored container images (=artifacts)
ansible.builtin.uri:
url: https://registry.{{ vapp['metacluster.fqdn'] }}/api/v2.0/search?q=library
method: GET
register: registry_artifacts
- name: Get source registries of all artifacts
ansible.builtin.set_fact:
source_registries: "{{ (source_registries | default([]) + [(item | split('/'))[1]]) | unique | sort }}"
loop: "{{ registry_artifacts.json.repository | json_query('[*].repository_name') }}"
- name: Configure K3s node for private registry
ansible.builtin.template:
dest: /etc/rancher/k3s/registries.yaml
src: registries.j2
vars:
_template:
data: "{{ source_registries }}"
hv:
fqdn: "{{ vapp['metacluster.fqdn'] }}"
module_defaults:
ansible.builtin.uri:
validate_certs: no
status_code: [200, 201, 401]
body_format: json
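# Worked example of the `source_registries` fact above (hypothetical artifact): Harbor reports
# repository names such as `library/quay.io/jetstack/cert-manager-controller`, so
# `(item | split('/'))[1]` yields `quay.io`; after deduplication, registries.yaml contains one
# mirror entry per upstream registry (docker.io, quay.io, registry.k8s.io, ...), each rewritten
# to `library/<registry>/...` on the local Harbor instance.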

View File

@@ -0,0 +1,27 @@
- block:
- name: Upgrade gitea chart
kubernetes.core.helm:
name: gitea
chart_ref: /opt/metacluster/helm-charts/gitea
release_namespace: gitea
wait: false
kubeconfig: "{{ kubeconfig.path }}"
values: "{{ components['gitea'].chart_values }}"
- name: Ensure gitea API availability
ansible.builtin.uri:
url: https://git.{{ vapp['metacluster.fqdn'] }}/api/healthz
method: GET
register: api_readycheck
until:
- api_readycheck.json.status is defined
- api_readycheck.json.status == 'pass'
retries: "{{ playbook.retries }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.long | int) }}"
module_defaults:
ansible.builtin.uri:
validate_certs: no
status_code: [200, 201]
body_format: json

View File

@@ -0,0 +1,26 @@
- block:
- name: Upgrade argo-cd chart
kubernetes.core.helm:
name: argo-cd
chart_ref: /opt/metacluster/helm-charts/argo-cd
release_namespace: argo-cd
wait: false
kubeconfig: "{{ kubeconfig.path }}"
values: "{{ components['argo-cd'].chart_values }}"
- name: Ensure argo-cd API availability
ansible.builtin.uri:
url: https://gitops.{{ vapp['metacluster.fqdn'] }}/api/version
method: GET
register: api_readycheck
until:
- api_readycheck.json.Version is defined
retries: "{{ playbook.retries }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.long | int) }}"
module_defaults:
ansible.builtin.uri:
validate_certs: no
status_code: [200, 201]
body_format: json

View File

@@ -1,7 +1,7 @@
- name: Configure fallback name resolution
ansible.builtin.lineinfile:
path: /etc/hosts
line: "{{ vapp['metacluster.vip'] }} {{ item + '.' + vapp['metacluster.fqdn'] }}"
line: "{{ vapp['metacluster.vip'] }} {{ item ~ '.' ~ vapp['metacluster.fqdn'] }}"
state: present
loop:
# TODO: Make this list dynamic
@@ -28,3 +28,8 @@
- name: Update certificate truststore
ansible.builtin.command:
cmd: update-ca-certificates
- name: Remove redundant files
ansible.builtin.file:
path: /var/lib/rancher/k3s/server/manifests/traefik-config.yaml
state: absent

View File

@@ -27,7 +27,8 @@
chdir: /opt/metacluster/k3s
environment:
INSTALL_K3S_SKIP_DOWNLOAD: 'true'
INSTALL_K3S_EXEC: "server --token {{ vapp['metacluster.token'] | trim }} --server https://{{ vapp['metacluster.vip'] }}:6443 --disable local-storage --config /etc/rancher/k3s/config.yaml"
# To avoid overwriting Traefik's existing configuration, "disable" it on this new node
INSTALL_K3S_EXEC: "server --token {{ vapp['metacluster.token'] | trim }} --server https://{{ vapp['metacluster.vip'] }}:6443 --disable local-storage,traefik --config /etc/rancher/k3s/config.yaml"
when: ansible_facts.services['k3s.service'] is undefined
- name: Ensure API availability
@@ -39,7 +40,7 @@
register: api_readycheck
until: api_readycheck.json.apiVersion is defined
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.medium }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.medium | int) }}"
- name: Install kubectl tab-completion
ansible.builtin.shell:
@@ -61,3 +62,19 @@
content: "{{ kubectl_config.stdout }}"
mode: 0600
no_log: true
- name: Add label to node object
kubernetes.core.k8s:
name: "{{ ansible_facts.nodename | lower }}"
kind: Node
state: patched
definition:
metadata:
labels:
ova.airgappedk8s/moref_id: "{{ moref_id }}"
kubeconfig: "{{ kubeconfig.path }}"
register: k8snode_patch
until:
- k8snode_patch.result.metadata.labels['ova.airgappedk8s/moref_id'] is defined
retries: "{{ playbook.retries }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.medium | int) }}"

View File

@@ -1,9 +1,9 @@
- import_tasks: init.yml
- import_tasks: registry.yml
- import_tasks: containerimages.yml
- import_tasks: k3s.yml
- import_tasks: assets.yml
# - import_tasks: ingress.yml
- import_tasks: storage.yml
# - import_tasks: certauthority.yml
# - import_tasks: git.yml
# - import_tasks: gitops.yml
- import_tasks: registry.yml
- import_tasks: certauthority.yml
- import_tasks: git.yml
- import_tasks: gitops.yml

View File

@@ -1,47 +1,24 @@
- block:
- name: Push images to registry
ansible.builtin.shell:
cmd: >-
skopeo copy \
--insecure-policy \
--dest-tls-verify=false \
--dest-creds admin:{{ vapp['metacluster.password'] }} \
docker-archive:./{{ item | basename }} \
docker://registry.{{ vapp['metacluster.fqdn'] }}/library/$( \
skopeo list-tags \
--insecure-policy \
docker-archive:./{{ item | basename }} | \
jq -r '.Tags[0]')
chdir: /opt/metacluster/container-images/
register: push_result
loop: "{{ query('ansible.builtin.fileglob', '/opt/metacluster/container-images/*.tar') | sort }}"
loop_control:
label: "{{ item | basename }}"
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.short }}"
until: push_result is not failed
- name: Upgrade harbor chart
kubernetes.core.helm:
name: harbor
chart_ref: /opt/metacluster/helm-charts/harbor
release_namespace: harbor
wait: false
kubeconfig: "{{ kubeconfig.path }}"
values: "{{ components['harbor'].chart_values }}"
- name: Get all stored container images (=artifacts)
- name: Ensure harbor API availability
ansible.builtin.uri:
url: https://registry.{{ vapp['metacluster.fqdn'] }}/api/v2.0/search?q=library
url: https://registry.{{ vapp['metacluster.fqdn'] }}/api/v2.0/health
method: GET
register: registry_artifacts
- name: Get source registries of all artifacts
ansible.builtin.set_fact:
source_registries: "{{ (source_registries | default([]) + [(item | split('/'))[1]]) | unique | sort }}"
loop: "{{ registry_artifacts.json.repository | json_query('[*].repository_name') }}"
- name: Configure K3s node for private registry
ansible.builtin.template:
dest: /etc/rancher/k3s/registries.yaml
src: registries.j2
vars:
_template:
data: "{{ source_registries }}"
hv:
fqdn: "{{ vapp['metacluster.fqdn'] }}"
register: api_readycheck
until:
- api_readycheck.json.status is defined
- api_readycheck.json.status == 'healthy'
retries: "{{ playbook.retries }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.long | int) }}"
module_defaults:
ansible.builtin.uri:

View File

@@ -1,14 +1,53 @@
- name: Increase replicas for each volume
kubernetes.core.k8s:
api_version: longhorn.io/v1beta2
kind: volume
name: "{{ item.metadata.name }}"
namespace: longhorn-system
state: patched
definition: |
spec:
numberOfReplicas: {{ lookup('kubernetes.core.k8s', kind='node', kubeconfig=(kubeconfig.path)) | length | int }}
kubeconfig: "{{ kubeconfig.path }}"
loop: "{{ lookup('kubernetes.core.k8s', api_version='longhorn.io/v1beta2', kind='volume', namespace='longhorn-system', kubeconfig=(kubeconfig.path)) }}"
loop_control:
label: "{{ item.metadata.name }}"
- block:
- name: Increase replicas for each volume
kubernetes.core.k8s:
api_version: longhorn.io/v1beta2
kind: volume
name: "{{ item.metadata.name }}"
namespace: longhorn-system
state: patched
definition: |
spec:
numberOfReplicas: {{ lookup('kubernetes.core.k8s', kind='node', kubeconfig=(kubeconfig.path)) | length | int }}
kubeconfig: "{{ kubeconfig.path }}"
loop: "{{ query('kubernetes.core.k8s', api_version='longhorn.io/v1beta2', kind='volume', namespace='longhorn-system', kubeconfig=(kubeconfig.path)) }}"
loop_control:
label: "{{ item.metadata.name }}"
- name: Wait for replica rebuilds to complete
ansible.builtin.uri:
url: https://storage.{{ vapp['metacluster.fqdn'] }}/v1/volumes
method: GET
register: volume_details
until:
- volume_details.json is defined
- (volume_details.json | json_query('data[? state==`attached`].robustness') | unique | length) == 1
- (volume_details.json | json_query('data[? state==`attached`].robustness') | first) == "healthy"
retries: "{{ ( playbook.retries * 2) | int }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.long | int) }}"
- name: Upgrade longhorn chart
kubernetes.core.helm:
name: longhorn
chart_ref: /opt/metacluster/helm-charts/longhorn
release_namespace: longhorn-system
wait: false
kubeconfig: "{{ kubeconfig.path }}"
values: "{{ components['longhorn'].chart_values }}"
- name: Ensure longhorn API availability
ansible.builtin.uri:
url: https://storage.{{ vapp['metacluster.fqdn'] }}/v1
method: GET
register: api_readycheck
until:
- api_readycheck is not failed
retries: "{{ playbook.retries }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.long | int) }}"
module_defaults:
ansible.builtin.uri:
validate_certs: no
status_code: [200, 201]
body_format: json
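# Worked example of the rebuild check above (abridged, hypothetical API response): for
#   {"data": [{"state": "attached", "robustness": "healthy"}, {"state": "detached", "robustness": "unknown"}]}
# the json_query "data[? state==`attached`].robustness" returns ["healthy"], which is exactly one
# unique value equal to "healthy"; the task therefore keeps retrying until every attached volume
# reports healthy replicas, i.e. all rebuilds have finished before the longhorn chart is upgraded.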

View File

@@ -1,2 +1,3 @@
- import_tasks: vapp.yml
- import_tasks: vcenter.yml
- import_tasks: metacluster.yml

View File

@@ -4,3 +4,8 @@
method: GET
validate_certs: no
status_code: [200, 401]
register: api_readycheck
until:
- api_readycheck.json.apiVersion is defined
retries: "{{ playbook.retries }}"
delay: "{{ (storage_benchmark | int) * (playbook.delay.medium | int) }}"

View File

@@ -0,0 +1,20 @@
- name: Check for expected vApp properties
ansible.builtin.assert:
that:
- vapp[item] is defined
- (vapp[item] | length) > 0
quiet: true
loop:
- guestinfo.dnsserver
- guestinfo.gateway
- guestinfo.hostname
- guestinfo.ipaddress
- guestinfo.prefixlength
- guestinfo.rootsshkey
- hv.fqdn
- hv.password
- hv.username
- metacluster.fqdn
- metacluster.password
- metacluster.token
- metacluster.vip

View File

@@ -0,0 +1,4 @@
- import_tasks: hypervisor.yml
- import_tasks: registry.yml
- import_tasks: nodetemplates.yml
# - import_tasks: clusterapi.yml

View File

@@ -0,0 +1,17 @@
- block:
- name: Copy kubeadm container images to dedicated project
ansible.builtin.uri:
url: https://registry.{{ vapp['metacluster.fqdn'] }}/api/v2.0/projects/kubeadm/repositories/{{ ( item | regex_findall('([^:/]+)') )[-2] }}/artifacts?from=library/{{ item | replace('/', '%2F') | replace(':', '%3A') }}
method: POST
headers:
Authorization: "Basic {{ ('admin:' ~ vapp['metacluster.password']) | b64encode }}"
body:
from: "{{ item }}"
loop: "{{ lookup('ansible.builtin.file', '/opt/metacluster/cluster-api/imagelist').splitlines() }}"
module_defaults:
ansible.builtin.uri:
validate_certs: no
status_code: [200, 201, 409]
body_format: json
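# Worked example of the URL construction above (hypothetical image from the imagelist): for
# `registry.k8s.io/kube-apiserver:v1.30.1`, `regex_findall('([^:/]+)')` returns
# ['registry.k8s.io', 'kube-apiserver', 'v1.30.1'], so `[-2]` selects the target repository
# `kube-apiserver`, while the escaped query parameter becomes
# `?from=library/registry.k8s.io%2Fkube-apiserver%3Av1.30.1`; status code 409 is accepted so
# re-copying an artifact that already exists does not fail the play.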

View File

@@ -1,4 +1,10 @@
#!/bin/bash
# Workaround for ansible output regression
export PYTHONUNBUFFERED=1
# Apply firstboot configuration w/ ansible
/usr/local/bin/ansible-playbook -e "PYTHONUNBUFFERED=1" /opt/firstboot/ansible/playbook.yml | tee -a /var/log/firstboot.log > /dev/tty1 2>&1
/usr/local/bin/ansible-playbook /opt/firstboot/ansible/playbook.yml | tee -a /var/log/firstboot.log > /dev/tty1 2>&1
# Cleanup console
clear > /dev/tty1

View File

@@ -1,17 +1,7 @@
- name: Disable tty logins
import_tasks: tty.yml
- name: Remove snapd
import_tasks: snapd.yml
- name: Remove cloud-init
import_tasks: cloud-init.yml
- name: Configure default logging
import_tasks: logging.yml
- name: Configure services
import_tasks: services.yml
- name: Install packages
import_tasks: packages.yml
- import_tasks: tty.yml
- import_tasks: snapd.yml
- import_tasks: cloud-init.yml
- import_tasks: logging.yml
- import_tasks: services.yml
- import_tasks: packages.yml
- import_tasks: sysctl.yml

View File

@@ -37,9 +37,30 @@
state: directory
- name: Configure Ansible defaults
ansible.builtin.template:
src: ansible.j2
ansible.builtin.copy:
dest: /etc/ansible/ansible.cfg
content: |
[defaults]
callbacks_enabled = ansible.posix.profile_tasks
force_color = true
stdout_callback = community.general.diy
[callback_diy]
[callback_profile_tasks]
task_output_limit = 0
- name: Create default shell aliases
ansible.builtin.lineinfile:
path: ~/.bashrc
state: present
line: "{{ item }}"
insertafter: EOF
loop:
- alias k="kubectl"
- alias less="less -rf"
loop_control:
label: "{{ (item | regex_findall('([^ =\"]+)'))[2] }}"
- name: Cleanup
ansible.builtin.apt:

View File

@@ -0,0 +1,11 @@
- name: Configure inotify limits
ansible.posix.sysctl:
name: "{{ item.name }}"
value: "{{ item.value }}"
loop:
- name: fs.inotify.max_user_instances
value: '512'
- name: fs.inotify.max_user_watches
value: '524288'
loop_control:
label: "{{ item.name ~ '=' ~ item.value }}"

View File

@@ -1,2 +0,0 @@
[defaults]
callbacks_enabled = ansible.posix.profile_tasks

View File

@@ -1,13 +1,8 @@
platform:
k3s:
version: v1.26.0+k3s1
gitops:
repository:
uri: https://code.spamasaurus.com/djpbessems/GitOps.MetaCluster.git
# revision: v0.1.0
revision: HEAD
version: v1.30.0+k3s1
# version: v1.27.1+k3s1
packaged_components:
- name: traefik
@@ -19,33 +14,31 @@ platform:
- "--certificatesResolvers.stepca.acme.storage=/data/acme.json"
- "--certificatesResolvers.stepca.acme.tlsChallenge=true"
- "--certificatesresolvers.stepca.acme.certificatesduration=24"
deployment:
initContainers:
- name: volume-permissions
image: busybox:1
command: ["sh", "-c", "touch /data/acme.json && chmod -Rv 600 /data/* && chown 65532:65532 /data/acme.json"]
volumeMounts:
- name: data
mountPath: /data
globalArguments: []
ingressRoute:
dashboard:
enabled: false
persistence:
enabled: true
ports:
ssh:
port: 8022
protocol: TCP
web:
redirectTo: websecure
redirectTo:
port: websecure
websecure:
tls:
certResolver: stepca
updateStrategy:
type: Recreate
rollingUpdate: null
helm_repositories:
- name: argo
url: https://argoproj.github.io/argo-helm
- name: bitnami
url: https://charts.bitnami.com/bitnami
- name: dexidp
url: https://charts.dexidp.io
- name: gitea-charts
url: https://dl.gitea.io/charts/
- name: harbor
@@ -54,81 +47,172 @@ platform:
url: https://charts.jetstack.io
- name: longhorn
url: https://charts.longhorn.io
- name: prometheus-community
url: https://prometheus-community.github.io/helm-charts
- name: smallstep
url: https://smallstep.github.io/helm-charts/
- name: spamasaurus
url: https://code.spamasaurus.com/api/packages/djpbessems/helm
components:
argo-cd:
helm:
# version: 4.9.7 # (= ArgoCD v2.4.2)
version: 5.14.1 # (= ArgoCD v2.5.2)
# Must match the version referenced at `dependencies.static_binaries[.filename==argo].url`
version: 6.7.7 # (=Argo CD v2.10.5)
chart: argo/argo-cd
parse_logic: helm template . | yq --no-doc eval '.. | .image? | select(.)' | sort -u | awk '!/ /'
chart_values: !unsafe |
configs:
cm:
resource.compareoptions: |
ignoreAggregatedRoles: true
resource.customizations.ignoreDifferences.all: |
jsonPointers:
- /spec/conversion/webhook/clientConfig/caBundle
params:
server.insecure: true
secret:
argocdServerAdminPassword: "{{ vapp['metacluster.password'] | password_hash('bcrypt') }}"
global:
domain: gitops.{{ vapp['metacluster.fqdn'] | lower }}
server:
extraArgs:
- --insecure
ingress:
enabled: true
argo-workflows:
helm:
version: 0.41.8 # (=Argo Workflows v3.5.7)
chart: argo/argo-workflows
parse_logic: helm template . | yq --no-doc eval '.. | .image? | select(.)' | sort -u | awk '!/ /'
chart_values: !unsafe |
# workflow:
# serviceAccount:
# create: true
# name: "argo-workflows"
# rbac:
# create: true
controller:
workflowNamespaces:
- default
- firstboot
server:
authModes:
- server
ingress:
enabled: true
hosts:
- gitops.{{ vapp['metacluster.fqdn'] }}
- workflow.{{ vapp['metacluster.fqdn']}}
paths:
- /
pathType: Prefix
cert-manager:
helm:
version: 1.10.1
version: 1.14.4
chart: jetstack/cert-manager
parse_logic: helm template . | yq --no-doc eval '.. | .image? | select(.)' | sort -u | awk '!/ /'
# chart_values: !unsafe |
# installCRDs: true
chart_values: !unsafe |
installCRDs: true
clusterapi:
management:
version:
# Must match the version referenced at `dependencies.static_binaries[.filename==clusterctl].url`
base: v1.3.2
base: v1.6.3
# Must match the version referenced at `components.cert-manager.helm.version`
cert_manager: v1.10.1
infrastructure_vsphere: v1.5.1
ipam_incluster: v0.1.0-alpha.1
cert_manager: v1.14.4
infrastructure_vsphere: v1.9.2
ipam_incluster: v0.1.0
# Refer to `https://console.cloud.google.com/gcr/images/cloud-provider-vsphere/GLOBAL/cpi/release/manager` for available tags
cpi_vsphere: v1.30.1
workload:
version:
calico: v3.24.5
# k8s: v1.25.5
k8s: v1.23.5
calico: v3.27.3
k8s: v1.30.1
node_template:
# Refer to `https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/v1.3.5/README.md#kubernetes-versions-with-published-ovas` for a list of supported node templates
# url: https://storage.googleapis.com/capv-templates/v1.25.5/ubuntu-2004-kube-v1.25.5.ova
url: https://storage.googleapis.com/capv-images/release/v1.23.5/ubuntu-2004-kube-v1.23.5.ova
# Not used anymore; should be uploaded to hypervisor manually!
# https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/releases/download/templates%2Fv1.30.0/
dex:
helm:
version: 0.15.3 # (= Dex 2.37.0)
chart: dexidp/dex
parse_logic: helm template . | yq --no-doc eval '.. | .image? | select(.)' | sort -u | awk '!/ /'
chart_values: !unsafe |
config:
issuer: https://idps.{{ vapp['metacluster.fqdn'] }}
storage:
type: kubernetes
config:
inCluster: true
staticClients:
- id: pinniped-supervisor
secret: "{{ lookup('ansible.builtin.password', '/dev/null length=64 chars=ascii_lowercase,digits seed=' ~ vapp['metacluster.fqdn']) }}"
name: Pinniped Supervisor client
redirectURIs:
- https://auth.{{ vapp['metacluster.fqdn'] }}/sso/callback
enablePasswordDB: true
staticPasswords:
- email: user@{{ vapp['metacluster.fqdn'] }}
hash: "{{ vapp['metacluster.password'] | password_hash('bcrypt') }}"
username: user
userID: "{{ lookup('ansible.builtin.password', '/dev/null length=64 chars=ascii_lowercase,digits seed=' ~ vapp['metacluster.fqdn']) | to_uuid }}"
ingress:
enabled: true
hosts:
- host: idps.{{ vapp['metacluster.fqdn'] }}
paths:
- path: /
pathType: Prefix
gitea:
helm:
version: v6.0.3 # (= Gitea v1.17.3)
version: v10.1.3 # (= Gitea v1.21.7)
chart: gitea-charts/gitea
parse_logic: helm template . | yq --no-doc eval '.. | .image? | select(.)' | sort -u | sed '/:/!s/$/:latest/'
chart_values: !unsafe |
extraVolumes:
- secret:
defaultMode: 420
secretName: step-certificates-certs
name: step-certificates-certs
extraVolumeMounts:
- mountPath: /etc/ssl/certs/ca-chain.crt
name: step-certificates-certs
readOnly: true
subPath: ca_chain.crt
gitea:
admin:
username: administrator
password: "{{ vapp['metacluster.password'] }}"
email: admin@{{ vapp['metacluster.fqdn'] }}
email: administrator@{{ vapp['metacluster.fqdn'] | lower }}
config:
cache:
ADAPTER: memory
server:
OFFLINE_MODE: true
PROTOCOL: http
ROOT_URL: https://git.{{ vapp['metacluster.fqdn'] }}/
ROOT_URL: https://git.{{ vapp['metacluster.fqdn'] | lower }}/
session:
PROVIDER: db
image:
pullPolicy: IfNotPresent
ingress:
enabled: true
hosts:
- host: git.{{ vapp['metacluster.fqdn'] }}
- host: git.{{ vapp['metacluster.fqdn'] | lower }}
paths:
- path: /
pathType: Prefix
postgresql:
enabled: true
image:
tag: 16.1.0-debian-11-r25
postgresql-ha:
enabled: false
redis-cluster:
enabled: false
service:
ssh:
type: ClusterIP
@@ -137,7 +221,7 @@ components:
harbor:
helm:
version: 1.10.2 # (= Harbor v2.6.2)
version: 1.14.1 # (= Harbor v2.10.1)
chart: harbor/harbor
parse_logic: helm template . | yq --no-doc eval '.. | .image? | select(.)' | sort -u | awk '!/ /'
chart_values: !unsafe |
@@ -145,11 +229,11 @@ components:
ingress:
annotations: {}
hosts:
core: registry.{{ vapp['metacluster.fqdn'] }}
core: registry.{{ vapp['metacluster.fqdn'] | lower }}
tls:
certSource: none
enabled: false
externalURL: https://registry.{{ vapp['metacluster.fqdn'] }}
externalURL: https://registry.{{ vapp['metacluster.fqdn'] | lower }}
harborAdminPassword: "{{ vapp['metacluster.password'] }}"
notary:
enabled: false
@@ -158,53 +242,98 @@ components:
registry:
size: 25Gi
json-server:
helm:
version: v0.8.4
chart: spamasaurus/json-server
parse_logic: helm template . | yq --no-doc eval '.. | .image? | select(.)' | sort -u | awk '!/ /'
chart_values: !unsafe |
ingress:
enabled: true
hosts:
- host: version.{{ vapp['metacluster.fqdn'] }}
paths:
- path: /
pathType: Prefix
jsonServer:
image:
repository: code.spamasaurus.com/djpbessems/json-server
seedData:
configInline: {}
sidecar:
targetUrl: version.{{ vapp['metacluster.fqdn'] }}
image:
repository: code.spamasaurus.com/djpbessems/json-server
kube-prometheus-stack:
helm:
version: 45.2.0
chart: prometheus-community/kube-prometheus-stack
parse_logic: helm template . | yq --no-doc eval '.. | .image? | select(.)' | sort -u | awk '!/ /'
chart_values: !unsafe |
alertmanager:
enabled: false
global:
imageRegistry: registry.{{ vapp['metacluster.fqdn'] }}
kubevip:
# Must match the version referenced at `dependencies.container_images`
version: v0.5.8
version: v0.6.3
longhorn:
helm:
version: 1.4.0
version: 1.5.4
chart: longhorn/longhorn
parse_logic: cat values.yaml | yq eval '.. | select(has("repository")) | .repository + ":" + .tag'
chart_values: !unsafe |
defaultSettings:
allowNodeDrainWithLastHealthyReplica: true
concurrentReplicaRebuildPerNodeLimit: 10
defaultDataPath: /mnt/blockstorage
defaultReplicaCount: 1
logLevel: Info
nodeDrainPolicy: block-for-eviction-if-contains-last-replica
replicaSoftAntiAffinity: true
priorityClass: system-node-critical
storageOverProvisioningPercentage: 200
storageReservedPercentageForDefaultDisk: 0
ingress:
enabled: true
host: storage.{{ vapp['metacluster.fqdn'] }}
persistence:
defaultClassReplicaCount: 1
host: storage.{{ vapp['metacluster.fqdn'] | lower }}
longhornManager:
priorityClass: system-node-critical
longhornDriver:
priorityClass: system-node-critical
pinniped:
helm:
version: 1.3.10 # (= Pinniped v0.27.0)
chart: bitnami/pinniped
parse_logic: helm template . | yq --no-doc eval '.. | .image? | select(.)' | sort -u | awk '!/ /'
chart_values: !unsafe |
concierge:
enabled: false
supervisor:
service:
public:
type: ClusterIP
local-user-authenticator:
# Must match the appVersion (!=chart version) referenced at `components.pinniped.helm.version`
version: v0.27.0
users:
- username: metauser
password: !unsafe "{{ vapp['metacluster.password'] | password_hash('bcrypt') }}"
- username: metaguest
password: !unsafe "{{ vapp['metacluster.password'] | password_hash('bcrypt') }}"
step-certificates:
helm:
# version: 1.18.2+20220324
version: 1.23.0
version: 1.25.2 # (= step-ca v0.25.2)
chart: smallstep/step-certificates
parse_logic: helm template . | yq --no-doc eval '.. | .image? | select(.)' | sed '/:/!s/$/:latest/' | sort -u
chart_values: !unsafe |
ca:
bootstrap:
postInitHook: |
echo '{{ vapp["metacluster.password"] }}' > ~/pwfile
step ca provisioner add acme \
--type ACME \
--password-file=~/pwfile \
--force-cn
rm ~/pwfile
dns: ca.{{ vapp['metacluster.fqdn'] }},step-certificates.step-ca.svc.cluster.local,127.0.0.1
password: "{{ vapp['metacluster.password'] }}"
provisioner:
name: admin
password: "{{ vapp['metacluster.password'] }}"
inject:
secrets:
ca_password: "{{ vapp['metacluster.password'] | b64encode }}"
provisioner_password: "{{ vapp['metacluster.password'] | b64encode }}"
service:
targetPort: 9000
dependencies:
@@ -215,42 +344,50 @@ dependencies:
- community.general
- community.vmware
- kubernetes.core
- lvrfrc87.git_acp
container_images:
# This should match the image tag referenced at `platform.packaged_components[.name==traefik].config`
- busybox:1
- ghcr.io/kube-vip/kube-vip:v0.5.8
- ghcr.io/kube-vip/kube-vip:v0.6.3
# The following list is generated by running the following commands:
# $ clusterctl init -i vsphere:<version> [...]
# $ clusterctl generate cluster <name> [...] | yq eval '.data.data' | yq --no-doc eval '.. | .image? | select(.)' | sort -u
- gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.18.1
- gcr.io/cloud-provider-vsphere/csi/release/driver:v2.1.0
- gcr.io/cloud-provider-vsphere/csi/release/syncer:v2.1.0
- quay.io/k8scsi/csi-attacher:v3.0.0
- quay.io/k8scsi/csi-node-driver-registrar:v2.0.1
- quay.io/k8scsi/csi-provisioner:v2.0.0
- quay.io/k8scsi/livenessprobe:v2.1.0
- gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.27.0
- gcr.io/cloud-provider-vsphere/csi/release/driver:v3.1.0
- gcr.io/cloud-provider-vsphere/csi/release/syncer:v3.1.0
- registry.k8s.io/sig-storage/csi-attacher:v4.3.0
- registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.8.0
- registry.k8s.io/sig-storage/csi-provisioner:v3.5.0
- registry.k8s.io/sig-storage/csi-resizer:v1.8.0
- registry.k8s.io/sig-storage/csi-snapshotter:v6.2.2
- registry.k8s.io/sig-storage/livenessprobe:v2.10.0
static_binaries:
- filename: argo
url: https://github.com/argoproj/argo-workflows/releases/download/v3.5.7/argo-linux-amd64.gz
- filename: clusterctl
url: https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.3.2/clusterctl-linux-amd64
url: https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.6.3/clusterctl-linux-amd64
- filename: govc
url: https://github.com/vmware/govmomi/releases/download/v0.29.0/govc_Linux_x86_64.tar.gz
url: https://github.com/vmware/govmomi/releases/download/v0.36.3/govc_Linux_x86_64.tar.gz
archive: compressed
- filename: helm
url: https://get.helm.sh/helm-v3.10.2-linux-amd64.tar.gz
url: https://get.helm.sh/helm-v3.14.3-linux-amd64.tar.gz
archive: compressed
extra_opts: --strip-components=1
- filename: npp-prepper
url: https://code.spamasaurus.com/api/packages/djpbessems/generic/npp-prepper/v0.4.5/npp-prepper
- filename: kubectl-slice
url: https://github.com/patrickdappollonio/kubectl-slice/releases/download/v1.2.9/kubectl-slice_linux_x86_64.tar.gz
archive: compressed
- filename: pinniped
url: https://github.com/vmware-tanzu/pinniped/releases/download/v0.25.0/pinniped-cli-linux-amd64
- filename: skopeo
url: https://code.spamasaurus.com/api/packages/djpbessems/generic/skopeo/v1.11.0-dev/skopeo
url: https://code.spamasaurus.com/api/packages/djpbessems/generic/skopeo/v1.12.0/skopeo_linux_amd64
- filename: step
url: https://dl.step.sm/gh-release/cli/gh-release-header/v0.23.0/step_linux_0.23.0_amd64.tar.gz
url: https://dl.step.sm/gh-release/cli/gh-release-header/v0.25.2/step_linux_0.25.2_amd64.tar.gz
archive: compressed
extra_opts: --strip-components=2
- filename: yq
url: http://github.com/mikefarah/yq/releases/download/v4.30.5/yq_linux_amd64
url: https://github.com/mikefarah/yq/releases/download/v4.43.1/yq_linux_amd64
packages:
apt:
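For reference, the parse_logic fields above are intended to be run from within an unpacked copy of each chart; a rough standalone sketch for the bitnami/pinniped chart (repository URL, chart and version taken from this file, the scratch path is illustrative) could be:
# pull the chart and enumerate its container images, mirroring the parse_logic one-liner
helm repo add bitnami https://charts.bitnami.com/bitnami
helm pull bitnami/pinniped --version 1.3.10 --untar --untardir /tmp/charts
cd /tmp/charts/pinniped
helm template . | yq --no-doc eval '.. | .image? | select(.)' | sort -u | awk '!/ /'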


@@ -0,0 +1,47 @@
downstream:
helm_repositories:
- name: bitnami
url: https://charts.bitnami.com/bitnami
- name: longhorn
url: https://charts.longhorn.io
- name: sealed-secrets
url: https://bitnami-labs.github.io/sealed-secrets
helm_charts:
longhorn:
version: 1.5.4
chart: longhorn/longhorn
namespace: longhorn-system
parse_logic: cat values.yaml | yq eval '.. | select(has("repository")) | .repository + ":" + .tag'
chart_values: !unsafe |
defaultSettings:
createDefaultDiskLabeledNodes: true
defaultDataPath: /mnt/blockstorage
pinniped:
version: 1.3.10 # (= Pinniped v0.27.0)
chart: bitnami/pinniped
namespace: pinniped-concierge
parse_logic: helm template . | yq --no-doc eval '.. | .image? | select(.)' | sort -u | awk '!/ /'
chart_values: !unsafe |
supervisor:
enabled: false
extra_manifests:
- src: jwtauthenticator.j2
_template:
name: metacluster-sso
spec: !unsafe |2
issuer: https://auth.{{ vapp['metacluster.fqdn'] }}/sso
audience: "{{ vapp['workloadcluster.name'] | lower }}"
tls:
certificateAuthorityData: "{{ (stepca_cm_certs.resources[0].data['intermediate_ca.crt'] ~ _newline ~ stepca_cm_certs.resources[0].data['root_ca.crt']) | b64encode }}"
sealed-secrets:
version: 2.8.1 # (= Sealed Secrets v0.20.2)
chart: sealed-secrets/sealed-secrets
namespace: sealed-secrets
parse_logic: helm template . | yq --no-doc eval '.. | .image? | select(.)' | sort -u | awk '!/ /'
# chart_values: !unsafe |
# # Empty
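The entries above appear intended for the downstream (workload) cluster; a hedged manual equivalent of the sealed-secrets entry (chart, version and namespace taken from this file) would be roughly:
# manual equivalent of the sealed-secrets helm_charts entry above
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm upgrade --install sealed-secrets sealed-secrets/sealed-secrets --version 2.8.1 --namespace sealed-secrets --create-namespace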

deployment/playbook.yml Normal file

@@ -0,0 +1,78 @@
- hosts: localhost
vars_files:
- vars/ova.bootstrap.yaml
- vars/hv.vcenter.yaml
- vars/pb.secrets.yaml
tasks:
- name: Retrieve target folder details
community.vmware.vmware_vm_info:
hostname: "{{ hv.hostname }}"
username: "{{ hv.username }}"
password: "{{ secrets.hv.password }}"
folder: "{{ hv.folder }}"
validate_certs: false
register: vm_info
- name: User prompt
ansible.builtin.pause:
prompt: "Virtual machine '{{ appliance.id }}' already exists. Delete to continue [yes] or abort [no]?"
register: prompt
until:
- prompt.user_input in ['yes', 'no']
delay: 0
when: (vm_info.virtual_machines | selectattr('guest_name', 'equalto', appliance.id) | list | length) > 0
- name: Destroy existing VM
community.vmware.vmware_guest:
hostname: "{{ hv.hostname }}"
username: "{{ hv.username }}"
password: "{{ secrets.hv.password }}"
folder: "{{ hv.folder }}"
name: "{{ appliance.id }}"
state: absent
when:
- (vm_info.virtual_machines | selectattr('guest_name', 'equalto', appliance.id) | list | length) > 0
- prompt.user_input | bool
- name: Deploy VM from OVA-template
community.vmware.vmware_deploy_ovf:
hostname: "{{ hv.hostname }}"
username: "{{ hv.username }}"
password: "{{ secrets.hv.password }}"
validate_certs: false
datacenter: "{{ hv.datacenter }}"
folder: "{{ hv.folder }}"
cluster: "{{ hv.cluster }}"
name: airgapped-k8s-meta1
datastore: "{{ hv.datastore }}"
disk_provisioning: thin
networks:
"LAN": "{{ hv.network }}"
power_on: yes
ovf: "{{ appliance.path }}/{{ appliance.filename }}"
deployment_option: cp1w1ws0
properties:
metacluster.fqdn: k8s.lab
metacluster.vip: 192.168.154.125
metacluster.token: "{{ secrets.appliance.installtoken }}"
# guestinfo.hostname: _default
metacluster.password: "{{ secrets.appliance.password }}"
guestinfo.ipaddress: 192.168.154.126
guestinfo.prefixlength: '24'
guestinfo.dnsserver: 192.168.154.225
guestinfo.gateway: 192.168.154.1
# workloadcluster.name: _default
workloadcluster.vip: 192.168.154.130
ippool.startip: 192.168.154.135
ippool.endip: 192.168.154.140
workloadcluster.nodetemplate: ubuntu-2204-kube-v1.30.0
workloadcluster.nodesize: small
# workloadcluster.additionaldisk: '75'
guestinfo.rootsshkey: ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAiRc7Og+cRJGFwdUzgpX9YqvVenTk54N4kqM7emEfYHdsJLMjKQyxr8hklHmsam5dzxx3itFzc6SLf/ldJJ2JZuzE5FiCqUXXv4UFwN6HF5xqn7PTLicvWZH93H4m1gOlD5Dfzi4Es34v5zRBwbMScOgekk/LweTgl35jGKDgMP5DjGTqkPf7Ndh9+iuQrz99JEr8egl3bj+jIlKjScfaQbbnu3AJIRwZwTKgw0AOkLliQdEPNLvG5/ZImxJG4oHV9/uNkfdJObLjT1plR1HbVNskV5fuRNE/vnUiWl9jAJ1RT83GOqV0sQ+Q7p214fkgqb3JPvci/s0Bb7RA85hBEQ== bessems.eu
hv.fqdn: "{{ hv.hostname }}"
hv.username: "{{ hv.username }}"
hv.password: "{{ secrets.hv.password }}"
ldap.fqdn: _unused
ldap.dn: _unused
ldap.password: _unused


@@ -0,0 +1,5 @@
collections:
# - ansible.posix
# - ansible.utils
# - community.general
- community.vmware
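To run the deployment playbook above, one plausible invocation (the requirements-file path is an assumption; the vars files are the ones referenced under vars_files) would be:
# install the required collections, then deploy the appliance OVA
ansible-galaxy collection install -r deployment/requirements.yml
ansible-playbook deployment/playbook.yml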


@@ -1,21 +1,30 @@
packer {
required_plugins {
vsphere = {
source = "github.com/hashicorp/vsphere"
version = "~> 1"
}
ansible = {
source = "github.com/hashicorp/ansible"
version = "~> 1"
}
}
}
build {
source "vsphere-iso.ubuntu" {
name = "bootstrap"
vm_name = "${var.vm_name}-bootstrap"
vm_name = "bld_${var.vm_name}_bootstrap"
}
source "vsphere-iso.ubuntu" {
name = "upgrade"
vm_name = "${var.vm_name}-upgrade"
vm_name = "bld_${var.vm_name}_upgrade"
}
provisioner "ansible" {
pause_before = "2m30s"
pause_before = "45s"
playbook_file = "ansible/playbook.yml"
user = "ubuntu"
@@ -24,11 +33,16 @@ build {
"PYTHONUNBUFFERED=1"
]
use_proxy = "false"
collections_path = "ansible/collections"
extra_arguments = [
"--extra-vars", "appliancetype=${source.name}",
"--extra-vars", "ansible_ssh_pass=${var.ssh_password}"//,
// "--extra-vars", "repo_username=${var.repo_username}",
// "--extra-vars", "repo_password=${var.repo_password}"
"--extra-vars", "applianceversion=${var.appliance_version}",
"--extra-vars", "ansible_ssh_pass=${var.ssh_password}",
"--extra-vars", "docker_username=${var.docker_username}",
"--extra-vars", "docker_password=${var.docker_password}",
"--extra-vars", "repo_username=${var.repo_username}",
"--extra-vars", "repo_password=${var.repo_password}"
]
}
@@ -36,12 +50,12 @@ build {
inline = [
"pwsh -command \"& scripts/Update-OvfConfiguration.ps1 \\",
" -ApplianceType '${source.name}' \\",
" -OVFFile '/scratch/airgapped-k8s/${var.vm_name}-${source.name}.ovf' \"",
" -OVFFile '/data/scratch/bld_${var.vm_name}_${source.name}.ovf' \"",
"pwsh -file scripts/Update-Manifest.ps1 \\",
" -ManifestFileName '/scratch/airgapped-k8s/${var.vm_name}-${source.name}.mf'",
" -ManifestFileName '/data/scratch/bld_${var.vm_name}_${source.name}.mf'",
"ovftool --acceptAllEulas --allowExtraConfig --overwrite \\",
" '/scratch/airgapped-k8s/${var.vm_name}-${source.name}.ovf' \\",
" /output/airgapped-k8s.${source.name}.ova"
" '/data/scratch/bld_${var.vm_name}_${source.name}.ovf' \\",
" /output/airgapped-k8s-${var.appliance_version}+${var.k8s_version}-${source.name}.ova"
]
}
}
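A possible way to exercise this build definition (variable names match variables.pkr.hcl; the version values and -var-file path are placeholders, and the remaining credential variables would be supplied the same way):
# illustrative invocation only
packer init packer/
packer build -var "appliance_version=v1.2.3" -var "k8s_version=v1.30.0" -var-file packer/local.pkrvars.hcl packer/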


@@ -1,5 +1,5 @@
iso_url = "sn.itch.fyi/Repository/iso/Canonical/Ubuntu%20Server%2022.04/ubuntu-22.04.1-live-server-amd64.iso"
iso_checksum = "sha256:10F19C5B2B8D6DB711582E0E27F5116296C34FE4B313BA45F9B201A5007056CB"
iso_url = "sn.itch.fyi/Repository/iso/Canonical/Ubuntu%20Server%2022.04/ubuntu-22.04.3-live-server-amd64.iso"
iso_checksum = "sha256:A4ACFDA10B18DA50E2EC50CCAF860D7F20B389DF8765611142305C0E911D16FD"
// iso_url = "sn.itch.fyi/Repository/iso/Canonical/Ubuntu%20Server%2022.04/ubuntu-22.04-live-server-amd64.iso"
// iso_checksum = "sha256:84AEAF7823C8C61BAA0AE862D0A06B03409394800000B3235854A6B38EB4856F"
// iso_url = "sn.itch.fyi/Repository/iso/Canonical/Ubuntu%20Server%2022.04/ubuntu-22.04.1-live-server-amd64.iso"
// iso_checksum = "sha256:10F19C5B2B8D6DB711582E0E27F5116296C34FE4B313BA45F9B201A5007056CB"


@@ -1,10 +1,19 @@
#cloud-config
autoinstall:
version: 1
apt:
geoip: true
preserve_sources_list: false
primary:
- arches: [amd64, i386]
uri: http://archive.ubuntu.com/ubuntu
- arches: [default]
uri: http://ports.ubuntu.com/ubuntu-ports
early-commands:
- sudo systemctl stop ssh
locale: en_US
keyboard:
layout: en
variant: us
layout: us
network:
network:
version: 2
@@ -16,14 +25,18 @@ autoinstall:
layout:
name: direct
identity:
hostname: packer-template
hostname: ubuntu-server
username: ubuntu
# password: $6$ZThRyfmSMh9499ar$KSZus58U/l58Efci0tiJEqDKFCpoy.rv25JjGRv5.iL33AQLTY2aljumkGiDAiX6LsjzVsGTgH85Tx4S.aTfx0
password: $6$rounds=4096$ZKfzRoaQOtc$M.fhOsI0gbLnJcCONXz/YkPfSoefP4i2/PQgzi2xHEi2x9CUhush.3VmYKL0XVr5JhoYvnLfFwqwR/1YYEqZy/
ssh:
install-server: yes
install-server: true
allow-pw: true
packages:
- openssh-server
- open-vm-tools
- cloud-init
user-data:
disable_root: false
late-commands:
- echo 'ubuntu ALL=(ALL) NOPASSWD:ALL' > /target/etc/sudoers.d/ubuntu
- curtin in-target --target=/target -- chmod 440 /etc/sudoers.d/ubuntu
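The identity.password value above is a salted SHA-512 crypt hash with 4096 rounds; assuming the mkpasswd utility from the whois package is available, an equivalent hash can be generated interactively with:
# prints a $6$rounds=4096$... hash suitable for autoinstall's identity.password
mkpasswd --method=sha-512 --rounds=4096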

packer/source.pkr.hcl Normal file

@@ -0,0 +1,63 @@
source "vsphere-iso" "ubuntu" {
vcenter_server = var.hv_fqdn
username = var.hv_username
password = var.hv_password
insecure_connection = "true"
datacenter = var.hv_datacenter
cluster = var.hv_cluster
host = var.hv_host
folder = var.hv_folder
datastore = var.hv_datastore
guest_os_type = "ubuntu64Guest"
boot_order = "disk,cdrom"
boot_command = [
"e<down><down><down><end>",
" autoinstall network-config=disabled ds=nocloud;",
"<F10>"
]
boot_wait = "2s"
communicator = "ssh"
ssh_username = "ubuntu"
ssh_password = var.ssh_password
ssh_timeout = "20m"
ssh_handshake_attempts = "100"
ssh_pty = true
CPUs = 4
RAM = 8192
network_adapters {
network = var.hv_network
network_card = "vmxnet3"
}
storage {
disk_size = 76800
disk_thin_provisioned = true
}
disk_controller_type = ["pvscsi"]
usb_controller = ["xhci"]
set_host_for_datastore_uploads = true
cd_files = [
"packer/preseed/UbuntuServer22.04/user-data",
"packer/preseed/UbuntuServer22.04/meta-data"
]
cd_label = "cidata"
iso_url = local.iso_authenticatedurl
iso_checksum = var.iso_checksum
shutdown_command = "echo '${var.ssh_password}' | sudo -S shutdown -P now"
shutdown_timeout = "5m"
remove_cdrom = true
export {
output_directory = "/data/scratch"
}
destroy = true
}


@@ -1,61 +0,0 @@
source "vsphere-iso" "ubuntu" {
vcenter_server = var.vcenter_server
username = var.vsphere_username
password = var.vsphere_password
insecure_connection = "true"
datacenter = var.vsphere_datacenter
cluster = var.vsphere_cluster
host = var.vsphere_host
folder = var.vsphere_folder
datastore = var.vsphere_datastore
guest_os_type = "ubuntu64Guest"
boot_order = "disk,cdrom"
boot_command = [
"e<down><down><down><end>",
" autoinstall ds=nocloud;",
"<F10>"
]
boot_wait = "2s"
communicator = "ssh"
ssh_username = "ubuntu"
ssh_password = var.ssh_password
ssh_timeout = "20m"
ssh_handshake_attempts = "100"
ssh_pty = true
CPUs = 4
RAM = 8192
network_adapters {
network = var.vsphere_network
network_card = "vmxnet3"
}
storage {
disk_size = 76800
disk_thin_provisioned = true
}
disk_controller_type = ["pvscsi"]
usb_controller = ["xhci"]
cd_files = [
"packer/preseed/UbuntuServer22.04/user-data",
"packer/preseed/UbuntuServer22.04/meta-data"
]
cd_label = "cidata"
iso_url = local.iso_authenticatedurl
iso_checksum = var.iso_checksum
shutdown_command = "echo '${var.ssh_password}' | sudo -S shutdown -P now"
shutdown_timeout = "5m"
remove_cdrom = true
export {
images = false
output_directory = "/scratch/airgapped-k8s"
}
}


@@ -1,17 +1,17 @@
variable "vcenter_server" {}
variable "vsphere_username" {}
variable "vsphere_password" {
variable "hv_fqdn" {}
variable "hv_username" {}
variable "hv_password" {
sensitive = true
}
variable "vsphere_host" {}
variable "vsphere_datacenter" {}
variable "vsphere_cluster" {}
variable "hv_host" {}
variable "hv_datacenter" {}
variable "hv_cluster" {}
variable "vsphere_templatefolder" {}
variable "vsphere_folder" {}
variable "vsphere_datastore" {}
variable "vsphere_network" {}
variable "hv_templatefolder" {}
variable "hv_folder" {}
variable "hv_datastore" {}
variable "hv_network" {}
variable "vm_name" {}
variable "ssh_password" {
@@ -28,3 +28,11 @@ local "iso_authenticatedurl" {
expression = "https://${var.repo_username}:${var.repo_password}@${var.iso_url}"
sensitive = true
}
variable "docker_username" {}
variable "docker_password" {
sensitive = true
}
variable "appliance_version" {}
variable "k8s_version" {}


@@ -1,9 +1,10 @@
vcenter_server = "bv11-vc.bessems.lan"
vsphere_username = "administrator@vsphere.local"
vsphere_datacenter = "DeSchakel"
vsphere_cluster = "Cluster.Legacy"
vsphere_host = "bv11-esx.bessems.lan"
vsphere_datastore = "ESX00.SSD01"
vsphere_folder = "/Packer"
vsphere_templatefolder = "/Templates"
vsphere_network = "LAN"
hv_fqdn = "lab-vc-01.bessems.lan"
hv_username = "administrator@vsphere.local"
# urlencoded "4/55-Clydebank-Rd"
hv_datacenter = "4%2f55-Clydebank-Rd"
hv_cluster = "Cluster.01"
hv_host = "lab-esx-02.bessems.lan"
hv_datastore = "ESX02.SSD02"
hv_folder = "/Packer"
hv_templatefolder = "/Templates"
hv_network = "LAN"
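The hv_datacenter value above is percent-encoded because the datacenter name contains a forward slash; one way to produce that encoding (the output uses an uppercase %2F, which is equivalent):
python3 -c "import urllib.parse; print(urllib.parse.quote('4/55-Clydebank-Rd', safe=''))"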


@@ -1,12 +1,16 @@
DeploymentConfigurations:
- Id: cp1w1
- Id: cp1w1ws0
Label: 'Workload-cluster: 1 control-plane node/1 worker node'
Description: 1 control-plane node/1 worker node
- Id: cp1w2
Label: 'Workload-cluster: 1 control-plane node/2 worker nodes'
Description: 1 control-plane node/2 worker nodes
- Id: cp1w1ws1
Label: 'Workload-cluster: 1 control-plane node/1 worker node/1 worker-storage node'
Description: 1 control-plane node/1 worker node/1 worker-storage node
- Id: core
Label: No workload-cluster
Description: Only the metacluster is deployed (useful for recovery scenarios)
DynamicDisks:
@@ -24,8 +28,9 @@ PropertyCategories:
- Key: deployment.type
Type: string
Value:
- cp1w1
- cp1w2
- cp1w1ws0
- cp1w1ws1
- core
UserConfigurable: false
- Name: 1) Meta-cluster
@@ -41,8 +46,8 @@ PropertyCategories:
- key: metacluster.vip
Type: ip
Label: Meta-cluster virtual IP*
Description: Meta-cluster control plane endpoint virtual IP
Label: Meta-cluster virtual IP address*
Description: Meta-cluster control plane endpoint virtual IP address
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
@@ -101,18 +106,18 @@ PropertyCategories:
- Key: guestinfo.gateway
Type: ip
Label: Gateway*
Description: ''
Description: 'A default route is *required*, use a dummy IP address if there is no actual gateway router present'
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: guestinfo.ntpserver
Type: string(1..)
Label: Time server*
Description: A comma-separated list of timeservers
DefaultValue: 0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org
Configurations: '*'
UserConfigurable: true
# - Key: guestinfo.ntpserver
# Type: string(1..)
# Label: Time server*
# Description: A comma-separated list of timeservers
# DefaultValue: 0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org
# Configurations: '*'
# UserConfigurable: true
- Name: 3) Workload-cluster
ProductProperties:
@@ -122,31 +127,75 @@ PropertyCategories:
Label: Workload-cluster name*
Description: ''
DefaultValue: 'workload-{{ hostname.suffix }}'
Configurations: '*'
Configurations:
- cp1w1ws0
- cp1w1ws1
UserConfigurable: true
- Key: workloadcluster.vip
Type: ip
Label: Workload-cluster virtual IP*
Description: Workload-cluster control plane endpoint virtual IP
DefaultValue: ''
Configurations: '*'
Label: Workload-cluster virtual IP address*
Description: Workload-cluster control plane endpoint virtual IP address
DefaultValue: '0.0.0.0'
Configurations:
- cp1w1ws0
- cp1w1ws1
UserConfigurable: true
- Key: ippool.startip
Type: ip
Label: Workload-cluster IP-pool start IP*
Label: Workload-cluster IP-pool start IP address*
Description: All nodes for the workload-cluster will be provisioned within this IP pool
DefaultValue: ''
Configurations: '*'
DefaultValue: '0.0.0.0'
Configurations:
- cp1w1ws0
- cp1w1ws1
UserConfigurable: true
- Key: ippool.endip
Type: ip
Label: Workload-cluster IP-pool end IP*
Label: Workload-cluster IP-pool end IP address*
Description: All nodes for the workload-cluster will be provisioned within this IP pool
DefaultValue: ''
Configurations: '*'
DefaultValue: '0.0.0.0'
Configurations:
- cp1w1ws0
- cp1w1ws1
UserConfigurable: true
- Key: workloadcluster.nodetemplate
Type: string["ubuntu-2204-kube-v1.30.0", "photon-5-kube-v1.30.0.ova"]
Label: Workload-cluster node template
Description: |
All worker and worker-storage nodes for the workload-cluster will be provisioned with this node template.
Note:
Make sure that this exact template has been uploaded to the vCenter instance before powering on this appliance!
DefaultValue: ubuntu-2204-kube-v1.30.0
Configurations:
- cp1w1ws0
- cp1w1ws1
UserConfigurable: true
- Key: workloadcluster.nodesize
Type: string["small", "medium", "large"]
Label: Workload-cluster node size*
Description: |
All worker and worker-storage nodes for the workload-cluster will be provisioned with the number of vCPUs and amount of memory specified below:
- SMALL: 2 vCPU/6GB RAM
- MEDIUM: 4 vCPU/8GB RAM
- LARGE: 8 vCPU/16GB RAM
DefaultValue: 'small'
Configurations:
- cp1w1ws0
- cp1w1ws1
UserConfigurable: true
- Key: workloadcluster.additionaldisk
Type: int(0..120)
Label: Workload-cluster block storage disk size*
Description: 'All worker-storage nodes for the workload-cluster will be provisioned with an additional disk of the specified size'
DefaultValue: '42'
Configurations:
- cp1w1ws1
UserConfigurable: true
- Name: 4) Common
@@ -187,6 +236,33 @@ PropertyCategories:
Configurations: '*'
UserConfigurable: true
- Name: 6) Identity provider
ProductProperties:
- Key: ldap.fqdn
Type: string(1..)
Label: LDAP server FQDN/IP-address*
Description: The address of the LDAP server which this bootstrap appliance will perform LDAP queries against.
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: ldap.dn
Type: string(1..)
Label: LDAP bind distinguished name*
Description: The distinguished name of the user account used for LDAP queries; for example 'CN=ldapreader,OU=Useraccounts,DC=example,DC=com'
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: ldap.password
Type: password(1..)
Label: LDAP bind password*
Description: The password of the user account used for LDAP queries.
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
---
Variables:
- Name: hostname.suffix


@@ -22,8 +22,8 @@ PropertyCategories:
- key: metacluster.vip
Type: ip
Label: Meta-cluster virtual IP*
Description: Meta-cluster control plane endpoint virtual IP
Label: Meta-cluster virtual IP address*
Description: Meta-cluster control plane endpoint virtual IP address
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
@@ -31,7 +31,7 @@ PropertyCategories:
- Key: metacluster.password
Type: password(7..)
Label: Meta-cluster administrator password*
Description: 'Needed to authenticate with target meta-cluster'
Description: Needed to authenticate with target meta-cluster
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
@@ -44,7 +44,7 @@ PropertyCategories:
Configurations: '*'
UserConfigurable: true
- Name: 2) Add meta-cluster node
- Name: 2) Meta-cluster new node
ProductProperties:
- Key: guestinfo.hostname
@@ -87,15 +87,28 @@ PropertyCategories:
Configurations: '*'
UserConfigurable: true
- Key: guestinfo.ntpserver
Type: string(1..)
Label: Time server*
Description: A comma-separated list of timeservers
DefaultValue: 0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org
Configurations: '*'
# - Key: guestinfo.ntpserver
# Type: string(1..)
# Label: Time server*
# Description: A comma-separated list of timeservers
# DefaultValue: 0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org
# Configurations: '*'
# UserConfigurable: true
- Name: 3) Workload-cluster
ProductProperties:
- Key: workloadcluster.nodetemplate
Type: string["ubuntu-2204-kube-v1.30.0", "photon-5-kube-v1.30.0.ova"]
Label: Workload-cluster node template
Description: |
All worker and worker-storage nodes for the workload-cluster will be provisioned with this node template.
Note:
Make sure that this exact template has been uploaded to the vCenter instance before powering on this appliance!
DefaultValue: ubuntu-2204-kube-v1.30.0
UserConfigurable: true
- Name: 3) Common
- Name: 4) Common
ProductProperties:
- Key: guestinfo.rootsshkey
@@ -106,7 +119,7 @@ PropertyCategories:
Configurations: '*'
UserConfigurable: true
- Name: 4) Hypervisor
- Name: 5) Hypervisor
ProductProperties:
- Key: hv.fqdn