346 Commits

SHA1 Message [CI status] Date
dd802e0620 Remove debugging;Sanitize hypervisor username;Traefik /data volume permission fix #2;Specify kubeconfig x3 [CI: passing] 2023-01-24 16:41:18 +01:00
17cf7925d6 Traefik /data volume permission fix [CI: passing] 2023-01-24 10:41:22 +01:00
2b81d4caa0 Enable traefik persistence [CI: passing] 2023-01-23 16:45:04 +01:00
a4e243e882 Increase volumes;Move template [CI: passing] 2023-01-23 14:56:19 +01:00
2531a4fb5d Add preflight retries;Housekeeping;Upload&Import images;Fix var reference;Improve UX [CI: passing] 2023-01-23 09:12:35 +01:00
ca51303602 Change ImagePullPolicy;Update tty console message;Sanitize user input;Add missing vapp property [CI: passing] 2023-01-22 15:08:12 +01:00
e7d89006fc Fix var reference;Fix task order;Fix block scalar [CI: passing] 2023-01-22 11:44:41 +01:00
531ead868a Fix var reference [CI: passing] 2023-01-21 16:58:51 +01:00
3dec92a955 Fix typo [CI: failing] 2023-01-21 16:22:24 +01:00
d67bf86dab Test ansible output regression workaround #2;Refactor vapp properties;Add kube-vip dependency;Refactor netplan;Download/Trust root CA [CI: failing] 2023-01-21 16:12:11 +01:00
43d83e8e31 Move files between payload folders;Define upgrade vapp properties;Join metacluster [CI: passing] 2023-01-20 13:23:34 +01:00
1428fe73f7 Fix path;Debug packer contextual vars [CI: passing] 2023-01-19 21:15:48 +01:00
cd308d116b Fix quote [CI: passing] 2023-01-19 16:08:20 +01:00
d915c21e44 Upgrade base image;Unpin ansible-core;Attempt different packer syntax [CI: failing] 2023-01-19 15:24:23 +01:00
89532ff7fb Remove duplicate block definition [CI: failing] 2023-01-19 13:38:30 +01:00
1f7fb31afe Fix quotes [CI: failing] 2023-01-19 13:36:33 +01:00
52fbc561dd Define install token;Change artifact path [CI: failing] 2023-01-19 13:30:13 +01:00
849c86b22b Test ansible output regression workaround;Fix filename [CI: failing] 2023-01-18 18:42:21 +01:00
c1bff94cd1 Parallel build of bootstrap/upgrade ova;Split ansible tasks respectively [CI: failing] 2023-01-18 15:09:32 +01:00
8ba8b5aaab Revert CAPV image [CI: passing] 2023-01-18 11:31:26 +01:00
f2dadb3e47 Move readycheck;Fix template value [CI: passing] 2023-01-17 15:08:20 +01:00
8e2df51993 Add retries [CI: passing] 2023-01-17 12:21:40 +01:00
2a9841fb0c Fix hypervisor settings [CI: failing] 2023-01-17 11:49:16 +01:00
c2219a5ddc Upgrade CAPV images;Add readycheck [CI: failing] 2023-01-17 11:45:25 +01:00
336150b00c Fix typo;Fix module;Register workloadcluster in argocd #2;Reduce tty refresh frequency;Upgrade component [CI: passing] 2023-01-13 09:03:35 +01:00
d1b1635942 Avoid ansible-galaxy timeouts [CI: passing] 2023-01-07 12:14:00 +01:00
62660c8d6c Switch tty message to systemd service;Add missing kubeconfig;Refactor tty script [CI: failing] 2023-01-07 11:58:58 +01:00
2a5a154df0 Fix kubeconfig source;(WIP)Register workloadcluster in argocd [CI: passing] 2023-01-06 16:27:33 +01:00
36e3a2b99f Add cluster api readycheck;Reorder tasks [CI: passing] 2023-01-06 13:34:26 +01:00
07b61d8bf3 Temporarily disable reboot [CI: passing] 2023-01-06 10:59:59 +01:00
3710f97b38 Fix var reference [CI: passing] 2023-01-05 17:21:40 +01:00
6c3867fb57 Fix linting error;Add workload cluster generation/configuration [CI: failing] 2023-01-05 16:42:20 +01:00
edc19464e2 Revert readycheck for step-ca;Revert retries;DRY;Upgrade components;Fix syntax [CI: failing] 2023-01-05 13:48:47 +01:00
85dcbb73a4 Increase retry limit for helm charts [CI: failing] 2023-01-05 10:01:03 +01:00
9e63e243b8 Add missing vApp property;Fix preflight check;Add missing folder;Revert Longhorn&K3s version [CI: passing] 2023-01-04 15:36:20 +01:00
d343b84b30 Add preflight check;Refactor readychecks;Quote input variables;Fix kustomization template;Apply kustomization;Generate new cluster-api manifest [CI: passing] 2023-01-04 13:22:35 +01:00
31a91d826f Refactor readycheck [CI: passing] 2023-01-03 19:31:39 +01:00
3f24a4af1a Add graceful shutdown configuration [CI: passing] 2023-01-03 11:28:32 +01:00
dc4fa31070 Remove component;Disable restart;Force overwrite of network protocol profile;Housekeeping [CI: passing] 2023-01-03 11:09:51 +01:00
d91acb9c0d Fix/Optimize kustomization template;Simplify dictionary [CI: passing] 2023-01-02 21:20:08 +01:00
9c6e1ff386 Housekeeping;Apply cluster-template kustomization [CI: passing] 2023-01-02 14:39:01 +01:00
f03e0c3bda Copy kubeadm images to separate project 2023-01-02 09:13:56 +01:00
deb524d1f5 Pin version for kubeadm container images [CI: passing] 2023-01-01 13:39:23 +01:00
3674862ff4 Update 'ansible/roles/assets/tasks/containerimages.yml' [CI: passing] 2023-01-01 02:33:58 +00:00
08a543e27f Add debugging;Fix readiness check;Create kubeadm registry project [CI: passing] 2022-12-31 17:15:13 +01:00
0fd4cbb92f Inject kubeadm container images [CI: passing] 2022-12-31 13:11:33 +01:00
6c31038b1a Upgrade versions; Attempt to add more robust wait/retry for helm chart installation [CI: passing] 2022-12-30 18:05:30 +01:00
fdead3c41c Fix foldername;Add semi-recursive yaml parse commands [CI: passing] 2022-12-30 11:14:49 +01:00
3efee69602 Refactor network protocol profile;Update container image references to local registry;Update/Remove dependencies [CI: passing] 2022-12-29 12:31:27 +01:00
0c8272c9e4 Add dependency container image;(WIP)Update image reference [CI: passing] 2022-12-27 14:00:10 +01:00
fcb05e2e12 Improve user experience;Upgrade dependency [CI: passing] 2022-12-26 17:01:05 +01:00
6de8ea3029 Fix vApp property value [CI: passing] 2022-12-24 15:31:37 +01:00
4b5acf5e95 Add retries;Increase ova cpu sizing;Replace deployment options [CI: passing] 2022-12-23 21:49:15 +01:00
5c68b87d67 Upgrade dependency;Fix syntax [CI: passing] 2022-12-23 12:51:10 +01:00
e538fce0f8 Upgrade dependency [CI: passing] 2022-12-22 19:49:56 +01:00
b67bc43a70 Refactor vApp properties;Add dependency [CI: killed] 2022-12-22 19:44:33 +01:00
78c1d7fb54 Housekeeping; [CI: passing] 2022-12-15 22:17:07 +01:00
ab5f082933 Define registry mirrors dynamically;Fix path;Fix Ansible config [CI: passing] 2022-12-10 16:56:13 +01:00
e3f44fab0a Add missing folder [CI: passing] 2022-12-04 12:23:18 +01:00
f68189e3b7 Include IPAM in-cluster provider [CI: failing] 2022-12-04 11:22:17 +01:00
f3224416cb Reenable key;Upgrade ClusterAPI [CI: passing] 2022-12-01 13:45:37 +01:00
96ee96b470 Ensure API availability [CI: passing] 2022-11-29 14:48:19 +01:00
a20cb4417c Configure Ansible defaults [CI: failing] 2022-11-29 14:32:44 +01:00
433a231487 Improve feedback;Housekeeping;Downgrade K3s version [CI: passing] 2022-11-29 13:02:06 +01:00
b890a03760 Housekeeping;Improved feedback;Prevent duplicates [CI: passing] 2022-11-29 11:24:26 +01:00
92eee0744e Attempt to simplify/aggregate dicts [CI: failing] 2022-11-28 16:58:56 +01:00
83ce1be8bf Fix var reference;Upgrade all included components [CI: passing] 2022-11-28 16:22:15 +01:00
a364a7c359 Use interface autodetection;Skip TLS Verify [CI: passing] 2022-11-28 13:29:56 +01:00
edca98549c Add explicit version;Add cni plugin;Add vApp properties to node template [CI: passing] 2022-11-28 10:25:25 +01:00
146887e9e1 Fix linting errors [CI: passing] 2022-11-25 09:50:55 +01:00
9bfa6bf658 Fix API ready check [CI: failing] 2022-11-25 09:29:28 +01:00
b9575d2de4 Ensure API readiness [CI: passing] 2022-11-24 13:46:19 +01:00
e249498109 Fix paths 2022-11-24 12:39:28 +01:00
01a1c0bf0a Fix typo [CI: passing] 2022-11-24 11:22:13 +01:00
c27712bc20 Simplify playbook roles [CI: failing] 2022-11-24 10:59:41 +01:00
2d99511360 Upgrade cluster api and dependencies [CI: passing] 2022-11-24 10:04:46 +01:00
46a927b777 Fix script logic;Remove source files [CI: passing] 2022-11-23 13:30:32 +01:00
c9a8598a35 Move tarball compression to background service;Housekeeping [CI: passing] 2022-11-23 10:25:35 +01:00
0d7b1ab269 Debug taints #2 [CI: passing] 2022-11-21 12:37:12 +01:00
4786507366 Debug taints [CI: passing] 2022-11-18 17:54:13 +01:00
ed4c1145b6 Fix unique id in loop #2 [CI: passing] 2022-11-17 12:02:27 +01:00
3aa44e0f83 Add retries #2 [CI: failing] 2022-11-17 11:21:19 +01:00
3a48676ca7 Add retries [CI: failing] 2022-11-17 10:51:27 +01:00
44daf9191a Revert dict attribute;Move template;Add missing template attribute;Fix unique id in loop [CI: failing] 2022-11-17 10:17:17 +01:00
83ee632ff9 Move task;Fix static value;Improve shell logic/ansible filter;Fix typo [CI: failing] 2022-11-17 09:22:58 +01:00
be562f0124 Add unique postfix to tarball [CI: failing] 2022-11-17 08:35:00 +01:00
f711908ddb Upgrade skopeo [CI: failing] 2022-11-14 12:37:24 +01:00
6f7b8e99f2 Add retries [CI: failing] 2022-11-14 09:36:13 +01:00
6261bfdda7 Add retries for container import; Add cluster API images [CI: failing] 2022-11-12 11:03:22 +01:00
5751e7200c Add missing kubeconfig path;Reorder cleanup tasks [CI: passing] 2022-11-10 21:27:17 +01:00
6ce1a66d3e Avoid latest Ansible version [CI: passing] 2022-11-10 15:03:11 +01:00
85fb68b2e0 Fix template name;Fix interface name [CI: passing] 2022-11-09 16:59:43 +01:00
b47dda7a50 Enable gather facts; Fix typo and conditional;Add missing manifests;Initialize cluster API [CI: failing] 2022-11-09 16:43:49 +01:00
838c7b6361 Add pause to loop iteration; Replace wrong variable reference [CI: failing] 2022-11-09 15:35:42 +01:00
2ca91b5dea Reorder vApp properties [CI: passing] 2022-11-09 11:40:27 +01:00
42d79a95ca Merge vars_files [CI: passing] 2022-11-08 20:17:36 +01:00
1f4bbca7ec Add missing key to dict #2 [CI: failing] 2022-11-08 20:11:31 +01:00
fe8700f300 Housekeeping;Add missing key to dict [CI: passing] 2022-11-08 19:46:37 +01:00
ac4b011e83 Add vApp property [CI: passing] 2022-11-08 15:48:00 +01:00
7ca9d20b65 Write template during firstboot [CI: passing] 2022-11-08 14:43:40 +01:00
0f79832d96 Change syntax [CI: passing] 2022-11-08 11:53:07 +01:00
8926b72344 Rebase node templates;Switch to linked clones;Rename dictionary;Add debugging [CI: failing] 2022-11-08 08:54:33 +01:00
5985615868 Fix url;Disable debugging [CI: passing] 2022-11-07 16:09:14 +01:00
4655fe7465 Allow multiple results;Add debugging [CI: passing] 2022-11-07 14:06:34 +01:00
15622d8a21 Upgrade binary #2 [CI: failing] 2022-11-07 13:29:00 +01:00
5c75452315 Add fileflob filter; Housekeeping;Add dependency;Upgrade binary [CI: killed] 2022-11-07 13:28:44 +01:00
f27dea92e3 Filter empty results;Remove debugging [CI: passing] 2022-11-07 09:03:49 +01:00
67ec47b2b3 Debug manifest parsing [CI: failing] 2022-11-07 08:40:19 +01:00
c6170436f4 Filter undefined items [CI: failing] 2022-11-07 07:38:01 +01:00
5b4eb2e443 Update 'ansible/roles/metacluster/tasks/components.yml' [CI: failing] 2022-11-07 03:18:20 +00:00
68a1534d9c Update 'ansible/roles/metacluster/tasks/components.yml' [CI: killed] 2022-11-07 03:16:49 +00:00
e656780f56 Fix template [CI: failing] 2022-11-07 03:36:01 +01:00
464ed497fe Move config to firstboot;Split yaml;Improve feedback [CI: failing] 2022-11-07 03:11:59 +01:00
eb46c384a8 Fix label [CI: failing] 2022-11-07 02:39:51 +01:00
78526527bb Remove filter;Improve feedback [CI: killed] 2022-11-07 02:18:54 +01:00
22c06e2388 Fix loop var;Fix template vars [CI: failing] 2022-11-07 01:57:30 +01:00
09c4a17050 Fix url's [CI: failing] 2022-11-06 20:41:02 +01:00
35ced380a4 Fix variable names #2; Update clusterctl config template [CI: failing] 2022-11-06 20:04:28 +01:00
9199809da5 Fix variable names;Add clusterctl config template [CI: failing] 2022-11-06 14:21:35 +01:00
806bf24fc0 Pin version;Store manifests in folder structure [CI: failing] 2022-11-06 13:23:14 +01:00
4733d8e5f8 Fix variable references [CI: passing] 2022-10-17 21:30:06 +02:00
65346db100 Debug variables [CI: passing] 2022-10-17 21:03:13 +02:00
8eb85e4b11 Fix syntax [CI: passing] 2022-10-17 16:56:24 +02:00
716818c23c Avoid lookup [CI: failing] 2022-10-17 14:28:59 +02:00
f4d32d7828 Test join syntax [CI: failing] 2022-10-17 13:47:32 +02:00
c44b40568d Fix module [CI: failing] 2022-10-17 12:06:26 +02:00
67676ff03b Download ClusterAPI assets [CI: failing] 2022-10-17 11:13:13 +02:00
204faa7415 Upgrade binary #2 [CI: passing] 2022-10-10 16:36:48 +02:00
35a5656c59 Configure 'needrestart' package 2022-10-10 16:05:48 +02:00
3fe83457ea Fix url 2022-10-10 16:03:57 +02:00
3c11ce5dde Upgrade binary [CI: failing] 2022-10-10 15:58:46 +02:00
7a1b563851 Add clusterapi prereqs [CI: passing] 2022-09-19 13:15:09 +02:00
0bddae0440 Move filter_plugin folder;Improve feedback;Add missing attributes [CI: passing] 2022-09-07 09:46:36 +02:00
8181ae4017 Add missing dependency [CI: passing] 2022-09-06 16:37:26 +02:00
ac3f162dd4 Fix linting error [CI: passing] 2022-09-06 13:41:51 +02:00
aa5c45e6e6 Add key/value pair to configmap [CI: failing] 2022-09-06 13:39:35 +02:00
a67ef0e1bd Divide hypervisor/vapp details over secret/configmap;Add filter plugin;Retain newlines in template;Add vApp properties [CI: failing] 2022-09-06 13:34:39 +02:00
1794b24998 Fix labels;Fix feedback [CI: passing] 2022-09-05 08:39:18 +02:00
7c7333690d Fix linting warning;Add annotations [CI: passing] 2022-09-04 21:41:49 +02:00
7b17b8ad63 Sort fileglob loops;Fix filter parameter;Remove redundant key;Fix multiline key/value pairs;Add helm-adopt labels [CI: passing] 2022-09-04 14:51:07 +02:00
1141225907 Install SealedSecrets;Store hypervisor credentials in secret [CI: passing] 2022-09-03 17:44:44 +02:00
6c4fe7a0e6 Improve feedback;Fix Gitea config;Fix argocd config [CI: passing] 2022-08-31 12:04:53 +02:00
8d13b527be Store certificate in configmap/secret dynamically;Remove helmchart values [CI: passing] 2022-08-30 21:14:51 +02:00
d8299ee90c Fix yaml;Fix volumemount;Fix filename [CI: passing] 2022-08-30 18:11:02 +02:00
b34ac733f4 Add missing task [CI: passing] 2022-08-30 14:52:43 +02:00
9f2e6ee160 Change gitea config;Remove image compression logic;Switch to template;Reenable/Move workaround [CI: passing] 2022-08-30 14:39:01 +02:00
042b9eb36f Fix filename/keyname;Disable jinja trim_blocks [CI: passing] 2022-08-29 22:43:26 +02:00
2097dec958 Disable tags [CI: failing] 2022-08-29 19:26:17 +02:00
0c1fca9643 Fix readycheck;Create namespaces explicitly [CI: passing] 2022-08-29 14:43:26 +02:00
b0dad1caf7 Remove redundant tasks;Fix health check;Add gitea config [CI: passing] 2022-08-29 08:51:33 +02:00
2cd2c4c6d0 Fix typo;Fix readycheck;Add argocd applicationset [CI: passing] 2022-08-28 20:10:08 +02:00
bd0b74ba19 Merge branch 'Kubernetes.Bootstrap.Appliance' of https://code.spamasaurus.com/djpbessems/Packer.Images into Kubernetes.Bootstrap.Appliance [CI: passing] 2022-08-28 10:00:59 +02:00
521b323de2 Add retries 2022-08-28 10:00:58 +02:00
35b3d5d3b9 Refine task order w/ tags;Fix API check [CI: failing] 2022-08-28 09:07:17 +02:00
675dce4160 Split up tasklist;Revert namespace;Distribute root cert [CI: passing] 2022-08-27 21:10:51 +02:00
bd7c1f92e8 Set traefik cert duration [CI: passing] 2022-08-26 11:31:12 +02:00
7d837a1711 Fix indentation [CI: killed] 2022-08-26 08:29:55 +02:00
e1b57cfdea Fix configmap name [CI: passing] 2022-08-25 16:06:29 +02:00
84d644db67 Fix namespace #2 [CI: passing] 2022-08-25 13:11:33 +02:00
5cffb61544 Fix indentation [CI: killed] 2022-08-25 13:08:35 +02:00
1083937d5d Fix configmap namespace [CI: killed] 2022-08-25 12:56:37 +02:00
e52c63f80c Fix ingress namespace [CI: killed] 2022-08-25 12:51:29 +02:00
c7579ea4a6 Fix endpoint [CI: killed] 2022-08-25 12:06:56 +02:00
fba2e3e4b1 Disable http challenge;Inject stepca cert;Set default certresolver [CI: killed] 2022-08-25 12:04:51 +02:00
1c43bb19d2 Add acme provisioner;Force system certs update [CI: passing] 2022-08-25 08:22:28 +02:00
9a3898e0b8 Retrieve step-ca more reliably;Configure step-ca admin credentials [CI: passing] 2022-08-24 17:44:30 +02:00
a3da5b8f93 Migrate from helm-controlled ingress to passthrough ingressRoute [CI: passing] 2022-08-24 11:21:51 +02:00
5f02ddab49 Add default value 2022-08-23 14:38:03 +02:00
585e39cb97 Disable Harbor tls (rely on Traefik);Configure Traefik with custom certResolver;Retrieve & install root ca in truststore [CI: passing] 2022-08-23 14:31:53 +02:00
1cd7e1510f Configure CA w/ ingress [CI: passing] 2022-08-23 12:37:38 +02:00
0534b031fa Handle duplicate images;Add registry endpoint [CI: passing] 2022-08-22 14:54:54 +02:00
c8509aa3d5 Fix keyname [CI: failing] 2022-08-22 14:28:06 +02:00
158af986c3 Add quotes [CI: failing] 2022-08-22 13:46:43 +02:00
5e537966f6 Debug Ansible issue [CI: failing] 2022-08-22 13:20:03 +02:00
3849b79493 Add step-ca component [CI: failing] 2022-08-22 12:52:47 +02:00
5f1d1bfa8a Change order (test timing of handler) [CI: passing] 2022-08-19 12:55:53 +02:00
fe306bd845 Allow handler to fail (timing issue helm charts) [CI: passing] 2022-08-18 12:54:16 +02:00
ccbd4ed984 Fix task order;Add default hostname value [CI: passing] 2022-08-18 12:44:08 +02:00
fdc5c44e6a Remove handler from non-firstboot steps;Fix kubeconfig order/logic [CI: passing] 2022-08-17 08:32:35 +02:00
c57291af6d Force apply manifests w/ handler;Add dependency [CI: failing] 2022-08-16 15:16:20 +02:00
d652cf0346 Configure ArgoCD declaratively [CI: passing] 2022-08-15 14:43:55 +02:00
a1b8837cc5 Add more memory [CI: passing] 2022-08-09 15:56:50 +02:00
5b7b93dd30 Increase diskspace [CI: failing] 2022-08-09 14:56:30 +02:00
d89ccd57da Revert compression changes [CI: failing] 2022-08-09 13:29:19 +02:00
53da641926 Test xz compression [CI: failing] 2022-08-09 12:01:39 +02:00
a3e9bc659a Add powercli container [CI: failing] 2022-08-09 11:57:31 +02:00
fc51bf3f94 Optimize node template handling [CI: passing] 2022-08-04 08:55:50 +02:00
3d96f8c13b Add missing dependency #2 [CI: passing] 2022-08-03 15:16:24 +02:00
488fe10e1e Add missing dependency [CI: failing] 2022-08-03 14:56:55 +02:00
01e168a7f9 Refactor package installation [CI: failing] 2022-08-03 14:39:36 +02:00
c48f27c42e Rebase pip packages [CI: failing] 2022-08-03 13:53:54 +02:00
185b332764 Fix linting error [CI: passing] 2022-08-03 09:23:27 +02:00
e89505bef6 Handle existing templates [CI: failing] 2022-08-03 08:53:40 +02:00
b763d2b562 Avoid uncaught exception [CI: passing] 2022-07-31 22:29:47 +02:00
d607f615e9 Remove redundant quotes [CI: passing] 2022-07-31 18:21:18 +02:00
ed7a474dbb Update vApp properties;Rebase static binary;Refactor dictionary;Combine similar steps;Housekeeping [CI: failing] 2022-07-31 18:17:13 +02:00
14c6720196 Fix typo [CI: passing] 2022-07-29 11:39:57 +02:00
277c91eeba Fix module name;Add indentation;Update dependencies #2 [CI: passing] 2022-07-29 11:39:39 +02:00
c9f3c648b7 Fix vApp property type;Include missing role;Update dependencies [CI: passing] 2022-07-29 10:56:34 +02:00
8e680c45be Housekeeping;Provision node templates;Add vApp properties [CI: passing] 2022-07-28 23:22:41 +02:00
7440d5824c Add missing parameter [CI: passing] 2022-07-27 14:23:33 +02:00
8e75925b52 Rebase dependency;Comment out redundant logic [CI: failing] 2022-07-27 13:33:04 +02:00
d6234321d9 Add dependencies [CI: passing] 2022-07-27 12:51:18 +02:00
ea20b1290c Add initial steps for workload cluster staging;Include govc (temporarily) [CI: passing] 2022-07-26 16:50:50 +02:00
d986d5ab25 Debug existing config map;Fix key;Fix tty mess of typos;Fix git push [CI: passing] 2022-07-26 10:08:35 +02:00
c29728594c Ignore errors for debugging [CI: passing] 2022-07-25 16:51:00 +02:00
10d5d6f389 Fix linting errors [CI: passing] 2022-07-22 09:44:35 +02:00
96dccef450 Housekeeping;Add tty console message [CI: failing] 2022-07-22 09:38:03 +02:00
05e0c50217 Debugging;Housekeeping;Push source gitops repository [CI: passing] 2022-07-20 20:50:39 +02:00
261e91ee2e Create additional SSH-keypair;Configure gitea [CI: passing] 2022-07-20 08:40:18 +02:00
1746af9b9d Fix variable references [CI: passing] 2022-07-18 14:07:12 +02:00
7c2ff54019 Fix helm chart ref [CI: passing] 2022-07-18 12:38:55 +02:00
193ce9a534 Move manifest injection to firstboot;Add SealedSecrets;Replace traefik dashboard [CI: failing] 2022-07-18 12:09:54 +02:00
9e91bef7b7 Disable phone-home;Add SealedSecrets;Flatten list [CI: passing] 2022-07-15 14:39:33 +02:00
05b30287bd Fix label;Configure gitea SSH;Fix git folder [CI: failing] 2022-07-15 14:02:27 +02:00
54caff8fb6 Add conditional;Inject manifests [CI: passing] 2022-07-15 12:14:12 +02:00
2f976898eb Remove redundant key/value;Add debugging [CI: passing] 2022-07-14 22:45:35 +02:00
44befeda4b Increase cpu sizing;Change default value;Fix filename;Fix endpoint;Add dependency;Fix filemode [CI: passing] 2022-07-14 14:54:04 +02:00
81847d3b93 Interact with argocd API [CI: passing] 2022-07-14 11:04:35 +02:00
39cc83ac57 Add missing key;Add traefik ssh entrypoint [CI: passing] 2022-07-14 10:33:26 +02:00
0263b2dfc4 Add (unversioned) clone of metacluster git repo 2022-07-13 12:17:46 +02:00
e7e3b69d95 Change block syntax
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-13 12:05:08 +02:00
b6ac086a31 Add conditional to K3s installation;Populate Gitea #2
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-13 12:00:03 +02:00
20ce62fb6d Enable offlineMode for gitea;Cleanup comments;Populate /etc/hosts
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-12 14:20:43 +02:00
0918eb36fe Fix parenthesis
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-11 13:37:55 +02:00
93e7d4dc9b Fix invalid var name
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-11 13:21:49 +02:00
9a0a33816c Add dummy file to preserve empty dir
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-11 10:18:33 +02:00
eeb1364f1b Refactor templating #3
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-11 09:56:02 +02:00
f04095db8c Refactor templating #42
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-11 09:24:04 +02:00
2847542976 Add vars_file reference
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-10 13:47:53 +02:00
929186d123 Revert var references
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-10 10:10:36 +02:00
d6c885240a Remove debugging;Revert default quotes;Test dynamic helm chart values
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-10 09:54:41 +02:00
0b97ae2fc5 Write whole dict to file
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-09 23:37:57 +02:00
abacbf90ce Refine templating #5
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-09 12:35:29 +02:00
243cf426d7 Add missing jinja delimiters
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-09 11:08:11 +02:00
64b7ea45c0 Fix linting error
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-09 09:26:13 +02:00
3a2dbe572e Refactor templating logic
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-09 09:18:13 +02:00
2be42989e5 Refine templating #4;Update sizing
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-07 20:16:18 +02:00
cb97703406 Refine templating #3
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-07 17:35:31 +02:00
16df0b65fc Disable TLS verify 2022-07-07 09:57:56 +02:00
847b255e3b Refine templating #2
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-06 17:03:48 +02:00
d005697438 Change ansible module
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-06 16:32:39 +02:00
ab010643df Revert blob storage test
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-06 16:09:21 +02:00
e17cd1b633 Test optimizing skopeo blob storage
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-06 15:28:13 +02:00
504764af10 Fix filename templating
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-06 14:52:25 +02:00
405fb5938f Force creation of new tarball
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-06 14:36:30 +02:00
77e0f7b7cb Refine templating
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-06 11:55:29 +02:00
7fb0e80537 Configure ArgoCD w/ password;Add bcrypt dependency
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-05 16:09:02 +02:00
fb8b9b735f Disable tls for argocd
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-05 15:52:17 +02:00
86e99b1515 Install argo-cd;Housekeeping
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-05 09:56:56 +02:00
952e92082f Fix var reference
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-04 18:00:57 +02:00
33e0220e34 Test making dd play nice 2022-07-04 17:57:56 +02:00
a51d922f00 Add marker key
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-04 17:35:02 +02:00
b441717ee1 Add retries to image downloads #2
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-04 17:10:15 +02:00
b7da591571 Add retries to image downloads 2022-07-04 17:08:46 +02:00
fa0fa30062 Fix ansible module name
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-04 16:45:09 +02:00
dd4e79901e Invert jinja delimiter declaration
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-04 16:07:30 +02:00
1fcaa4b212 Remove redundant/wrong scalar blocks
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-04 15:48:48 +02:00
164fd15c60 Prevent parsing of jinja delimiters;Revert exotic syntax
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-04 15:47:24 +02:00
241551169a Change syntax wrt raw jinja strings #3
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-04 15:34:52 +02:00
a2b20f49cc Add missing chart value key
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-04 15:21:47 +02:00
fe8765ded7 Different syntax to allow raw jinja strings
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-04 15:12:27 +02:00
caf45b5270 Fix quotes 2022-07-04 14:55:07 +02:00
9ba2df08cd Fix linting error 2022-07-04 14:15:08 +02:00
9458f49744 Avoid invalid yaml w/ jinja syntax
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-04 14:13:30 +02:00
369aaaa0b5 Fix typo in dynamic disk 2022-07-04 14:02:55 +02:00
6220e2a9aa Add chart values to var_file;Add default null
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-04 13:49:37 +02:00
e3e46bae7d Test injecting dictionaries into yaml file
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-04 13:26:35 +02:00
6e6e7900da Update gitea chart values;Add registry mirror definitions
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-04 12:39:53 +02:00
6c329a36e9 Move payload file
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-03 22:32:17 +02:00
f33f2912f1 Fix linting error
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-03 15:19:41 +02:00
9541942c23 Install gitea chart;Add tea cli binary
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-03 15:08:57 +02:00
2f937aded7 Rename vapp property;Configure node for private registry 2022-07-03 14:52:01 +02:00
95dea97382 Fix skopeo copy syntax
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-01 13:10:26 +02:00
a840306245 Change hypervisor cluster 2022-07-01 11:13:29 +02:00
bbd103d527 Fix scalar block syntax
All checks were successful
continuous-integration/drone/push Build is passing
2022-07-01 10:39:47 +02:00
b2ceee8720 Push images to registry
Some checks failed
continuous-integration/drone/push Build is failing
2022-07-01 10:32:58 +02:00
d5c886f02b Fix Harbor config;Add extra container images
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-30 16:21:19 +02:00
1d59cd4b3c Configure Harbor;Disable tarball deletion
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-30 11:20:39 +02:00
f2d9147291 Fix syntax error
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-30 08:03:13 +02:00
bc9f1c260f Reconfigure Longhorn/Harbor
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-29 17:06:23 +02:00
368f84769b Reenable image handling;Configure Longhorn/Harbor
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-29 13:07:34 +02:00
51366476cc Fix linting error
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-29 11:31:13 +02:00
dcbaf6b807 Create/mount logical volume;Add lvm2 dependency
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-29 11:29:22 +02:00
5dfc3a7813 Redirect crontab output 2022-06-29 09:58:10 +02:00
0989d0c586 Remove debugging;Add missing collection
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-29 09:27:03 +02:00
8a83f47572 Redirect error output;Add debugging;Housekeeping
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-29 09:06:13 +02:00
a3bbf88ce3 Rename file 2022-06-29 08:54:36 +02:00
5e0cebf733 Fix linting error 2022-06-29 08:00:02 +02:00
00e3266360 Test dynamic disk;Disable containerimages temporarily
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-29 07:59:17 +02:00
c6a8f9f7bd Fix linting errors
Some checks reported errors
continuous-integration/drone/push Build was killed
2022-06-28 17:11:11 +02:00
4f1231f973 Set longhorn defaults
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-28 17:10:24 +02:00
049bedbd8f Mount dynamic disk
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-28 16:37:54 +02:00
0e7cfa0934 Add dynamic disk;Add kubectl tab completion
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-28 15:46:55 +02:00
5435f73402 Disable local-path storageclass
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-28 12:02:12 +02:00
6917e0799a Add missing kubeconfig key
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-28 11:53:17 +02:00
4616b9b070 Fix typo
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-28 09:37:49 +02:00
8c741dc120 Fix parse logic
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-27 22:50:12 +02:00
8cbfcb016b Remove debugging; Cleanup redundant logic;Add vapp property
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-27 20:26:09 +02:00
4ba7b590ba Debugging & revert override logic
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-27 19:43:07 +02:00
52660e1414 Fix var reference
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-27 17:05:30 +02:00
0ab6aaeaa5 Fix foldername 2022-06-27 16:57:14 +02:00
02c26b2465 Scale down cpu/ram 2022-06-27 16:46:27 +02:00
1842a08a39 Add Gitea;Allow override of helm-chart basedir
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-27 16:32:25 +02:00
0c01f024e9 Increase disksize;Add container image import during firstboot
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-27 16:02:11 +02:00
40489ff373 Housekeeping #2 2022-06-27 15:34:15 +02:00
c491066384 Housekeeping 2022-06-27 15:33:49 +02:00
4c054cc434 Switch module
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-27 14:38:48 +02:00
dcbe6c397f Change tarball scope;Try zeroing disk
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-27 13:55:59 +02:00
cb84a02b6f Readd parse_logic
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-27 12:27:11 +02:00
8f432d3353 Remove debugging;Housekeeping;Rename dict 2022-06-27 10:55:17 +02:00
1cdbcaccaf Filter invalid results
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-27 10:28:27 +02:00
f1c6161bcb Revert debugging;Switch ansible module
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-27 09:56:48 +02:00
123518a787 Debugging versions
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-27 09:54:57 +02:00
2ec6a756b7 Quote whole cli string
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-27 09:27:33 +02:00
a1779be079 Change yq syntax
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-27 09:03:43 +02:00
8ed9b2f754 Fix firstboot logic;Refactor helm chart parsing;Housekeeping
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-27 08:44:16 +02:00
72202d9f21 Fix missing parenthesis;Attempt parsing argo-cd chart
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-26 22:30:10 +02:00
9eb5fbd0a3 Fix component name;Temporarily add ignore_errors
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-26 21:35:41 +02:00
c58ede04c4 Add missing galaxy collection;Fix logic to parse charts for container images;Add ArgoCD
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-26 21:20:16 +02:00
662e8984c3 Fix linting errors; Extend firstboot logic
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-26 21:01:27 +02:00
b7abf25907 Fix version number;Parse, Pull & Compress container images
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-26 20:56:45 +02:00
18fa7742fa Add short pause before first provisioner
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-26 18:48:06 +02:00
f6993c2052 Remove redundant quotes
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-26 18:32:05 +02:00
59d1730ca5 Update var reference #2
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-26 18:24:57 +02:00
b087203cfb Update var reference
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-26 18:06:15 +02:00
d39d594bf0 Reorganize vars dict;Parse & loop through dict key/values
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-26 17:54:19 +02:00
487239365e Remove debugging;Set loop_control label
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-25 23:57:55 +02:00
6ea03d152c Debugging paths
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-25 23:43:20 +02:00
01991435ae Remove loop redundancy
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-25 23:32:29 +02:00
a64b5b2325 Fix missing quote
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-25 21:14:47 +02:00
38d7442025 Remove redundant tasks
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-25 21:03:42 +02:00
cf91519076 Add jinja filter
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-25 20:48:15 +02:00
bae044e145 Fix misaligned var references
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-25 20:23:27 +02:00
f39b4bbb62 Try dynamic logic for archived/compressed/flat static binaries
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-25 18:44:43 +02:00
9739c51100 Fix var reference
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-25 08:49:53 +02:00
0df98d4341 Quote special char string
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-25 08:33:12 +02:00
fc23dc068d Fix var reference;Install packages;Change DHCP identifier to MAC
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-25 08:28:44 +02:00
4d78d65ad8 Add missing role reference
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-25 01:13:37 +02:00
c1440d9dcd Add ansible galaxy collection requirements
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-25 01:08:11 +02:00
4a5f390ae1 Fix linting errors
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-25 00:54:36 +02:00
e0a5b5a5da Reorganize dependencies/components;Fix folder name
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-25 00:50:44 +02:00
081aaaaa19 Fix/Replace old references;Fix syntaxes
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-24 23:55:54 +02:00
2bd0f8df0a Initial build based on 22.04
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-24 23:44:10 +02:00
2c57dbcddc Fix boot kernel command;Rename files&folders
All checks were successful
continuous-integration/drone/push Build is passing
2022-06-22 11:40:25 +02:00
0f01e803f2 First attempt at 22.04
Some checks failed
continuous-integration/drone/push Build is failing
2022-06-22 11:20:14 +02:00
97 changed files with 3016 additions and 350 deletions


@@ -1,7 +1,7 @@
 kind: pipeline
 type: kubernetes
 name: 'Packer Build'
 volumes:
 - name: output
   claim:
@@ -14,29 +14,31 @@ steps:
 - name: Debugging information
   image: bv11-cr01.bessems.eu/library/packer-extended
   commands:
-  - yamllint --version
-  - packer --version
   - ansible --version
   - ovftool --version
+  - packer --version
+  - yamllint --version
-- name: Ubuntu Server 20.04
+- name: Kubernetes Bootstrap Appliance
   image: bv11-cr01.bessems.eu/library/packer-extended
   pull: always
   commands:
   - |
     sed -i -e "s/<<img-password>>/$${SSH_PASSWORD}/g" \
-      packer/preseed/UbuntuServer20.04/user-data
+      packer/preseed/UbuntuServer22.04/user-data
   - |
     yamllint -d "{extends: relaxed, rules: {line-length: disable}}" \
       ansible \
-      packer/preseed/UbuntuServer20.04/user-data \
+      packer/preseed/UbuntuServer22.04/user-data \
       scripts
+  - |
+    ansible-galaxy install \
+      -r ansible/requirements.yml
   - |
     packer init -upgrade \
       ./packer
   - |
     packer validate \
       -var vm_name=$DRONE_BUILD_NUMBER-${DRONE_COMMIT_SHA:0:10} \
-      -var vm_guestos=ubuntuserver20.04 \
       -var repo_username=$${REPO_USERNAME} \
       -var repo_password=$${REPO_PASSWORD} \
       -var vsphere_password=$${VSPHERE_PASSWORD} \
@@ -46,7 +48,6 @@ steps:
       packer build \
       -on-error=cleanup -timestamp-ui \
       -var vm_name=$DRONE_BUILD_NUMBER-${DRONE_COMMIT_SHA:0:10} \
-      -var vm_guestos=ubuntuserver20.04 \
       -var repo_username=$${REPO_USERNAME} \
       -var repo_password=$${REPO_PASSWORD} \
      -var vsphere_password=$${VSPHERE_PASSWORD} \


@@ -1 +1 @@
-# Packer.Images [![Build Status](https://ci.spamasaurus.com/api/badges/djpbessems/Packer.Images/status.svg?ref=refs/heads/Windows10)](https://ci.spamasaurus.com/djpbessems/Packer.Images)
+# Packer.Images [![Build Status](https://ci.spamasaurus.com/api/badges/djpbessems/Packer.Images/status.svg?ref=refs/heads/Kubernetes.Bootstrap.Appliance)](https://ci.spamasaurus.com/djpbessems/Packer.Images)


@@ -1,7 +1,10 @@
 ---
 - hosts: all
   gather_facts: false
+  vars_files:
+  - metacluster.yml
   become: true
   roles:
   - os
   - firstboot
+  - assets

ansible/requirements.yml (new file, 7 lines)

@@ -0,0 +1,7 @@
collections:
- name: https://github.com/ansible-collections/ansible.utils
type: git
- name: https://github.com/ansible-collections/community.general
type: git
- name: https://github.com/ansible-collections/kubernetes.core
type: git


@@ -0,0 +1,49 @@
- name: Parse manifests for container images
ansible.builtin.shell:
# This set of commands is necessary to deal with multi-line scalar values
# eg.:
# key: |
# multi-line
# value
cmd: >-
cat {{ item.dest }} | yq --no-doc eval '.. | .image? | select(.)' | awk '!/ /';
cat {{ item.dest }} | yq eval '.data.data' | yq --no-doc eval '.. | .image? | select(.)';
cat {{ item.dest }} | yq --no-doc eval '.. | .files? | with_entries(select(.value.path == "*.yaml")).[0].content' | awk '!/null/' | yq eval '.. | .image? | select(.)'
register: parsedmanifests
loop: "{{ clusterapi_manifests.results }}"
loop_control:
label: "{{ item.dest | basename }}"
- name: Parse helm charts for container images
ansible.builtin.shell:
cmd: "{{ item.value.helm.parse_logic }}"
chdir: /opt/metacluster/helm-charts/{{ item.key }}
register: chartimages
when: item.value.helm is defined
loop: "{{ lookup('ansible.builtin.dict', components) }}"
loop_control:
label: "{{ item.key }}"
- name: Store container images in dicts
ansible.builtin.set_fact:
containerimages_{{ item.source }}: "{{ item.results }}"
loop:
- source: charts
results: "{{ chartimages | json_query('results[*].stdout_lines') | select() | flatten | list }}"
- source: kubeadm
results: "{{ kubeadmimages.stdout_lines }}"
- source: manifests
results: "{{ parsedmanifests | json_query('results[*].stdout_lines') | select() | flatten | list }}"
loop_control:
label: "{{ item.source }}"
- name: Pull and store containerimages
ansible.builtin.shell:
cmd: >-
skopeo copy \
--insecure-policy \
--retry-times=5 \
docker://{{ item }} \
docker-archive:./{{ ( item | regex_findall('[^/:]+'))[-2] }}_{{ lookup('ansible.builtin.password', '/dev/null length=5 chars=ascii_lowercase,digits seed={{ item }}') }}.tar:{{ item }}
chdir: /opt/metacluster/container-images
loop: "{{ (containerimages_charts + containerimages_kubeadm + containerimages_manifests + dependencies.container_images) | flatten | unique | sort }}"
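The yq pipeline in `Parse manifests for container images` walks every document and collects any `image:` value, including ones nested inside multi-line scalar fields. A rough Python sketch of that recursive walk (the sample manifest below is illustrative, not taken from the repository):

```python
def find_images(node):
    """Recursively collect all values stored under an `image` key in a
    parsed YAML/JSON tree, mirroring yq's `.. | .image? | select(.)`."""
    images = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "image" and isinstance(value, str):
                images.append(value)
            images.extend(find_images(value))
    elif isinstance(node, list):
        for item in node:
            images.extend(find_images(item))
    return images

# Illustrative manifest structure (not from the repository)
manifest = {
    "kind": "Deployment",
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"name": "app", "image": "nginx:1.23"},
                    {"name": "sidecar", "image": "busybox:1.35"},
                ]
            }
        }
    },
}

print(find_images(manifest))  # ['nginx:1.23', 'busybox:1.35']
```

The yq expression performs this traversal natively on each YAML document; the sketch only shows the shape of the logic.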

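The `docker-archive:` destination above derives each tarball name from the second-to-last `/`- or `:`-separated segment of the image reference, plus a 5-character suffix produced by a seeded `password` lookup, so re-runs generate the same filename for the same image. A hedged Python sketch of that idea (Ansible's lookup uses its own RNG, so suffixes here will not match it byte-for-byte):

```python
import random
import re

def tarball_name(image_ref):
    """Build a deterministic tarball filename for an image reference:
    the second-to-last segment as base name, plus a 5-char suffix from
    an RNG seeded with the full reference (so reruns are stable)."""
    segments = re.findall(r"[^/:]+", image_ref)
    base = segments[-2]
    rng = random.Random(image_ref)  # seed = image reference, like `seed={{ item }}`
    suffix = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz0123456789") for _ in range(5))
    return f"{base}_{suffix}.tar"

# e.g. tarball_name("docker.io/library/nginx:1.23") -> "nginx_<5 chars>.tar"
```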

@@ -0,0 +1,31 @@
---
- name: Initialize tempfolder
ansible.builtin.tempfile:
state: directory
register: archive
- name: Download & extract archived static binary
ansible.builtin.unarchive:
src: "{{ item.url }}"
dest: "{{ archive.path }}"
remote_src: yes
extra_opts: "{{ item.extra_opts | default(omit) }}"
register: staticbinary_download
retries: 5
delay: 5
until: staticbinary_download is not failed
- name: Install extracted binary
ansible.builtin.copy:
src: "{{ archive.path }}/{{ item.filename }}"
dest: /usr/local/bin/{{ item.filename }}
remote_src: yes
owner: root
group: root
mode: 0755
- name: Cleanup tempfolder
ansible.builtin.file:
path: "{{ archive.path }}"
state: absent
when: archive.path is defined


@@ -0,0 +1,54 @@
- name: Download & install static binaries
ansible.builtin.get_url:
url: "{{ item.url }}"
url_username: "{{ item.username | default(omit) }}"
url_password: "{{ item.password | default(omit) }}"
dest: /usr/local/bin/{{ item.filename }}
owner: root
group: root
mode: 0755
register: staticbinary_download
loop: "{{ dependencies.static_binaries | selectattr('archive', 'undefined') }}"
loop_control:
label: "{{ item.filename }}"
retries: 5
delay: 5
until: staticbinary_download is not failed
- name: Download, extract & install archived static binaries
include_tasks: dependencies.archive_compressed.yml
loop: "{{ dependencies.static_binaries | rejectattr('archive', 'undefined') | selectattr('archive', 'equalto', 'compressed') }}"
loop_control:
label: "{{ item.filename }}"
- name: Install ansible-galaxy collections
ansible.builtin.shell:
cmd: ansible-galaxy collection install {{ item }}
register: collections
loop: "{{ dependencies.ansible_galaxy_collections }}"
retries: 5
delay: 5
until: collections is not failed
- name: Install distro packages
ansible.builtin.apt:
pkg: "{{ dependencies.packages.apt }}"
state: latest
update_cache: yes
install_recommends: no
- name: Upgrade all packages
ansible.builtin.apt:
name: '*'
state: latest
update_cache: yes
- name: Install additional python packages
ansible.builtin.pip:
name: "{{ dependencies.packages.pip }}"
state: latest
- name: Cleanup apt cache
ansible.builtin.apt:
autoremove: yes
purge: yes
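Several tasks above share the same `register`/`retries`/`delay`/`until` pattern to absorb transient network failures. A minimal Python sketch of that retry loop (function names are illustrative, not from the playbook):

```python
import time

def retry(task, retries=5, delay=5):
    """Run `task` until it succeeds, attempting up to `retries` times with a
    fixed delay between attempts, mirroring Ansible's retries/delay/until."""
    for attempt in range(1, retries + 1):
        try:
            return task()
        except Exception:
            if attempt == retries:
                raise  # give up after the final attempt
            time.sleep(delay)

# Simulate a download that fails twice before succeeding.
attempts = {"count": 0}
def flaky_download():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry(flaky_download, retries=5, delay=0))  # ok
```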


@@ -0,0 +1,5 @@
- name: Clone git repository
ansible.builtin.git:
repo: "{{ platform.gitops.repository.uri }}"
version: "{{ platform.gitops.repository.revision }}"
dest: /opt/metacluster/git-repositories/gitops


@@ -0,0 +1,19 @@
- name: Add helm repositories
kubernetes.core.helm_repository:
name: "{{ item.name }}"
repo_url: "{{ item.url }}"
state: present
loop: "{{ platform.helm_repositories }}"
- name: Fetch helm charts
ansible.builtin.command:
cmd: helm fetch {{ item.value.helm.chart }} --untar --version {{ item.value.helm.version }}
chdir: /opt/metacluster/helm-charts
when: item.value.helm is defined
register: helmcharts
loop: "{{ lookup('ansible.builtin.dict', components) }}"
loop_control:
label: "{{ item.key }}"
retries: 5
delay: 5
until: helmcharts is not failed


@@ -0,0 +1,43 @@
- name: Download & install K3s binary
ansible.builtin.get_url:
url: https://github.com/k3s-io/k3s/releases/download/{{ platform.k3s.version }}/k3s
dest: /usr/local/bin/k3s
owner: root
group: root
mode: 0755
register: download
until: download is not failed
retries: 3
delay: 10
- name: Download K3s images tarball
ansible.builtin.get_url:
url: https://github.com/k3s-io/k3s/releases/download/{{ platform.k3s.version }}/k3s-airgap-images-amd64.tar.gz
dest: /var/lib/rancher/k3s/agent/images
register: download
until: download is not failed
retries: 3
delay: 10
- name: Download K3s install script
ansible.builtin.get_url:
url: https://get.k3s.io
dest: /opt/metacluster/k3s/install.sh
owner: root
group: root
mode: 0755
register: download
until: download is not failed
retries: 3
delay: 10
- name: Inject manifests
ansible.builtin.template:
src: helmchartconfig.j2
dest: /var/lib/rancher/k3s/server/manifests/{{ item.name }}-config.yaml
owner: root
group: root
mode: 0600
loop: "{{ platform.packaged_components }}"
loop_control:
label: "{{ item.name }}"


@@ -0,0 +1,26 @@
- name: Initialize tempfile
ansible.builtin.tempfile:
state: directory
register: kubeadm
- name: Download kubeadm binary
ansible.builtin.get_url:
url: https://dl.k8s.io/release/{{ components.clusterapi.workload.version.k8s }}/bin/linux/amd64/kubeadm
dest: "{{ kubeadm.path }}/kubeadm"
mode: u+x
- name: Retrieve container images list
ansible.builtin.shell:
cmd: "{{ kubeadm.path }}/kubeadm config images list --kubernetes-version {{ components.clusterapi.workload.version.k8s }}"
register: kubeadmimages
- name: Store list of container images for reference
ansible.builtin.copy:
dest: /opt/metacluster/cluster-api/imagelist
content: "{{ kubeadmimages.stdout }}"
- name: Cleanup tempfile
ansible.builtin.file:
path: "{{ kubeadm.path }}"
state: absent
when: kubeadm.path is defined


@@ -0,0 +1,30 @@
- name: Create folder structure(s)
ansible.builtin.file:
path: "{{ item }}"
state: directory
loop:
- /etc/rancher/k3s
- /opt/metacluster/cluster-api/bootstrap-kubeadm/{{ components.clusterapi.management.version.base }}
- /opt/metacluster/cluster-api/cert-manager/{{ components.clusterapi.management.version.cert_manager }}
- /opt/metacluster/cluster-api/cluster-api/{{ components.clusterapi.management.version.base }}
- /opt/metacluster/cluster-api/cni-calico/{{ components.clusterapi.workload.version.calico }}
- /opt/metacluster/cluster-api/control-plane-kubeadm/{{ components.clusterapi.management.version.base }}
- /opt/metacluster/cluster-api/infrastructure-vsphere/{{ components.clusterapi.management.version.infrastructure_vsphere }}
- /opt/metacluster/cluster-api/ipam-in-cluster/{{ components.clusterapi.management.version.ipam_incluster }}
- /opt/metacluster/container-images
- /opt/metacluster/git-repositories/gitops
- /opt/metacluster/helm-charts
- /opt/metacluster/k3s
- /opt/metacluster/kube-vip
- /opt/workloadcluster/node-templates
- /var/lib/rancher/k3s/agent/images
- /var/lib/rancher/k3s/server/manifests
- import_tasks: dependencies.yml
- import_tasks: k3s.yml
- import_tasks: helm.yml
- import_tasks: git.yml
- import_tasks: manifests.yml
- import_tasks: kubeadm.yml
- import_tasks: containerimages.yml
- import_tasks: nodetemplates.yml


@@ -0,0 +1,86 @@
- block:
- name: Aggregate chart_values into dict
ansible.builtin.set_fact:
chart_values: "{{ chart_values | default({}) | combine({ (item.key | regex_replace('[^A-Za-z0-9]', '')): { 'chart_values': (item.value.helm.chart_values | from_yaml) } }) }}"
when: item.value.helm.chart_values is defined
loop: "{{ lookup('ansible.builtin.dict', components) }}"
loop_control:
label: "{{ item.key }}"
- name: Write dict to vars_file
ansible.builtin.copy:
dest: /opt/firstboot/ansible/vars/metacluster.yml
content: >-
{{
{ 'components': (
chart_values |
combine({ 'clusterapi': components.clusterapi }) |
combine({ 'kubevip' : components.kubevip }) )
} | to_nice_yaml(indent=2, width=4096)
}}
- name: Download ClusterAPI manifests
ansible.builtin.get_url:
url: "{{ item.url }}"
dest: /opt/metacluster/cluster-api/{{ item.dest }}
register: clusterapi_manifests
loop:
# This list is based on `clusterctl config repositories`
# Note: Each manifest also needs a `metadata.yaml` file stored in the respective folder
- url: https://github.com/kubernetes-sigs/cluster-api/releases/download/{{ components.clusterapi.management.version.base }}/bootstrap-components.yaml
dest: bootstrap-kubeadm/{{ components.clusterapi.management.version.base }}/bootstrap-components.yaml
- url: https://github.com/kubernetes-sigs/cluster-api/releases/download/{{ components.clusterapi.management.version.base }}/core-components.yaml
dest: cluster-api/{{ components.clusterapi.management.version.base }}/core-components.yaml
- url: https://github.com/kubernetes-sigs/cluster-api/releases/download/{{ components.clusterapi.management.version.base }}/control-plane-components.yaml
dest: control-plane-kubeadm/{{ components.clusterapi.management.version.base }}/control-plane-components.yaml
# This downloads the same `metadata.yaml` file to three separate folders
- url: https://github.com/kubernetes-sigs/cluster-api/releases/download/{{ components.clusterapi.management.version.base }}/metadata.yaml
dest: bootstrap-kubeadm/{{ components.clusterapi.management.version.base }}/metadata.yaml
- url: https://github.com/kubernetes-sigs/cluster-api/releases/download/{{ components.clusterapi.management.version.base }}/metadata.yaml
dest: cluster-api/{{ components.clusterapi.management.version.base }}/metadata.yaml
- url: https://github.com/kubernetes-sigs/cluster-api/releases/download/{{ components.clusterapi.management.version.base }}/metadata.yaml
dest: control-plane-kubeadm/{{ components.clusterapi.management.version.base }}/metadata.yaml
# The vsphere infrastructure provider requires multiple files (`cluster-template.yaml` and `metadata.yaml` on top of default files)
- url: https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/releases/download/{{ components.clusterapi.management.version.infrastructure_vsphere }}/infrastructure-components.yaml
dest: infrastructure-vsphere/{{ components.clusterapi.management.version.infrastructure_vsphere }}/infrastructure-components.yaml
- url: https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/releases/download/{{ components.clusterapi.management.version.infrastructure_vsphere }}/cluster-template.yaml
dest: infrastructure-vsphere/{{ components.clusterapi.management.version.infrastructure_vsphere }}/cluster-template.yaml
- url: https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/releases/download/{{ components.clusterapi.management.version.infrastructure_vsphere }}/metadata.yaml
dest: infrastructure-vsphere/{{ components.clusterapi.management.version.infrastructure_vsphere }}/metadata.yaml
# Additionally, cert-manager is a prerequisite
- url: https://github.com/cert-manager/cert-manager/releases/download/{{ components.clusterapi.management.version.cert_manager }}/cert-manager.yaml
dest: cert-manager/{{ components.clusterapi.management.version.cert_manager }}/cert-manager.yaml
# Finally, workload clusters will need a CNI plugin
- url: https://raw.githubusercontent.com/projectcalico/calico/{{ components.clusterapi.workload.version.calico }}/manifests/calico.yaml
dest: cni-calico/{{ components.clusterapi.workload.version.calico }}/calico.yaml
# IPAM in-cluster provider (w/ metadata.yaml)
- url: https://github.com/telekom/cluster-api-ipam-provider-in-cluster/releases/download/{{ components.clusterapi.management.version.ipam_incluster }}/ipam-components.yaml
dest: ipam-in-cluster/{{ components.clusterapi.management.version.ipam_incluster }}/ipam-components.yaml
- url: https://github.com/telekom/cluster-api-ipam-provider-in-cluster/releases/download/{{ components.clusterapi.management.version.ipam_incluster }}/metadata.yaml
dest: ipam-in-cluster/{{ components.clusterapi.management.version.ipam_incluster }}/metadata.yaml
loop_control:
label: "{{ item.url | basename }}"
retries: 5
delay: 5
until: clusterapi_manifests is not failed
- name: Download kube-vip RBAC manifest
ansible.builtin.get_url:
url: https://kube-vip.io/manifests/rbac.yaml
dest: /opt/metacluster/kube-vip/rbac.yaml
register: kubevip_manifest
retries: 5
delay: 5
until: kubevip_manifest is not failed
# - name: Inject manifests
# ansible.builtin.template:
# src: "{{ item.type }}.j2"
# dest: /var/lib/rancher/k3s/server/manifests/{{ item.name }}-manifest.yaml
# owner: root
# group: root
# mode: 0600
# loop: "{{ lookup('ansible.builtin.dict', components) | map(attribute='value.manifests') | list | select('defined') | flatten }}"
# loop_control:
# label: "{{ item.type + '/' + item.name }}"


@@ -0,0 +1,4 @@
- name: Download node-template image
ansible.builtin.uri:
url: "{{ components.clusterapi.workload.node_template.url }}"
dest: /opt/workloadcluster/node-templates/{{ components.clusterapi.workload.node_template.url | basename }}


@@ -0,0 +1,8 @@
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
name: {{ item.name }}
namespace: {{ item.namespace }}
spec:
valuesContent: |-
{{ item.config }}


@@ -0,0 +1,26 @@
---
- hosts: 127.0.0.1
connection: local
gather_facts: true
vars_files:
- defaults.yml
- metacluster.yml
# become: true
roles:
- vapp
- network
- preflight
- users
- disks
- metacluster
- workloadcluster
- tty
- cleanup
handlers:
- name: Apply manifests
kubernetes.core.k8s:
src: "{{ item }}"
state: present
kubeconfig: "{{ kubeconfig.path }}"
loop: "{{ query('ansible.builtin.fileglob', '/var/lib/rancher/k3s/server/manifests/*.yaml') | sort }}"
ignore_errors: yes


@@ -0,0 +1,14 @@
import netaddr
def netaddr_iter_iprange(ip_start, ip_end):
return [str(ip) for ip in netaddr.iter_iprange(ip_start, ip_end)]
class FilterModule(object):
''' Ansible filter. Interface to netaddr methods.
https://pypi.org/project/netaddr/
'''
def filters(self):
return {
'netaddr_iter_iprange': netaddr_iter_iprange
}
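This filter plugin depends on the third-party `netaddr` package. A stdlib-only sketch of the same inclusive range expansion, assuming IPv4 addresses only:

```python
import ipaddress

def iter_iprange(ip_start: str, ip_end: str) -> list[str]:
    # Same contract as the netaddr-based filter: the inclusive
    # range of addresses from ip_start through ip_end, as strings.
    start = int(ipaddress.IPv4Address(ip_start))
    end = int(ipaddress.IPv4Address(ip_end))
    return [str(ipaddress.IPv4Address(i)) for i in range(start, end + 1)]

addresses = iter_iprange('192.168.0.10', '192.168.0.12')
```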


@@ -0,0 +1,131 @@
- block:
- name: Install step-ca chart
kubernetes.core.helm:
name: step-certificates
chart_ref: /opt/metacluster/helm-charts/step-certificates
release_namespace: step-ca
create_namespace: yes
# Unable to use a REST API-based readycheck due to missing ingress
wait: yes
kubeconfig: "{{ kubeconfig.path }}"
values: "{{ components.stepcertificates.chart_values }}"
- name: Retrieve configmap w/ root certificate
kubernetes.core.k8s_info:
kind: ConfigMap
name: step-certificates-certs
namespace: step-ca
kubeconfig: "{{ kubeconfig.path }}"
register: stepca_cm_certs
- name: Create target namespaces
kubernetes.core.k8s:
kind: Namespace
name: "{{ item }}"
state: present
kubeconfig: "{{ kubeconfig.path }}"
loop:
- argo-cd
# - kube-system
- name: Store root certificate in namespaced configmaps/secrets
kubernetes.core.k8s:
state: present
template: "{{ item.kind }}.j2"
kubeconfig: "{{ kubeconfig.path }}"
vars:
_template:
name: "{{ item.name }}"
namespace: "{{ item.namespace }}"
annotations: "{{ item.annotations | default('{}') | indent(width=4, first=True) }}"
labels: "{{ item.labels | default('{}') | indent(width=4, first=True) }}"
data: "{{ item.data }}"
loop:
- name: argocd-tls-certs-cm
namespace: argo-cd
kind: configmap
annotations: |
meta.helm.sh/release-name: argo-cd
meta.helm.sh/release-namespace: argo-cd
labels: |
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: argocd-cm
app.kubernetes.io/part-of: argocd
data:
- key: git.{{ vapp['metacluster.fqdn'] }}
value: "{{ stepca_cm_certs.resources[0].data['root_ca.crt'] }}"
- name: step-certificates-certs
namespace: kube-system
kind: secret
data:
- key: root_ca.crt
value: "{{ stepca_cm_certs.resources[0].data['root_ca.crt'] | b64encode }}"
loop_control:
label: "{{ item.kind + '/' + item.name + ' (' + item.namespace + ')' }}"
- name: Configure step-ca passthrough ingress
ansible.builtin.template:
src: ingressroutetcp.j2
dest: /var/lib/rancher/k3s/server/manifests/{{ _template.name }}-manifest.yaml
owner: root
group: root
mode: 0600
vars:
_template:
name: step-ca
namespace: step-ca
config: |2
entryPoints:
- websecure
routes:
- match: HostSNI(`ca.{{ vapp['metacluster.fqdn'] }}`)
services:
- name: step-certificates
port: 443
tls:
passthrough: true
notify:
- Apply manifests
- name: Inject step-ca certificate into traefik container
ansible.builtin.blockinfile:
path: /var/lib/rancher/k3s/server/manifests/traefik-config.yaml
block: |2
volumes:
- name: step-certificates-certs
mountPath: /step-ca
type: secret
env:
- name: LEGO_CA_CERTIFICATES
value: /step-ca/root_ca.crt
marker: ' # {mark} ANSIBLE MANAGED BLOCK'
notify:
- Apply manifests
- name: Trigger handlers
ansible.builtin.meta: flush_handlers
- name: Retrieve step-ca configuration
kubernetes.core.k8s_info:
kind: ConfigMap
name: step-certificates-config
namespace: step-ca
kubeconfig: "{{ kubeconfig.path }}"
register: stepca_cm_config
- name: Install root CA in system truststore
ansible.builtin.shell:
cmd: >-
step ca bootstrap \
--ca-url=https://ca.{{ vapp['metacluster.fqdn'] }} \
--fingerprint={{ stepca_cm_config.resources[0].data['defaults.json'] | from_json | json_query('fingerprint') }} \
--install \
--force
update-ca-certificates
module_defaults:
ansible.builtin.uri:
validate_certs: no
status_code: [200, 201]
body_format: json


@@ -0,0 +1,139 @@
- block:
- name: Install gitea chart
kubernetes.core.helm:
name: gitea
chart_ref: /opt/metacluster/helm-charts/gitea
release_namespace: gitea
create_namespace: yes
wait: no
kubeconfig: "{{ kubeconfig.path }}"
values: "{{ components.gitea.chart_values }}"
- name: Ensure gitea API availability
ansible.builtin.uri:
url: https://git.{{ vapp['metacluster.fqdn'] }}/api/healthz
method: GET
register: api_readycheck
until:
- api_readycheck.json.status is defined
- api_readycheck.json.status == 'pass'
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.long }}"
- name: Configure additional SSH ingress
ansible.builtin.template:
src: ingressroutetcp.j2
dest: /var/lib/rancher/k3s/server/manifests/{{ _template.name }}-manifest.yaml
owner: root
group: root
mode: 0600
vars:
_template:
name: gitea-ssh
namespace: gitea
config: |2
entryPoints:
- ssh
routes:
- match: HostSNI(`*`)
services:
- name: gitea-ssh
port: 22
notify:
- Apply manifests
- name: Trigger handlers
ansible.builtin.meta: flush_handlers
- name: Generate gitea API token
ansible.builtin.uri:
url: https://git.{{ vapp['metacluster.fqdn'] }}/api/v1/users/administrator/tokens
method: POST
user: administrator
password: "{{ vapp['metacluster.password'] }}"
force_basic_auth: yes
body:
name: token_init_{{ lookup('password', '/dev/null length=5 chars=ascii_letters,digits') }}
register: gitea_api_token
- name: Retrieve existing gitea configuration
ansible.builtin.uri:
url: https://git.{{ vapp['metacluster.fqdn'] }}/api/v1/repos/search
method: GET
register: gitea_existing_config
- block:
- name: Register SSH public key
ansible.builtin.uri:
url: https://git.{{ vapp['metacluster.fqdn'] }}/api/v1/user/keys
method: POST
headers:
Authorization: token {{ gitea_api_token.json.sha1 }}
body:
key: "{{ gitops_sshkey.public_key }}"
read_only: false
title: GitOps
- name: Create organization(s)
ansible.builtin.uri:
url: https://git.{{ vapp['metacluster.fqdn'] }}/api/v1/orgs
method: POST
headers:
Authorization: token {{ gitea_api_token.json.sha1 }}
body: "{{ item }}"
loop:
- full_name: Meta-cluster
description: Meta-cluster configuration items
username: mc
website: https://git.{{ vapp['metacluster.fqdn'] }}/mc
location: '[...]'
visibility: public
- full_name: Workload-cluster
description: Workload-cluster configuration items
username: wl
website: https://git.{{ vapp['metacluster.fqdn'] }}/wl
location: '[...]'
visibility: public
loop_control:
label: "{{ item.full_name }}"
- name: Create repositories
ansible.builtin.uri:
url: https://git.{{ vapp['metacluster.fqdn'] }}/api/v1/orgs/{{ item.organization }}/repos
method: POST
headers:
Authorization: token {{ gitea_api_token.json.sha1 }}
body: "{{ item.body }}"
loop:
- organization: mc
body:
name: GitOps.Config
# auto_init: true
# default_branch: main
description: GitOps manifests
- organization: wl
body:
name: Template.GitOps.Config
# auto_init: true
# default_branch: main
description: GitOps manifests
loop_control:
label: "{{ item.organization + '/' + item.body.name }}"
- name: Rebase/Push source gitops repository
ansible.builtin.shell:
cmd: |
git config --local http.sslVerify false
git remote set-url origin https://administrator:{{ vapp['metacluster.password'] | urlencode }}@git.{{ vapp['metacluster.fqdn'] }}/mc/GitOps.Config.git
git push
chdir: /opt/metacluster/git-repositories/gitops
when: (gitea_existing_config.json is undefined) or (gitea_existing_config.json.data | length == 0)
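The `urlencode` filter matters here because the admin password may contain characters (`@`, `:`, `/`) that would otherwise corrupt the authority section of the remote URL. A sketch of the same escaping (the password and host are made-up examples):

```python
from urllib.parse import quote

def git_remote_url(user: str, password: str, host: str, repo: str) -> str:
    # Percent-encode the password so reserved URL characters
    # ('@', ':', '/', ...) cannot break the user:password@host syntax.
    return f"https://{user}:{quote(password, safe='')}@{host}/{repo}"

url = git_remote_url('administrator', 'p@ss:w/rd', 'git.example.org', 'mc/GitOps.Config.git')
```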
module_defaults:
ansible.builtin.uri:
validate_certs: no
status_code: [200, 201]
body_format: json


@@ -0,0 +1,70 @@
- block:
- name: Install argo-cd chart
kubernetes.core.helm:
name: argo-cd
chart_ref: /opt/metacluster/helm-charts/argo-cd
release_namespace: argo-cd
create_namespace: yes
wait: no
kubeconfig: "{{ kubeconfig.path }}"
values: "{{ components.argocd.chart_values }}"
- name: Ensure argo-cd API availability
ansible.builtin.uri:
url: https://gitops.{{ vapp['metacluster.fqdn'] }}/api/version
method: GET
register: api_readycheck
until:
- api_readycheck.json.Version is defined
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.long }}"
- name: Generate argo-cd API token
ansible.builtin.uri:
url: https://gitops.{{ vapp['metacluster.fqdn'] }}/api/v1/session
method: POST
force_basic_auth: yes
body:
username: admin
password: "{{ vapp['metacluster.password'] }}"
register: argocd_api_token
- name: Configure metacluster-gitops repository
ansible.builtin.template:
src: gitrepo.j2
dest: /var/lib/rancher/k3s/server/manifests/{{ _template.name }}-manifest.yaml
owner: root
group: root
mode: 0600
vars:
_template:
name: argocd-gitrepo-metacluster
namespace: argo-cd
uid: "{{ lookup('ansible.builtin.password', '/dev/null length=5 chars=ascii_lowercase,digits seed=inventory_hostname') }}"
privatekey: "{{ lookup('ansible.builtin.file', '~/.ssh/git_rsa_id') | indent(4, true) }}"
notify:
- Apply manifests
- name: Create applicationset
ansible.builtin.template:
src: applicationset.j2
dest: /var/lib/rancher/k3s/server/manifests/{{ _template.name }}-manifest.yaml
owner: root
group: root
mode: 0600
vars:
_template:
name: argocd-applicationset-metacluster
namespace: argo-cd
notify:
- Apply manifests
- name: Trigger handlers
ansible.builtin.meta: flush_handlers
module_defaults:
ansible.builtin.uri:
validate_certs: no
status_code: [200, 201]
body_format: json


@@ -0,0 +1,26 @@
- name: Configure traefik dashboard ingress
ansible.builtin.template:
src: ingressroute.j2
dest: /var/lib/rancher/k3s/server/manifests/{{ _template.name }}-manifest.yaml
owner: root
group: root
mode: 0600
vars:
_template:
name: traefik-dashboard
namespace: kube-system
config: |2
entryPoints:
- web
- websecure
routes:
- kind: Rule
match: Host(`ingress.{{ vapp['metacluster.fqdn'] }}`)
services:
- kind: TraefikService
name: api@internal
notify:
- Apply manifests
- name: Trigger handlers
ansible.builtin.meta: flush_handlers


@@ -0,0 +1,13 @@
- name: Configure fallback name resolution
ansible.builtin.lineinfile:
path: /etc/hosts
line: "{{ vapp['guestinfo.ipaddress'] }} {{ item + '.' + vapp['metacluster.fqdn'] }}"
state: present
loop:
# TODO: Make this list dynamic
- ca
- git
- gitops
- ingress
- registry
- storage


@@ -0,0 +1,74 @@
- name: Store custom configuration files
ansible.builtin.copy:
dest: "{{ item.filename }}"
content: "{{ item.content }}"
loop:
- filename: /etc/rancher/k3s/config.yaml
content: |
kubelet-arg:
- "config=/etc/rancher/k3s/kubelet.config"
- filename: /etc/rancher/k3s/kubelet.config
content: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriod: 180s
shutdownGracePeriodCriticalPods: 60s
loop_control:
label: "{{ item.filename }}"
- name: Gather service facts
ansible.builtin.service_facts:
# Module requires no attributes
- name: Install K3s
ansible.builtin.command:
cmd: ./install.sh
chdir: /opt/metacluster/k3s
environment:
INSTALL_K3S_SKIP_DOWNLOAD: 'true'
INSTALL_K3S_EXEC: "server --cluster-init --token {{ vapp['metacluster.token'] | trim }} --tls-san {{ vapp['metacluster.vip'] }} --disable local-storage --config /etc/rancher/k3s/config.yaml"
when: ansible_facts.services['k3s.service'] is undefined
- name: Debug possible taints on k3s node
ansible.builtin.shell:
cmd: >-
while true;
do
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints --no-headers | awk '{print strftime("%H:%M:%S"),$0;fflush();}' >> /var/log/taintlog;
sleep 1;
done
async: 1800
poll: 0
- name: Ensure API availability
ansible.builtin.uri:
url: https://{{ vapp['guestinfo.ipaddress'] }}:6443/livez?verbose
method: GET
validate_certs: no
status_code: [200, 401]
register: api_readycheck
until: api_readycheck.json.apiVersion is defined
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.medium }}"
- name: Install kubectl tab-completion
ansible.builtin.shell:
cmd: kubectl completion bash | tee /etc/bash_completion.d/kubectl
- name: Initialize tempfile
ansible.builtin.tempfile:
state: file
register: kubeconfig
- name: Retrieve kubeconfig
ansible.builtin.command:
cmd: kubectl config view --raw
register: kubectl_config
- name: Store kubeconfig in tempfile
ansible.builtin.copy:
dest: "{{ kubeconfig.path }}"
content: "{{ kubectl_config.stdout }}"
mode: 0600
no_log: true


@@ -0,0 +1,27 @@
- name: Generate kube-vip manifest
ansible.builtin.shell:
cmd: >-
ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:{{ components.kubevip.version }} vip \
/kube-vip manifest daemonset \
--interface eth0 \
--address {{ vapp['metacluster.vip'] }} \
--inCluster \
--taint \
--controlplane \
--services \
--arp \
--leaderElection
register: kubevip_manifest
- name: Inject manifests
ansible.builtin.copy:
dest: /var/lib/rancher/k3s/server/manifests/kubevip-manifest.yaml
content: |
{{ lookup('ansible.builtin.file', '/opt/metacluster/kube-vip/rbac.yaml') }}
---
{{ kubevip_manifest.stdout | replace('imagePullPolicy: Always', 'imagePullPolicy: IfNotPresent') }}
notify:
- Apply manifests
- name: Trigger handlers
ansible.builtin.meta: flush_handlers


@@ -0,0 +1,10 @@
- import_tasks: init.yml
- import_tasks: k3s.yml
- import_tasks: assets.yml
- import_tasks: kube-vip.yml
- import_tasks: storage.yml
- import_tasks: ingress.yml
- import_tasks: certauthority.yml
- import_tasks: registry.yml
- import_tasks: git.yml
- import_tasks: gitops.yml


@@ -0,0 +1,71 @@
- block:
- name: Install harbor chart
kubernetes.core.helm:
name: harbor
chart_ref: /opt/metacluster/helm-charts/harbor
release_namespace: harbor
create_namespace: yes
wait: no
kubeconfig: "{{ kubeconfig.path }}"
values: "{{ components.harbor.chart_values }}"
- name: Ensure harbor API availability
ansible.builtin.uri:
url: https://registry.{{ vapp['metacluster.fqdn'] }}/api/v2.0/health
method: GET
register: api_readycheck
until:
- api_readycheck.json.status is defined
- api_readycheck.json.status == 'healthy'
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.long }}"
- name: Push images to registry
ansible.builtin.shell:
cmd: >-
skopeo copy \
--insecure-policy \
--dest-tls-verify=false \
--dest-creds admin:{{ vapp['metacluster.password'] }} \
docker-archive:./{{ item | basename }} \
docker://registry.{{ vapp['metacluster.fqdn'] }}/library/$( \
skopeo list-tags \
--insecure-policy \
docker-archive:./{{ item | basename }} | \
jq -r '.Tags[0]')
chdir: /opt/metacluster/container-images/
register: push_result
loop: "{{ query('ansible.builtin.fileglob', '/opt/metacluster/container-images/*.tar') | sort }}"
loop_control:
label: "{{ item | basename }}"
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.short }}"
until: push_result is not failed
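The destination image reference is derived by piping `skopeo list-tags` output through `jq -r '.Tags[0]'`, i.e. taking the first tag recorded in the archive. The same extraction in Python (the sample output below is hypothetical):

```python
import json

def first_tag(list_tags_output: str) -> str:
    # Equivalent of `jq -r '.Tags[0]'` applied to
    # `skopeo list-tags` JSON output.
    return json.loads(list_tags_output)['Tags'][0]

# Hypothetical skopeo output for illustration
sample = '{"Tags": ["docker.io/library/traefik:v2.9.6"]}'
tag = first_tag(sample)
```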
- name: Get all stored container images (=artifacts)
ansible.builtin.uri:
url: https://registry.{{ vapp['metacluster.fqdn'] }}/api/v2.0/search?q=library
method: GET
register: registry_artifacts
- name: Get source registries of all artifacts
ansible.builtin.set_fact:
source_registries: "{{ (source_registries | default([]) + [(item | split('/'))[1]]) | unique | sort }}"
loop: "{{ registry_artifacts.json.repository | json_query('[*].repository_name') }}"
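This task extracts element `[1]` of each slash-separated repository name, deduplicates, and sorts. Assuming `repository_name` entries look like `library/<source-registry>/<image>` (the sample names below are illustrative), the same logic in Python:

```python
def source_registries(repository_names: list[str]) -> list[str]:
    # Element [1] after splitting on '/' is the original source
    # registry; collect the unique set and sort it, mirroring
    # the `unique | sort` filter chain above.
    registries = {name.split('/')[1] for name in repository_names}
    return sorted(registries)

# Hypothetical artifact names for illustration
names = [
    'library/docker.io/rancher/mirrored-pause',
    'library/ghcr.io/kube-vip/kube-vip',
    'library/docker.io/library/traefik',
]
regs = source_registries(names)
```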
- name: Configure K3s node for private registry
ansible.builtin.template:
dest: /etc/rancher/k3s/registries.yaml
src: registries.j2
vars:
_template:
data: "{{ source_registries }}"
hv:
fqdn: "{{ vapp['metacluster.fqdn'] }}"
module_defaults:
ansible.builtin.uri:
validate_certs: no
status_code: [200, 201, 401]
body_format: json


@@ -0,0 +1,26 @@
- block:
- name: Install longhorn chart
kubernetes.core.helm:
name: longhorn
chart_ref: /opt/metacluster/helm-charts/longhorn
release_namespace: longhorn-system
create_namespace: yes
wait: no
kubeconfig: "{{ kubeconfig.path }}"
values: "{{ components.longhorn.chart_values }}"
- name: Ensure longhorn API availability
ansible.builtin.uri:
url: https://storage.{{ vapp['metacluster.fqdn'] }}/v1
method: GET
register: api_readycheck
until:
- api_readycheck is not failed
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.long }}"
module_defaults:
ansible.builtin.uri:
validate_certs: no
status_code: [200, 201]
body_format: json


@@ -0,0 +1,185 @@
- block:
- name: Generate vCenter API token
ansible.builtin.uri:
url: https://{{ vapp['hv.fqdn'] }}/api/session
method: POST
headers:
Authorization: Basic {{ ( vapp['hv.username'] ~ ':' ~ vapp['hv.password'] ) | b64encode }}
register: vcenterapi_token
- name: Retrieve vCenter API session details
ansible.builtin.uri:
url: https://{{ vapp['hv.fqdn'] }}/api/session
method: GET
headers:
vmware-api-session-id: "{{ vcenterapi_token.json }}"
register: vcenter_session
module_defaults:
ansible.builtin.uri:
validate_certs: no
status_code: [200, 201]
body_format: json
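The session request authenticates with a Basic header built from `(username ~ ':' ~ password) | b64encode`. A sketch of the same construction (the credentials are made-up examples):

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    # Same construction as the Jinja2 expression above:
    # base64-encode 'username:password' and prefix with 'Basic '.
    token = base64.b64encode(f'{username}:{password}'.encode()).decode()
    return f'Basic {token}'

header = basic_auth_header('administrator@vsphere.local', 'secret')
```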
- name: Configure clusterctl
ansible.builtin.template:
src: clusterctl.j2
dest: /opt/metacluster/cluster-api/clusterctl.yaml
vars:
_template:
version:
base: "{{ components.clusterapi.management.version.base }}"
cert_manager: "{{ components.clusterapi.management.version.cert_manager }}"
infrastructure_vsphere: "{{ components.clusterapi.management.version.infrastructure_vsphere }}"
ipam_incluster: "{{ components.clusterapi.management.version.ipam_incluster }}"
hv:
fqdn: "{{ vapp['hv.fqdn'] }}"
tlsthumbprint: "{{ tls_thumbprint.stdout }}"
username: "{{ vcenter_session.json.user }}"
password: "{{ vapp['hv.password'] }}"
datacenter: "{{ vcenter_info.datacenter }}"
datastore: "{{ vcenter_info.datastore }}"
network: "{{ vcenter_info.network }}"
resourcepool: "{{ vcenter_info.resourcepool }}"
folder: "{{ vcenter_info.folder }}"
cluster:
nodetemplate: "{{ (components.clusterapi.workload.node_template.url | basename | split('.'))[:-1] | join('.') }}"
publickey: "{{ vapp['guestinfo.rootsshkey'] }}"
version: "{{ components.clusterapi.workload.version.k8s }}"
vip: "{{ vapp['workloadcluster.vip'] }}"
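The `nodetemplate` value is derived from the node-template URL: take the filename and drop its final extension, via `(url | basename | split('.'))[:-1] | join('.')`. The same derivation in Python (the URL below is hypothetical):

```python
def template_name(url: str) -> str:
    # Mirror of (url | basename | split('.'))[:-1] | join('.'):
    # take the filename portion and strip the last extension,
    # keeping any earlier dots intact.
    basename = url.rsplit('/', 1)[-1]
    return '.'.join(basename.split('.')[:-1])

# Hypothetical OVA URL for illustration
name = template_name('https://example.org/images/ubuntu-2204-kube-v1.25.5.ova')
```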
- name: Update image references to use local registry
ansible.builtin.replace:
dest: "{{ item.root + '/' + item.path }}"
regexp: '([ ]+image:[ "]+)(?!({{ _template.pattern }}|"{{ _template.pattern }}))'
replace: '\1{{ _template.pattern }}'
vars:
_template:
pattern: registry.{{ vapp['metacluster.fqdn'] }}/library/
loop: "{{ lookup('community.general.filetree', '/opt/metacluster/cluster-api') }}"
loop_control:
label: "{{ item.path }}"
when:
- item.path is search('.yaml')
- item.path is not search("clusterctl.yaml|metadata.yaml")
- name: Generate kustomization template
ansible.builtin.template:
src: kustomization.cluster-template.j2
dest: /opt/metacluster/cluster-api/infrastructure-vsphere/{{ components.clusterapi.management.version.infrastructure_vsphere }}/kustomization.yaml
vars:
_template:
fqdn: "{{ vapp['metacluster.fqdn'] }}"
rootca: "{{ stepca_cm_certs.resources[0].data['root_ca.crt'] }}"
script:
# Base64 encoded; to avoid variable substitution when clusterctl parses the cluster-template.yml
encoded: IyEvYmluL2Jhc2gKdm10b29sc2QgLS1jbWQgJ2luZm8tZ2V0IGd1ZXN0aW5mby5vdmZFbnYnID4gL3RtcC9vdmZlbnYKCklQQWRkcmVzcz0kKHNlZCAtbiAncy8uKlByb3BlcnR5IG9lOmtleT0iZ3Vlc3RpbmZvLmludGVyZmFjZS4wLmlwLjAuYWRkcmVzcyIgb2U6dmFsdWU9IlwoW14iXSpcKS4qL1wxL3AnIC90bXAvb3ZmZW52KQpTdWJuZXRNYXNrPSQoc2VkIC1uICdzLy4qUHJvcGVydHkgb2U6a2V5PSJndWVzdGluZm8uaW50ZXJmYWNlLjAuaXAuMC5uZXRtYXNrIiBvZTp2YWx1ZT0iXChbXiJdKlwpLiovXDEvcCcgL3RtcC9vdmZlbnYpCkdhdGV3YXk9JChzZWQgLW4gJ3MvLipQcm9wZXJ0eSBvZTprZXk9Imd1ZXN0aW5mby5pbnRlcmZhY2UuMC5yb3V0ZS4wLmdhdGV3YXkiIG9lOnZhbHVlPSJcKFteIl0qXCkuKi9cMS9wJyAvdG1wL292ZmVudikKRE5TPSQoc2VkIC1uICdzLy4qUHJvcGVydHkgb2U6a2V5PSJndWVzdGluZm8uZG5zLnNlcnZlcnMiIG9lOnZhbHVlPSJcKFteIl0qXCkuKi9cMS9wJyAvdG1wL292ZmVudikKTUFDQWRkcmVzcz0kKHNlZCAtbiAncy8uKnZlOkFkYXB0ZXIgdmU6bWFjPSJcKFteIl0qXCkuKi9cMS9wJyAvdG1wL292ZmVudikKCm1hc2syY2lkcigpIHsKICBjPTAKICB4PTAkKCBwcmludGYgJyVvJyAkezEvLy4vIH0gKQoKICB3aGlsZSBbICR4IC1ndCAwIF07IGRvCiAgICBsZXQgYys9JCgoeCUyKSkgJ3g+Pj0xJwogIGRvbmUKCiAgZWNobyAkYwp9CgpQcmVmaXg9JChtYXNrMmNpZHIgJFN1Ym5ldE1hc2spCgpjYXQgPiAvZXRjL25ldHBsYW4vMDEtbmV0Y2ZnLnlhbWwgPDxFT0YKbmV0d29yazoKICB2ZXJzaW9uOiAyCiAgcmVuZGVyZXI6IG5ldHdvcmtkCiAgZXRoZXJuZXRzOgogICAgaWQwOgogICAgICBzZXQtbmFtZTogZXRoMAogICAgICBtYXRjaDoKICAgICAgICBtYWNhZGRyZXNzOiAkTUFDQWRkcmVzcwogICAgICBhZGRyZXNzZXM6CiAgICAgICAgLSAkSVBBZGRyZXNzLyRQcmVmaXgKICAgICAgZ2F0ZXdheTQ6ICRHYXRld2F5CiAgICAgIG5hbWVzZXJ2ZXJzOgogICAgICAgIGFkZHJlc3NlcyA6IFskRE5TXQpFT0YKcm0gL2V0Yy9uZXRwbGFuLzUwKi55YW1sIC1mCgpzdWRvIG5ldHBsYW4gYXBwbHk=
runcmds:
- update-ca-certificates
- bash /root/network.sh
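The base64 blob decodes to a netplan bootstrap script; one of its helpers, `mask2cidr`, converts the dotted subnet mask from the OVF environment into a prefix length via a bit-counting trick. The same conversion sketched in Python with the stdlib:

```python
import ipaddress

def mask2cidr(netmask: str) -> int:
    # Same result as the shell mask2cidr() helper in the encoded
    # script: the number of set bits in the dotted-quad mask.
    return ipaddress.IPv4Network(f'0.0.0.0/{netmask}').prefixlen

prefix = mask2cidr('255.255.255.0')
```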
- name: Store custom cluster-template
ansible.builtin.copy:
dest: /opt/metacluster/cluster-api/custom-cluster-template.yaml
content: "{{ lookup('kubernetes.core.kustomize', dir='/opt/metacluster/cluster-api/infrastructure-vsphere/' + components.clusterapi.management.version.infrastructure_vsphere ) }}"
- name: Initialize Cluster API management cluster
ansible.builtin.shell:
cmd: >-
clusterctl init \
-v5 \
--infrastructure vsphere:{{ components.clusterapi.management.version.infrastructure_vsphere }} \
--ipam in-cluster:{{ components.clusterapi.management.version.ipam_incluster }} \
--config ./clusterctl.yaml \
--kubeconfig {{ kubeconfig.path }}
chdir: /opt/metacluster/cluster-api
- name: Ensure CAPI/CAPV controller availability
kubernetes.core.k8s_info:
kind: Deployment
name: "{{ item.name }}"
namespace: "{{ item.namespace }}"
wait: true
kubeconfig: "{{ kubeconfig.path }}"
loop:
- name: capi-controller-manager
namespace: capi-system
- name: capv-controller-manager
namespace: capv-system
loop_control:
label: "{{ item.name }}"
- name: Parse vApp for workload cluster sizing
ansible.builtin.set_fact:
clustersize: >-
{{ {
'controlplane': vapp['deployment.type'] | regex_findall('^cp(\d+)') | first,
'workers': vapp['deployment.type'] | regex_findall('w(\d+)$') | first
} }}
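The `deployment.type` vApp property encodes the sizing as `cp<N>w<M>` (for example `cp1w2`). A Python sketch of equivalent parsing, assuming that format:

```python
import re

def parse_sizing(deployment_type: str) -> dict:
    # Parse strings like 'cp1w2' into control-plane / worker
    # counts, mirroring the regex_findall expressions above.
    match = re.fullmatch(r'cp(\d+)w(\d+)', deployment_type)
    if match is None:
        raise ValueError(f'unexpected deployment.type: {deployment_type!r}')
    return {'controlplane': int(match.group(1)), 'workers': int(match.group(2))}

sizing = parse_sizing('cp1w2')
```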
- name: Generate workload cluster manifest
ansible.builtin.shell:
cmd: >-
clusterctl generate cluster \
{{ vapp['workloadcluster.name'] | lower }} \
--control-plane-machine-count {{ clustersize.controlplane }} \
--worker-machine-count {{ clustersize.workers }} \
--from ./custom-cluster-template.yaml \
--config ./clusterctl.yaml \
--kubeconfig {{ kubeconfig.path }}
chdir: /opt/metacluster/cluster-api
register: clusterctl_newcluster
# TODO: move to git repo
- name: Save workload cluster manifest
ansible.builtin.copy:
dest: /opt/metacluster/cluster-api/new-cluster.yaml
content: "{{ clusterctl_newcluster.stdout }}"
- name: Apply workload cluster manifest
kubernetes.core.k8s:
definition: >-
{{ clusterctl_newcluster.stdout }}
wait: yes
kubeconfig: "{{ kubeconfig.path }}"
# TODO: move to git repo
- name: Wait for cluster to be available
ansible.builtin.shell:
cmd: >-
kubectl wait clusters.cluster.x-k8s.io/{{ vapp['workloadcluster.name'] | lower }} \
--for=condition=Ready \
--timeout 0s
register: cluster_readycheck
until: cluster_readycheck is succeeded
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.long }}"
- name: Initialize tempfile
ansible.builtin.tempfile:
state: file
register: capi_kubeconfig
- name: Retrieve kubeconfig
ansible.builtin.shell:
cmd: >-
clusterctl get kubeconfig \
{{ vapp['workloadcluster.name'] | lower }} \
--kubeconfig {{ kubeconfig.path }}
register: capi_kubectl_config
- name: Store kubeconfig in tempfile
ansible.builtin.copy:
dest: "{{ capi_kubeconfig.path }}"
content: "{{ capi_kubectl_config.stdout }}"
mode: 0600
no_log: true
# TODO: move to git repo
- name: Apply cni plugin manifest
kubernetes.core.k8s:
src: /opt/metacluster/cluster-api/cni-calico/{{ components.clusterapi.workload.version.calico }}/calico.yaml
state: present
wait: yes
kubeconfig: "{{ capi_kubeconfig.path }}"
# TODO: move to git repo


@@ -0,0 +1,44 @@
- block:
- name: Generate service account in workload cluster
kubernetes.core.k8s:
template: serviceaccount.j2
state: present
- name: Retrieve service account details
kubernetes.core.k8s_info:
kind: ServiceAccount
name: "{{ _template.account.name }}"
namespace: "{{ _template.account.namespace }}"
register: workloadcluster_serviceaccount
- name: Retrieve service account bearer token
kubernetes.core.k8s_info:
kind: Secret
name: "{{ workloadcluster_serviceaccount.resources | json_query('[].secrets[].name') | first }}"
namespace: "{{ _template.account.namespace }}"
register: workloadcluster_bearertoken
- name: Register workload cluster in argo-cd
kubernetes.core.k8s:
template: cluster.j2
state: present
kubeconfig: "{{ kubeconfig.path }}"
vars:
_template:
cluster:
name: "{{ vapp['workloadcluster.name'] | lower }}"
secret: argocd-cluster-{{ vapp['workloadcluster.name'] | lower }}
url: https://{{ vapp['workloadcluster.vip'] }}:6443
token: "{{ workloadcluster_bearertoken.resources | json_query('[].data.token') | first | b64decode }}"
vars:
_template:
account:
name: argocd-sa
namespace: default
clusterrolebinding:
name: argocd-crb
module_defaults:
group/k8s:
kubeconfig: "{{ capi_kubeconfig.path }}"


@@ -0,0 +1,75 @@
- name: Gather hypervisor details
ansible.builtin.shell:
cmd: govc ls -L {{ item.moref }} | awk -F/ '{print ${{ item.part }}}'
environment:
GOVC_INSECURE: '1'
GOVC_URL: "{{ vapp['hv.fqdn'] }}"
GOVC_USERNAME: "{{ vapp['hv.username'] }}"
GOVC_PASSWORD: "{{ vapp['hv.password'] }}"
register: govc_inventory
loop:
- attribute: cluster
moref: >-
$(govc object.collect -json VirtualMachine:{{ moref_id }} | \
jq -r '.[] | select(.Name == "runtime").Val.Host | .Type + ":" + .Value')
part: (NF-1)
- attribute: datacenter
moref: VirtualMachine:{{ moref_id }}
part: 2
- attribute: datastore
moref: >-
$(govc object.collect -json VirtualMachine:{{ moref_id }} | \
jq -r '.[] | select(.Name == "datastore").Val.ManagedObjectReference | .[].Type + ":" + .[].Value')
part: NF
- attribute: folder
moref: >-
$(govc object.collect -json VirtualMachine:{{ moref_id }} | \
jq -r '.[] | select(.Name == "parent").Val | .Type + ":" + .Value')
part: 0
# - attribute: host
# moref: >-
# $(govc object.collect -json VirtualMachine:{{ moref_id }} | \
# jq -r '.[] | select(.Name == "runtime").Val.Host | .Type + ":" + .Value')
# part: NF
- attribute: network
moref: >-
$(govc object.collect -json VirtualMachine:{{ moref_id }} | \
jq -r '.[] | select(.Name == "network").Val.ManagedObjectReference | .[].Type + ":" + .[].Value')
part: NF
- attribute: resourcepool
moref: >-
$(govc object.collect -json VirtualMachine:{{ moref_id }} | \
jq -r '.[] | select(.Name == "resourcePool").Val | .Type + ":" + .Value')
part: 0
loop_control:
label: "{{ item.attribute }}"
- name: Retrieve hypervisor TLS thumbprint
ansible.builtin.shell:
cmd: openssl s_client -connect {{ vapp['hv.fqdn'] }}:443 < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout -in /dev/stdin | awk -F'=' '{print $2}'
register: tls_thumbprint
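`openssl x509 -fingerprint` prints the certificate's SHA-1 digest as colon-separated uppercase hex pairs, which is the thumbprint format Cluster API expects. A sketch that reproduces that formatting from raw DER bytes (the empty input is for illustration only):

```python
import hashlib

def sha1_fingerprint(der_cert: bytes) -> str:
    # Reproduce the `openssl x509 -fingerprint` output format:
    # uppercase SHA-1 hex pairs joined by colons.
    digest = hashlib.sha1(der_cert).hexdigest().upper()
    return ':'.join(digest[i:i + 2] for i in range(0, len(digest), 2))

fp = sha1_fingerprint(b'')  # empty input, for illustration only
```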
- name: Store hypervisor details in dictionary
ansible.builtin.set_fact:
vcenter_info: "{{ vcenter_info | default({}) | combine({ item.item.attribute : item.stdout }) }}"
loop: "{{ govc_inventory.results }}"
loop_control:
label: "{{ item.item.attribute }}"
- name: Configure network protocol profile on hypervisor
ansible.builtin.shell:
cmd: >-
npp-prepper \
--server "{{ vapp['hv.fqdn'] }}" \
--username "{{ vapp['hv.username'] }}" \
--password "{{ vapp['hv.password'] }}" \
dc \
--name "{{ vcenter_info.datacenter }}" \
--portgroup "{{ vcenter_info.network }}" \
--startaddress {{ vapp['ippool.startip'] }} \
--endaddress {{ vapp['ippool.endip'] }} \
--netmask {{ (vapp['guestinfo.ipaddress'] + '/' + vapp['guestinfo.prefixlength']) | ansible.utils.ipaddr('netmask') }} \
{{ vapp['guestinfo.dnsserver'] | split(',') | map('trim') | map('regex_replace', '^', '--dnsserver ') | join(' ') }} \
--dnsdomain {{ vapp['metacluster.fqdn'] }} \
--gateway {{ vapp['guestinfo.gateway'] }} \
--force
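The `--netmask` argument is computed by joining the appliance's address and prefix length and applying `ansible.utils.ipaddr('netmask')`. The equivalent with the Python stdlib, as a sketch:

```python
import ipaddress

def to_netmask(address: str, prefixlength: str) -> str:
    # Equivalent of the ansible.utils.ipaddr('netmask') filter
    # applied to '<address>/<prefixlength>'.
    iface = ipaddress.IPv4Interface(f'{address}/{prefixlength}')
    return str(iface.network.netmask)

netmask = to_netmask('192.168.0.5', '24')
```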


@@ -0,0 +1,5 @@
- import_tasks: hypervisor.yml
- import_tasks: registry.yml
- import_tasks: nodetemplates.yml
- import_tasks: clusterapi.yml
- import_tasks: gitops.yml


@@ -0,0 +1,85 @@
- block:
- name: Check for existing templates on hypervisor
community.vmware.vmware_guest_info:
name: "{{ (item | basename | split('.'))[:-1] | join('.') }}"
register: existing_ova
loop: "{{ query('ansible.builtin.fileglob', '/opt/workloadcluster/node-templates/*.ova') | sort }}"
ignore_errors: yes
- name: Parse OVA files for network mappings
ansible.builtin.shell:
cmd: govc import.spec -json {{ item }}
environment:
GOVC_INSECURE: '1'
GOVC_URL: "{{ vapp['hv.fqdn'] }}"
GOVC_USERNAME: "{{ vapp['hv.username'] }}"
GOVC_PASSWORD: "{{ vapp['hv.password'] }}"
register: ova_spec
when: existing_ova.results[index] is failed
loop: "{{ query('ansible.builtin.fileglob', '/opt/workloadcluster/node-templates/*.ova') | sort }}"
loop_control:
index_var: index
- name: Deploy OVA templates on hypervisor
community.vmware.vmware_deploy_ovf:
cluster: "{{ vcenter_info.cluster }}"
datastore: "{{ vcenter_info.datastore }}"
folder: "{{ vcenter_info.folder }}"
name: "{{ (item | basename | split('.'))[:-1] | join('.') }}"
networks: "{u'{{ ova_spec.results[index].stdout | from_json | json_query('NetworkMapping[0].Name') }}':u'{{ vcenter_info.network }}'}"
allow_duplicates: no
power_on: false
ovf: "{{ item }}"
register: ova_deploy
when: existing_ova.results[index] is failed
loop: "{{ query('ansible.builtin.fileglob', '/opt/workloadcluster/node-templates/*.ova') | sort }}"
loop_control:
index_var: index
- name: Add vApp properties on deployed VMs
ansible.builtin.shell:
cmd: >-
npp-prepper \
--server "{{ vapp['hv.fqdn'] }}" \
--username "{{ vapp['hv.username'] }}" \
--password "{{ vapp['hv.password'] }}" \
vm \
--datacenter "{{ vcenter_info.datacenter }}" \
--portgroup "{{ vcenter_info.network }}" \
--name "{{ item.instance.hw_name }}"
when: existing_ova.results[index] is failed
loop: "{{ ova_deploy.results }}"
loop_control:
index_var: index
label: "{{ item.item }}"
- name: Create snapshot on deployed VMs
community.vmware.vmware_guest_snapshot:
folder: "{{ vcenter_info.folder }}"
name: "{{ item.instance.hw_name }}"
state: present
snapshot_name: "{{ ansible_date_time.iso8601_basic_short }}-base"
when: ova_deploy.results[index] is not skipped
loop: "{{ ova_deploy.results }}"
loop_control:
index_var: index
label: "{{ item.item }}"
- name: Mark deployed VMs as templates
community.vmware.vmware_guest:
name: "{{ item.instance.hw_name }}"
is_template: yes
when: ova_deploy.results[index] is not skipped
loop: "{{ ova_deploy.results }}"
loop_control:
index_var: index
label: "{{ item.item }}"
module_defaults:
group/vmware:
hostname: "{{ vapp['hv.fqdn'] }}"
validate_certs: no
username: "{{ vapp['hv.username'] }}"
password: "{{ vapp['hv.password'] }}"
datacenter: "{{ vcenter_info.datacenter }}"

View File

@@ -0,0 +1,40 @@
- block:
- name: Create dedicated kubeadm project within container registry
ansible.builtin.uri:
url: https://registry.{{ vapp['metacluster.fqdn'] }}/api/v2.0/projects
method: POST
headers:
Authorization: "Basic {{ ('admin:' + vapp['metacluster.password']) | b64encode }}"
body:
project_name: kubeadm
public: true
storage_limit: 0
metadata:
enable_content_trust: 'false'
enable_content_trust_cosign: 'false'
auto_scan: 'true'
severity: none
prevent_vul: 'false'
public: 'true'
reuse_sys_cve_allowlist: 'true'
- name: Lookup kubeadm container images
ansible.builtin.set_fact:
kubeadm_images: "{{ lookup('ansible.builtin.file', '/opt/metacluster/cluster-api/imagelist').splitlines() }}"
- name: Copy kubeadm container images to dedicated project
ansible.builtin.uri:
url: https://registry.{{ vapp['metacluster.fqdn'] }}/api/v2.0/projects/kubeadm/repositories/{{ ( item | regex_findall('([^:/]+)') )[-2] }}/artifacts?from=library/{{ item | replace('/', '%2F') | replace(':', '%3A') }}
method: POST
headers:
Authorization: "Basic {{ ('admin:' + vapp['metacluster.password']) | b64encode }}"
body:
from: "{{ item }}"
loop: "{{ kubeadm_images }}"
module_defaults:
ansible.builtin.uri:
validate_certs: no
status_code: [200, 201, 409]
body_format: json
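The artifact-copy URL above derives the destination repository name from each image reference with `regex_findall` and URL-encodes the `from=` value with two `replace` filters; a small Python sketch of both expressions (the image reference is an arbitrary example, not taken from the image list):

```python
import re

def harbor_copy_params(image_ref: str):
    # (item | regex_findall('([^:/]+)'))[-2]: the second-to-last
    # colon/slash-separated token is the bare repository name
    repository = re.findall(r'([^:/]+)', image_ref)[-2]
    # item | replace('/', '%2F') | replace(':', '%3A'): encode the
    # source reference for use as a query-string value
    encoded_from = image_ref.replace('/', '%2F').replace(':', '%3A')
    return repository, encoded_from

repo, src = harbor_copy_params('registry.k8s.io/kube-apiserver:v1.26.0')
print(repo, src)
```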

View File

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: {{ _template.name }}
namespace: {{ _template.namespace }}
spec:
generators:
- git:
repoURL: ssh://git@gitea-ssh.gitea.svc.cluster.local/mc/GitOps.Config.git
revision: HEAD
directories:
- path: metacluster-applicationset/*
template:
metadata:
name: {% raw %}'{{ path.basename }}'{% endraw +%}
spec:
project: default
syncPolicy:
automated:
prune: true
selfHeal: true
source:
repoURL: ssh://git@gitea-ssh.gitea.svc.cluster.local/mc/GitOps.Config.git
targetRevision: HEAD
path: {% raw %}'{{ path }}'{% endraw +%}
destination:
server: https://kubernetes.default.svc
namespace: default

View File

@@ -0,0 +1,18 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ _template.cluster.secret }}
namespace: argo-cd
labels:
argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
name: {{ _template.cluster.name }}
server: {{ _template.cluster.url }}
config: |
{
"bearerToken": "{{ _template.cluster.token }}",
"tlsClientConfig": {
"insecure": true
}
}

View File

@@ -0,0 +1,42 @@
providers:
- name: "kubeadm"
url: "/opt/metacluster/cluster-api/bootstrap-kubeadm/{{ _template.version.base }}/bootstrap-components.yaml"
type: "BootstrapProvider"
- name: "cluster-api"
url: "/opt/metacluster/cluster-api/cluster-api/{{ _template.version.base }}/core-components.yaml"
type: "CoreProvider"
- name: "kubeadm"
url: "/opt/metacluster/cluster-api/control-plane-kubeadm/{{ _template.version.base }}/control-plane-components.yaml"
type: "ControlPlaneProvider"
- name: "vsphere"
url: "/opt/metacluster/cluster-api/infrastructure-vsphere/{{ _template.version.infrastructure_vsphere }}/infrastructure-components.yaml"
type: "InfrastructureProvider"
- name: "in-cluster"
url: "/opt/metacluster/cluster-api/ipam-in-cluster/{{ _template.version.ipam_incluster }}/ipam-components.yaml"
type: "IPAMProvider"
cert-manager:
url: "/opt/metacluster/cluster-api/cert-manager/{{ _template.version.cert_manager }}/cert-manager.yaml"
version: "{{ _template.version.cert_manager }}"
## -- Controller settings -- ##
VSPHERE_SERVER: "{{ _template.hv.fqdn }}"
VSPHERE_TLS_THUMBPRINT: "{{ _template.hv.tlsthumbprint }}"
VSPHERE_USERNAME: "{{ _template.hv.username }}"
VSPHERE_PASSWORD: "{{ _template.hv.password }}"
## -- Required workload cluster default settings -- ##
VSPHERE_DATACENTER: "{{ _template.hv.datacenter }}"
VSPHERE_DATASTORE: "{{ _template.hv.datastore }}"
VSPHERE_STORAGE_POLICY: ""
VSPHERE_NETWORK: "{{ _template.hv.network }}"
VSPHERE_RESOURCE_POOL: "{{ _template.hv.resourcepool }}"
VSPHERE_FOLDER: "{{ _template.hv.folder }}"
VSPHERE_TEMPLATE: "{{ _template.cluster.nodetemplate }}"
VSPHERE_SSH_AUTHORIZED_KEY: "{{ _template.cluster.publickey }}"
KUBERNETES_VERSION: "{{ _template.cluster.version }}"
CONTROL_PLANE_ENDPOINT_IP: "{{ _template.cluster.vip }}"
VIP_NETWORK_INTERFACE: ""
EXP_CLUSTER_RESOURCE_SET: "true"

View File

@@ -0,0 +1,14 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ _template.name }}
namespace: {{ _template.namespace }}
annotations:
{{ _template.annotations }}
labels:
{{ _template.labels }}
data:
{% for kv_pair in _template.data %}
"{{ kv_pair.key }}": |
{{ kv_pair.value | indent(width=4, first=True) }}
{% endfor %}

View File

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ _template.name }}-{{ _template.uid }}
namespace: {{ _template.namespace }}
labels:
argocd.argoproj.io/secret-type: repository
stringData:
url: ssh://git@gitea-ssh.gitea.svc.cluster.local/mc/GitOps.Config.git
name: {{ _template.name }}
insecure: 'true'
sshPrivateKey: |
{{ _template.privatekey }}

View File

@@ -0,0 +1,7 @@
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: {{ _template.name }}
namespace: {{ _template.namespace }}
spec:
{{ _template.config }}

View File

@@ -0,0 +1,7 @@
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
name: {{ _template.name }}
namespace: {{ _template.namespace }}
spec:
{{ _template.config }}

View File

@@ -0,0 +1,104 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- cluster-template.yaml
patchesStrategicMerge:
- |-
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
name: '${CLUSTER_NAME}'
namespace: '${NAMESPACE}'
spec:
kubeadmConfigSpec:
clusterConfiguration:
imageRepository: registry.{{ _template.fqdn }}/kubeadm
- |-
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
name: '${CLUSTER_NAME}-md-0'
namespace: '${NAMESPACE}'
spec:
template:
spec:
clusterConfiguration:
imageRepository: registry.{{ _template.fqdn }}/kubeadm
- |-
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
name: '${CLUSTER_NAME}-md-0'
namespace: '${NAMESPACE}'
spec:
template:
spec:
files:
- encoding: base64
content: |
{{ _template.script.encoded }}
permissions: '0744'
owner: root:root
path: /root/network.sh
- content: |
network: {config: disabled}
owner: root:root
path: /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
- content: |
{{ _template.rootca | indent(width=14, first=False) | trim }}
owner: root:root
path: /usr/local/share/ca-certificates/root_ca.crt
patchesJson6902:
- target:
group: controlplane.cluster.x-k8s.io
version: v1beta1
kind: KubeadmControlPlane
name: .*
patch: |-
- op: add
path: /spec/kubeadmConfigSpec/files/-
value:
encoding: base64
content: |
{{ _template.script.encoded }}
owner: root:root
path: /root/network.sh
permissions: '0744'
- op: add
path: /spec/kubeadmConfigSpec/files/-
value:
content: |
network: {config: disabled}
owner: root:root
path: /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
- op: add
path: /spec/kubeadmConfigSpec/files/-
value:
content: |
{{ _template.rootca | indent(width=12, first=False) | trim }}
owner: root:root
path: /usr/local/share/ca-certificates/root_ca.crt
- target:
group: bootstrap.cluster.x-k8s.io
version: v1beta1
kind: KubeadmConfigTemplate
name: .*
patch: |-
{% for cmd in _template.runcmds %}
- op: add
path: /spec/template/spec/preKubeadmCommands/-
value: {{ cmd }}
{% endfor %}
- target:
group: controlplane.cluster.x-k8s.io
version: v1beta1
kind: KubeadmControlPlane
name: .*
patch: |-
{% for cmd in _template.runcmds %}
- op: add
path: /spec/kubeadmConfigSpec/preKubeadmCommands/-
value: {{ cmd }}
{% endfor %}

View File

@@ -0,0 +1,9 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ _template.name }}
namespace: {{ _template.namespace }}
data:
{% for kv_pair in _template.data %}
"{{ kv_pair.key }}": {{ kv_pair.value }}
{% endfor %}

View File

@@ -0,0 +1,18 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ _template.account.name }}
namespace: {{ _template.account.namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ _template.clusterrolebinding.name }}
subjects:
- kind: ServiceAccount
name: {{ _template.account.name }}
namespace: {{ _template.account.namespace }}
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io

View File

@@ -0,0 +1,4 @@
- name: Disable crontab job
ansible.builtin.cron:
name: firstboot
state: absent

View File

@@ -0,0 +1,12 @@
- import_tasks: service.yml
- import_tasks: cron.yml
- name: Cleanup tempfile
ansible.builtin.file:
path: "{{ kubeconfig.path }}"
state: absent
when: kubeconfig.path is defined
# - name: Reboot host
# ansible.builtin.shell:
# cmd: systemctl reboot

View File

@@ -0,0 +1,30 @@
- name: Create tarball compression service
ansible.builtin.template:
src: "{{ item.src }}"
dest: "{{ item.dest }}"
owner: root
group: root
mode: "{{ item.mode | default(omit) }}"
vars:
_template:
service:
name: compressTarballs
executable: /opt/firstboot/compresstarballs.sh
workingdir: /opt/metacluster/container-images/
loop:
- src: compresstarballs.j2
dest: "{{ _template.service.executable }}"
mode: o+x
- src: systemdunit.j2
dest: /etc/systemd/system/{{ _template.service.name }}.service
loop_control:
label: "{{ item.src }}"
- name: Enable/Start services
ansible.builtin.systemd:
name: "{{ item }}"
enabled: yes
state: started
loop:
- compressTarballs
- ttyConsoleMessage

View File

@@ -0,0 +1,24 @@
- name: Create volume group
community.general.lvg:
vg: longhorn_vg
pvs:
- /dev/sdb
pvresize: yes
- name: Create logical volume
community.general.lvol:
vg: longhorn_vg
lv: longhorn_lv
size: 100%VG
- name: Create filesystem
community.general.filesystem:
dev: /dev/mapper/longhorn_vg-longhorn_lv
fstype: ext4
- name: Mount dynamic disk
ansible.posix.mount:
path: /mnt/blockstorage
src: /dev/mapper/longhorn_vg-longhorn_lv
fstype: ext4
state: mounted

View File

@@ -0,0 +1,12 @@
- name: Import container images
ansible.builtin.command:
cmd: k3s ctr image import {{ item }} --digests
chdir: /opt/metacluster/container-images
register: import_result
loop: "{{ query('ansible.builtin.fileglob', '/opt/metacluster/container-images/*.tar') | sort }}"
loop_control:
label: "{{ item | basename }}"
# TODO: add a preceding task that waits until the K3s node is fully initialized; currently K3s briefly becomes unavailable during this loop
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.short }}"
until: import_result is not failed

View File

@@ -0,0 +1,19 @@
- name: Set hostname
ansible.builtin.hostname:
name: "{{ vapp['guestinfo.hostname'] }}"
- name: Create netplan configuration file
ansible.builtin.template:
src: netplan.j2
dest: /etc/netplan/00-installer-config.yaml
vars:
_template:
macaddress: "{{ ansible_facts.default_ipv4.macaddress }}"
ipaddress: "{{ vapp['guestinfo.ipaddress'] }}"
prefixlength: "{{ vapp['guestinfo.prefixlength'] }}"
gateway: "{{ vapp['guestinfo.gateway'] }}"
dnsserver: "{{ vapp['guestinfo.dnsserver'] }}"
- name: Apply netplan configuration
ansible.builtin.shell:
cmd: /usr/sbin/netplan apply

View File

@@ -0,0 +1,13 @@
network:
version: 2
ethernets:
id0:
set-name: eth0
match:
macaddress: {{ _template.macaddress }}
addresses:
- {{ _template.ipaddress }}/{{ _template.prefixlength }}
gateway4: {{ _template.gateway }}
nameservers:
addresses:
- {{ _template.dnsserver }}

View File

@@ -0,0 +1 @@
- import_tasks: vcenter.yml

View File

@@ -0,0 +1,16 @@
- block:
- name: Check for vCenter connectivity
community.vmware.vmware_vcenter_settings_info:
schema: vsphere
register: vcenter_info
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.short }}"
until: vcenter_info is not failed
module_defaults:
group/vmware:
hostname: "{{ vapp['hv.fqdn'] }}"
validate_certs: no
username: "{{ vapp['hv.username'] }}"
password: "{{ vapp['hv.password'] }}"

View File

@@ -0,0 +1,31 @@
- name: Create folder structure(s)
ansible.builtin.file:
path: "{{ item }}"
state: directory
loop:
- /opt/firstboot
- name: Create tty console message service
ansible.builtin.template:
src: "{{ item.src }}"
dest: "{{ item.dest }}"
owner: root
group: root
mode: "{{ item.mode | default(omit) }}"
vars:
_template:
service:
name: ttyConsoleMessage
executable: /opt/firstboot/tty.sh
workingdir: /tmp/
metacluster:
fqdn: "{{ vapp['metacluster.fqdn'] }}"
vip: "{{ vapp['metacluster.vip'] }}"
loop:
- src: tty.j2
dest: "{{ _template.service.executable }}"
mode: o+x
- src: systemdunit.j2
dest: /etc/systemd/system/{{ _template.service.name }}.service
loop_control:
label: "{{ item.src }}"

View File

@@ -0,0 +1,39 @@
- name: Set root password
ansible.builtin.user:
name: root
password: "{{ vapp['metacluster.password'] | password_hash('sha512', 65534 | random(seed=vapp['guestinfo.hostname']) | string) }}"
generate_ssh_key: yes
ssh_key_bits: 2048
ssh_key_file: .ssh/id_rsa
- name: Save root SSH publickey
ansible.builtin.lineinfile:
path: /root/.ssh/authorized_keys
line: "{{ vapp['guestinfo.rootsshkey'] }}"
- name: Disable SSH password authentication
ansible.builtin.lineinfile:
path: /etc/ssh/sshd_config
regex: "{{ item.regex }}"
line: "{{ item.line }}"
state: "{{ item.state }}"
loop:
- regex: '^#PasswordAuthentication'
line: 'PasswordAuthentication no'
state: present
- regex: '^PasswordAuthentication yes'
line: 'PasswordAuthentication yes'
state: absent
loop_control:
label: "{{ '[' + item.regex + '] ' + item.state }}"
- name: Create dedicated SSH keypair
community.crypto.openssh_keypair:
path: /root/.ssh/git_rsa_id
register: gitops_sshkey
- name: Delete 'ubuntu' user
ansible.builtin.user:
name: ubuntu
state: absent
remove: yes

View File

@@ -0,0 +1,38 @@
- name: Store current ovfEnvironment
ansible.builtin.shell:
cmd: /usr/bin/vmtoolsd --cmd "info-get guestinfo.ovfEnv"
register: ovfenv
- name: Parse XML for MoRef ID
community.general.xml:
xmlstring: "{{ ovfenv.stdout }}"
namespaces:
ns: http://schemas.dmtf.org/ovf/environment/1
ve: http://www.vmware.com/schema/ovfenv
xpath: /ns:Environment
content: attribute
register: environment_attribute
- name: Store MoRef ID
ansible.builtin.set_fact:
moref_id: "{{ ((environment_attribute.matches[0].values() | list)[0].values() | list)[1] }}"
- name: Parse XML for vApp properties
community.general.xml:
xmlstring: "{{ ovfenv.stdout }}"
namespaces:
ns: http://schemas.dmtf.org/ovf/environment/1
xpath: /ns:Environment/ns:PropertySection/ns:Property
content: attribute
register: property_section
- name: Assign vApp properties to dictionary
ansible.builtin.set_fact:
vapp: >-
{{ vapp | default({}) | combine({
((item.values() | list)[0].values() | list)[0]:
((item.values() | list)[0].values() | list)[1]})
}}
loop: "{{ property_section.matches }}"
loop_control:
label: "{{ ((item.values() | list)[0].values() | list)[0] }}"
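The XML parsing tasks above reduce the ovfEnv document to a flat key/value dictionary of vApp properties; a self-contained Python sketch of the same extraction, using a made-up sample document:

```python
import xml.etree.ElementTree as ET

OVF_NS = 'http://schemas.dmtf.org/ovf/environment/1'

def parse_vapp_properties(ovfenv_xml: str) -> dict:
    # Collect the oe:key/oe:value attribute pairs from every Property
    # element, analogous to the set_fact/combine loop above
    root = ET.fromstring(ovfenv_xml)
    props = {}
    for prop in root.findall(f'{{{OVF_NS}}}PropertySection/{{{OVF_NS}}}Property'):
        props[prop.get(f'{{{OVF_NS}}}key')] = prop.get(f'{{{OVF_NS}}}value')
    return props

SAMPLE = '''<Environment xmlns="http://schemas.dmtf.org/ovf/environment/1"
             xmlns:oe="http://schemas.dmtf.org/ovf/environment/1">
  <PropertySection>
    <Property oe:key="guestinfo.hostname" oe:value="meta-01"/>
    <Property oe:key="guestinfo.ipaddress" oe:value="192.168.154.10"/>
  </PropertySection>
</Environment>'''

print(parse_vapp_properties(SAMPLE))
```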

View File

@@ -0,0 +1,10 @@
#!/bin/bash
# Change working directory
pushd {{ _template.service.workingdir }}
# Compress *.tar files
if tar -czf image-tarballs.tgz *.tar --remove-files; then
# Disable systemd unit
systemctl disable {{ _template.service.name }}
fi

View File

@@ -0,0 +1,8 @@
mirrors:
{% for entry in _template.data %}
{{ entry }}:
endpoint:
- https://registry.{{ _template.hv.fqdn }}
rewrite:
"(.*)": "library/{{ entry }}/$1"
{% endfor %}

View File

@@ -0,0 +1,9 @@
[Unit]
Description={{ _template.service.name }}
[Service]
ExecStart={{ _template.service.executable }}
Nice=10
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,47 @@
#!/bin/bash
export TERM=linux
BGRN='\033[1;92m'
BGRY='\033[1;30m'
BBLU='\033[1;34m'
BRED='\033[1;91m'
BWHI='\033[1;97m'
CBLA='\033[?16;0;30c' # Hide blinking cursor
DFLT='\033[0m' # Reset colour
LCLR='\033[K' # Clear to end of line
PRST='\033[0;0H' # Reset cursor position
# COMPONENTS=('ca' 'ingress' 'storage' 'registry' 'git' 'gitops')
COMPONENTS=('storage' 'registry' 'git' 'gitops')
FQDN='{{ _template.metacluster.fqdn }}'
IPADDRESS='{{ _template.metacluster.vip }}'
I=60
while /bin/true; do
if [[ $I -gt 59 ]]; then
clear > /dev/tty1
I=0
else
I=$(( $I + 1 ))
fi
echo -e "${PRST}" > /dev/tty1
echo -e "\n\n\t${DFLT}To manage this appliance, please connect to one of the following:${LCLR}\n" > /dev/tty1
for c in "${COMPONENTS[@]}"; do
STATUS=$(curl -ks "https://${c}.${FQDN}" -o /dev/null -w '%{http_code}')
if [[ "${STATUS}" -eq "200" ]]; then
echo -e "\t [${BGRN}+${DFLT}] ${BBLU}https://${c}.${FQDN}${DFLT}${LCLR}" > /dev/tty1
else
echo -e "\t [${BRED}-${DFLT}] ${BBLU}https://${c}.${FQDN}${DFLT}${LCLR}" > /dev/tty1
fi
done
echo -e "\n\t${BGRY}Note that your DNS zone ${DFLT}must have${BGRY} respective records defined,\n\teach pointing to: ${DFLT}${IPADDRESS}${LCLR}" > /dev/tty1
echo -e "${CBLA}" > /dev/tty1
sleep 1
done

View File

@@ -0,0 +1,6 @@
playbook:
retries: 5
delays:
long: 60
medium: 30
short: 10

View File

@@ -1,10 +0,0 @@
---
- hosts: 127.0.0.1
connection: local
gather_facts: false
# become: true
roles:
- vapp
- network
- users
- cleanup

View File

@@ -1,20 +0,0 @@
- name: Disable crontab job
ansible.builtin.cron:
name: firstboot
state: absent
- name: Restore extra tty
ansible.builtin.lineinfile:
path: /etc/systemd/logind.conf
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
loop:
- { regexp: '^NAutoVTs=', line: '#NAutoVTs=6'}
- { regexp: '^ReserveVT=', line: '#ReserveVT=6'}
- name: Unmask getty@tty1 service
ansible.builtin.systemd:
name: getty@tty1
enabled: yes
masked: no
- name: Reboot host
ansible.builtin.shell:
cmd: /usr/sbin/reboot now

View File

@@ -1,10 +0,0 @@
- name: Set hostname
ansible.builtin.hostname:
name: "{{ ovfproperties['guestinfo.hostname'] }}"
- name: Create netplan configuration file
ansible.builtin.template:
src: netplan.j2
dest: /etc/netplan/00-installer-config.yaml
- name: Apply netplan configuration
ansible.builtin.shell:
cmd: /usr/sbin/netplan apply

View File

@@ -1,10 +0,0 @@
network:
version: 2
ethernets:
ens192:
addresses:
- {{ ovfproperties['guestinfo.ipaddress'] }}/{{ ovfproperties['guestinfo.prefixlength'] }}
gateway4: {{ ovfproperties['guestinfo.gateway'] }}
nameservers:
addresses:
- {{ ovfproperties['guestinfo.dnsserver'] }}

View File

@@ -1,25 +0,0 @@
- name: Set root password
ansible.builtin.user:
name: root
password: "{{ ovfproperties['guestinfo.rootpw'] | password_hash('sha512', 65534 | random(seed=ovfproperties['guestinfo.hostname']) | string) }}"
generate_ssh_key: yes
ssh_key_bits: 2048
ssh_key_file: .ssh/id_rsa
- name: Save root SSH publickey
ansible.builtin.lineinfile:
path: /root/.ssh/authorized_keys
line: "{{ ovfproperties['guestinfo.rootsshkey'] }}"
- name: Disable SSH password authentication
ansible.builtin.lineinfile:
path: /etc/ssh/sshd_config
regex: "{{ item.regex }}"
line: "{{ item.line }}"
state: "{{ item.state }}"
loop:
- { regex: '^#PasswordAuthentication', line: 'PasswordAuthentication no', state: present}
- { regex: '^PasswordAuthentication yes', line: 'PasswordAuthentication yes', state: absent}
- name: Delete 'ubuntu' user
ansible.builtin.user:
name: ubuntu
state: absent
remove: yes

View File

@@ -1,21 +0,0 @@
- name: Store current ovfEnvironment
ansible.builtin.shell:
cmd: /usr/bin/vmtoolsd --cmd "info-get guestinfo.ovfEnv"
register: ovfenv
- name: Parse XML for vApp properties
community.general.xml:
xmlstring: "{{ ovfenv.stdout }}"
namespaces:
ns: http://schemas.dmtf.org/ovf/environment/1
xpath: /ns:Environment/ns:PropertySection/ns:Property
content: attribute
register: ovfenv
- name: Assign vApp properties to dictionary
ansible.builtin.set_fact:
ovfproperties: >-
{{ ovfproperties | default({}) |
combine({((item.values() | list)[0].values() | list)[0]:
((item.values() | list)[0].values() | list)[1]})
}}
loop: "{{ ovfenv.matches }}"
no_log: true

View File

@@ -0,0 +1,26 @@
---
- hosts: 127.0.0.1
connection: local
gather_facts: true
vars_files:
- defaults.yml
- metacluster.yml
# become: true
roles:
- vapp
- network
- preflight
- users
- disks
- metacluster
# - workloadcluster
- tty
- cleanup
handlers:
- name: Apply manifests
kubernetes.core.k8s:
src: "{{ item }}"
state: present
kubeconfig: "{{ kubeconfig.path }}"
loop: "{{ query('ansible.builtin.fileglob', '/var/lib/rancher/k3s/server/manifests/*.yaml') | sort }}"
ignore_errors: yes

View File

@@ -0,0 +1,30 @@
- name: Configure fallback name resolution
ansible.builtin.lineinfile:
path: /etc/hosts
line: "{{ vapp['metacluster.vip'] }} {{ item + '.' + vapp['metacluster.fqdn'] }}"
state: present
loop:
# TODO: Make this list dynamic
- ca
- git
- gitops
- ingress
- registry
- storage
- name: Retrieve root CA certificate
ansible.builtin.uri:
url: https://ca.{{ vapp['metacluster.fqdn'] }}/roots
validate_certs: no
method: GET
status_code: [200, 201]
register: rootca_certificate
- name: Store root CA certificate
ansible.builtin.copy:
dest: /usr/local/share/ca-certificates/root_ca.crt
content: "{{ rootca_certificate.json.crts | list | join('\n') }}"
- name: Update certificate truststore
ansible.builtin.command:
cmd: update-ca-certificates

View File

@@ -0,0 +1,63 @@
- name: Store custom configuration files
ansible.builtin.copy:
dest: "{{ item.filename }}"
content: "{{ item.content }}"
loop:
- filename: /etc/rancher/k3s/config.yaml
content: |
kubelet-arg:
- "config=/etc/rancher/k3s/kubelet.config"
- filename: /etc/rancher/k3s/kubelet.config
content: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriod: 180s
shutdownGracePeriodCriticalPods: 60s
loop_control:
label: "{{ item.filename }}"
- name: Gather service facts
ansible.builtin.service_facts:
# Module requires no attributes
- name: Install K3s
ansible.builtin.command:
cmd: ./install.sh
chdir: /opt/metacluster/k3s
environment:
INSTALL_K3S_SKIP_DOWNLOAD: 'true'
INSTALL_K3S_EXEC: "server --token {{ vapp['metacluster.token'] | trim }} --server https://{{ vapp['metacluster.vip'] }}:6443 --disable local-storage --config /etc/rancher/k3s/config.yaml"
when: ansible_facts.services['k3s.service'] is undefined
- name: Ensure API availability
ansible.builtin.uri:
url: https://{{ vapp['guestinfo.ipaddress'] }}:6443/livez?verbose
method: GET
validate_certs: no
status_code: [200, 401]
register: api_readycheck
until: api_readycheck.json.apiVersion is defined
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.medium }}"
- name: Install kubectl tab-completion
ansible.builtin.shell:
cmd: kubectl completion bash | tee /etc/bash_completion.d/kubectl
- name: Initialize tempfile
ansible.builtin.tempfile:
state: file
register: kubeconfig
- name: Retrieve kubeconfig
ansible.builtin.command:
cmd: kubectl config view --raw
register: kubectl_config
- name: Store kubeconfig in tempfile
ansible.builtin.copy:
dest: "{{ kubeconfig.path }}"
content: "{{ kubectl_config.stdout }}"
mode: 0600
no_log: true

View File

@@ -0,0 +1,9 @@
- import_tasks: init.yml
- import_tasks: registry.yml
- import_tasks: k3s.yml
- import_tasks: assets.yml
# - import_tasks: ingress.yml
- import_tasks: storage.yml
# - import_tasks: certauthority.yml
# - import_tasks: git.yml
# - import_tasks: gitops.yml

View File

@@ -0,0 +1,50 @@
- block:
- name: Push images to registry
ansible.builtin.shell:
cmd: >-
skopeo copy \
--insecure-policy \
--dest-tls-verify=false \
--dest-creds admin:{{ vapp['metacluster.password'] }} \
docker-archive:./{{ item | basename }} \
docker://registry.{{ vapp['metacluster.fqdn'] }}/library/$( \
skopeo list-tags \
--insecure-policy \
docker-archive:./{{ item | basename }} | \
jq -r '.Tags[0]')
chdir: /opt/metacluster/container-images/
register: push_result
loop: "{{ query('ansible.builtin.fileglob', '/opt/metacluster/container-images/*.tar') | sort }}"
loop_control:
label: "{{ item | basename }}"
retries: "{{ playbook.retries }}"
delay: "{{ playbook.delays.short }}"
until: push_result is not failed
- name: Get all stored container images (=artifacts)
ansible.builtin.uri:
url: https://registry.{{ vapp['metacluster.fqdn'] }}/api/v2.0/search?q=library
method: GET
register: registry_artifacts
- name: Get source registries of all artifacts
ansible.builtin.set_fact:
source_registries: "{{ (source_registries | default([]) + [(item | split('/'))[1]]) | unique | sort }}"
loop: "{{ registry_artifacts.json.repository | json_query('[*].repository_name') }}"
- name: Configure K3s node for private registry
ansible.builtin.template:
dest: /etc/rancher/k3s/registries.yaml
src: registries.j2
vars:
_template:
data: "{{ source_registries }}"
hv:
fqdn: "{{ vapp['metacluster.fqdn'] }}"
module_defaults:
ansible.builtin.uri:
validate_certs: no
status_code: [200, 201, 401]
body_format: json
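The `source_registries` fact above reduces Harbor repository names of the form `library/<registry>/<image>` to a sorted, de-duplicated list of source registries; the same expression as a Python sketch (the repository names are invented for illustration):

```python
def source_registries(repository_names):
    # (item | split('/'))[1] per name, then | unique | sort
    return sorted({name.split('/')[1] for name in repository_names})

names = [
    'library/docker.io/library/traefik',
    'library/quay.io/argoproj/argocd',
    'library/docker.io/rancher/mirrored-pause',
]
print(source_registries(names))  # ['docker.io', 'quay.io']
```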

View File

@@ -0,0 +1,14 @@
- name: Increase replicas for each volume
kubernetes.core.k8s:
api_version: longhorn.io/v1beta2
kind: volume
name: "{{ item.metadata.name }}"
namespace: longhorn-system
state: patched
definition: |
spec:
numberOfReplicas: {{ lookup('kubernetes.core.k8s', kind='node', kubeconfig=(kubeconfig.path)) | length | int }}
kubeconfig: "{{ kubeconfig.path }}"
loop: "{{ lookup('kubernetes.core.k8s', api_version='longhorn.io/v1beta2', kind='volume', namespace='longhorn-system', kubeconfig=(kubeconfig.path)) }}"
loop_control:
label: "{{ item.metadata.name }}"

View File

@@ -0,0 +1,2 @@
- import_tasks: vcenter.yml
- import_tasks: metacluster.yml

View File

@@ -0,0 +1,6 @@
- name: Check for metacluster connectivity
ansible.builtin.uri:
url: https://{{ vapp['metacluster.vip'] }}:6443/livez?verbose
method: GET
validate_certs: no
status_code: [200, 401]

View File

@@ -2,6 +2,7 @@
ansible.builtin.file:
path: /opt/firstboot
state: directory
- name: Create firstboot script file
ansible.builtin.template:
src: firstboot.j2
@@ -9,18 +10,25 @@
owner: root
group: root
mode: o+x
- name: Create @reboot crontab job
ansible.builtin.cron:
name: firstboot
special_time: reboot
job: "/opt/firstboot/firstboot.sh >/dev/tty1 2>&1"
- name: Copy payload folder (common)
ansible.builtin.copy:
src: ansible_payload/common/
dest: /opt/firstboot/ansible/
owner: root
group: root
mode: '0644'
- name: Copy payload folder (per appliancetype)
ansible.builtin.copy:
src: ansible_payload/{{ appliancetype }}/
dest: /opt/firstboot/ansible/
owner: root
group: root
mode: '0644'
- name: Install ansible-galaxy collection
ansible.builtin.shell:
cmd: ansible-galaxy collection install community.general

View File

@@ -1,4 +1,4 @@
#!/bin/bash
# Apply firstboot configuration w/ ansible
/usr/local/bin/ansible-playbook -e "PYTHONUNBUFFERED=1" /opt/firstboot/ansible/playbook.yml | tee -a /var/log/firstboot.log > /dev/tty1 2>&1

View File

@@ -1,6 +0,0 @@
- name: Install ansible (w/ dependencies)
ansible.builtin.pip:
name: "{{ item }}"
executable: pip3
state: latest
loop: "{{ pip_packages }}"

View File

@@ -3,6 +3,7 @@
name: cloud-init
state: absent
purge: yes
- name: Delete cloud-init files
ansible.builtin.file:
path: "{{ item }}"

View File

@@ -15,6 +15,3 @@
- name: Install packages
import_tasks: packages.yml
- name: Install ansible
import_tasks: ansible.yml

View File

@@ -1,14 +1,46 @@
- name: Configure 'needrestart' package
ansible.builtin.lineinfile:
path: /etc/needrestart/needrestart.conf
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
loop:
- regexp: "^#\\$nrconf\\{restart\\} = 'i';"
line: "$nrconf{restart} = 'a';"
- regexp: "^#\\$nrconf\\{kernelhints\\} = -1;"
line: "$nrconf{kernelhints} = -1;"
loop_control:
label: "{{ item.line }}"
- name: Install additional packages
ansible.builtin.apt:
pkg: "{{ packages.apt }}"
state: latest
update_cache: yes
install_recommends: no
- name: Upgrade all packages
ansible.builtin.apt:
name: '*'
state: latest
update_cache: yes
- name: Install additional python packages
ansible.builtin.pip:
name: "{{ item }}"
executable: pip3
state: latest
loop: "{{ packages.pip }}"
- name: Create folder
ansible.builtin.file:
path: /etc/ansible
state: directory
- name: Configure Ansible defaults
ansible.builtin.template:
src: ansible.j2
dest: /etc/ansible/ansible.cfg
- name: Cleanup
ansible.builtin.apt:
autoremove: yes

View File

@@ -3,14 +3,17 @@
name: snapd
state: absent
purge: yes
- name: Delete leftover files
ansible.builtin.file:
path: /root/snap
state: absent
- name: Hold snapd package
ansible.builtin.dpkg_selections:
name: snapd
selection: hold
- name: Reload systemd unit configurations
ansible.builtin.systemd:
daemon_reload: yes

View File

@@ -4,8 +4,13 @@
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
loop:
- regexp: '^#NAutoVTs='
line: 'NAutoVTs=1'
- regexp: '^#ReserveVT='
line: 'ReserveVT=11'
loop_control:
label: "{{ item.line }}"
- name: Mask getty@tty1 service
ansible.builtin.systemd:
name: getty@tty1

@@ -0,0 +1,2 @@
[defaults]
callbacks_enabled = ansible.posix.profile_tasks

@@ -1,11 +1,13 @@
packages:
  apt:
    - jq
    - python3-pip
  pip:
    # - ansible-core<2.14.0
    - ansible-core
    - jinja2
    - lxml
    - markupsafe
    - pip
    - setuptools
    - wheel

@@ -0,0 +1,263 @@
platform:
k3s:
version: v1.26.0+k3s1
gitops:
repository:
uri: https://code.spamasaurus.com/djpbessems/GitOps.MetaCluster.git
# revision: v0.1.0
revision: HEAD
packaged_components:
- name: traefik
namespace: kube-system
config: |2
additionalArguments:
- "--certificatesResolvers.stepca.acme.caserver=https://step-certificates.step-ca.svc.cluster.local/acme/acme/directory"
- "--certificatesResolvers.stepca.acme.email=admin"
- "--certificatesResolvers.stepca.acme.storage=/data/acme.json"
- "--certificatesResolvers.stepca.acme.tlsChallenge=true"
- "--certificatesresolvers.stepca.acme.certificatesduration=24"
deployment:
initContainers:
- name: volume-permissions
image: busybox:1
command: ["sh", "-c", "touch /data/acme.json && chmod -Rv 600 /data/* && chown 65532:65532 /data/acme.json"]
volumeMounts:
- name: data
mountPath: /data
globalArguments: []
ingressRoute:
dashboard:
enabled: false
persistence:
enabled: true
ports:
ssh:
port: 8022
protocol: TCP
web:
redirectTo: websecure
websecure:
tls:
certResolver: stepca
helm_repositories:
- name: argo
url: https://argoproj.github.io/argo-helm
- name: gitea-charts
url: https://dl.gitea.io/charts/
- name: harbor
url: https://helm.goharbor.io
- name: jetstack
url: https://charts.jetstack.io
- name: longhorn
url: https://charts.longhorn.io
- name: smallstep
url: https://smallstep.github.io/helm-charts/
components:
argo-cd:
helm:
# version: 4.9.7 # (= ArgoCD v2.4.2)
version: 5.14.1 # (= ArgoCD v2.5.2)
chart: argo/argo-cd
parse_logic: helm template . | yq --no-doc eval '.. | .image? | select(.)' | sort -u | awk '!/ /'
chart_values: !unsafe |
configs:
secret:
argocdServerAdminPassword: "{{ vapp['metacluster.password'] | password_hash('bcrypt') }}"
server:
extraArgs:
- --insecure
ingress:
enabled: true
hosts:
- gitops.{{ vapp['metacluster.fqdn'] }}
cert-manager:
helm:
version: 1.10.1
chart: jetstack/cert-manager
parse_logic: helm template . | yq --no-doc eval '.. | .image? | select(.)' | sort -u | awk '!/ /'
# chart_values: !unsafe |
# installCRDs: true
clusterapi:
management:
version:
# Must match the version referenced at `dependencies.static_binaries[.filename==clusterctl].url`
base: v1.3.2
# Must match the version referenced at `components.cert-manager.helm.version`
cert_manager: v1.10.1
infrastructure_vsphere: v1.5.1
ipam_incluster: v0.1.0-alpha.1
workload:
version:
calico: v3.24.5
# k8s: v1.25.5
k8s: v1.23.5
node_template:
# Refer to `https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/v1.3.5/README.md#kubernetes-versions-with-published-ovas` for a list of supported node templates
# url: https://storage.googleapis.com/capv-templates/v1.25.5/ubuntu-2004-kube-v1.25.5.ova
url: https://storage.googleapis.com/capv-images/release/v1.23.5/ubuntu-2004-kube-v1.23.5.ova
gitea:
helm:
version: v6.0.3 # (= Gitea v1.17.3)
chart: gitea-charts/gitea
parse_logic: helm template . | yq --no-doc eval '.. | .image? | select(.)' | sort -u | sed '/:/!s/$/:latest/'
chart_values: !unsafe |
gitea:
admin:
username: administrator
password: "{{ vapp['metacluster.password'] }}"
email: admin@{{ vapp['metacluster.fqdn'] }}
config:
server:
OFFLINE_MODE: true
PROTOCOL: http
ROOT_URL: https://git.{{ vapp['metacluster.fqdn'] }}/
image:
pullPolicy: IfNotPresent
ingress:
enabled: true
hosts:
- host: git.{{ vapp['metacluster.fqdn'] }}
paths:
- path: /
pathType: Prefix
service:
ssh:
type: ClusterIP
port: 22
clusterIP:
harbor:
helm:
version: 1.10.2 # (= Harbor v2.6.2)
chart: harbor/harbor
parse_logic: helm template . | yq --no-doc eval '.. | .image? | select(.)' | sort -u | awk '!/ /'
chart_values: !unsafe |
expose:
ingress:
annotations: {}
hosts:
core: registry.{{ vapp['metacluster.fqdn'] }}
tls:
certSource: none
enabled: false
externalURL: https://registry.{{ vapp['metacluster.fqdn'] }}
harborAdminPassword: "{{ vapp['metacluster.password'] }}"
notary:
enabled: false
persistence:
persistentVolumeClaim:
registry:
size: 25Gi
kubevip:
# Must match the version referenced at `dependencies.container_images`
version: v0.5.8
longhorn:
helm:
version: 1.4.0
chart: longhorn/longhorn
parse_logic: cat values.yaml | yq eval '.. | select(has("repository")) | .repository + ":" + .tag'
chart_values: !unsafe |
defaultSettings:
allowNodeDrainWithLastHealthyReplica: true
defaultDataPath: /mnt/blockstorage
defaultReplicaCount: 1
ingress:
enabled: true
host: storage.{{ vapp['metacluster.fqdn'] }}
persistence:
defaultClassReplicaCount: 1
step-certificates:
helm:
# version: 1.18.2+20220324
version: 1.23.0
chart: smallstep/step-certificates
parse_logic: helm template . | yq --no-doc eval '.. | .image? | select(.)' | sed '/:/!s/$/:latest/' | sort -u
chart_values: !unsafe |
ca:
bootstrap:
postInitHook: |
echo '{{ vapp["metacluster.password"] }}' > ~/pwfile
step ca provisioner add acme \
--type ACME \
--password-file=~/pwfile \
--force-cn
rm ~/pwfile
dns: ca.{{ vapp['metacluster.fqdn'] }},step-certificates.step-ca.svc.cluster.local,127.0.0.1
password: "{{ vapp['metacluster.password'] }}"
provisioner:
name: admin
password: "{{ vapp['metacluster.password'] }}"
inject:
secrets:
ca_password: "{{ vapp['metacluster.password'] | b64encode }}"
provisioner_password: "{{ vapp['metacluster.password'] | b64encode }}"
service:
targetPort: 9000
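For context, the `b64encode` filter used in the `inject.secrets` values above is plain base64 encoding of the appliance password; a minimal sketch (the password shown is a made-up placeholder, not a real credential):

```shell
# placeholder value standing in for vapp['metacluster.password']
printf '%s' 'S3cretPassw0rd' | base64
# → UzNjcmV0UGFzc3cwcmQ=
```

Presumably the chart injects these values verbatim into Kubernetes Secret manifests, which is why they must be pre-encoded while the `ca.password`/`provisioner.password` values are passed raw.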
dependencies:
ansible_galaxy_collections:
- ansible.posix
- ansible.utils
- community.crypto
- community.general
- community.vmware
- kubernetes.core
container_images:
# This should match the image tag referenced at `platform.packaged_components[.name==traefik].config`
- busybox:1
- ghcr.io/kube-vip/kube-vip:v0.5.8
# The following list is generated by running the following commands:
# $ clusterctl init -i vsphere:<version> [...]
# $ clusterctl generate cluster <name> [...] | yq eval '.data.data' | yq --no-doc eval '.. | .image? | select(.)' | sort -u
- gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.18.1
- gcr.io/cloud-provider-vsphere/csi/release/driver:v2.1.0
- gcr.io/cloud-provider-vsphere/csi/release/syncer:v2.1.0
- quay.io/k8scsi/csi-attacher:v3.0.0
- quay.io/k8scsi/csi-node-driver-registrar:v2.0.1
- quay.io/k8scsi/csi-provisioner:v2.0.0
- quay.io/k8scsi/livenessprobe:v2.1.0
static_binaries:
- filename: clusterctl
url: https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.3.2/clusterctl-linux-amd64
- filename: govc
url: https://github.com/vmware/govmomi/releases/download/v0.29.0/govc_Linux_x86_64.tar.gz
archive: compressed
- filename: helm
url: https://get.helm.sh/helm-v3.10.2-linux-amd64.tar.gz
archive: compressed
extra_opts: --strip-components=1
- filename: npp-prepper
url: https://code.spamasaurus.com/api/packages/djpbessems/generic/npp-prepper/v0.4.5/npp-prepper
- filename: skopeo
url: https://code.spamasaurus.com/api/packages/djpbessems/generic/skopeo/v1.11.0-dev/skopeo
- filename: step
url: https://dl.step.sm/gh-release/cli/gh-release-header/v0.23.0/step_linux_0.23.0_amd64.tar.gz
archive: compressed
extra_opts: --strip-components=2
- filename: yq
url: https://github.com/mikefarah/yq/releases/download/v4.30.5/yq_linux_amd64
packages:
apt:
- lvm2
pip:
- jmespath
- kubernetes
- netaddr
- passlib
- pyvmomi
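The `parse_logic` one-liners above normalize `helm template` output into a deduplicated image list; in particular, `sed '/:/!s/$/:latest/'` appends an explicit `:latest` tag to any image reference that lacks one. A standalone sketch of that tail of the pipeline, using hypothetical input so neither helm nor yq is required:

```shell
# stand-in for `helm template . | yq --no-doc eval '.. | .image? | select(.)'` output
printf '%s\n' 'gitea/gitea:1.17.3' 'busybox' 'busybox' \
  | sort -u \
  | sed '/:/!s/$/:latest/'
# → busybox:latest
#   gitea/gitea:1.17.3
```

The explicit tag matters for the airgapped workflow: images are mirrored by exact reference, so an untagged entry would otherwise be ambiguous.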

packer/build.pkr.hcl Normal file

@@ -0,0 +1,47 @@
packer {
required_plugins {
}
}
build {
source "vsphere-iso.ubuntu" {
name = "bootstrap"
vm_name = "${var.vm_name}-bootstrap"
}
source "vsphere-iso.ubuntu" {
name = "upgrade"
vm_name = "${var.vm_name}-upgrade"
}
provisioner "ansible" {
pause_before = "2m30s"
playbook_file = "ansible/playbook.yml"
user = "ubuntu"
ansible_env_vars = [
"ANSIBLE_CONFIG=ansible/ansible.cfg",
"PYTHONUNBUFFERED=1"
]
use_proxy = "false"
extra_arguments = [
"--extra-vars", "appliancetype=${source.name}",
"--extra-vars", "ansible_ssh_pass=${var.ssh_password}"//,
// "--extra-vars", "repo_username=${var.repo_username}",
// "--extra-vars", "repo_password=${var.repo_password}"
]
}
post-processor "shell-local" {
inline = [
"pwsh -command \"& scripts/Update-OvfConfiguration.ps1 \\",
" -ApplianceType '${source.name}' \\",
" -OVFFile '/scratch/airgapped-k8s/${var.vm_name}-${source.name}.ovf' \"",
"pwsh -file scripts/Update-Manifest.ps1 \\",
" -ManifestFileName '/scratch/airgapped-k8s/${var.vm_name}-${source.name}.mf'",
"ovftool --acceptAllEulas --allowExtraConfig --overwrite \\",
" '/scratch/airgapped-k8s/${var.vm_name}-${source.name}.ovf' \\",
" /output/airgapped-k8s.${source.name}.ova"
]
}
}

@@ -1,4 +1,5 @@
iso_url      = "sn.itch.fyi/Repository/iso/Canonical/Ubuntu%20Server%2022.04/ubuntu-22.04.1-live-server-amd64.iso"
iso_checksum = "sha256:10F19C5B2B8D6DB711582E0E27F5116296C34FE4B313BA45F9B201A5007056CB"
// iso_url      = "sn.itch.fyi/Repository/iso/Canonical/Ubuntu%20Server%2022.04/ubuntu-22.04-live-server-amd64.iso"
// iso_checksum = "sha256:84AEAF7823C8C61BAA0AE862D0A06B03409394800000B3235854A6B38EB4856F"

packer/sources.pkr.hcl Normal file

@@ -0,0 +1,61 @@
source "vsphere-iso" "ubuntu" {
vcenter_server = var.vcenter_server
username = var.vsphere_username
password = var.vsphere_password
insecure_connection = "true"
datacenter = var.vsphere_datacenter
cluster = var.vsphere_cluster
host = var.vsphere_host
folder = var.vsphere_folder
datastore = var.vsphere_datastore
guest_os_type = "ubuntu64Guest"
boot_order = "disk,cdrom"
boot_command = [
"e<down><down><down><end>",
" autoinstall ds=nocloud;",
"<F10>"
]
boot_wait = "2s"
communicator = "ssh"
ssh_username = "ubuntu"
ssh_password = var.ssh_password
ssh_timeout = "20m"
ssh_handshake_attempts = "100"
ssh_pty = true
CPUs = 4
RAM = 8192
network_adapters {
network = var.vsphere_network
network_card = "vmxnet3"
}
storage {
disk_size = 76800
disk_thin_provisioned = true
}
disk_controller_type = ["pvscsi"]
usb_controller = ["xhci"]
cd_files = [
"packer/preseed/UbuntuServer22.04/user-data",
"packer/preseed/UbuntuServer22.04/meta-data"
]
cd_label = "cidata"
iso_url = local.iso_authenticatedurl
iso_checksum = var.iso_checksum
shutdown_command = "echo '${var.ssh_password}' | sudo -S shutdown -P now"
shutdown_timeout = "5m"
remove_cdrom = true
export {
images = false
output_directory = "/scratch/airgapped-k8s"
}
}

@@ -1,100 +0,0 @@
packer {
required_plugins {
}
}
source "vsphere-iso" "ubuntuserver" {
vcenter_server = var.vcenter_server
username = var.vsphere_username
password = var.vsphere_password
insecure_connection = "true"
vm_name = "${var.vm_guestos}-${var.vm_name}"
datacenter = var.vsphere_datacenter
cluster = var.vsphere_cluster
host = var.vsphere_host
folder = var.vsphere_folder
datastore = var.vsphere_datastore
guest_os_type = "ubuntu64Guest"
boot_order = "disk,cdrom"
boot_command = [
"<enter><wait2><enter><wait><f6><esc><wait>",
" autoinstall<wait2> ds=nocloud;",
"<wait><enter>"
]
boot_wait = "2s"
communicator = "ssh"
ssh_username = "ubuntu"
ssh_password = var.ssh_password
ssh_timeout = "20m"
ssh_handshake_attempts = "100"
ssh_pty = true
CPUs = 2
RAM = 4096
network_adapters {
network = var.vsphere_network
network_card = "vmxnet3"
}
storage {
disk_size = 20480
disk_thin_provisioned = true
}
disk_controller_type = ["pvscsi"]
usb_controller = ["xhci"]
cd_files = [
"packer/preseed/UbuntuServer20.04/user-data",
"packer/preseed/UbuntuServer20.04/meta-data"
]
cd_label = "cidata"
iso_url = local.iso_authenticatedurl
iso_checksum = var.iso_checksum
shutdown_command = "echo '${var.ssh_password}' | sudo -S shutdown -P now"
shutdown_timeout = "5m"
export {
images = false
output_directory = "/scratch/ubuntuserver"
}
remove_cdrom = true
}
build {
sources = [
"source.vsphere-iso.ubuntuserver"
]
provisioner "ansible" {
only = ["vsphere-iso.ubuntuserver"]
playbook_file = "ansible/playbook.yml"
user = "ubuntu"
ansible_env_vars = [
"ANSIBLE_CONFIG=ansible/ansible.cfg"
]
use_proxy = "false"
extra_arguments = [
"--extra-vars", "ansible_ssh_pass=${var.ssh_password}"
]
}
post-processor "shell-local" {
only = ["vsphere-iso.ubuntuserver"]
inline = [
"pwsh -command \"& scripts/Update-OvfConfiguration.ps1 \\",
" -OVFFile '/scratch/ubuntuserver/${var.vm_guestos}-${var.vm_name}.ovf' \\",
" -Parameter @{'appliance.name'='${var.vm_guestos}';'appliance.version'='${var.vm_name}'}\"",
"pwsh -file scripts/Update-Manifest.ps1 \\",
" -ManifestFileName '/scratch/ubuntuserver/${var.vm_guestos}-${var.vm_name}.mf'",
"ovftool --acceptAllEulas --allowExtraConfig --overwrite \\",
" '/scratch/ubuntuserver/${var.vm_guestos}-${var.vm_name}.ovf' \\",
" /output/Ubuntu-Server-20.04.ova"
]
}
}

@@ -14,7 +14,6 @@ variable "vsphere_datastore" {}
variable "vsphere_network" {}
variable "vm_name" {}
variable "ssh_password" {
  sensitive = true
}

@@ -6,4 +6,4 @@ vsphere_host = "bv11-esx.bessems.lan"
vsphere_datastore      = "ESX00.SSD01"
vsphere_folder         = "/Packer"
vsphere_templatefolder = "/Templates"
vsphere_network        = "LAN"

@@ -0,0 +1,197 @@
DeploymentConfigurations:
- Id: cp1w1
Label: 'Workload-cluster: 1 control-plane node/1 worker node'
Description: 1 control-plane node/1 worker node
- Id: cp1w2
Label: 'Workload-cluster: 1 control-plane node/2 worker nodes'
Description: 1 control-plane node/2 worker nodes
DynamicDisks:
- Description: Longhorn persistent storage
UnitSize: GB
Constraints:
Minimum: 100
Maximum: ''
PropertyCategory: 2
PropertyCategories:
- Name: 0) Deployment information
ProductProperties:
- Key: deployment.type
Type: string
Value:
- cp1w1
- cp1w2
UserConfigurable: false
- Name: 1) Meta-cluster
ProductProperties:
- Key: metacluster.fqdn
Type: string(1..)
Label: Meta-cluster FQDN*
Description: Respective subdomains will be available for each component (e.g. storage.example.org); this address should already be configured as a wildcard record within your DNS zone.
DefaultValue: meta.k8s.cluster
Configurations: '*'
UserConfigurable: true
- Key: metacluster.vip
Type: ip
Label: Meta-cluster virtual IP*
Description: Meta-cluster control plane endpoint virtual IP
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: metacluster.token
Type: string(1..)
Label: K3s install token*
Description: Auto-generated; this value is used to join new nodes to the meta-cluster after deployment
DefaultValue: '{{ metacluster.token }}'
Configurations: '*'
UserConfigurable: true
- Name: 2) Meta-cluster initial node
ProductProperties:
- Key: guestinfo.hostname
Type: string(1..15)
Label: Hostname*
Description: ''
DefaultValue: 'meta-{{ hostname.suffix }}'
Configurations: '*'
UserConfigurable: true
- Key: metacluster.password
Type: password(7..)
Label: Appliance password*
Description: 'Initial password for respective administrator accounts within each component'
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: guestinfo.ipaddress
Type: ip
Label: IP Address*
Description: ''
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: guestinfo.prefixlength
Type: int(8..32)
Label: Subnet prefix length*
Description: ''
DefaultValue: '24'
Configurations: '*'
UserConfigurable: true
- Key: guestinfo.dnsserver
Type: ip
Label: DNS server*
Description: ''
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: guestinfo.gateway
Type: ip
Label: Gateway*
Description: ''
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: guestinfo.ntpserver
Type: string(1..)
Label: Time server*
Description: A comma-separated list of timeservers
DefaultValue: 0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org
Configurations: '*'
UserConfigurable: true
- Name: 3) Workload-cluster
ProductProperties:
- Key: workloadcluster.name
Type: string(1..15)
Label: Workload-cluster name*
Description: ''
DefaultValue: 'workload-{{ hostname.suffix }}'
Configurations: '*'
UserConfigurable: true
- Key: workloadcluster.vip
Type: ip
Label: Workload-cluster virtual IP*
Description: Workload-cluster control plane endpoint virtual IP
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: ippool.startip
Type: ip
Label: Workload-cluster IP-pool start IP*
Description: All nodes for the workload-cluster will be provisioned within this IP pool
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: ippool.endip
Type: ip
Label: Workload-cluster IP-pool end IP*
Description: All nodes for the workload-cluster will be provisioned within this IP pool
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Name: 4) Common
ProductProperties:
- Key: guestinfo.rootsshkey
Type: password(1..)
Label: SSH public key*
Description: Authentication for any node (meta-cluster *and* workload-cluster); this line should start with 'ssh-rsa AAAAB3N'
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Name: 5) Hypervisor
ProductProperties:
- Key: hv.fqdn
Type: string(1..)
Label: vCenter FQDN/IP-address*
Description: The address of the vCenter instance which this bootstrap appliance will interact with for provisioning new VMs.
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: hv.username
Type: string(1..)
Label: vCenter username*
Description: The username which this bootstrap appliance will authenticate with to the vCenter instance.
DefaultValue: 'administrator@vsphere.local'
Configurations: '*'
UserConfigurable: true
- Key: hv.password
Type: password(1..)
Label: vCenter password*
Description: The password which this bootstrap appliance will authenticate with to the vCenter instance.
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
---
Variables:
- Name: hostname.suffix
Expression: |
(-join ((48..57) + (97..122) | Get-Random -Count 5 | % {[char]$_})).ToLower()
- Name: metacluster.token
Expression: |
(New-Guid).Guid -replace '-', ''
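The two PowerShell expressions above produce a 5-character lowercase alphanumeric hostname suffix and a 32-character token (a GUID with its dashes stripped). A rough shell equivalent, assuming a Linux host with /dev/urandom and /proc available:

```shell
# hostname.suffix: 5 random characters from [0-9a-z]
suffix="$(tr -dc '0-9a-z' < /dev/urandom | head -c 5)"
# metacluster.token: kernel-generated UUID with dashes removed (Linux-specific source)
token="$(tr -d '-' < /proc/sys/kernel/random/uuid)"
echo "meta-${suffix} ${token}"
```

This is only an illustration of the intent; the appliance itself evaluates the PowerShell expressions at OVF-generation time.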

@@ -1,6 +1,9 @@
#Requires -Modules 'powershell-yaml'
[CmdletBinding()]
Param(
    [Parameter(Mandatory)]
    [ValidateSet('Bootstrap', 'Upgrade')]
    [string]$ApplianceType,
    [Parameter(Mandatory)]
    [ValidateScript({
        If (Test-Path($_)) {
@@ -14,7 +17,7 @@ Param(
)
$GetContentSplat = @{
    Path = "$($PSScriptRoot)\$($MyInvocation.MyCommand)".Replace('.ps1', ".$($ApplianceType.ToLower()).yml")
    Raw  = $True
}
$RawContent = Get-Content @GetContentSplat
@@ -102,7 +105,7 @@ ForEach ($Disk in $OVFConfig.DynamicDisks) {
        $XML.SelectSingleNode("//ns:VirtualHardwareSection/ns:Item/rasd:InstanceID[.='$($HighestInstanceID)']", $NS).ParentNode
    )
    $OVFConfig.PropertyCategories[@([int]$Disk.PropertyCategory, 0)[![boolean]$Disk.PropertyCategory]].ProductProperties += @{
        Key  = "vmconfig.disksize.$($DiskId)"
        Type = If ([boolean]$Disk.Constraints.Minimum -or [boolean]$Disk.Constraints.Maximum) {
            "Int($($Disk.Constraints.Minimum)..$($Disk.Constraints.Maximum))"
@@ -124,20 +127,20 @@ If ($OVFConfig.DeploymentConfigurations.Count -gt 0) {
    $XMLSectionInfo = $XML.CreateElement('Info', $XML.DocumentElement.xmlns)
    $XMLSectionInfo.InnerText = 'Deployment Type'
    [void]$XMLSection.AppendChild($XMLSectionInfo)
    ForEach ($Configuration in $OVFConfig.DeploymentConfigurations) {
        $XMLConfig = $XML.CreateElement('Configuration', $XML.DocumentElement.xmlns)
        [void]$XMLConfig.SetAttribute('id', $NS.LookupNamespace('ovf'), $Configuration.Id)
        $XMLConfigLabel = $XML.CreateElement('Label', $XML.DocumentElement.xmlns)
        $XMLConfigLabel.InnerText = $Configuration.Label
        $XMLConfigDescription = $XML.CreateElement('Description', $XML.DocumentElement.xmlns)
        $XMLConfigDescription.InnerText = $Configuration.Description
        [void]$XMLConfig.AppendChild($XMLConfigLabel)
        [void]$XMLConfig.AppendChild($XMLConfigDescription)
        [void]$XMLSection.AppendChild($XMLConfig)
    }
    [void]$XML.SelectSingleNode('//ns:Envelope', $NS).InsertAfter($XMLSection, $XML.SelectSingleNode('//ns:NetworkSection', $NS))
@@ -213,7 +216,7 @@ ForEach ($Category in $OVFConfig.PropertyCategories) {
    ForEach ($Property in $Category.ProductProperties) {
        $XMLProperty = $XML.CreateElement('Property', $XML.DocumentElement.xmlns)
        [void]$XMLProperty.SetAttribute('key', $NS.LookupNamespace('ovf'), $Property.Key)
        Switch -regex ($Property.Type) {
            '^boolean' {
@@ -309,4 +312,4 @@ ForEach ($Category in $OVFConfig.PropertyCategories) {
    Write-Host "Inserted $($Category.ProductProperties.Count) new node(s) into 'ProductSection'"
}
$XML.Save($SourceFile.FullName)

@@ -0,0 +1,140 @@
DynamicDisks:
- Description: Longhorn persistent storage
UnitSize: GB
Constraints:
Minimum: 100
Maximum: ''
PropertyCategory: 1
PropertyCategories:
- Name: 1) Existing meta-cluster
ProductProperties:
- Key: metacluster.fqdn
Type: string(1..)
Label: Meta-cluster FQDN*
Description: The FQDN of the target meta-cluster which this appliance will perform an upgrade on.
DefaultValue: meta.k8s.cluster
Configurations: '*'
UserConfigurable: true
- Key: metacluster.vip
Type: ip
Label: Meta-cluster virtual IP*
Description: Meta-cluster control plane endpoint virtual IP
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: metacluster.password
Type: password(7..)
Label: Meta-cluster administrator password*
Description: 'Needed to authenticate with target meta-cluster'
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: metacluster.token
Type: string(1..)
Label: K3s install token*
Description: Must match the token originally used for the target meta-cluster
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Name: 2) Add meta-cluster node
ProductProperties:
- Key: guestinfo.hostname
Type: string(1..15)
Label: Hostname*
Description: ''
DefaultValue: 'meta-{{ hostname.suffix }}'
Configurations: '*'
UserConfigurable: true
- Key: guestinfo.ipaddress
Type: ip
Label: IP Address*
Description: ''
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: guestinfo.prefixlength
Type: int(8..32)
Label: Subnet prefix length*
Description: ''
DefaultValue: '24'
Configurations: '*'
UserConfigurable: true
- Key: guestinfo.dnsserver
Type: ip
Label: DNS server*
Description: ''
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: guestinfo.gateway
Type: ip
Label: Gateway*
Description: ''
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: guestinfo.ntpserver
Type: string(1..)
Label: Time server*
Description: A comma-separated list of timeservers
DefaultValue: 0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org
Configurations: '*'
UserConfigurable: true
- Name: 3) Common
ProductProperties:
- Key: guestinfo.rootsshkey
Type: password(1..)
Label: SSH public key*
Description: Authentication for this meta-cluster node; this line should start with 'ssh-rsa AAAAB3N'
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Name: 4) Hypervisor
ProductProperties:
- Key: hv.fqdn
Type: string(1..)
Label: vCenter FQDN/IP-address*
Description: The address of the vCenter instance which this bootstrap appliance will interact with for provisioning new VMs.
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: hv.username
Type: string(1..)
Label: vCenter username*
Description: The username which this bootstrap appliance will authenticate with to the vCenter instance.
DefaultValue: 'administrator@vsphere.local'
Configurations: '*'
UserConfigurable: true
- Key: hv.password
Type: password(1..)
Label: vCenter password*
Description: The password which this bootstrap appliance will authenticate with to the vCenter instance.
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
---
Variables:
- Name: hostname.suffix
Expression: |
(-join ((48..57) + (97..122) | Get-Random -Count 5 | % {[char]$_})).ToLower()

@@ -1,99 +0,0 @@
DeploymentConfigurations:
- Id: small
Label: 'Ubuntu Server 20.04 [SMALL: 1 vCPU/2GB RAM]'
Description: Ubuntu Server 20.04.x
Size:
CPU: 1
Memory: 2048
- Id: large
Label: 'Ubuntu Server 20.04 [LARGE: 4 vCPU/8GB RAM]'
Description: Ubuntu Server 20.04.x
Size:
CPU: 4
Memory: 8192
DynamicDisks: []
PropertyCategories:
# - Name: 0) Deployment information
# ProductProperties:
# - Key: deployment.type
# Type: string
# Value:
# - small
# - large
# UserConfigurable: false
- Name: 1) Operating System
ProductProperties:
- Key: guestinfo.hostname
Type: string(1..15)
Label: Hostname*
Description: '(max length: 15 characters)'
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: guestinfo.rootpw
Type: password(7..)
Label: Local root password*
Description: ''
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: guestinfo.rootsshkey
Type: password(1..)
Label: Local root SSH public key*
Description: This line should start with 'ssh-rsa AAAAB3N'
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: guestinfo.ntpserver
Type: string(1..)
Label: Time server*
Description: A comma-separated list of timeservers
DefaultValue: 0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org
Configurations: '*'
UserConfigurable: true
- Name: 2) Networking
ProductProperties:
- Key: guestinfo.ipaddress
Type: ip
Label: IP Address*
Description: ''
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: guestinfo.prefixlength
Type: int(8..32)
Label: Subnet prefix length*
Description: ''
DefaultValue: '24'
Configurations: '*'
UserConfigurable: true
- Key: guestinfo.dnsserver
Type: ip
Label: DNS server*
Description: ''
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
- Key: guestinfo.gateway
Type: ip
Label: Gateway*
Description: ''
DefaultValue: ''
Configurations: '*'
UserConfigurable: true
AdvancedOptions:
- Key: appliance.name
Value: "{{ appliance.name }}"
Required: false
- Key: appliance.version
Value: "{{ appliance.version }}"
Required: false
---
Variables:
- Name: appliance.name
Expression: |
$Parameter['appliance.name']
- Name: appliance.version
Expression: |
$Parameter['appliance.version']