Drop initial code

This commit is contained in:
Danny Bessems
2026-01-15 09:58:01 +00:00
parent 227d957219
commit 1e7c9ba5cb
228 changed files with 19883 additions and 1 deletions

5
.gitignore vendored Normal file
View File

@@ -0,0 +1,5 @@
.vscode
ignore
.version
*.log
*.tmp

36
.releaserc Normal file
View File

@@ -0,0 +1,36 @@
branches:
- name: main
channel: stable
- name: development
prerelease: rc
channel: beta
plugins:
- - "@semantic-release/commit-analyzer"
- releaseRules:
- type: backport
release: patch
- - "@semantic-release/release-notes-generator"
- presetConfig:
types:
- type: backport
section: Backports
- type: feat
section: Features
- type: fix
section: Bug Fixes
- - "@semantic-release/changelog"
- changelogFile: CHANGELOG.md
- - "@semantic-release/git"
- assets:
- CHANGELOG.md
message: "chore(release): ${nextRelease.version} [skip ci]\n\n${nextRelease.notes}"
preset: conventionalcommits
ci: false
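
For reference, a minimal sketch of exercising this configuration locally before CI picks it up (assumes Node.js plus the semantic-release CLI and the plugins listed above are installed; the commit message is illustrative):

```bash
# Dry run: compute the next version and release notes without tagging or pushing.
npx semantic-release --dry-run --no-ci

# The custom releaseRule above should classify a "backport" commit as a patch release.
git commit --allow-empty -m "backport: re-apply fix to the release branch"
npx semantic-release --dry-run --no-ci
```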

63
CHANGELOG.md Normal file
View File

@@ -0,0 +1,63 @@
## [1.0.0-rc.4](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/compare/v1.0.0-rc.3...v1.0.0-rc.4) (2026-01-07)
### Features
* TPINF-871 - README.md and CAPI deployment added. ([792907b](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/commit/792907b901dc9d407429ac00fa4f885864fea628))
* TPINF-1346 - created RKE2 CloudInit templates for CP nodes ([c04cd9a](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/commit/c04cd9a902e1cbf3b64d0b91573584f87ef3c17e))
* TPINF-1346 - CronJob to patch default SA in namespaces ([7687bca](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/commit/7687bca1be97ebc3794bf492321ddb6788e57f45))
* TPINF-1346 - patch default sa to not mount token ([d5b880d](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/commit/d5b880dd53a9f4754ed75f6c654ae9de0542edce))
* TPINF-871 - Fleet DHCP deployment changed to dhcp ([e02ccc7](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/commit/e02ccc7cbb42af3ed5b8680494ef5609c23ce097))
* TPINF-871 - Added CAPI attempts to boot Rancher in Harvester ([9b5d74c](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/commit/9b5d74cf3811fb0e165231d573af070202ae37db))
* TPINF-871 - Attempt to upgrade vcluster chart to v0.30.1 ([520f23b](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/commit/520f23b863a6eb85a8b5b17f468c05a23c177c14))
* TPINF-871 - Default namespace commented in LB definition ([29ec3f0](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/commit/29ec3f0377329aa63423029f63ea5a58bff9e2ab))
* TPINF-871 - Helm chart config split and Fleet update ([618d16f](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/commit/618d16fefccc996660c1b689bf13fcf30bb4c8dc))
* TPINF-871 - make Helm deployment namespace configurable ([f125160](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/commit/f125160031c538cd39d35dab78471ca61e7f04cd))
* TPINF-871 - RGS Helm chart updated to RnD environment ([1280bef](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/commit/1280befc775aff54e97a82f8da4f4202d5c534ad))
## [1.0.0-rc.3](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/compare/v1.0.0-rc.2...v1.0.0-rc.3) (2025-12-16)
### Features
* TPINF-1093 - Template and cloudinit for Ubuntu 24.04 ([93ca097](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/commit/93ca097226c0c87a2c8c382535f40a2be81b7383))
## [1.0.0-rc.2](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/compare/v1.0.0-rc.1...v1.0.0-rc.2) (2025-12-03)
### Features
* TPINF-1093 - Virtualization baselines ([b989a59](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/commit/b989a5954fc97c4d1e7838e7775753ecaba7918f))
## 1.0.0-rc.1 (2025-10-08)
### Features
* Initial commit of CI/Docs ([95cc946](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/commit/95cc946ed13fe45a30ed3bdce2a9368124d9e5fc))
## 1.0.0-rc.1 (2025-10-07)
### Features
* Semantic RC release with document integration ([f9ea626](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/commit/f9ea6266acf2cabe57b76cfb88ee7a39891bd31d))
### Bug Fixes
* Dummy commit to trigger semantic-release ([c0b12ed](https://devstash.vanderlande.com/scm/ittp/as-vi-cnv/commit/c0b12ed052eb1baf27b89454c1265857e354899a))

3
CONTRIBUTING.md Normal file
View File

@@ -0,0 +1,3 @@
# Contributing Guidelines
[Contribution Wiki](https://devcolla.vanderlande.com/display/ITTP/How+to+contribute)

View File

@@ -1 +1,7 @@
<!-- placeholder -->
<!-- Parent: 1. Product Releases -->
<!-- Parent: {{ minor_version }} -->
<!-- Parent: {{ release_version }} -->
# {{ release_version }} - TEST
TEST123

67
RELEASENOTES_IM-CNV-v100.md Executable file
View File

@@ -0,0 +1,67 @@
# 🚀 Release Notes - IM Cloud Native Virtualization v1.0.0 (im-cnv)
**Release Date:** 2025-12-01
---
## 📘 Summary
IM Cloud Native Virtualization v1.0.0 is based on SUSE Virtualization (previously known as Harvester) and offers virtualization as well as container orchestration.
Within Vanderlande this is the practical successor to Container Platform (CP2/CP3) and VI_IT_VIRTUALIZATION.
Capabilities of IM-CNV are effectively the same: running any modern operating system in virtual machines, together with Kubernetes orchestration of workload clusters, enables all of Vanderlande's existing Modules to run.
There are, however, key differences in underlying technology and implementation specifics, which are highlighted below:
- ### Persistent Storage location (*inside* vs *outside* Kubernetes cluster)
- *CP2/CP3*: Persistent storage (provided through Longhorn) was stored *within* Kubernetes clusters on designated "worker-storage" nodes.
- *IM-CNV*: Respective volumes and replicas (still provided through Longhorn - _though_ seemingly with different `StorageClass` names) are now stored directly on the Hypervisor nodes.
**There are *no* dedicated storage node pools; cluster nodes can now be reprovisioned _without_ extensive wait periods for replication to finish.**
- ### Services of type `LoadBalancer` managed by different controllers
- *CP2/CP3*: Services of type `LoadBalancer` were managed by **MetalLB**; exposing MetalLB-specific annotations for configuration.
- *IM-CNV*: Services of type `LoadBalancer` are managed by **Harvester's integrated Cloud Controller Manager**.
**Both load balancers differ in the OSI layers they operate at (MetalLB: Layers 2 and 3, Harvester CCM: Layer 4); however, feature parity is maintained for common use cases within Vanderlande.**
***NOTE:** IP-address pinning is currently not supported through annotations and requires explicit administrator intervention; see the example sketch after this list.*
- ### Virtual Machine templating
- *VI_IT_VIRTUALIZATION*: Virtual machine template export & import were supported in the `ova`/`ovf` formats; virtual machine disks were stored as `vmdk` files.
- *IM-CNV*: No virtual machine template export & import functionality; virtual machines can be created from disk images in the `qcow2` and `raw` formats.
**Harvester includes an add-on to connect to a vCenter instance and import virtual machines directly, negating the need to export & import with an intermediate file format.**
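
A minimal sketch of the load-balancing behaviour described above, assuming a workload cluster running on IM-CNV with the Harvester cloud provider enabled; the `demo` namespace and `web` selector/ports are illustrative placeholders:

```bash
# Create a plain Service of type LoadBalancer; the external IP is assigned by
# Harvester's cloud controller manager (no MetalLB annotations are involved, and
# per the note above a specific IP cannot be pinned through annotations).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: demo
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
EOF

# Watch the EXTERNAL-IP column populate once the load balancer is provisioned.
kubectl -n demo get service web -w
```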
## 🔗 Related Links
- ~~**Release Bundle**: [Link](#TBD)~~
- ~~**Changelog**: [Link](#TBD)~~
- **Jira Release**: [Link](https://devtrack.vanderlande.com/projects/TPINF/versions/144780)
- ~~**SBOM**: [Link](#TBD)~~
- **Test Results**: [Link](https://devtrack.vanderlande.com/secure/attachment/1179276/VI_IT_VI_IT_CNV-1_0_0.pdf)
## 🧩 Compatibility Matrix
- Supported Kubernetes Versions:
- **RKE2**: 1.31, 1.32, 1.33
- **K3S** _(experimental)_: 1.31, 1.32, 1.33
- Supported Guest Operating System Versions:
- **SUSE Linux Enterprise Server**: 15 SP6, 15 SP7
- **SUSE Linux Enterprise Micro**: 6.0, 6.1
- **Ubuntu**: 22.04, 24.04
- **RHEL**: 9, 10
- **Windows**: up to and including Windows 11 and Windows Server 2025
- Bundled Component Versions:
- **Rancher**: v2.12
- **Longhorn**: v1.9.1
## ⚠️ Breaking Changes
Refer to the aforementioned key differences to understand how migrating to IM-CNV might affect your workloads.
## 🔄 Migration / Upgrade Steps
_There is no upgrade path from any version of Container Platform; Kubernetes clusters and their respective workloads need to be reprovisioned._
## 📦 Delivery Artifacts
- **Installation Files:**
- [Installation Manual](https://vanderlande.sharepoint.com/:w:/r/sites/T_Technolo-17-TeamAppStackEdgeComputing/Shared%20Documents/Team%20Application%20Stack/Cloud%20Native%20Virtualization%20Installation%20Manual.docx?d=w143df45494e7454b9c00247ac76c3dc3&csf=1&web=1&e=tnswwo)
- [Operator Manual](https://vanderlande.sharepoint.com/:w:/r/sites/T_Technolo-17-TeamAppStackEdgeComputing/Shared%20Documents/Team%20Application%20Stack/Cloud%20Native%20Virtualization%20Operator%20Manual.docx?d=w5b544952345b40d9ba0e5db01a3f703c&csf=1&web=1&e=XTXc4M)
---

Binary file not shown.

99
bamboo-specs/bamboo.yaml Normal file
View File

@@ -0,0 +1,99 @@
---
version: 2
## Plan properties
plan:
project-key: ITTP
key: ASVICNV
name: AS-VI-Cloud Native Virtualization
branches:
delete:
after-deleted-days: 2
after-inactive-days: never
other:
concurrent-build-plugin: 5
all-other-apps:
buildExpiryConfig:
duration: 5
enabled: true
expiryTypeResult: true
maximumBuildsToKeep: 5
period: days
## variables used in the jobs
variables:
## OVA build variables
hostvolume: /data/bamboo/${bamboo.capability.AGENT_ID}/xml-data/build-dir/${bamboo.buildKey}
containerregistry_release: devstore.vanderlande.com:6555
containerregistry_virtual: devstore.vanderlande.com:6559
container_agent: ${bamboo.containerregistry_release}/com/vanderlande/conpl/bamboo-agent-extended:1.5.0-linuxbase
container_semrel: ${bamboo.containerregistry_virtual}/com/vanderlande/conpl/bamboo-semantic-release:v23.0.2
container_mark: kovetskiy/mark:12.2.0
## SemRel variables
httpsaccesskey_secret: BAMSCRT@0@0@FyHDe+gBcijblOU8jpGcEEwxpYBWQ0cl2NxEgACy5MidjyRlcZKAS4YXC/nLS8sOXZKHKBF3Siyeh2fdnAjOeg==
## confluence documentation patch
confluence_url: https://devcolla.vanderlande.com
confluence_username: srv.conpldocs
confluence_password: BAMSCRT@0@0@UxPtDd1NpJ/YoYuImly6ZLqS62SCxPQK5uonPqkfF94=
confluence_space: ITTP
stages:
- Prepare:
- import-variables
- semantic-release-dryrun
- Validate:
- docs-dryrun
- Documentation:
- docs-changesonly
import-variables: !include "prepare/import-variables.yaml"
semantic-release-dryrun: !include "prepare/semantic-release-dryrun.yaml"
docs-dryrun: !include "validate/docs-dryrun.yaml"
docs-changesonly: !include "validate/docs-changesonly.yaml"
branch-overrides:
- docs-.*:
stages:
- Prepare:
- import-variables
- Documentation:
- docs-dryrun
- docs-changesonly
docs-changesonly: !include "validate/docs-changesonly.yaml"
import-variables: !include "prepare/import-variables.yaml"
docs-dryrun: !include "validate/docs-dryrun.yaml"
- development:
stages:
- Prepare:
- import-variables
- semantic-release-dryrun
- Validate:
- docs-dryrun
- Release:
- semantic-release
- Documentation:
- docs-changesonly
import-variables: !include "prepare/import-variables.yaml"
semantic-release-dryrun: !include "prepare/semantic-release-dryrun.yaml"
docs-dryrun: !include "validate/docs-dryrun.yaml"
docs-changesonly: !include "validate/docs-changesonly.yaml"
semantic-release: !include "release/semantic-release.yaml"
- main|^.*.x:
stages:
- Prepare:
- import-variables
- semantic-release-dryrun
- Release:
- semantic-release
- Documentation:
- docs
import-variables: !include "prepare/import-variables.yaml"
semantic-release-dryrun: !include "prepare/semantic-release-dryrun.yaml"
docs: !include "validate/docs.yaml"
semantic-release: !include "release/semantic-release.yaml"

View File

@@ -0,0 +1,40 @@
tasks:
- script: |
#!/bin/bash
set -ex
case ${bamboo_planRepository_branch} in
main)
USER=${bamboo.release_deployer_username}
PASSWORD=${bamboo.release_deployer_password}
REPOSITORY="nlveg-gen-release-local-01"
;;
*.x)
USER=${bamboo.release_deployer_username}
PASSWORD=${bamboo.release_deployer_password}
REPOSITORY="nlveg-gen-release-local-01"
;;
*)
USER=${bamboo.snapshot_deployer_username}
PASSWORD=${bamboo.snapshot_deployer_password}
REPOSITORY="nlveg-gen-devteam-local-01"
;;
esac
# Inject custom variables into inject-variables source file (inception)
# (Bamboo does not allow proper variable substitution operations)
echo -e "\nvmname=conpl_${bamboo.buildNumber}_$(date +"%m-%d-%Y")_$(echo "${bamboo.planRepository.revision}" | head -c7 -z)" >> pipeline.parameters
echo "artifactory_username=${USER}" >> pipeline.parameters
echo "artifactory_password=${PASSWORD}" >> pipeline.parameters
echo "artifactory_repository=${REPOSITORY}" >> pipeline.parameters
echo "var_file=${VAR_FILE}" >> pipeline.parameters
- inject-variables:
file: pipeline.parameters
scope: RESULT
other:
clean-working-dir: true
requirements:
- AGENT_TYPE: Linux_Base_Agent

View File

@@ -0,0 +1,55 @@
tasks:
- checkout:
force-clean-build: 'true'
- script: |
#!/bin/bash
set -ex
docker run --rm --user 555:555 -v ${bamboo.hostvolume}:/code -w /code \
${bamboo.container_semrel} \
npx semantic-release \
--dry-run --repository-url https://${bamboo.httpsaccesskey_secret}@devstash.vanderlande.com/scm/ittp/as-vi-cnv.git \
--verifyRelease @semantic-release/exec \
--verifyReleaseCmd 'echo "${nextRelease.version}" > .version'
# Function to determine the version tag
get_version_tag() {
if [ -f .version ]; then
echo "$(cat .version)"
else
echo "$(git describe --abbrev=0 --tags | awk '{gsub("^v", ""); print}')"
fi
}
# Function to determine the commit hash
get_commit_hash() {
echo "$(git log -1 --pretty=format:%h)"
}
# Get version tag and commit hash
version_tag=$(get_version_tag)
commit_hash=$(get_commit_hash)
override=$(git log -1 --pretty=format:%s | grep -oP '\[docs-override v\K[^\]]+') || true
# Determine gtag and template_suffix based on branch
if [[ "${bamboo_planRepository_branch}" == "main" || "${bamboo_planRepository_branch}" =~ ^[0-9]+\.[0-9]+\.x$ ]]; then
template_suffix="${version_tag}"
elif [[ "${bamboo_planRepository_branch}" == docs-* && -n $override ]]; then
version_tag="${override}"
template_suffix="${override}"
else
template_suffix="${version_tag}-${commit_hash}"
fi
# Write to pipeline.parameters
echo -e "\ngtag=${version_tag}" >> pipeline.parameters
echo -e "\ntemplate_suffix=${template_suffix}" >> pipeline.parameters
- inject-variables:
file: pipeline.parameters
scope: RESULT
other:
clean-working-dir: true
requirements:
- system.docker.executable
- AGENT_TYPE: Linux_Base_Agent
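
As an aside, a small sketch of how the `[docs-override ...]` marker used above is typically supplied and parsed (branch name and version are placeholder values):

```bash
# On a docs-* branch, embed the version to publish the documentation against
# in the commit subject.
git checkout -b docs-storage-notes
git commit --allow-empty -m "docs: clarify storage section [docs-override v1.0.0]"

# The dry-run job extracts the override with the same grep as in the script above.
override=$(git log -1 --pretty=format:%s | grep -oP '\[docs-override v\K[^\]]+') || true
echo "override=${override}"   # prints: override=1.0.0
```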

View File

@@ -0,0 +1,14 @@
tasks:
- checkout:
force-clean-build: 'true'
- script: |
set -x
docker run --rm --user 555:555 -v ${bamboo.hostvolume}:/code -w /code \
${bamboo.container_semrel} \
npx semantic-release \
--repository-url https://${bamboo.httpsaccesskey_secret}@devstash.vanderlande.com/scm/ittp/as-vi-cnv.git
other:
clean-working-dir: true
requirements:
- system.docker.executable
- AGENT_TYPE: Linux_Base_Agent

View File

@@ -0,0 +1,22 @@
---
## Molecule deploy and test
tasks:
- script: |
#!/bin/bash
set -ex
# Run ansible-lint for the first set of roles (cp/lifecycle)
if ! docker run --rm --volume ${bamboo.hostvolume}:/data \
--workdir=/data \
${bamboo.container_molecule} \
ansible-lint -c .ansible-lint.yml; then
echo "ERROR: Ansible Lint failed. Check the output for details."
exit 1 # Stop the script immediately
fi
echo "Ansible Lint successful for all ansible/collections/ansible_collections!"
other:
clean-working-dir: true
requirements:
- system.docker.executable
- AGENT_TYPE: Linux_Base_Agent

View File

@@ -0,0 +1,10 @@
tasks:
- script: |
#!/bin/bash
set -ex
docker run --rm ${bamboo.container_jfrog} jfrog rt ping --user ${bamboo.snapshot_deployer_username} --password ${bamboo.snapshot_deployer_password} --url https://devstore.vanderlande.com/artifactory
other:
clean-working-dir: true
requirements:
- AGENT_TYPE: Linux_Base_Agent

View File

@@ -0,0 +1,75 @@
tasks:
- checkout:
force-clean-build: 'true'
- script: |
#!/bin/bash
set -euxo pipefail
# Ensure there's at least one previous commit
if git rev-parse HEAD~1 >/dev/null 2>&1; then
# Collect changed *.md files under docs/ (ignore deletions)
CHANGED_MD_FILES=$(git diff --name-status HEAD~1 HEAD | \
awk '$1 != "D" {print $2}' | grep '^docs/.*\.md$' || true)
else
echo "No previous commit to compare against. Skipping update."
exit 0
fi
if [[ -z "${CHANGED_MD_FILES}" ]]; then
echo "No relevant markdown files changed under docs/. Skipping Confluence update."
exit 0
fi
# Parse minor version from semantic version
MINOR_VERSION=$(echo "${bamboo.inject.gtag}" | grep -Eo "^[0-9]+\.[0-9]+")
# Inject version numbers into documentation
sed -i "s/{{ release_version }}/${bamboo.inject.gtag}/g;s/{{ minor_version }}/${MINOR_VERSION}/g" README.md
sed -i "s/{{ release_version }}/${bamboo.inject.gtag}/g;s/{{ minor_version }}/${MINOR_VERSION}/g" docs/*.md
# Create temporary folder
mkdir -p ./vi_certs
# Download latest Vanderlande CA certificates
curl https://pki.vanderlande.com/pki/VanderlandeRootCA.crt -o - | \
openssl x509 -inform DER -out ./vi_certs/VanderlandeRootCA.crt
curl https://pki.vanderlande.com/pki/VanderlandeSubordinateCA-Internal.crt \
-o ./vi_certs/VanderlandeSubordinateCA-Internal.crt
echo "---"
echo "Starting Confluence update for the following files:"
echo "${CHANGED_MD_FILES}"
echo "---"
# Since -f only accepts one file, we must loop through the list of changed files.
for file in ${CHANGED_MD_FILES}
do
echo "Processing file: ${file}"
# Run a separate docker command for each file
docker run --rm --name "confluence-docs-update" \
-v "${bamboo.hostvolume}:/code" \
-v "${bamboo.hostvolume}/vi_certs:/usr/local/share/ca-certificates" \
-w /code \
"${bamboo.container_mark}" \
/bin/bash -c "\
update-ca-certificates && \
mark -u '${bamboo.confluence_username}' \
-p '${bamboo.confluence_password}' \
-b '${bamboo.confluence_url}' \
--ci --changes-only \
--title-from-h1 \
--space ${bamboo.confluence_space} \
--parents 'IT Technology Platform/Team Devcolla'\''s/Application Stack/Harvester Cloud Native Virtualization' \
-f '${file}'"
echo "Finished processing ${file}."
echo "---"
done
other:
clean-working-dir: true
requirements:
- system.docker.executable
- AGENT_TYPE: Linux_Base_Agent

View File

@@ -0,0 +1,63 @@
tasks:
- checkout:
force-clean-build: 'true'
- script: |
#!/bin/bash
set -x
# Parse minor version from semantic version
MINOR_VERSION=$(echo "${bamboo.inject.gtag}" | grep -Eo "^[0-9]+\.[0-9]+")
# Inject version numbers into documentation
sed -i "s/{{ release_version }}/${bamboo.inject.gtag}/g;s/{{ minor_version }}/${MINOR_VERSION}/g" README.md
sed -i "s/{{ release_version }}/${bamboo.inject.gtag}/g;s/{{ minor_version }}/${MINOR_VERSION}/g" docs/*.md
# Create temporary folder
mkdir -p ./vi_certs
# Download latest Vanderlande certificate authority certificates
curl https://pki.vanderlande.com/pki/VanderlandeRootCA.crt -o - | openssl x509 -inform DER -out ./vi_certs/VanderlandeRootCA.crt
curl https://pki.vanderlande.com/pki/VanderlandeSubordinateCA-Internal.crt -o ./vi_certs/VanderlandeSubordinateCA-Internal.crt
# Update README markdown file
docker run --rm --name confluence-docs-update \
-v "${bamboo.hostvolume}:/code" \
-v "${bamboo.hostvolume}/vi_certs:/usr/local/share/ca-certificates" \
-w /code \
"${bamboo.container_mark}" \
/bin/bash -c "\
update-ca-certificates && \
mark \
-u '${bamboo.confluence_username}' \
-p '${bamboo.confluence_password}' \
-b '${bamboo.confluence_url}' \
--title-from-h1 \
--space ${bamboo.confluence_space} \
--parents 'IT Technology Platform/Team Devcolla'\''s/Application Stack/Harvester Cloud Native Virtualization' \
--dry-run \
-f './README.md' || exit 1"
# Update all markdown files in docs/
docker run --rm --name confluence-docs-update \
-v "${bamboo.hostvolume}:/code" \
-v "${bamboo.hostvolume}/vi_certs:/usr/local/share/ca-certificates" \
-w /code \
"${bamboo.container_mark}" \
/bin/bash -c "\
update-ca-certificates && \
mark \
-u '${bamboo.confluence_username}' \
-p '${bamboo.confluence_password}' \
-b '${bamboo.confluence_url}' \
--ci --changes-only \
--title-from-h1 \
--space ${bamboo.confluence_space} \
--parents 'IT Technology Platform/Team Devcolla'\''s/Application Stack/Harvester Cloud Native Virtualization' \
--dry-run \
-f './docs/*.md' || exit 1"
other:
clean-working-dir: true
requirements:
- system.docker.executable
- AGENT_TYPE: Linux_Base_Agent

View File

@@ -0,0 +1,61 @@
tasks:
- checkout:
force-clean-build: 'true'
- script: |
#!/bin/bash
set -x
# Parse minor version from semantic version
MINOR_VERSION=$(echo "${bamboo.inject.gtag}" | grep -Eo "^[0-9]+\.[0-9]+")
# Inject version numbers into documentation
sed -i "s/{{ release_version }}/${bamboo.inject.gtag}/g;s/{{ minor_version }}/${MINOR_VERSION}/g" README.md
sed -i "s/{{ release_version }}/${bamboo.inject.gtag}/g;s/{{ minor_version }}/${MINOR_VERSION}/g" docs/*.md
# Create temporary folder
mkdir -p ./vi_certs
# Download latest Vanderlande certificate authority certificates
curl https://pki.vanderlande.com/pki/VanderlandeRootCA.crt -o - | openssl x509 -inform DER -out ./vi_certs/VanderlandeRootCA.crt
curl https://pki.vanderlande.com/pki/VanderlandeSubordinateCA-Internal.crt -o ./vi_certs/VanderlandeSubordinateCA-Internal.crt
# Update README markdown file
docker run --rm --name confluence-docs-update \
-v "${bamboo.hostvolume}:/code" \
-v "${bamboo.hostvolume}/vi_certs:/usr/local/share/ca-certificates" \
-w /code \
"${bamboo.container_mark}" \
/bin/bash -c "\
update-ca-certificates && \
mark \
-u '${bamboo.confluence_username}' \
-p '${bamboo.confluence_password}' \
-b '${bamboo.confluence_url}' \
--title-from-h1 \
--space ${bamboo.confluence_space} \
--parents 'IT Technology Platform/Team Devcolla'\''s/Application Stack/Harvester Cloud Native Virtualization' \
-f './README.md' || exit 1"
# Update all markdown files in docs/
docker run --rm --name confluence-docs-update \
-v "${bamboo.hostvolume}:/code" \
-v "${bamboo.hostvolume}/vi_certs:/usr/local/share/ca-certificates" \
-w /code \
"${bamboo.container_mark}" \
/bin/bash -c "\
update-ca-certificates && \
mark \
-u '${bamboo.confluence_username}' \
-p '${bamboo.confluence_password}' \
-b '${bamboo.confluence_url}' \
--ci --changes-only \
--title-from-h1 \
--space ${bamboo.confluence_space} \
--parents 'IT Technology Platform/Team Devcolla'\''s/Application Stack/Harvester Cloud Native Virtualization' \
-f './docs/*.md' || exit 1"
other:
clean-working-dir: true
requirements:
- system.docker.executable
- AGENT_TYPE: Linux_Base_Agent

View File

@@ -0,0 +1,120 @@
apiVersion: v1
data:
cloudInit: |
#cloud-config
package_update: false
package_upgrade: false
snap:
commands:
00: snap refresh --hold=forever
package_reboot_if_required: true
packages:
- qemu-guest-agent
- yq
- jq
runcmd:
- sysctl -w net.ipv6.conf.all.disable_ipv6=1
- systemctl enable --now qemu-guest-agent.service
- [sh, '/root/updates.sh']
disable_root: true
ssh_pwauth: false
groups:
- etcd
users:
- name: rancher
gecos: Rancher service account
hashed_passwd: $6$Jn9gljJAbr9tjxD2$4D4O5YokrpYvYd5lznvtuWRPWWcREo325pEhn5r5vzfIU/1fX6werOG4LlXxNNBOkmbKaabekQ9NQL32IZOiH1
lock_passwd: false
shell: /bin/bash
groups: [users, sudo, docker]
sudo: ALL=(ALL:ALL) ALL
ssh_authorized_keys:
- 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEwWnnOTAu0LlAZRczQ0Z0KvNlUdPhGQhpZie+nF1O3s'
- name: etcd
gecos: ETCD service account
lock_passwd: true
shell: /sbin/nologin
groups: [etcd]
write_files:
- path: /root/updates.sh
permissions: '0550'
content: |
#!/bin/bash
export DEBIAN_FRONTEND=noninteractive
apt-mark hold linux-headers-generic
apt-mark hold linux-headers-virtual
apt-mark hold linux-image-virtual
apt-mark hold linux-virtual
apt-get update
apt-get upgrade -y
apt-get autoremove -y
- path: /var/lib/rancher/rke2/server/manifests/disable-sa-automount.yaml
permissions: '0600'
owner: root:root
content: |
apiVersion: v1
kind: ServiceAccount
metadata:
name: disable-automount-sa
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: disable-automount-clusterrole
rules:
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["serviceaccounts"]
verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: disable-automount-binding
subjects:
- kind: ServiceAccount
name: disable-automount-sa
namespace: kube-system
roleRef:
kind: ClusterRole
name: disable-automount-clusterrole
apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: CronJob
metadata:
name: disable-default-sa-automount
namespace: kube-system
spec:
schedule: "0 0 * * *"
concurrencyPolicy: Forbid
jobTemplate:
spec:
template:
spec:
serviceAccountName: disable-automount-sa
containers:
- name: kubectl-patcher
image: alpine/kubectl:1.35.0
command:
- /bin/sh
- -c
- |
for n in $(kubectl get namespaces -o=jsonpath="{.items[*]['metadata.name']}"); do
echo "Patching default SA in namespace: $n"
kubectl patch serviceaccount default -p '{"automountServiceAccountToken": false}' -n $n
done
restartPolicy: OnFailure
kind: ConfigMap
metadata:
labels:
harvesterhci.io/cloud-init-template: user
name: rke2-ubuntu-22.04-cloudinit-cp
namespace: vanderlande

View File

@@ -0,0 +1,52 @@
apiVersion: v1
data:
cloudInit: |
#cloud-config
package_update: false
package_upgrade: false
snap:
commands:
00: snap refresh --hold=forever
package_reboot_if_required: true
packages:
- qemu-guest-agent
- yq
- jq
runcmd:
- sysctl -w net.ipv6.conf.all.disable_ipv6=1
- systemctl enable --now qemu-guest-agent.service
- [sh, '/root/updates.sh']
disable_root: true
ssh_pwauth: false
users:
- name: rancher
gecos: Rancher service account
hashed_passwd: $6$Jn9gljJAbr9tjxD2$4D4O5YokrpYvYd5lznvtuWRPWWcREo325pEhn5r5vzfIU/1fX6werOG4LlXxNNBOkmbKaabekQ9NQL32IZOiH1
lock_passwd: false
shell: /bin/bash
groups: [users, sudo, docker]
sudo: ALL=(ALL:ALL) ALL
ssh_authorized_keys:
- 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEwWnnOTAu0LlAZRczQ0Z0KvNlUdPhGQhpZie+nF1O3s'
write_files:
- path: /root/updates.sh
permissions: '0550'
content: |
#!/bin/bash
export DEBIAN_FRONTEND=noninteractive
apt-mark hold linux-headers-generic
apt-mark hold linux-headers-virtual
apt-mark hold linux-image-virtual
apt-mark hold linux-virtual
apt-get update
apt-get upgrade -y
apt-get autoremove -y
kind: ConfigMap
metadata:
labels:
harvesterhci.io/cloud-init-template: user
name: rke2-ubuntu-22.04-cloudinit
namespace: vanderlande

View File

@@ -0,0 +1,120 @@
apiVersion: v1
data:
cloudInit: |
#cloud-config
package_update: false
package_upgrade: false
snap:
commands:
00: snap refresh --hold=forever
package_reboot_if_required: true
packages:
- qemu-guest-agent
- yq
- jq
runcmd:
- sysctl -w net.ipv6.conf.all.disable_ipv6=1
- systemctl enable --now qemu-guest-agent.service
- [sh, '/root/updates.sh']
disable_root: true
ssh_pwauth: false
groups:
- etcd
users:
- name: rancher
gecos: Rancher service account
hashed_passwd: $6$Jn9gljJAbr9tjxD2$4D4O5YokrpYvYd5lznvtuWRPWWcREo325pEhn5r5vzfIU/1fX6werOG4LlXxNNBOkmbKaabekQ9NQL32IZOiH1
lock_passwd: false
shell: /bin/bash
groups: [users, sudo, docker]
sudo: ALL=(ALL:ALL) ALL
ssh_authorized_keys:
- 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEwWnnOTAu0LlAZRczQ0Z0KvNlUdPhGQhpZie+nF1O3s'
- name: etcd
gecos: ETCD service account
lock_passwd: true
shell: /sbin/nologin
groups: [etcd]
write_files:
- path: /root/updates.sh
permissions: '0550'
owner: root:root
content: |
#!/bin/bash
export DEBIAN_FRONTEND=noninteractive
apt-mark hold linux-headers-generic
apt-mark hold linux-headers-virtual
apt-mark hold linux-image-virtual
apt-mark hold linux-virtual
apt-get update
apt-get upgrade -y
apt-get autoremove -y
- path: /var/lib/rancher/rke2/server/manifests/disable-sa-automount.yaml
permissions: '0600'
owner: root:root
content: |
apiVersion: v1
kind: ServiceAccount
metadata:
name: disable-automount-sa
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: disable-automount-clusterrole
rules:
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["serviceaccounts"]
verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: disable-automount-binding
subjects:
- kind: ServiceAccount
name: disable-automount-sa
namespace: kube-system
roleRef:
kind: ClusterRole
name: disable-automount-clusterrole
apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: CronJob
metadata:
name: disable-default-sa-automount
namespace: kube-system
spec:
schedule: "0 0 * * *"
concurrencyPolicy: Forbid
jobTemplate:
spec:
template:
spec:
serviceAccountName: disable-automount-sa
containers:
- name: kubectl-patcher
image: alpine/kubectl:1.35.0
command:
- /bin/sh
- -c
- |
for n in $(kubectl get namespaces -o=jsonpath="{.items[*]['metadata.name']}"); do
echo "Patching default SA in namespace: $n"
kubectl patch serviceaccount default -p '{"automountServiceAccountToken": false}' -n $n
done
restartPolicy: OnFailure
kind: ConfigMap
metadata:
labels:
harvesterhci.io/cloud-init-template: user
name: rke2-ubuntu-24.04-cloudinit-cp
namespace: vanderlande

View File

@@ -0,0 +1,52 @@
apiVersion: v1
data:
cloudInit: |
#cloud-config
package_update: false
package_upgrade: false
snap:
commands:
00: snap refresh --hold=forever
package_reboot_if_required: true
packages:
- qemu-guest-agent
- yq
- jq
runcmd:
- sysctl -w net.ipv6.conf.all.disable_ipv6=1
- systemctl enable --now qemu-guest-agent.service
- [sh, '/root/updates.sh']
disable_root: true
ssh_pwauth: false
users:
- name: rancher
gecos: Rancher service account
hashed_passwd: $6$Jn9gljJAbr9tjxD2$4D4O5YokrpYvYd5lznvtuWRPWWcREo325pEhn5r5vzfIU/1fX6werOG4LlXxNNBOkmbKaabekQ9NQL32IZOiH1
lock_passwd: false
shell: /bin/bash
groups: [users, sudo, docker]
sudo: ALL=(ALL:ALL) ALL
ssh_authorized_keys:
- 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEwWnnOTAu0LlAZRczQ0Z0KvNlUdPhGQhpZie+nF1O3s'
write_files:
- path: /root/updates.sh
permissions: '0550'
content: |
#!/bin/bash
export DEBIAN_FRONTEND=noninteractive
apt-mark hold linux-headers-generic
apt-mark hold linux-headers-virtual
apt-mark hold linux-image-virtual
apt-mark hold linux-virtual
apt-get update
apt-get upgrade -y
apt-get autoremove -y
kind: ConfigMap
metadata:
labels:
harvesterhci.io/cloud-init-template: user
name: rke2-ubuntu-24.04-cloudinit
namespace: vanderlande

View File

@@ -0,0 +1,33 @@
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineImage
metadata:
annotations:
harvesterhci.io/storageClassName: harvester-longhorn
finalizers:
- wrangler.cattle.io/vm-image-controller
generateName: ubuntu-22.04-
generation: 1
labels:
harvesterhci.io/image-type: raw_qcow2
harvesterhci.io/imageDisplayName: ubuntu-22.04-2025-11-25
harvesterhci.io/os-release-date: '2025-11-25'
harvesterhci.io/os-type: ubuntu
harvesterhci.io/os-version: '22.04'
name: ubuntu-22.04-7mg64
namespace: vanderlande
uid: 894bb600-bb7d-4bd3-926f-b91616cd54be
spec:
backend: backingimage
checksum: ''
displayName: ubuntu-22.04-2025-11-25
pvcName: ''
pvcNamespace: ''
retry: 3
sourceType: download
storageClassParameters:
migratable: 'true'
numberOfReplicas: '3'
staleReplicaTimeout: '30'
targetStorageClassName: harvester-longhorn
url: >-
https://cloud-images.ubuntu.com/jammy/20251125/jammy-server-cloudimg-amd64.img

View File

@@ -0,0 +1,33 @@
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineImage
metadata:
annotations:
harvesterhci.io/storageClassName: harvester-longhorn
finalizers:
- wrangler.cattle.io/vm-image-controller
generateName: ubuntu-24.04-
generation: 1
labels:
harvesterhci.io/image-type: raw_qcow2
harvesterhci.io/imageDisplayName: ubuntu-24.04-2025-11-26
harvesterhci.io/os-release-date: '2025-11-26'
harvesterhci.io/os-type: ubuntu
harvesterhci.io/os-version: '24.04'
name: ubuntu-24.04-qhtpc
namespace: vanderlande
uid: 23b60ae3-d5bd-4b10-9587-94e56b39c018
spec:
backend: backingimage
checksum: ''
displayName: ubuntu-24.04-2025-11-26
pvcName: ''
pvcNamespace: ''
retry: 3
sourceType: download
storageClassParameters:
migratable: 'true'
numberOfReplicas: '3'
staleReplicaTimeout: '30'
targetStorageClassName: harvester-longhorn
url: >-
https://cloud-images.ubuntu.com/noble/20251126/noble-server-cloudimg-amd64.img

View File

@@ -0,0 +1,94 @@
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineTemplateVersion
metadata:
annotations:
template-version.harvesterhci.io/customName: m8HEQq4ebp
generateName: rke2-ubuntu-22.04-
generation: 2
labels:
template.harvesterhci.io/templateID: rke2-ubuntu-22.04
name: rke2-ubuntu-22.04-8fzp2
namespace: vanderlande
ownerReferences:
- apiVersion: harvesterhci.io/v1beta1
blockOwnerDeletion: true
controller: true
kind: VirtualMachineTemplate
name: rke2-ubuntu-22.04
# UID of the VirtualMachineTemplate to link to
uid: 8358985a-2a3d-4d06-a656-eb5e69d3137d
# UID of this VirtualMachineTemplateVersion; referenced by the cloud-init secret's ownerReference
uid: 0c581ffb-8681-4054-a3c1-078a22dc53d8
spec:
templateId: vanderlande/rke2-ubuntu-22.04
vm:
metadata:
annotations:
harvesterhci.io/enableCPUAndMemoryHotplug: 'true'
# Image StorageClass name is defined by the image suffix, i.e. ubuntu-22.04-7mg64 -> longhorn-image-7mg64
harvesterhci.io/volumeClaimTemplates: '[{"metadata":{"name":"-disk-0-q0xip","annotations":{"harvesterhci.io/imageId":"vanderlande/image-7mg64"}},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"60Gi"}},"volumeMode":"Block","storageClassName":"longhorn-image-7mg64"}}]'
template-version.harvesterhci.io/customName: m8HEQq4ebp
creationTimestamp: null
labels:
harvesterhci.io/os: ubuntu
spec:
runStrategy: RerunOnFailure
template:
metadata:
annotations:
harvesterhci.io/sshNames: '["vanderlande/harvester-cnv-node"]'
creationTimestamp: null
spec:
affinity: {}
domain:
cpu:
cores: 1
maxSockets: 16
sockets: 4
threads: 1
devices:
disks:
- bootOrder: 1
disk:
bus: virtio
name: disk-0
- disk:
bus: virtio
name: cloudinitdisk
inputs:
- bus: usb
name: tablet
type: tablet
interfaces:
- bridge: {}
model: virtio
name: default
features:
acpi:
enabled: true
machine:
type: ''
memory:
guest: 8Gi
maxGuest: 32Gi
resources:
limits:
cpu: '16'
memory: 32Gi
evictionStrategy: LiveMigrateIfPossible
networks:
- multus:
networkName: vanderlande/vm-lan
name: default
terminationGracePeriodSeconds: 120
volumes:
- name: disk-0
persistentVolumeClaim:
claimName: '-disk-0-q0xip'
- cloudInitNoCloud:
networkDataSecretRef:
name: rke2-ubuntu-22.04-lbbfn
secretRef:
name: rke2-ubuntu-22.04-lbbfn
name: cloudinitdisk
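
A hedged sketch of verifying the image-suffix-to-StorageClass mapping noted in the `volumeClaimTemplates` annotation above, assuming kubectl is pointed at the Harvester cluster (object names taken from the manifests in this commit):

```bash
# The VirtualMachineImage name ends in a generated suffix (here: 7mg64);
# Harvester creates a matching image-backed StorageClass named longhorn-image-<suffix>.
image=ubuntu-22.04-7mg64
suffix=${image##*-}
kubectl get storageclass "longhorn-image-${suffix}"

# The template version's volumeClaimTemplates annotation must reference the same class.
kubectl -n vanderlande get virtualmachinetemplateversion rke2-ubuntu-22.04-8fzp2 \
  -o jsonpath='{.spec.vm.metadata.annotations.harvesterhci\.io/volumeClaimTemplates}'
```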

View File

@@ -0,0 +1,18 @@
apiVersion: v1
data:
# Updated user data should be imported from rke2-ubuntu-22.04-cloudinit and base64 encoded
networkdata: ""
userdata: I2Nsb3VkLWNvbmZpZwpwYWNrYWdlX3VwZGF0ZTogZmFsc2UKcGFja2FnZV91cGdyYWRlOiBmYWxzZQpzbmFwOgogIGNvbW1hbmRzOgogICAgMDogc25hcCByZWZyZXNoIC0taG9sZD1mb3JldmVyCnBhY2thZ2VfcmVib290X2lmX3JlcXVpcmVkOiB0cnVlCnBhY2thZ2VzOgogIC0gcWVtdS1ndWVzdC1hZ2VudAogIC0geXEKICAtIGpxCgpydW5jbWQ6CiAgLSBzeXNjdGwgLXcgbmV0LmlwdjYuY29uZi5hbGwuZGlzYWJsZV9pcHY2PTEKICAtIHN5c3RlbWN0bCBlbmFibGUgLS1ub3cgcWVtdS1ndWVzdC1hZ2VudC5zZXJ2aWNlCiAgLSAtIHNoCiAgICAtIC9yb290L3VwZGF0ZXMuc2gKCmRpc2FibGVfcm9vdDogdHJ1ZQpzc2hfcHdhdXRoOiBmYWxzZQp1c2VyczoKICAtIG5hbWU6IHJhbmNoZXIKICAgIGdlY29zOiBSYW5jaGVyIHNlcnZpY2UgYWNjb3VudAogICAgaGFzaGVkX3Bhc3N3ZDogJDYkSm45Z2xqSkFicjl0anhEMiQ0RDRPNVlva3JwWXZZZDVsem52dHVXUlBXV2NSRW8zMjVwRWhuNXI1dnpmSVUvMWZYNndlck9HNExsWHhOTkJPa21iS2FhYmVrUTlOUUwzMklaT2lIMQogICAgbG9ja19wYXNzd2Q6IGZhbHNlCiAgICBzaGVsbDogL2Jpbi9iYXNoCiAgICBncm91cHM6IFsgdXNlcnMsIHN1ZG8sIGRvY2tlciBdCiAgICBzdWRvOiBBTEw9KEFMTCkKICAgIHNzaF9hdXRob3JpemVkX2tleXM6CiAgICAgIC0gJ3NzaC1lZDI1NTE5CiAgICAgICAgQUFBQUMzTnphQzFsWkRJMU5URTVBQUFBSUV3V25uT1RBdTBMbEFaUmN6UTBaMEt2TmxVZFBoR1FocFppZStuRjFPM3MnCgp3cml0ZV9maWxlczoKICAtIHBhdGg6IC9yb290L3VwZGF0ZXMuc2gKICAgIHBlcm1pc3Npb25zOiAnMDU1MCcKICAgIGNvbnRlbnQ6IHwKICAgICAgIyEvYmluL2Jhc2gKICAgICAgZXhwb3J0IERFQklBTl9GUk9OVEVORD1ub25pbnRlcmFjdGl2ZQogICAgICBhcHQtbWFyayBob2xkIGxpbnV4LWhlYWRlcnMtZ2VuZXJpYwogICAgICBhcHQtbWFyayBob2xkIGxpbnV4LWhlYWRlcnMtdmlydHVhbAogICAgICBhcHQtbWFyayBob2xkIGxpbnV4LWltYWdlLXZpcnR1YWwKICAgICAgYXB0LW1hcmsgaG9sZCBsaW51eC12aXJ0dWFsCiAgICAgIGFwdC1nZXQgdXBkYXRlCiAgICAgIGFwdC1nZXQgdXBncmFkZSAteQogICAgICBhcHQtZ2V0IGF1dG9yZW1vdmUgLXkgICAgCnNzaF9hdXRob3JpemVkX2tleXM6CiAgLSBzc2gtZWQyNTUxOQogICAgQUFBQUMzTnphQzFsWkRJMU5URTVBQUFBSUV3V25uT1RBdTBMbEFaUmN6UTBaMEt2TmxVZFBoR1FocFppZStuRjFPM3MKICAgIEhhcnZlc3RlciBDTlYgTm9kZQo=
kind: Secret
metadata:
labels:
harvesterhci.io/cloud-init-template: harvester
name: rke2-ubuntu-22.04-lbbfn
namespace: vanderlande
ownerReferences:
- apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineTemplateVersion
name: rke2-ubuntu-22.04-8fzp2
# UID of the VirtualMachineTemplateVersion to link to
uid: 0c581ffb-8681-4054-a3c1-078a22dc53d8
type: secret
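
A minimal sketch of regenerating the `userdata` field the way the comment above prescribes, assuming kubectl access to the `vanderlande` namespace (GNU `base64` is used for the `-w0` flag):

```bash
# Read the cloud-init document from the rke2-ubuntu-22.04-cloudinit ConfigMap,
# base64-encode it, and patch it into this secret's userdata field.
userdata=$(kubectl -n vanderlande get configmap rke2-ubuntu-22.04-cloudinit \
  -o jsonpath='{.data.cloudInit}' | base64 -w0)

kubectl -n vanderlande patch secret rke2-ubuntu-22.04-lbbfn \
  --type merge -p "{\"data\":{\"userdata\":\"${userdata}\"}}"
```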

View File

@@ -0,0 +1,10 @@
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineTemplate
metadata:
name: rke2-ubuntu-22.04
namespace: vanderlande
# UID needs to be specified explicitly as it is used in template version.
uid: 8358985a-2a3d-4d06-a656-eb5e69d3137d
spec:
defaultVersionId: vanderlande/rke2-ubuntu-22.04-8fzp2

View File

@@ -0,0 +1,18 @@
apiVersion: v1
data:
networkdata: ""
# Updated user data should be imported from rke2-ubuntu-24.04-cloudinit and base64 encoded
userdata: I2Nsb3VkLWNvbmZpZwpwYWNrYWdlX3VwZGF0ZTogdHJ1ZQpwYWNrYWdlX3VwZ3JhZGU6IGZhbHNlCnNuYXA6CiAgY29tbWFuZHM6CiAgICAwOiBzbmFwIHJlZnJlc2ggLS1ob2xkPWZvcmV2ZXIKcGFja2FnZV9yZWJvb3RfaWZfcmVxdWlyZWQ6IHRydWUKcGFja2FnZXM6CiAgLSBxZW11LWd1ZXN0LWFnZW50CiAgLSB5cQogIC0ganEKCnJ1bmNtZDoKICAtIHN5c2N0bCAtdyBuZXQuaXB2Ni5jb25mLmFsbC5kaXNhYmxlX2lwdjY9MQogIC0gc3lzdGVtY3RsIGVuYWJsZSAtLW5vdyBxZW11LWd1ZXN0LWFnZW50LnNlcnZpY2UKICAtIC0gc2gKICAgIC0gL3Jvb3QvdXBkYXRlcy5zaAoKZGlzYWJsZV9yb290OiB0cnVlCnNzaF9wd2F1dGg6IGZhbHNlCnVzZXJzOgogIC0gbmFtZTogcmFuY2hlcgogICAgZ2Vjb3M6IFJhbmNoZXIgc2VydmljZSBhY2NvdW50CiAgICBoYXNoZWRfcGFzc3dkOiAkNiRKbjlnbGpKQWJyOXRqeEQyJDRENE81WW9rcnBZdllkNWx6bnZ0dVdSUFdXY1JFbzMyNXBFaG41cjV2emZJVS8xZlg2d2VyT0c0TGxYeE5OQk9rbWJLYWFiZWtROU5RTDMySVpPaUgxCiAgICBsb2NrX3Bhc3N3ZDogZmFsc2UKICAgIHNoZWxsOiAvYmluL2Jhc2gKICAgIGdyb3VwczogWyB1c2Vycywgc3VkbywgZG9ja2VyIF0KICAgIHN1ZG86IEFMTD0oQUxMOkFMTCkgQUxMCiAgICBzc2hfYXV0aG9yaXplZF9rZXlzOgogICAgICAtICdzc2gtZWQyNTUxOQogICAgICAgIEFBQUFDM056YUMxbFpESTFOVEU1QUFBQUlFd1dubk9UQXUwTGxBWlJjelEwWjBLdk5sVWRQaEdRaHBaaWUrbkYxTzNzJwoKd3JpdGVfZmlsZXM6CiAgLSBwYXRoOiAvcm9vdC91cGRhdGVzLnNoCiAgICBwZXJtaXNzaW9uczogJzA1NTAnCiAgICBjb250ZW50OiB8CiAgICAgICMhL2Jpbi9iYXNoCiAgICAgIGV4cG9ydCBERUJJQU5fRlJPTlRFTkQ9bm9uaW50ZXJhY3RpdmUKICAgICAgYXB0LW1hcmsgaG9sZCBsaW51eC1oZWFkZXJzLWdlbmVyaWMKICAgICAgYXB0LW1hcmsgaG9sZCBsaW51eC1oZWFkZXJzLXZpcnR1YWwKICAgICAgYXB0LW1hcmsgaG9sZCBsaW51eC1pbWFnZS12aXJ0dWFsCiAgICAgIGFwdC1tYXJrIGhvbGQgbGludXgtdmlydHVhbAogICAgICBhcHQtZ2V0IHVwZGF0ZQogICAgICBhcHQtZ2V0IHVwZ3JhZGUgLXkKICAgICAgYXB0LWdldCBhdXRvcmVtb3ZlIC15ICAgIApzc2hfYXV0aG9yaXplZF9rZXlzOgogIC0gc3NoLWVkMjU1MTkKICAgIEFBQUFDM056YUMxbFpESTFOVEU1QUFBQUlFd1dubk9UQXUwTGxBWlJjelEwWjBLdk5sVWRQaEdRaHBaaWUrbkYxTzNzCiAgICBIYXJ2ZXN0ZXIgQ05WIE5vZGUK
kind: Secret
metadata:
labels:
harvesterhci.io/cloud-init-template: harvester
name: rke2-ubuntu-24.04-3bl5k
namespace: vanderlande
ownerReferences:
- apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineTemplateVersion
name: rke2-ubuntu-24.04-xrv5n
# UID of the VirtualMachineTemplateVersion to link to
uid: ad96ea4b-3d5a-4de3-adb0-0eb3c99920b2
type: secret

View File

@@ -0,0 +1,10 @@
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineTemplate
metadata:
name: rke2-ubuntu-24.04
namespace: vanderlande
# UID needs to be specified explicitly as it is used in template version and secret.
uid: cf644217-0be1-47f0-8c7f-2594f633da26
spec:
defaultVersionId: vanderlande/rke2-ubuntu-24.04-xrv5n

View File

@@ -0,0 +1,94 @@
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineTemplateVersion
metadata:
annotations:
template-version.harvesterhci.io/customName: VfNPzXKspc
generateName: rke2-ubuntu-24.04-
generation: 2
labels:
template.harvesterhci.io/templateID: rke2-ubuntu-24.04
name: rke2-ubuntu-24.04-xrv5n
namespace: vanderlande
ownerReferences:
- apiVersion: harvesterhci.io/v1beta1
blockOwnerDeletion: true
controller: true
kind: VirtualMachineTemplate
name: rke2-ubuntu-24.04
# UID of the VirtualMachineTemplate to link to
uid: cf644217-0be1-47f0-8c7f-2594f633da26
# UID of this VirtualMachineTemplateVersion; referenced by the cloud-init secret's ownerReference
uid: ad96ea4b-3d5a-4de3-adb0-0eb3c99920b2
spec:
templateId: vanderlande/rke2-ubuntu-24.04
vm:
metadata:
annotations:
harvesterhci.io/enableCPUAndMemoryHotplug: "true"
# Image StorageClass name is defined by the image suffix, i.e. ubuntu-24.04-qhtpc -> longhorn-image-qhtpc
harvesterhci.io/volumeClaimTemplates: '[{"metadata":{"name":"-disk-0-jprp0","annotations":{"harvesterhci.io/imageId":"vanderlande/image-qhtpc"}},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"60Gi"}},"volumeMode":"Block","storageClassName":"longhorn-image-qhtpc"}}]'
template-version.harvesterhci.io/customName: VfNPzXKspc
creationTimestamp: null
labels:
harvesterhci.io/os: ubuntu
spec:
runStrategy: RerunOnFailure
template:
metadata:
annotations:
harvesterhci.io/sshNames: '["vanderlande/harvester-cnv-node"]'
creationTimestamp: null
spec:
affinity: {}
domain:
cpu:
cores: 1
maxSockets: 16
sockets: 4
threads: 1
devices:
disks:
- bootOrder: 1
disk:
bus: virtio
name: disk-0
- disk:
bus: virtio
name: cloudinitdisk
inputs:
- bus: usb
name: tablet
type: tablet
interfaces:
- bridge: {}
model: virtio
name: default
features:
acpi:
enabled: true
machine:
type: ""
memory:
guest: 8Gi
maxGuest: 32Gi
resources:
limits:
cpu: "16"
memory: 32Gi
evictionStrategy: LiveMigrateIfPossible
networks:
- multus:
networkName: vanderlande/vm-lan
name: default
terminationGracePeriodSeconds: 120
volumes:
- name: disk-0
persistentVolumeClaim:
claimName: -disk-0-jprp0
- cloudInitNoCloud:
networkDataSecretRef:
name: rke2-ubuntu-24.04-3bl5k
secretRef:
name: rke2-ubuntu-24.04-3bl5k
name: cloudinitdisk

View File

@@ -0,0 +1,21 @@
# HELM IGNORE OPTIONS:
# Patterns to ignore when building Helm packages.
# Supports shell glob matching, relative path matching, and negation (prefixed with !)
.DS_Store
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
*.swp
*.bak
*.tmp
*.orig
*~
.project
.idea/
*.tmproj
.vscode/
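
For illustration, a quick sketch (run from a hypothetical chart working directory) of confirming that these ignore patterns are honoured when the chart is packaged:

```bash
# Package the chart and confirm that files matching the patterns above
# (e.g. *.tmp, .vscode/) are excluded from the resulting archive.
helm package . --destination /tmp
tar -tzf /tmp/rancher-cluster-templates-0.7.2.tgz | grep -E '\.tmp|\.vscode' \
  || echo "ignored patterns excluded as expected"
```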

View File

@@ -0,0 +1,22 @@
apiVersion: v2
name: rancher-cluster-templates
version: 0.7.2
appVersion: 0.7.2
type: application
description: Hardened Rancher Cluster Templates by Rancher Government
icon: https://raw.githubusercontent.com/rancherfederal/carbide-docs/main/static/img/carbide-logo.svg
home: https://github.com/rancherfederal
sources:
- https://github.com/rancherfederal/rancher-cluster-templates
maintainers:
- name: Rancher Government
email: support@ranchergovernment.com
url: https://ranchergovernment.com
annotations:
catalog.cattle.io/type: cluster-template
catalog.cattle.io/namespace: fleet-default
classification: UNCLASSIFIED

View File

@@ -0,0 +1,105 @@
# Rancher Cluster Templates Helm Chart
| Type | Chart Version | App Version |
| :---------: | :-----------: | :---------: |
| application | `0.7.2` | `0.7.2` |
⚠️ This project is still in active development. As we continue to develop it, there will be breaking changes. ⚠️
## Supported Providers
### Currently Available
- AWS Commercial
- AWS GovCloud
- Harvester
- Digital Ocean
- VMWare vSphere
- Custom
### Pending Validation
- Microsoft Azure
## Installing the Chart
### Helm Install via Repository
```bash
helm repo add cluster-templates https://rancherfederal.github.io/rancher-cluster-templates
helm upgrade -i cluster cluster-templates/rancher-cluster-templates -n fleet-default -f values.yaml
```
## Helm Install via Registry
```bash
helm upgrade -i cluster oci://ghcr.io/rancherfederal/charts/rancher-cluster-templates -n fleet-default -f values.yaml
```
## Helm Chart Deployment Status
```bash
helm status cluster -n fleet-default
```
## Uninstalling the Chart
```bash
helm delete cluster -n fleet-default
```
## Chart/Cluster Secrets Management
### Cloud Credentials
If you do not have Cloud Credentials already created within the Rancher Manager, you can create them via `kubectl` with the command(s) below. Eventually, we will be moving these options into the Helm chart!
#### For AWS Credentials
```bash
# with long-term credentials (accessKey and secretKey)
kubectl create secret -n cattle-global-data generic aws-creds-sts --from-literal=amazonec2credentialConfig-defaultRegion=$REGION --from-literal=amazonec2credentialConfig-accessKey=$ACCESSKEY --from-literal=amazonec2credentialConfig-secretKey=$SECRETKEY
kubectl annotate secret -n cattle-global-data aws-creds-sts provisioning.cattle.io/driver=aws
```
```bash
# with temporary credentials (accessKey, secretKey, sessionToken)
kubectl create secret -n cattle-global-data generic aws-creds --from-literal=amazonec2credentialConfig-defaultRegion=$REGION --from-literal=amazonec2credentialConfig-accessKey=$ACCESSKEY --from-literal=amazonec2credentialConfig-secretKey=$SECRETKEY --from-literal=amazonec2credentialConfig-sessionToken=$SESSIONTOKEN
kubectl annotate secret -n cattle-global-data aws-creds provisioning.cattle.io/driver=aws
```
#### For Harvester Credentials
```bash
export CLUSTERID=$(kubectl get clusters.management.cattle.io -o=jsonpath='{range .items[?(@.metadata.labels.provider\.cattle\.io=="harvester")]}{.metadata.name}{"\n"}{end}')
kubectl create secret -n cattle-global-data generic harvester-creds --from-literal=harvestercredentialConfig-clusterId=$CLUSTERID --from-literal=harvestercredentialConfig-clusterType=imported --from-file=harvestercredentialConfig-kubeconfigContent=harvester.yaml
kubectl annotate secret -n cattle-global-data harvester-creds provisioning.cattle.io/driver=harvester
```
#### For Digital Ocean Credentials
```bash
kubectl create secret -n cattle-global-data generic digitalocean-creds --from-literal=digitaloceancredentialConfig-accessToken=$TOKEN
kubectl annotate secret -n cattle-global-data digitalocean-creds provisioning.cattle.io/driver=digitalocean
```
#### For VMWare vSphere Credentials
```bash
kubectl create secret -n cattle-global-data generic vsphere-creds --from-literal=vmwarevspherecredentialConfig-vcenter=$VCENTER --from-literal=vmwarevspherecredentialConfig-username=$USERNAME --from-literal=vmwarevspherecredentialConfig-password=$PASSWORD
kubectl annotate secret -n cattle-global-data vsphere-creds provisioning.cattle.io/driver=vmwarevsphere
```
### Registry Credentials
If you are configuring an authenticated registry and do not have Registry Credentials created in the Rancher Manager, you can create them via `kubectl` with the command below:
```bash
kubectl create secret -n fleet-default generic --type kubernetes.io/basic-auth registry-creds --from-literal=username=USERNAME --from-literal=password=PASSWORD
```

View File

@@ -0,0 +1,561 @@
# questions:
# - variable: cluster.name
# default: mycluster
# description: 'Specify the name of the cluster'
# label: 'Cluster Name'
# required: true
# type: string
# group: 'General'
# - variable: cloudCredentialSecretName
# default:
# description: 'CloudCredentialName for provisioning cluster'
# label: 'CloudCredential Name'
# type: cloudcredential
# group: 'General'
# - variable: cloudprovider
# default: custom
# description: 'Specify Infrastructure provider for underlying nodes'
# label: 'Infrastructure Provider'
# type: enum
# required: true
# options:
# - amazonec2
# - azure
# - digitalocean
# - elemental
# - harvester
# - vsphere
# - custom
# group: 'General'
# - variable: kubernetesVersion
# default: v1.31.5+rke2r1
# description: 'Specify Kubernetes Version'
# label: 'Kubernetes Version'
# type: enum
# required: true
# options:
# - v1.31.5+rke2r1
# - v1.30.9+rke2r1
# - v1.29.13+rke2r1
# group: 'General'
# - variable: localClusterAuthEndpoint.enabled
# default: false
# label: 'Local Auth Access Endpoint'
# description: 'Enable Local Auth Access Endpoint'
# type: boolean
# group: 'Auth Access Endpoint'
# show_subquestion_if: true
# subquestions:
# - variable: localClusterAuthEndpoint.fqdn
# default:
# description: 'Local Auth Access Endpoint FQDN'
# label: 'Auth Endpoint FQDN'
# type: hostname
# group: 'Auth Access Endpoint'
# - variable: localClusterAuthEndpoint.caCerts
# default:
# label: 'Auth Endpoint Cacerts'
# description: 'Local Auth Access Endpoint CACerts'
# type: multiline
# group: 'Auth Access Endpoint'
# - variable: addons.monitoring.enabled
# default: false
# label: 'Enable Monitoring'
# description: 'Enable Rancher Monitoring'
# type: boolean
# group: 'Monitoring'
# show_subquestion_if: true
# subquestions:
# - variable: monitoring.version
# default:
# label: 'Monitoring Version'
# description: 'Choose chart version of monitoring. If empty latest version will be installed'
# type: string
# group: 'Monitoring'
# - variable: monitoring.values
# default:
# label: 'Monitoring Values'
# description: 'Custom monitoring chart values'
# type: multiline
# group: 'Monitoring'
# - variable: nodepools.0.name
# default:
# description: 'Specify nodepool name'
# type: string
# label: 'Nodepool name'
# required: true
# show_if: cloudprovider=amazonec2 || cloudprovider=vsphere || cloudprovider=azure || cloudprovider=digitalocean || cloudprovider=harvester || cloudprovider=elemental
# group: 'Nodepools'
# - variable: nodepools.0.quantity
# default: 1
# description: 'Specify node count'
# type: int
# required: true
# show_if: cloudprovider=amazonec2 || cloudprovider=vsphere || cloudprovider=azure || cloudprovider=digitalocean || cloudprovider=harvester || cloudprovider=elemental
# label: 'Node count'
# group: 'Nodepools'
# - variable: nodepools.0.etcd
# default: true
# label: etcd
# type: boolean
# show_if: cloudprovider=amazonec2 || cloudprovider=vsphere || cloudprovider=azure || cloudprovider=digitalocean || cloudprovider=harvester || cloudprovider=elemental
# group: 'Nodepools'
# - variable: nodepools.0.worker
# default: true
# label: worker
# type: boolean
# show_if: cloudprovider=amazonec2 || cloudprovider=vsphere || cloudprovider=azure || cloudprovider=digitalocean || cloudprovider=harvester || cloudprovider=elemental
# group: 'Nodepools'
# - variable: nodepools.0.controlplane
# label: controlplane
# default: true
# type: boolean
# show_if: cloudprovider=amazonec2 || cloudprovider=vsphere || cloudprovider=azure || cloudprovider=digitalocean || cloudprovider=harvester || cloudprovider=elemental
# group: 'Nodepools'
# # amazonec2
# - variable: nodepools.0.region
# label: 'Region'
# default: us-east-1
# type: string
# description: 'AWS EC2 Region'
# required: true
# show_if: cloudprovider=amazonec2
# group: 'Nodepools'
# - variable: nodepools.0.zone
# label: 'Zone'
# default: a
# type: string
# description: 'AWS EC2 Zone'
# required: true
# show_if: cloudprovider=amazonec2
# group: 'Nodepools'
# - variable: nodepools.0.instanceType
# label: 'Instance Type'
# default: t3a.medium
# type: string
# description: 'AWS instance type'
# required: true
# show_if: cloudprovider=amazonec2
# group: 'Nodepools'
# - variable: nodepools.0.rootSize
# label: 'Root Disk Size'
# default: 16g
# type: string
# description: 'AWS EC2 root disk size'
# show_if: cloudprovider=amazonec2
# group: 'Nodepools'
# - variable: nodepools.0.vpcId
# label: 'VPC/SUBNET'
# default: ''
# type: string
# description: 'AWS EC2 vpc ID'
# required: true
# show_if: cloudprovider=amazonec2
# group: 'Nodepools'
# - variable: nodepools.0.iamInstanceProfile
# label: 'Instance Profile Name'
# default: ''
# type: string
# description: 'AWS EC2 Instance Profile Name'
# show_if: cloudprovider=amazonec2
# group: 'Nodepools'
# - variable: nodepools.0.ami
# label: 'AMI ID'
# default: ''
# type: string
# description: 'AWS EC2 AMI ID'
# show_if: cloudprovider=amazonec2
# group: 'Nodepools'
# - variable: nodepools.0.sshUser
# label: 'SSH Username for AMI'
# default: ubuntu
# type: string
# description: 'AWS EC2 SSH Username for AMI'
# show_if: cloudprovider=amazonec2
# group: 'Nodepools'
# - variable: nodepools.0.createSecurityGroup
# label: 'Create security group'
# default: true
# type: boolean
# description: 'Whether to create `rancher-node` security group. If false, can provide with existing security group'
# show_if: cloudprovider=amazonec2
# group: 'Nodepools'
# show_subquestion_if: false
# subquestions:
# - variable: nodepools.0.securityGroups
# label: 'Security groups'
# default:
# type: string
# description: 'Using existing security groups'
# group: 'Nodepools'
# # vsphere
# - variable: nodepools.0.vcenter
# label: 'vSphere IP/hostname'
# default: ''
# type: hostname
# description: 'vSphere IP/hostname for vCenter'
# required: true
# show_if: cloudprovider=vsphere
# group: 'Nodepools'
# - variable: nodepools.0.datacenter
# label: 'Vsphere Datacenter'
# default: ''
# type: hostname
# description: 'vSphere datacenter for virtual machine'
# required: true
# show_if: cloudprovider=vsphere
# group: 'Nodepools'
# - variable: nodepools.0.datastore
# label: 'Vsphere Datastore'
# default: ''
# type: string
# description: 'vSphere datastore for virtual machine'
# required: true
# show_if: cloudprovider=vsphere
# group: 'Nodepools'
# - variable: nodepools.0.datastoreCluster
# label: 'Vsphere DatastoreCluster'
# default: ''
# type: string
# description: 'vSphere datastore cluster for virtual machine'
# required: true
# show_if: cloudprovider=vsphere
# group: 'Nodepools'
# - variable: nodepools.0.diskSize
# label: 'Disk Size'
# default: '20480'
# type: string
# description: 'vSphere size of disk for docker VM (in MB)'
# show_if: cloudprovider=vsphere
# group: 'Nodepools'
# - variable: nodepools.0.memorySize
# label: 'Memory Size'
# default: '2048'
# type: string
# description: 'vSphere size of memory for docker VM (in MB)'
# show_if: cloudprovider=vsphere
# group: 'Nodepools'
# - variable: nodepools.0.network
# label: 'Network'
# default: ''
# type: string
# description: 'vSphere network where the virtual machine will be attached'
# show_if: cloudprovider=vsphere
# group: 'Nodepools'
# - variable: nodepools.0.pool
# label: 'Resource Pool'
# default: ''
# type: string
# description: 'vSphere resource pool for docker VM'
# show_if: cloudprovider=vsphere
# group: 'Nodepools'
# - variable: nodepools.0.sshPort
# label: 'SSH Port'
# default: '22'
# type: string
# description: 'If using a non-B2D image you can specify the ssh port'
# show_if: cloudprovider=vsphere
# group: 'Nodepools'
# - variable: nodepools.0.sshUserGroup
# label: 'SSH User Group'
# default: docker:staff
# type: hostname
# description: "If using a non-B2D image the uploaded keys will need chown'ed, defaults to staff e.g. docker:staff"
# show_if: cloudprovider=vsphere
# group: 'Nodepools'
# - variable: nodepools.0.vappIpallocationpolicy
# label: 'IP allocation policy'
# default: ''
# type: enum
# options:
# - dhcp
# - fixed
# - transient
# - fixedAllocated
# description: "'vSphere vApp IP allocation policy. Supported values are: dhcp, fixed, transient and fixedAllocated'"
# show_if: cloudprovider=vsphere
# group: 'Nodepools'
# - variable: nodepools.0.vappIpprotocol
# label: 'IP protocol'
# default: ''
# type: enum
# options:
# - IPv4
# - IPv6
# description: "'vSphere vApp IP protocol for this deployment. Supported values are: IPv4 and IPv6'"
# show_if: cloudprovider=vsphere
# group: 'Nodepools'
# # harvester
# - variable: nodepools.0.diskSize
# label: 'Disk Size'
# default: 40
# type: string
# description: 'Size of virtual hard disk in GB'
# show_if: cloudprovider=harvester
# group: 'Nodepools'
# - variable: nodepools.0.diskBus
# label: 'Disk Bus Type'
# default: virtio
# type: string
# description: 'harvester disk type'
# show_if: cloudprovider=harvester
# group: 'Nodepools'
# - variable: nodepools.0.cpuCount
# label: 'CPUs'
# default: 2
# type: string
# description: 'number of CPUs for your VM'
# show_if: cloudprovider=harvester
# group: 'Nodepools'
# - variable: nodepools.0.memorySize
# label: 'Memory Size'
# default: 4
# type: string
# description: 'Memory for VM in GB (available RAM)'
# show_if: cloudprovider=harvester
# group: 'Nodepools'
# - variable: nodepools.0.networkName
# label: 'Network'
# default: default/network-name-1
# type: string
# description: 'Name of vlan network in harvester'
# show_if: cloudprovider=harvester
# group: 'Nodepools'
# - variable: nodepools.0.imageName
# label: 'Name of Image'
# default: default/image-rand
# type: string
# description: 'Name of image in harvester'
# show_if: cloudprovider=harvester
# group: 'Nodepools'
# - variable: nodepools.0.vmNamespace
# label: 'vm Namespace'
# default: default
# type: string
# description: 'namespace to deploy the VM to'
# show_if: cloudprovider=harvester
# group: 'Nodepools'
# - variable: nodepools.0.sshUser
# label: 'SSH User'
# default: ubuntu
# type: string
# description: 'SSH username'
# show_if: cloudprovider=harvester
# group: 'Nodepools'
# # digitalocean
# - variable: nodepools.0.image
# label: 'Image'
# default: ubuntu-20-04-x64
# type: string
# description: 'Digital Ocean Image'
# show_if: cloudprovider=digitalocean
# group: 'Nodepools'
# - variable: nodepools.0.backups
# label: 'Backup'
# default: false
# type: boolean
# description: 'enable backups for droplet'
# show_if: cloudprovider=digitalocean
# group: 'Nodepools'
# - variable: nodepools.0.ipv6
# label: 'IPv6'
# default: false
# type: boolean
# description: 'enable ipv6 for droplet'
# show_if: cloudprovider=digitalocean
# group: 'Nodepools'
# - variable: nodepools.0.monitoring
# label: 'Monitoring'
# default: false
# type: boolean
# description: 'enable monitoring for droplet'
# show_if: cloudprovider=digitalocean
# group: 'Nodepools'
# - variable: nodepools.0.privateNetworking
# label: 'Private Networking'
# default: false
# type: boolean
# description: 'enable private networking for droplet'
# show_if: cloudprovider=digitalocean
# group: 'Nodepools'
# - variable: nodepools.0.region
# label: 'Region'
# default: sfo3
# type: string
# description: 'Digital Ocean region'
# show_if: cloudprovider=digitalocean
# group: 'Nodepools'
# - variable: nodepools.0.size
# label: 'Size'
# default: s-4vcpu-8gb
# type: string
# description: 'Digital Ocean size'
# show_if: cloudprovider=digitalocean
# group: 'Nodepools'
# - variable: nodepools.0.userdata
# label: 'Userdata'
# default:
# type: multiline
# description: 'File contents for userdata'
# show_if: cloudprovider=digitalocean
# group: 'Nodepools'
# - variable: nodepools.0.sshPort
# label: 'SSH Port'
# default: 22
# type: string
# description: 'SSH port'
# show_if: cloudprovider=digitalocean
# group: 'Nodepools'
# - variable: nodepools.0.sshUser
# label: 'SSH User'
# default: root
# type: string
# description: 'SSH username'
# show_if: cloudprovider=digitalocean
# group: 'Nodepools'
# # azure
# - variable: nodepools.0.availabilitySet
# label: 'Availability Set'
# default: docker-machine
# type: string
# description: 'Azure Availability Set to place the virtual machine into'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.diskSize
# label: 'Disk Size'
# default: ''
# type: string
# description: 'Disk size if using managed disks (GiB)'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.dns
# label: 'DNS'
# default: ''
# type: string
# description: 'A unique DNS label for the public IP address'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.environment
# label: 'Environment'
# default: AzurePublicCloud
# type: enum
# options:
# - AzurePublicCloud
# - AzureGermanCloud
# - AzureChinaCloud
# - AzureUSGovernmentCloud
# description: 'Azure environment'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.faultDomainCount
# label: 'Fault Domain Count'
# default: ''
# type: string
# description: 'Fault domain count to use for availability set'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.image
# label: 'Image'
# default: canonical:UbuntuServer:18.04-LTS:latest
# type: string
# description: 'Azure virtual machine OS image'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.location
# label: 'Location'
# default: westus
# type: string
# description: 'Azure region to create the virtual machine'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.managedDisks
# label: 'Managed Disks'
# default: false
# type: boolean
# description: 'Configures VM and availability set for managed disks'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.noPublicIp
# label: 'No Public IP'
# default: false
# type: boolean
# description: 'Do not create a public IP address for the machine'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.privateIpAddress
# label: 'Private IP Address'
# default: ''
# type: string
# description: 'Specify a static private IP address for the machine'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.resourceGroup
# label: 'Resource Group'
# default: docker-machine
# type: string
# description: 'Azure Resource Group name (will be created if missing)'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.size
# label: 'Size'
# default: 'Standard_D2_v2'
# type: string
# description: 'Size for Azure Virtual Machine'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.sshUser
# label: 'SSH Username'
# default: docker-user
# type: string
# description: 'Username for SSH login'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.staticPublicIp
# label: 'Static Public IP'
# default: false
# type: boolean
# description: 'Assign a static public IP address to the machine'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.storageType
# label: 'Storage Account'
# default: 'Standard_LRS'
# type: string
# description: 'Type of Storage Account to host the OS Disk for the machine'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.subnet
# label: 'Subnet'
# default: docker-machine
# type: string
# description: 'Azure Subnet Name to be used within the Virtual Network'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.subnetPrefix
# label: 'Subnet Prefix'
# default: '192.168.0.0/16'
# type: string
# description: 'Private CIDR block to be used for the new subnet; should comply with RFC 1918'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.updateDomainCount
# label: 'Update Domain Count'
# default: ''
# type: string
# description: 'Update domain count to use for availability set'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.usePrivateIp
# label: 'Use Private IP'
# default: false
# type: boolean
# description: 'Use a private IP address for the machine'
# show_if: cloudprovider=azure
# group: 'Nodepools'
# - variable: nodepools.0.vnet
# label: 'Vnet'
# default: 'docker-machine-vnet'
# type: string
# description: 'Azure Virtual Network name to connect the virtual machine (in [resourcegroup:]name format)'
# show_if: cloudprovider=azure
# group: 'Nodepools'

View File

@@ -0,0 +1,6 @@
Congratulations! You've successfully deployed a cluster using the Helm Chart for Rancher Cluster Templates by Rancher Government. Please be patient while the cluster is provisioned and deployed on your infrastructure.
View the Cluster -> https://{{ .Values.rancher.cattle.url | default "<rancher-url>" }}/dashboard/c/_/manager/provisioning.cattle.io.cluster/fleet-default/{{ .Values.cluster.name }}
View the Docs -> https://github.com/rancherfederal/rancher-cluster-templates

View File

@@ -0,0 +1,62 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "rancher-cluster-templates.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "rancher-cluster-templates.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "rancher-cluster-templates.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "rancher-cluster-templates.labels" -}}
helm.sh/chart: {{ include "rancher-cluster-templates.chart" . }}
{{ include "rancher-cluster-templates.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "rancher-cluster-templates.selectorLabels" -}}
app.kubernetes.io/name: {{ include "rancher-cluster-templates.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "rancher-cluster-templates.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "rancher-cluster-templates.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,438 @@
{{- $clustername := .Values.cluster.name -}}
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
{{- if .Values.cluster.labels }}
labels:
{{ toYaml .Values.cluster.labels | indent 4 }}
{{- end }}
{{- if .Values.cluster.annotations }}
annotations:
{{ toYaml .Values.cluster.annotations | indent 4 }}
{{- end }}
name: {{ .Values.cluster.name }}
namespace: fleet-default
spec:
{{- if .Values.cluster.config.agentEnvVars }}
agentEnvVars:
{{ toYaml .Values.cluster.config.agentEnvVars | indent 4 }}
{{- end }}
{{- if .Values.cloudCredentialSecretName }}
cloudCredentialSecretName: cattle-global-data:{{ .Values.cloudCredentialSecretName }}
{{- end }}
# clusterAPIConfig:
# clusterAgentDeploymentCustomization:
{{- if .Values.cluster.config.defaultClusterRoleForProjectMembers }}
defaultClusterRoleForProjectMembers: {{ .Values.cluster.config.defaultClusterRoleForProjectMembers }}
{{- end }}
{{- if .Values.cluster.config.defaultPodSecurityAdmissionConfigurationTemplateName }}
defaultPodSecurityAdmissionConfigurationTemplateName: {{ .Values.cluster.config.defaultPodSecurityAdmissionConfigurationTemplateName }}
{{- end }}
{{- if .Values.cluster.config.defaultPodSecurityPolicyTemplateName }}
defaultPodSecurityPolicyTemplateName: {{ .Values.cluster.config.defaultPodSecurityPolicyTemplateName }}
{{- end }}
enableNetworkPolicy: {{ .Values.cluster.config.enableNetworkPolicy }}
# fleetAgentDeploymentCustomization:
{{- if .Values.cluster.config.kubernetesVersion }}
kubernetesVersion: {{ .Values.cluster.config.kubernetesVersion }}
{{- end }}
{{- if eq .Values.cluster.config.localClusterAuthEndpoint.enabled true }}
localClusterAuthEndpoint:
enabled: {{ .Values.cluster.config.localClusterAuthEndpoint.enabled }}
fqdn: {{ .Values.cluster.config.localClusterAuthEndpoint.fqdn }}
caCerts: {{ .Values.cluster.config.localClusterAuthEndpoint.caCerts }}
{{- else }}
localClusterAuthEndpoint:
enabled: false
{{- end }}
# redeploySystemAgentGeneration:
rkeConfig:
{{- with $.Values.cluster.config.chartValues }}
chartValues:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with $.Values.cluster.config.additionalManifests }}
additionalManifest:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- if .Values.cluster.config.etcd }}
etcd:
disableSnapshots: {{ .Values.cluster.config.etcd.disableSnapshots }}
snapshotRetention: {{ .Values.cluster.config.etcd.snapshotRetention }}
snapshotScheduleCron: {{ .Values.cluster.config.etcd.snapshotScheduleCron }}
{{- if .Values.cluster.config.etcd.s3 }}
s3:
bucket: {{ .Values.cluster.config.etcd.s3.bucket }}
cloudCredentialName: cattle-global-data:{{ .Values.cluster.config.etcd.s3.cloudCredentialSecretName }}
{{- if .Values.cluster.config.etcd.s3.folder }}
folder: {{ .Values.cluster.config.etcd.s3.folder }}
{{- end }}
region: {{ .Values.cluster.config.etcd.s3.region }}
skipSSLVerify: {{ .Values.cluster.config.etcd.s3.skipSSLVerify }}
endpoint: {{ .Values.cluster.config.etcd.s3.endpoint }}
{{- if .Values.cluster.config.etcd.s3.endpointCA }}
endpointCA: |-
{{ .Values.cluster.config.etcd.s3.endpointCA | indent 10 }}
{{- end }}
{{- end }}
{{- end }}
# etcdSnapshotCreate:
# etcdSnapshotRestore:
# infrastructureRef:
{{- if .Values.cluster.config.globalConfig }}
machineGlobalConfig:
{{- if .Values.cluster.config.globalConfig.cni }}
cni: {{ .Values.cluster.config.globalConfig.cni }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.cluster_cidr }}
cluster-cidr: {{ .Values.cluster.config.globalConfig.cluster_cidr }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.service_cidr }}
service-cidr: {{ .Values.cluster.config.globalConfig.service_cidr }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.docker }}
docker: {{ .Values.cluster.config.globalConfig.docker }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.disable }}
disable: {{ .Values.cluster.config.globalConfig.disable | toRawJson }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.disable_scheduler }}
disable-scheduler: {{ .Values.cluster.config.globalConfig.disable_scheduler }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.disable_cloud_controller }}
disable-cloud-controller: {{ .Values.cluster.config.globalConfig.disable_cloud_controller }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.disable_kube_proxy }}
disable-kube-proxy: {{ .Values.cluster.config.globalConfig.disable_kube_proxy }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.etcd_expose_metrics }}
etcd-expose-metrics: {{ .Values.cluster.config.globalConfig.etcd_expose_metrics }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.profile }}
profile: {{ .Values.cluster.config.globalConfig.profile }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.selinux }}
selinux: {{ .Values.cluster.config.globalConfig.selinux }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.tls_san }}
tls-san: {{ .Values.cluster.config.globalConfig.tls_san | toRawJson }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.token }}
token: {{ .Values.cluster.config.globalConfig.token }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.systemDefaultRegistry }}
system-default-registry: {{ .Values.cluster.config.globalConfig.systemDefaultRegistry }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.secrets_encryption }}
secrets-encryption: {{ .Values.cluster.config.globalConfig.secrets_encryption }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.write_kubeconfig_mode }}
write-kubeconfig-mode: {{ .Values.cluster.config.globalConfig.write_kubeconfig_mode }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.use_service_account_credentials }}
use-service-account-credentials: {{ .Values.cluster.config.globalConfig.use_service_account_credentials }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.protect_kernel_defaults }}
protect-kernel-defaults: {{ .Values.cluster.config.globalConfig.protect_kernel_defaults }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.cloud_provider_name }}
cloud-provider-name: {{ .Values.cluster.config.globalConfig.cloud_provider_name }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.cloud_provider_config }}
cloud-provider-config: {{ .Values.cluster.config.globalConfig.cloud_provider_config }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.kube_controller_manager_arg }}
kube-controller-manager-arg: {{ .Values.cluster.config.globalConfig.kube_controller_manager_arg | toRawJson }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.kube_scheduler_arg }}
kube-scheduler-arg: {{ .Values.cluster.config.globalConfig.kube_scheduler_arg | toRawJson }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.kube_apiserver_arg }}
kube-apiserver-arg: {{ .Values.cluster.config.globalConfig.kube_apiserver_arg | toRawJson }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.kubelet_proxy_arg }}
kubelet-proxy-arg: {{ .Values.cluster.config.globalConfig.kubelet_proxy_arg | toRawJson }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.kubelet_arg }}
kubelet-arg: {{ .Values.cluster.config.globalConfig.kubelet_arg | toRawJson }}
{{- end }}
{{- end }}
# machinePoolDefaults:
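{{- /* Machine pools are rendered only for driver-backed providers; "custom" clusters are expected to register their own nodes. */}}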
{{- if ne .Values.cloudprovider "custom" }}
machinePools:
{{- if .Values.nodepools }} {{ range $index, $nodepool := .Values.nodepools }}
- name: {{ $nodepool.name }}
quantity: {{ $nodepool.quantity }}
controlPlaneRole: {{ $nodepool.controlplane }}
etcdRole: {{ $nodepool.etcd }}
workerRole: {{ $nodepool.worker }}
{{- if $nodepool.labels }}
labels:
{{ toYaml $nodepool.labels | indent 8 }}
{{- end }}
{{- if $nodepool.taints }}
taints:
{{ toYaml $nodepool.taints | indent 8 }}
{{- end }}
machineConfigRef:
{{- if eq $.Values.cloudprovider "amazonec2" }}
kind: Amazonec2Config
{{- else if eq $.Values.cloudprovider "vsphere" }}
kind: VmwarevsphereConfig
{{- else if eq $.Values.cloudprovider "harvester" }}
kind: HarvesterConfig
{{- else if eq $.Values.cloudprovider "digitalocean" }}
kind: DigitaloceanConfig
{{- else if eq $.Values.cloudprovider "azure" }}
kind: AzureConfig
{{- else if eq $.Values.cloudprovider "elemental" }}
apiVersion: elemental.cattle.io/v1beta1
kind: MachineInventorySelectorTemplate
{{- end}}
name: {{ $clustername }}-{{ $nodepool.name }}
displayName: {{ $nodepool.displayName | default $nodepool.name }}
{{- if $nodepool.drainBeforeDelete }}
drainBeforeDelete: {{ $nodepool.drainBeforeDelete }}
{{- end }}
{{- if $nodepool.drainBeforeDeleteTimeout }}
drainBeforeDeleteTimeout: {{ $nodepool.drainBeforeDeleteTimeout }}
{{- end }}
{{- if $nodepool.machineDeploymentLabels }}
machineDeploymentLabels:
{{ toYaml $nodepool.machineDeploymentLabels | indent 8 }}
{{- end }}
{{- if $nodepool.machineDeploymentAnnotations }}
machineDeploymentAnnotations:
{{ toYaml $nodepool.machineDeploymentAnnotations | indent 8 }}
{{- end }}
paused: {{ $nodepool.paused }}
{{- if $nodepool.rollingUpdate }}
rollingUpdate:
maxUnavailable: {{ $nodepool.rollingUpdate.maxUnavailable }}
maxSurge: {{ $nodepool.rollingUpdate.maxSurge }}
{{- end }}
{{- if $nodepool.unhealthyNodeTimeout }}
unhealthyNodeTimeout: {{ $nodepool.unhealthyNodeTimeout }}
{{- end }}
{{- end }}
{{- end }}
{{- if or .Values.cluster.config.controlPlaneConfig .Values.cluster.config.workerConfig}}
machineSelectorConfig:
{{- if .Values.cluster.config.controlPlaneConfig }}
- config:
{{- if .Values.cluster.config.controlPlaneConfig.cni }}
cni: {{ .Values.cluster.config.controlPlaneConfig.cni }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.docker }}
docker: {{ .Values.cluster.config.controlPlaneConfig.docker }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.disable }}
disable: {{ .Values.cluster.config.globalConfig.disable | toRawJson }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.disable_scheduler }}
disable-scheduler: {{ .Values.cluster.config.globalConfig.disable_scheduler }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.disable_cloud_controller }}
disable-cloud-controller: {{ .Values.cluster.config.globalConfig.disable_cloud_controller }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.disable_kube_proxy }}
disable-kube-proxy: {{ .Values.cluster.config.controlPlaneConfig.disable_kube_proxy }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.etcd_expose_metrics }}
etcd-expose-metrics: {{ .Values.cluster.config.controlPlaneConfig.etcd_expose_metrics }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.profile }}
profile: {{ .Values.cluster.config.controlPlaneConfig.profile }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.selinux }}
selinux: {{ .Values.cluster.config.controlPlaneConfig.selinux }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.tls_san }}
tls-san: {{ .Values.cluster.config.controlPlaneConfig.tls_san | toRawJson }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.token }}
token: {{ .Values.cluster.config.controlPlaneConfig.token }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.systemDefaultRegistry }}
system-default-registry: {{ .Values.cluster.config.controlPlaneConfig.systemDefaultRegistry }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.secrets_encryption }}
secrets-encryption: {{ .Values.cluster.config.controlPlaneConfig.secrets_encryption }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.write_kubeconfig_mode }}
write-kubeconfig-mode: {{ .Values.cluster.config.controlPlaneConfig.write_kubeconfig_mode }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.use_service_account_credentials }}
use-service-account-credentials: {{ .Values.cluster.config.controlPlaneConfig.use_service_account_credentials }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.protect_kernel_defaults }}
protect-kernel-defaults: {{ .Values.cluster.config.controlPlaneConfig.protect_kernel_defaults }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.cloud_provider_name }}
cloud-provider-name: {{ .Values.cluster.config.controlPlaneConfig.cloud_provider_name }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.cloud_provider_config }}
cloud-provider-config: {{ .Values.cluster.config.controlPlaneConfig.cloud_provider_config }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.kube_controller_manager_arg }}
kube-controller-manager-arg: {{ .Values.cluster.config.controlPlaneConfig.kube_controller_manager_arg | toRawJson }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.kube_scheduler_arg }}
kube-scheduler-arg: {{ .Values.cluster.config.controlPlaneConfig.kube_scheduler_arg | toRawJson }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.kube_apiserver_arg }}
kube-apiserver-arg: {{ .Values.cluster.config.controlPlaneConfig.kube_apiserver_arg | toRawJson }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.kubelet_proxy_arg }}
kubelet-proxy-arg: {{ .Values.cluster.config.controlPlaneConfig.kubelet_proxy_arg | toRawJson }}
{{- end }}
{{- if .Values.cluster.config.controlPlaneConfig.kubelet_arg }}
kubelet-arg: {{ .Values.cluster.config.controlPlaneConfig.kubelet_arg | toRawJson }}
{{- end }}
machineLabelSelector:
matchLabels:
node-role.kubernetes.io/control-plane: "true"
{{- end }}
{{- if .Values.cluster.config.workerConfig }}
- config:
{{- if .Values.cluster.config.workerConfig.cni }}
cni: {{ .Values.cluster.config.workerConfig.cni }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.docker }}
docker: {{ .Values.cluster.config.workerConfig.docker }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.disable }}
disable: {{ .Values.cluster.config.globalConfig.disable | toRawJson }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.disable_scheduler }}
disable-scheduler: {{ .Values.cluster.config.globalConfig.disable_scheduler }}
{{- end }}
{{- if .Values.cluster.config.globalConfig.disable_cloud_controller }}
disable-cloud-controller: {{ .Values.cluster.config.globalConfig.disable_cloud_controller }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.disable_kube_proxy }}
disable-kube-proxy: {{ .Values.cluster.config.workerConfig.disable_kube_proxy }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.etcd_expose_metrics }}
etcd-expose-metrics: {{ .Values.cluster.config.workerConfig.etcd_expose_metrics }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.profile }}
profile: {{ .Values.cluster.config.workerConfig.profile }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.selinux }}
selinux: {{ .Values.cluster.config.workerConfig.selinux }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.tls_san }}
tls-san: {{ .Values.cluster.config.workerConfig.tls_san | toRawJson }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.token }}
token: {{ .Values.cluster.config.workerConfig.token }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.systemDefaultRegistry }}
system-default-registry: {{ .Values.cluster.config.workerConfig.systemDefaultRegistry }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.secrets_encryption }}
secrets-encryption: {{ .Values.cluster.config.workerConfig.secrets_encryption }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.write_kubeconfig_mode }}
write-kubeconfig-mode: {{ .Values.cluster.config.workerConfig.write_kubeconfig_mode }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.use_service_account_credentials }}
use-service-account-credentials: {{ .Values.cluster.config.workerConfig.use_service_account_credentials }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.protect_kernel_defaults }}
protect-kernel-defaults: {{ .Values.cluster.config.workerConfig.protect_kernel_defaults }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.cloud_provider_name }}
cloud-provider-name: {{ .Values.cluster.config.workerConfig.cloud_provider_name }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.cloud_provider_config }}
cloud-provider-config: {{ .Values.cluster.config.workerConfig.cloud_provider_config }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.kube_controller_manager_arg }}
kube-controller-manager-arg: {{ .Values.cluster.config.workerConfig.kube_controller_manager_arg | toRawJson }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.kube_scheduler_arg }}
kube-scheduler-arg: {{ .Values.cluster.config.workerConfig.kube_scheduler_arg | toRawJson }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.kube_apiserver_arg }}
kube-apiserver-arg: {{ .Values.cluster.config.workerConfig.kube_apiserver_arg | toRawJson }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.kubelet_proxy_arg }}
kubelet-proxy-arg: {{ .Values.cluster.config.workerConfig.kubelet_proxy_arg | toRawJson }}
{{- end }}
{{- if .Values.cluster.config.workerConfig.kubelet_arg }}
kubelet-arg: {{ .Values.cluster.config.workerConfig.kubelet_arg | toRawJson }}
{{- end }}
machineLabelSelector:
matchLabels:
rke.cattle.io/worker-role: "true"
{{- end }}
{{- end }}
{{- end }}
# machineSelectorFiles:
# provisionGeneration:
{{- if and .Values.cluster.config.registries (eq .Values.cluster.config.registries.enabled true) }}
registries:
configs:
{{- range .Values.cluster.config.registries.configs }}
{{ .name }}:
authConfigSecretName: {{ .authConfigSecretName }}
caBundle: {{ .caBundle }}
insecureSkipVerify: {{ .insecureSkipVerify }}
tlsSecretName: {{ .tlsSecretName }}
{{- end }}
{{- if .Values.cluster.config.registries.mirrors }}
mirrors:
{{- range .Values.cluster.config.registries.mirrors }}
{{ .name | quote }}:
endpoint:
{{- range .endpoints }}
- {{ . }}
{{- end }}
{{- if .rewrite }}
rewrite:
{{- range $key, $value := .rewrite }}
"{{ $key }}": "{{ $value }}"
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
# rotateCertificates:
# rotateEncryptionKeys:
{{- if .Values.cluster.config.upgradeStrategy }}
upgradeStrategy:
controlPlaneConcurrency: {{ .Values.cluster.config.upgradeStrategy.controlPlaneConcurrency }}
{{- if eq .Values.cluster.config.upgradeStrategy.controlPlaneDrainOptions.enabled true }}
controlPlaneDrainOptions:
enabled: {{ .Values.cluster.config.upgradeStrategy.controlPlaneDrainOptions.enabled }}
deleteEmptyDirData: {{ .Values.cluster.config.upgradeStrategy.controlPlaneDrainOptions.deleteEmptyDirData }}
disableEviction: {{ .Values.cluster.config.upgradeStrategy.controlPlaneDrainOptions.disableEviction }}
force: {{ .Values.cluster.config.upgradeStrategy.controlPlaneDrainOptions.force }}
gracePeriod: {{ .Values.cluster.config.upgradeStrategy.controlPlaneDrainOptions.gracePeriod }}
ignoreDaemonSets: {{ .Values.cluster.config.upgradeStrategy.controlPlaneDrainOptions.ignoreDaemonSets }}
ignoreErrors: {{ .Values.cluster.config.upgradeStrategy.controlPlaneDrainOptions.ignoreErrors }}
skipWaitForDeleteTimeoutSeconds: {{ .Values.cluster.config.upgradeStrategy.controlPlaneDrainOptions.skipWaitForDeleteTimeoutSeconds }}
timeout: {{ .Values.cluster.config.upgradeStrategy.controlPlaneDrainOptions.timeout }}
{{- else }}
controlPlaneDrainOptions:
enabled: {{ .Values.cluster.config.upgradeStrategy.controlPlaneDrainOptions.enabled }}
{{- end }}
workerConcurrency: {{ .Values.cluster.config.upgradeStrategy.workerConcurrency }}
{{- if eq .Values.cluster.config.upgradeStrategy.workerDrainOptions.enabled true }}
workerDrainOptions:
enabled: {{ .Values.cluster.config.upgradeStrategy.workerDrainOptions.enabled }}
deleteEmptyDirData: {{ .Values.cluster.config.upgradeStrategy.workerDrainOptions.deleteEmptyDirData }}
disableEviction: {{ .Values.cluster.config.upgradeStrategy.workerDrainOptions.disableEviction }}
force: {{ .Values.cluster.config.upgradeStrategy.workerDrainOptions.force }}
gracePeriod: {{ .Values.cluster.config.upgradeStrategy.workerDrainOptions.gracePeriod }}
ignoreDaemonSets: {{ .Values.cluster.config.upgradeStrategy.workerDrainOptions.ignoreDaemonSets }}
ignoreErrors: {{ .Values.cluster.config.upgradeStrategy.workerDrainOptions.ignoreErrors }}
skipWaitForDeleteTimeoutSeconds: {{ .Values.cluster.config.upgradeStrategy.workerDrainOptions.skipWaitForDeleteTimeoutSeconds }}
timeout: {{ .Values.cluster.config.upgradeStrategy.workerDrainOptions.timeout }}
{{- else }}
workerDrainOptions:
enabled: {{ .Values.cluster.config.upgradeStrategy.workerDrainOptions.enabled }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,11 @@
{{ $root := . }}
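{{- /*
One ClusterRoleTemplateBinding per entry in .Values.clusterMembers.
Name and namespace are derived from the first 8 hex characters of a sha256
over "<namespace>/<name>"; this is assumed to match the c-m-<hash>
management ID Rancher assigns to the provisioned cluster.
*/}}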
{{- range $index, $member := .Values.clusterMembers }}
apiVersion: management.cattle.io/v3
clusterName: c-m-{{ trunc 8 (sha256sum (printf "%s/%s" $root.Release.Namespace $root.Values.cluster.name)) }}
kind: ClusterRoleTemplateBinding
metadata:
name: ctrb-{{ trunc 8 (sha256sum (printf "%s/%s" $root.Release.Namespace $member.principalName )) }}
namespace: c-m-{{ trunc 8 (sha256sum (printf "%s/%s" $root.Release.Namespace $root.Values.cluster.name)) }}
roleTemplateName: {{ $member.roleTemplateName }}
userPrincipalName: {{ $member.principalName }}
{{- end }}

View File

@@ -0,0 +1,33 @@
{{- $clustername := .Values.cluster.name -}}
{{- range .Values.nodepools }}
{{- if eq .controlplane true }}
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck
metadata:
name: {{ $clustername }}-controlplane-healthcheck
namespace: fleet-default
spec:
clusterName: {{ $clustername }}
selector:
matchLabels:
cluster.x-k8s.io/control-plane: 'true'
cluster.x-k8s.io/cluster-name: {{ $clustername }}
# SAFETY FUSE:
# "40%" prevents a 1-node CP from trying to self-heal (which would kill it).
# If you have 3 nodes, this allows 1 to fail.
maxUnhealthy: 40%
# TIMEOUTS (v1beta1 expects duration strings such as "600s" or "10m", not bare integers)
nodeStartupTimeout: 600s
unhealthyConditions:
- type: Ready
status: Unknown
timeout: 300s
- type: Ready
status: "False"
timeout: 300s
{{- end }}
{{- end }}

View File

@@ -0,0 +1,25 @@
{{- $clustername := .Values.cluster.name -}}
{{- range .Values.nodepools }}
{{- if eq .worker true }}
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck
metadata:
name: {{ $clustername }}-worker-healthcheck
namespace: fleet-default
spec:
clusterName: {{ $clustername }}
selector:
matchLabels:
rke.cattle.io/worker-role: "true"
# Use the captured $clustername here as well (inside the range, the current context is the nodepool, not the chart root)
cluster.x-k8s.io/cluster-name: {{ $clustername }}
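# 100% lets every worker in the pool be remediated at once; lower it if workloads need spare capacity while nodes are replaced.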
maxUnhealthy: 100%
nodeStartupTimeout: 10m
unhealthyConditions:
- type: Ready
status: "False"
timeout: 300s
{{- end }}
{{- end }}

View File

@@ -0,0 +1,201 @@
{{- if .Values.addons.monitoring }}
{{- if .Values.addons.monitoring.enabled }}
apiVersion: management.cattle.io/v3
kind: ManagedChart
metadata:
name: monitoring-crd-{{ .Values.cluster.name }}
namespace: fleet-default
spec:
chart: "rancher-monitoring-crd"
repoName: "rancher-charts"
releaseName: "rancher-monitoring-crd"
version: {{ .Values.addons.monitoring.version }}
{{- if .Values.addons.monitoring.values }}
values:
{{ toYaml .Values.addons.monitoring.values | indent 4 }}
{{- end }}
defaultNamespace: "cattle-monitoring-system"
targets:
- clusterName: {{ .Values.cluster.name }}
---
apiVersion: management.cattle.io/v3
kind: ManagedChart
metadata:
name: monitoring-{{ .Values.cluster.name }}
namespace: fleet-default
spec:
chart: "rancher-monitoring"
repoName: "rancher-charts"
releaseName: "rancher-monitoring"
version: {{ .Values.addons.monitoring.version }}
{{- if .Values.addons.monitoring.values }}
values:
{{ toYaml .Values.addons.monitoring.values | indent 4 }}
{{- end }}
defaultNamespace: "cattle-monitoring-system"
targets:
- clusterName: {{ .Values.cluster.name }}
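# Skip fields that admission webhooks and the operator rewrite at runtime, so the release is not continuously reported as modified.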
diff:
comparePatches:
- apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
name: rancher-monitoring-admission
jsonPointers:
- /webhooks/0/failurePolicy
- apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
name: rancher-monitoring-admission
jsonPointers:
- /webhooks/0/failurePolicy
- apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
name: rancher-monitoring-kubelet
namespace: kube-system
jsonPointers:
- /spec/endpoints
---
{{- end }}
{{- end }}
{{- if .Values.addons.logging }}
{{- if .Values.addons.logging.enabled }}
apiVersion: management.cattle.io/v3
kind: ManagedChart
metadata:
name: logging-crd-{{ .Values.cluster.name }}
namespace: fleet-default
spec:
chart: "rancher-logging-crd"
repoName: "rancher-charts"
releaseName: "rancher-logging-crd"
version: {{ .Values.addons.logging.version }}
{{- if .Values.addons.logging.values }}
values:
{{ toYaml .Values.addons.logging.values | indent 4 }}
{{- end }}
defaultNamespace: "cattle-logging-system"
targets:
- clusterName: {{ .Values.cluster.name }}
---
apiVersion: management.cattle.io/v3
kind: ManagedChart
metadata:
name: logging-{{ .Values.cluster.name }}
namespace: fleet-default
spec:
chart: "rancher-logging"
repoName: "rancher-charts"
releaseName: "rancher-logging"
version: {{ .Values.addons.logging.version }}
{{- if .Values.addons.logging.values }}
values:
{{ toYaml .Values.addons.logging.values | indent 4 }}
{{- end }}
defaultNamespace: "cattle-logging-system"
targets:
- clusterName: {{ .Values.cluster.name }}
---
{{- end }}
{{- end }}
{{- if .Values.addons.longhorn }}
{{- if .Values.addons.longhorn.enabled }}
apiVersion: management.cattle.io/v3
kind: ManagedChart
metadata:
name: longhorn-crd-{{ .Values.cluster.name }}
namespace: fleet-default
spec:
chart: "longhorn-crd"
repoName: "rancher-charts"
releaseName: "longhorn-crd"
version: {{ .Values.addons.longhorn.version }}
{{- if .Values.addons.longhorn.values }}
values:
{{ toYaml .Values.addons.longhorn.values | indent 4 }}
{{- end }}
defaultNamespace: "longhorn-system"
targets:
- clusterName: {{ .Values.cluster.name }}
diff:
comparePatches:
- apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
name: engineimages.longhorn.io
jsonPointers:
- /status/acceptedNames
- /status/conditions
- /status/storedVersions
- apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
name: nodes.longhorn.io
jsonPointers:
- /status/acceptedNames
- /status/conditions
- /status/storedVersions
- apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
name: volumes.longhorn.io
jsonPointers:
- /status/acceptedNames
- /status/conditions
- /status/storedVersions
---
apiVersion: management.cattle.io/v3
kind: ManagedChart
metadata:
name: longhorn-{{ .Values.cluster.name }}
namespace: fleet-default
spec:
chart: "longhorn"
repoName: "rancher-charts"
releaseName: "longhorn"
version: {{ .Values.addons.longhorn.version }}
{{- if .Values.addons.longhorn.values }}
values:
{{ toYaml .Values.addons.longhorn.values | indent 4 }}
{{- end }}
defaultNamespace: "longhorn-system"
targets:
- clusterName: {{ .Values.cluster.name }}
---
{{- end }}
{{- end }}
{{- if .Values.addons.neuvector }}
{{- if .Values.addons.neuvector.enabled }}
apiVersion: management.cattle.io/v3
kind: ManagedChart
metadata:
name: neuvector-crd-{{ .Values.cluster.name }}
namespace: fleet-default
spec:
chart: "neuvector-crd"
repoName: "rancher-charts"
releaseName: "neuvector-crd"
version: {{ .Values.addons.neuvector.version }}
{{- if .Values.addons.neuvector.values }}
values:
{{ toYaml .Values.addons.neuvector.values | indent 4 }}
{{- end }}
defaultNamespace: "cattle-neuvector-system"
targets:
- clusterName: {{ .Values.cluster.name }}
---
apiVersion: management.cattle.io/v3
kind: ManagedChart
metadata:
name: neuvector-{{ .Values.cluster.name }}
namespace: fleet-default
spec:
chart: "neuvector"
repoName: "rancher-charts"
releaseName: "neuvector"
version: {{ .Values.addons.neuvector.version }}
{{- if .Values.addons.neuvector.values }}
values:
{{ toYaml .Values.addons.neuvector.values | indent 4 }}
{{- end }}
defaultNamespace: "cattle-neuvector-system"
targets:
- clusterName: {{ .Values.cluster.name }}
---
{{- end }}
{{- end }}

View File

@@ -0,0 +1,251 @@
{{- $clustername := .Values.cluster.name -}}
{{- if eq .Values.cloudprovider "amazonec2" }}
{{- range $index, $nodepool := .Values.nodepools }}
apiVersion: rke-machine-config.cattle.io/v1
kind: Amazonec2Config
metadata:
name: {{ $clustername }}-{{ $nodepool.name }}
namespace: fleet-default
{{- if $nodepool.accessKey }}
accessKey: {{ $nodepool.accessKey }}
{{- end }}
{{- if $nodepool.ami }}
ami: {{ $nodepool.ami }}
{{- end }}
{{- if $nodepool.blockDurationMinutes }}
blockDurationMinutes: {{ $nodepool.blockDurationMinutes }}
{{- end }}
{{- if $nodepool.deviceName }}
deviceName: {{ $nodepool.deviceName }}
{{- end }}
{{- if $nodepool.encryptEbsVolume }}
encryptEbsVolume: {{ $nodepool.encryptEbsVolume }}
{{- end }}
{{- if $nodepool.endpoint }}
endpoint: {{ $nodepool.endpoint }}
{{- end }}
{{- if $nodepool.httpEndpoint }}
httpEndpoint: {{ $nodepool.httpEndpoint }}
{{- end }}
{{- if $nodepool.httpTokens }}
httpTokens: {{ $nodepool.httpTokens }}
{{- end }}
{{- if $nodepool.iamInstanceProfile }}
iamInstanceProfile: {{ $nodepool.iamInstanceProfile }}
{{- end }}
{{- if $nodepool.insecureTransport }}
insecureTransport: {{ $nodepool.insecureTransport }}
{{- end }}
{{- if $nodepool.instanceType }}
instanceType: {{ $nodepool.instanceType }}
{{- end }}
{{- if $nodepool.keypairName }}
keypairName: {{ $nodepool.keypairName }}
{{- end }}
{{- if $nodepool.kmsKey }}
kmsKey: {{ $nodepool.kmsKey }}
{{- end }}
{{- if $nodepool.monitoring }}
monitoring: {{ $nodepool.monitoring }}
{{- end }}
{{- if $nodepool.openPort}}
openPort:
{{- range $i, $port := $nodepool.openPort }}
- {{ $port | squote }}
{{- end }}
{{- end }}
{{- if $nodepool.privateAddressOnly }}
privateAddressOnly: {{ $nodepool.privateAddressOnly }}
{{- end }}
{{- if $nodepool.region }}
region: {{ $nodepool.region }}
{{- end }}
{{- if $nodepool.requestSpotInstance }}
requestSpotInstance: {{ $nodepool.requestSpotInstance }}
{{- end }}
{{- if $nodepool.retries }}
retries: {{ $nodepool.retries | squote }}
{{- end }}
{{- if $nodepool.rootSize }}
rootSize: {{ $nodepool.rootSize | squote }}
{{- end }}
{{- if $nodepool.secretKey }}
secretKey: {{ $nodepool.secretKey }}
{{- end }}
securityGroup:
{{- if $nodepool.createSecurityGroup }}
- rancher-nodes
{{- else }}
{{ toYaml $nodepool.securityGroups }}
{{- end }}
{{- if $nodepool.securityGroupReadonly }}
securityGroupReadonly: {{ $nodepool.securityGroupReadonly }}
{{- end }}
{{- if $nodepool.sessionToken }}
sessionToken: {{ $nodepool.sessionToken }}
{{- end }}
{{- if $nodepool.spotPrice }}
spotPrice: {{ $nodepool.spotPrice }}
{{- end }}
{{- if $nodepool.sshKeyContents }}
sshKeyContents: {{ $nodepool.sshKeyContents }}
{{- end }}
{{- if $nodepool.sshUser }}
sshUser: {{ $nodepool.sshUser }}
{{- end }}
{{- if $nodepool.subnetId }}
subnetId: {{ $nodepool.subnetId }}
{{- end }}
{{- if $nodepool.tags }}
tags: {{ $nodepool.tags }}
{{- end }}
{{- if $nodepool.useEbsOptimizedInstance }}
useEbsOptimizedInstance: {{ $nodepool.useEbsOptimizedInstance }}
{{- end }}
{{- if $nodepool.usePrivateAddress }}
usePrivateAddress: {{ $nodepool.usePrivateAddress }}
{{- end }}
{{- if $nodepool.userData }}
userdata: {{- $nodepool.userData | toYaml | indent 1 }}
{{- end }}
{{- if $nodepool.volumeType }}
volumeType: {{ $nodepool.volumeType }}
{{- end }}
{{- if $nodepool.vpcId }}
vpcId: {{ $nodepool.vpcId }}
{{- end }}
{{- if $nodepool.zone }}
zone: {{ $nodepool.zone }}
{{- end }}
---
{{- end }}
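{{- /* Optional single-pool form: when .Values.nodepool (singular) is set, render one additional machine config; the same fallback appears in the other provider templates below. */}}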
{{ $nodepool := .Values.nodepool }}
{{- if $nodepool }}
apiVersion: rke-machine-config.cattle.io/v1
kind: Amazonec2Config
metadata:
name: {{ $clustername }}-{{ $nodepool.name }}
namespace: fleet-default
common:
{{- if $nodepool.labels }}
labels:
{{ toYaml $nodepool.labels | indent 4 }}
{{- end }}
{{- if $nodepool.taints }}
taints:
{{ toYaml $nodepool.taints | indent 4 }}
{{- end }}
{{- if $nodepool.accessKey }}
accessKey: {{ $nodepool.accessKey }}
{{- end }}
{{- if $nodepool.ami }}
ami: {{ $nodepool.ami }}
{{- end }}
{{- if $nodepool.blockDurationMinutes }}
blockDurationMinutes: {{ $nodepool.blockDurationMinutes }}
{{- end }}
{{- if $nodepool.deviceName }}
deviceName: {{ $nodepool.deviceName }}
{{- end }}
{{- if $nodepool.encryptEbsVolume }}
encryptEbsVolume: {{ $nodepool.encryptEbsVolume }}
{{- end }}
{{- if $nodepool.endpoint }}
endpoint: {{ $nodepool.endpoint }}
{{- end }}
{{- if $nodepool.httpEndpoint }}
httpEndpoint: {{ $nodepool.httpEndpoint }}
{{- end }}
{{- if $nodepool.httpTokens }}
httpTokens: {{ $nodepool.httpTokens }}
{{- end }}
{{- if $nodepool.iamInstanceProfile }}
iamInstanceProfile: {{ $nodepool.iamInstanceProfile }}
{{- end }}
{{- if $nodepool.insecureTransport }}
insecureTransport: {{ $nodepool.insecureTransport }}
{{- end }}
{{- if $nodepool.instanceType }}
instanceType: {{ $nodepool.instanceType }}
{{- end }}
{{- if $nodepool.keypairName }}
keypairName: {{ $nodepool.keypairName }}
{{- end }}
{{- if $nodepool.kmsKey }}
kmsKey: {{ $nodepool.kmsKey }}
{{- end }}
{{- if $nodepool.monitoring }}
monitoring: {{ $nodepool.monitoring }}
{{- end }}
{{- if $nodepool.openPort}}
openPort:
{{- range $i, $port := $nodepool.openPort }}
- {{ $port | squote }}
{{- end }}
{{- end }}
{{- if $nodepool.privateAddressOnly }}
privateAddressOnly: {{ $nodepool.privateAddressOnly }}
{{- end }}
{{- if $nodepool.region }}
region: {{ $nodepool.region }}
{{- end }}
{{- if $nodepool.requestSpotInstance }}
requestSpotInstance: {{ $nodepool.requestSpotInstance }}
{{- end }}
{{- if $nodepool.retries }}
retries: {{ $nodepool.retries | squote }}
{{- end }}
{{- if $nodepool.rootSize }}
rootSize: {{ $nodepool.rootSize | squote }}
{{- end }}
{{- if $nodepool.secretKey }}
secretKey: {{ $nodepool.secretKey }}
{{- end }}
{{- if $nodepool.createSecurityGroup }}
securityGroup:
- rancher-nodes
{{- else if $nodepool.securityGroups }}
securityGroup:
{{ toYaml $nodepool.securityGroups }}
{{- end }}
{{- if $nodepool.securityGroupReadonly }}
securityGroupReadonly: {{ $nodepool.securityGroupReadonly }}
{{- end }}
{{- if $nodepool.sessionToken }}
sessionToken: {{ $nodepool.sessionToken }}
{{- end }}
{{- if $nodepool.spotPrice }}
spotPrice: {{ $nodepool.spotPrice }}
{{- end }}
{{- if $nodepool.sshKeyContents }}
sshKeyContents: {{ $nodepool.sshKeyContents }}
{{- end }}
{{- if $nodepool.sshUser }}
sshUser: {{ $nodepool.sshUser }}
{{- end }}
{{- if $nodepool.subnetId }}
subnetId: {{ $nodepool.subnetId }}
{{- end }}
{{- if $nodepool.tags }}
tags: {{ $nodepool.tags }}
{{- end }}
{{- if $nodepool.useEbsOptimizedInstance }}
useEbsOptimizedInstance: {{ $nodepool.useEbsOptimizedInstance }}
{{- end }}
{{- if $nodepool.usePrivateAddress }}
usePrivateAddress: {{ $nodepool.usePrivateAddress }}
{{- end }}
{{- if $nodepool.userData }}
userdata: {{- $nodepool.userData | toYaml | indent 1 }}
{{- end }}
{{- if $nodepool.volumeType }}
volumeType: {{ $nodepool.volumeType }}
{{- end }}
{{- if $nodepool.vpcId }}
vpcId: {{ $nodepool.vpcId }}
{{- end }}
{{- if $nodepool.zone }}
zone: {{ $nodepool.zone }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,95 @@
{{- $clustername := .Values.cluster.name -}}
{{- if eq .Values.cloudprovider "azure" }}
{{- range $index, $nodepool := .Values.nodepools }}
apiVersion: rke-machine-config.cattle.io/v1
kind: AzureConfig
metadata:
name: {{ $clustername }}-{{ $nodepool.name }}
namespace: fleet-default
common:
{{- if $nodepool.labels }}
labels:
{{ toYaml $nodepool.labels | indent 4 }}
{{- end }}
{{- if $nodepool.taints }}
taints:
{{ toYaml $nodepool.taints | indent 4 }}
{{- end }}
availabilitySet: {{ $nodepool.availabilitySet }}
clientId: {{ $nodepool.clientId }}
customData: {{ $nodepool.customData }}
diskSize: {{ $nodepool.diskSize }}
dns: {{ $nodepool.dns }}
environment: {{ $nodepool.environment }}
faultDomainCount: {{ $nodepool.faultDomainCount }}
image: {{ $nodepool.image }}
location: {{ $nodepool.location }}
managedDisks: {{ $nodepool.managedDisks }}
noPublicIp: {{ $nodepool.noPublicIp }}
{{- if $nodepool.openPort}}
openPort:
{{- range $i, $port := $nodepool.openPort }}
- {{ $port }}
{{- end }}
{{- end }}
privateIpAddress: {{ $nodepool.privateIpAddress }}
resourceGroup: {{ $nodepool.resourceGroup }}
size: {{ $nodepool.size }}
sshUser: {{ $nodepool.sshUser }}
staticPublicIp: {{ $nodepool.staticPublicIp }}
storageType: {{ $nodepool.storageType }}
subnet: {{ $nodepool.subnet }}
subnetPrefix: {{ $nodepool.subnetPrefix }}
subscriptionId: {{ $nodepool.subscriptionId }}
updateDomainCount: {{ $nodepool.updateDomainCount }}
usePrivateIp: {{ $nodepool.usePrivateIp }}
vnet: {{ $nodepool.vnet }}
---
{{- end }}
{{ $nodepool := .Values.nodepool }}
{{- if $nodepool }}
apiVersion: rke-machine-config.cattle.io/v1
kind: AzureConfig
metadata:
name: {{ $clustername }}-{{ $nodepool.name }}
namespace: fleet-default
common:
{{- if $nodepool.labels }}
labels:
{{ toYaml $nodepool.labels | indent 4 }}
{{- end }}
{{- if $nodepool.taints }}
taints:
{{ toYaml $nodepool.taints | indent 4 }}
{{- end }}
availabilitySet: {{ $nodepool.availabilitySet }}
clientId: {{ $nodepool.clientId }}
customData: {{ $nodepool.customData }}
diskSize: {{ $nodepool.diskSize }}
dns: {{ $nodepool.dns }}
environment: {{ $nodepool.environment }}
faultDomainCount: {{ $nodepool.faultDomainCount }}
image: {{ $nodepool.image }}
location: {{ $nodepool.location }}
managedDisks: {{ $nodepool.managedDisks }}
noPublicIp: {{ $nodepool.noPublicIp }}
{{- if $nodepool.openPort}}
openPort:
{{- range $i, $port := $nodepool.openPort }}
- {{ $port }}
{{- end }}
{{- end }}
privateIpAddress: {{ $nodepool.privateIpAddress }}
resourceGroup: {{ $nodepool.resourceGroup }}
size: {{ $nodepool.size }}
sshUser: {{ $nodepool.sshUser }}
staticPublicIp: {{ $nodepool.staticPublicIp }}
storageType: {{ $nodepool.storageType }}
subnet: {{ $nodepool.subnet }}
subnetPrefix: {{ $nodepool.subnetPrefix }}
subscriptionId: {{ $nodepool.subscriptionId }}
updateDomainCount: {{ $nodepool.updateDomainCount }}
usePrivateIp: {{ $nodepool.usePrivateIp }}
vnet: {{ $nodepool.vnet }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,103 @@
{{- $clustername := .Values.cluster.name -}}
{{- if eq .Values.cloudprovider "digitalocean" }}
{{- range $index, $nodepool := .Values.nodepools }}
apiVersion: rke-machine-config.cattle.io/v1
kind: DigitaloceanConfig
metadata:
name: {{ $clustername }}-{{ $nodepool.name }}
namespace: fleet-default
{{- if $nodepool.accessToken }}
accessToken: {{ $nodepool.accessToken }}
{{- end }}
{{- if $nodepool.backups }}
backups: {{ $nodepool.backups }}
{{- end }}
{{- if $nodepool.image }}
image: {{ $nodepool.image }}
{{- end }}
{{- if $nodepool.ipv6 }}
ipv6: {{ $nodepool.ipv6 }}
{{- end }}
{{- if $nodepool.monitoring }}
monitoring: {{ $nodepool.monitoring }}
{{- end }}
{{- if $nodepool.privateNetworking }}
privateNetworking: {{ $nodepool.privateNetworking }}
{{- end }}
{{- if $nodepool.region }}
region: {{ $nodepool.region }}
{{- end }}
{{- if $nodepool.size }}
size: {{ $nodepool.size }}
{{- end }}
{{- if $nodepool.sshKeyContents }}
sshKeyContents: {{ $nodepool.sshKeyContents }}
{{- end }}
{{- if $nodepool.sshKeyFingerprint }}
sshKeyFingerprint: {{ $nodepool.sshKeyFingerprint }}
{{- end }}
{{- if $nodepool.sshPort }}
sshPort: {{ $nodepool.sshPort | squote }}
{{- end }}
{{- if $nodepool.sshUser }}
sshUser: {{ $nodepool.sshUser }}
{{- end }}
{{- if $nodepool.tags }}
tags: {{ $nodepool.tags }}
{{- end }}
{{- if $nodepool.userData }}
userdata: {{- $nodepool.userData | toYaml | indent 1 }}
{{- end }}
---
{{- end }}
{{ $nodepool := .Values.nodepool }}
{{- if $nodepool }}
apiVersion: rke-machine-config.cattle.io/v1
kind: DigitaloceanConfig
metadata:
name: {{ $clustername }}-{{ $nodepool.name }}
namespace: fleet-default
{{- if $nodepool.accessToken }}
accessToken: {{ $nodepool.accessToken }}
{{- end }}
{{- if $nodepool.backups }}
backups: {{ $nodepool.backups }}
{{- end }}
{{- if $nodepool.image }}
image: {{ $nodepool.image }}
{{- end }}
{{- if $nodepool.ipv6 }}
ipv6: {{ $nodepool.ipv6 }}
{{- end }}
{{- if $nodepool.monitoring }}
monitoring: {{ $nodepool.monitoring }}
{{- end }}
{{- if $nodepool.privateNetworking }}
privateNetworking: {{ $nodepool.privateNetworking }}
{{- end }}
{{- if $nodepool.region }}
region: {{ $nodepool.region }}
{{- end }}
{{- if $nodepool.size }}
size: {{ $nodepool.size }}
{{- end }}
{{- if $nodepool.sshKeyContents }}
sshKeyContents: {{ $nodepool.sshKeyContents }}
{{- end }}
{{- if $nodepool.sshKeyFingerprint }}
sshKeyFingerprint: {{ $nodepool.sshKeyFingerprint }}
{{- end }}
{{- if $nodepool.sshPort }}
sshPort: {{ $nodepool.sshPort | squote }}
{{- end }}
{{- if $nodepool.sshUser }}
sshUser: {{ $nodepool.sshUser }}
{{- end }}
{{- if $nodepool.tags }}
tags: {{ $nodepool.tags }}
{{- end }}
{{- if $nodepool.userData }}
userdata: {{- $nodepool.userData | toYaml | indent 1 }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,15 @@
{{- $clustername := .Values.cluster.name -}}
{{- if eq .Values.cloudprovider "elemental" }}
{{- range $index, $nodepool := .Values.nodepools }}
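{{- /* Elemental does not provision VMs; the selector below is assumed to claim machines from pre-registered MachineInventories. */}}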
apiVersion: elemental.cattle.io/v1beta1
kind: MachineInventorySelectorTemplate
metadata:
name: {{ $clustername }}-{{ $nodepool.name }}
namespace: fleet-default
spec:
template:
spec:
selector:
{{- toYaml $nodepool.selector | nindent 8 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,166 @@
{{- $clustername := .Values.cluster.name -}}
{{- if eq .Values.cloudprovider "harvester" }}
{{- range $index, $nodepool := .Values.nodepools }}
apiVersion: rke-machine-config.cattle.io/v1
kind: HarvesterConfig
metadata:
name: {{ $clustername }}-{{ $nodepool.name }}
namespace: fleet-default
{{- if $nodepool.cloudConfig }}
cloudConfig: {{ $nodepool.cloudConfig }}
{{- end }}
{{- if $nodepool.clusterId }}
clusterId: {{ $nodepool.clusterId }}
{{- end }}
{{- if $nodepool.clusterType }}
clusterType: {{ $nodepool.clusterType }}
{{- end }}
{{- if $nodepool.cpuCount }}
cpuCount: {{ $nodepool.cpuCount | squote }}
{{- end }}
{{- if $nodepool.diskBus }}
diskBus: {{ $nodepool.diskBus }}
{{- end }}
{{- if $nodepool.diskInfo }}
diskInfo: {{ $nodepool.diskInfo }}
{{- end }}
{{- if $nodepool.diskSize }}
diskSize: {{ $nodepool.diskSize | squote }}
{{- end }}
{{- if $nodepool.imageName }}
imageName: {{ $nodepool.imageName }}
{{- end }}
{{- if $nodepool.keyPairName }}
keyPairName: {{ $nodepool.keyPairName }}
{{- end }}
{{- if $nodepool.kubeconfigContent }}
kubeconfigContent: {{- $nodepool.kubeconfigContent | toYaml }}
{{- end }}
{{- if $nodepool.memorySize }}
memorySize: {{ $nodepool.memorySize | squote }}
{{- end }}
{{- if $nodepool.networkData }}
networkData: {{- $nodepool.networkData | toYaml | indent 1 }}
{{- end }}
{{- if $nodepool.networkInfo }}
networkInfo: {{ $nodepool.networkInfo }}
{{- end }}
{{- if $nodepool.networkModel }}
networkModel: {{ $nodepool.networkModel }}
{{- end }}
{{- if $nodepool.networkName }}
networkName: {{ $nodepool.networkName }}
{{- end }}
{{- if $nodepool.networkType }}
networkType: {{ $nodepool.networkType }}
{{- end }}
{{- if $nodepool.sshPassword }}
sshPassword: {{ $nodepool.sshPassword }}
{{- end }}
{{- if $nodepool.sshPort }}
sshPort: {{ $nodepool.sshPort | squote }}
{{- end }}
{{- if $nodepool.sshPrivateKeyPath }}
sshPrivateKeyPath: {{ $nodepool.sshPrivateKeyPath }}
{{- end }}
{{- if $nodepool.sshUser }}
sshUser: {{ $nodepool.sshUser }}
{{- end }}
{{- if $nodepool.userData }}
userData: {{ $nodepool.userData | toYaml }}
{{- end }}
{{- if $nodepool.vmAffinity }}
vmAffinity: {{ $nodepool.vmAffinity }}
{{- end }}
{{- if $nodepool.vmNamespace }}
vmNamespace: {{ $nodepool.vmNamespace }}
{{- end }}
---
{{- end }}
{{ $nodepool := .Values.nodepool }}
{{- if $nodepool }}
apiVersion: rke-machine-config.cattle.io/v1
kind: HarvesterConfig
metadata:
name: {{ $clustername }}-{{ $nodepool.name }}
namespace: fleet-default
common:
{{- if $nodepool.labels }}
labels:
{{ toYaml $nodepool.labels | indent 4 }}
{{- end }}
{{- if $nodepool.taints }}
taints:
{{ toYaml $nodepool.taints | indent 4 }}
{{- end }}
{{- if $nodepool.cloudConfig }}
cloudConfig: {{ $nodepool.cloudConfig }}
{{- end }}
{{- if $nodepool.clusterId }}
clusterId: {{ $nodepool.clusterId }}
{{- end }}
{{- if $nodepool.clusterType }}
clusterType: {{ $nodepool.clusterType }}
{{- end }}
{{- if $nodepool.cpuCount }}
cpuCount: {{ $nodepool.cpuCount | squote }}
{{- end }}
{{- if $nodepool.diskBus }}
diskBus: {{ $nodepool.diskBus }}
{{- end }}
{{- if $nodepool.diskInfo }}
diskInfo: {{ $nodepool.diskInfo }}
{{- end }}
{{- if $nodepool.diskSize }}
diskSize: {{ $nodepool.diskSize | squote }}
{{- end }}
{{- if $nodepool.imageName }}
imageName: {{ $nodepool.imageName }}
{{- end }}
{{- if $nodepool.keyPairName }}
keyPairName: {{ $nodepool.keyPairName }}
{{- end }}
{{- if $nodepool.kubeconfigContent }}
kubeconfigContent: {{- $nodepool.kubeconfigContent | toYaml }}
{{- end }}
{{- if $nodepool.memorySize }}
memorySize: {{ $nodepool.memorySize | squote }}
{{- end }}
{{- if $nodepool.networkData }}
networkData: {{- $nodepool.networkData | toYaml | indent 1 }}
{{- end }}
{{- if $nodepool.networkInfo }}
networkInfo: {{ $nodepool.networkInfo }}
{{- end }}
{{- if $nodepool.networkModel }}
networkModel: {{ $nodepool.networkModel }}
{{- end }}
{{- if $nodepool.networkName }}
networkName: {{ $nodepool.networkName }}
{{- end }}
{{- if $nodepool.networkType }}
networkType: {{ $nodepool.networkType }}
{{- end }}
{{- if $nodepool.sshPassword }}
sshPassword: {{ $nodepool.sshPassword }}
{{- end }}
{{- if $nodepool.sshPort }}
sshPort: {{ $nodepool.sshPort | squote }}
{{- end }}
{{- if $nodepool.sshPrivateKeyPath }}
sshPrivateKeyPath: {{ $nodepool.sshPrivateKeyPath }}
{{- end }}
{{- if $nodepool.sshUser }}
sshUser: {{ $nodepool.sshUser }}
{{- end }}
{{- if $nodepool.userData }}
userData: {{ $nodepool.userData | toYaml }}
{{- end }}
{{- if $nodepool.vmAffinity }}
vmAffinity: {{ $nodepool.vmAffinity }}
{{- end }}
{{- if $nodepool.vmNamespace }}
vmNamespace: {{ $nodepool.vmNamespace }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,97 @@
{{- $clustername := .Values.cluster.name -}}
{{- if eq .Values.cloudprovider "vsphere" }}
{{- range $index, $nodepool := .Values.nodepools }}
apiVersion: rke-machine-config.cattle.io/v1
kind: VmwarevsphereConfig
metadata:
name: {{ $clustername }}-{{ $nodepool.name }}
namespace: fleet-default
common:
{{- if $nodepool.labels }}
labels:
{{ toYaml $nodepool.labels | indent 4 }}
{{- end }}
{{- if $nodepool.taints }}
taints:
{{ toYaml $nodepool.taints | indent 4 }}
{{- end }}
{{- if $nodepool.cfgparam }}
cfgparam: {{ $nodepool.cfgparam }}
{{- end }}
cloneFrom: {{ $nodepool.cloneFrom }}
cloudConfig: |-
{{ $nodepool.cloudConfig | indent 2 }}
cloudinit: {{ $nodepool.cloudinit }}
contentLibrary: {{ $nodepool.contentLibrary }}
cpuCount: {{ $nodepool.cpuCount | squote }}
creationType: {{ $nodepool.creationType }}
customAttribute: {{ $nodepool.customAttribute }}
datacenter: {{ $nodepool.datacenter }}
datastore: {{ $nodepool.datastore }}
datastoreCluster: {{ $nodepool.datastoreCluster }}
diskSize: {{ $nodepool.diskSize | squote }}
folder: {{ $nodepool.folder }}
hostsystem: {{ $nodepool.hostsystem }}
memorySize: {{ $nodepool.memorySize | squote }}
network: {{ $nodepool.network }}
pool: {{ $nodepool.pool }}
sshPort: {{ $nodepool.sshPort | squote }}
sshUser: {{ $nodepool.sshUser }}
sshUserGroup: {{ $nodepool.sshUserGroup }}
tag: {{ $nodepool.tag }}
vappIpallocationpolicy: {{ $nodepool.vappIpallocationpolicy }}
vappIpprotocol: {{ $nodepool.vappIpprotocol }}
vappProperty: {{ $nodepool.vappProperty }}
vappTransport: {{ $nodepool.vappTransport }}
vcenter: {{ $nodepool.vcenter }}
vcenterPort: {{ $nodepool.vcenterPort | squote }}
---
{{- end }}
{{ $nodepool := .Values.nodepool }}
{{- if $nodepool }}
apiVersion: rke-machine-config.cattle.io/v1
kind: VmwarevsphereConfig
metadata:
name: {{ $clustername }}-{{ $nodepool.name }}
namespace: fleet-default
common:
{{- if $nodepool.labels }}
labels:
{{ toYaml $nodepool.labels | indent 4 }}
{{- end }}
{{- if $nodepool.taints }}
taints:
{{ toYaml $nodepool.taints | indent 4 }}
{{- end }}
{{- if $nodepool.cfgparam }}
cfgparam: {{ $nodepool.cfgparam }}
{{- end }}
cloneFrom: {{ $nodepool.cloneFrom }}
cloudConfig: |-
{{ $nodepool.cloudConfig | indent 2 }}
cloudinit: {{ $nodepool.cloudinit }}
contentLibrary: {{ $nodepool.contentLibrary }}
cpuCount: {{ $nodepool.cpuCount | squote }}
creationType: {{ $nodepool.creationType }}
customAttribute: {{ $nodepool.customAttribute }}
datacenter: {{ $nodepool.datacenter }}
datastore: {{ $nodepool.datastore }}
datastoreCluster: {{ $nodepool.datastoreCluster }}
diskSize: {{ $nodepool.diskSize | squote }}
folder: {{ $nodepool.folder }}
hostsystem: {{ $nodepool.hostsystem }}
memorySize: {{ $nodepool.memorySize | squote }}
network: {{ $nodepool.network }}
pool: {{ $nodepool.pool }}
sshPort: {{ $nodepool.sshPort | squote }}
sshUser: {{ $nodepool.sshUser }}
sshUserGroup: {{ $nodepool.sshUserGroup }}
tag: {{ $nodepool.tag }}
vappIpallocationpolicy: {{ $nodepool.vappIpallocationpolicy }}
vappIpprotocol: {{ $nodepool.vappIpprotocol }}
vappProperty: {{ $nodepool.vappProperty }}
vappTransport: {{ $nodepool.vappTransport }}
vcenter: {{ $nodepool.vcenter }}
vcenterPort: {{ $nodepool.vcenterPort | squote }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,433 @@
# amazonec2, azure, digitalocean, harvester, vsphere, custom
cloudprovider: harvester
# cloud provider credentials
cloudCredentialSecretName: cc-mrklm
# rancher manager url
rancher:
cattle:
url: rancher-mgmt.product.lan
# cluster values
cluster:
name: default-cluster
# labels:
# key: value
config:
kubernetesVersion: v1.33.5+rke2r1
enableNetworkPolicy: true
localClusterAuthEndpoint:
enabled: false
# Pod Security Standard (Replaces PSP)
defaultPodSecurityAdmissionConfigurationTemplateName: "rancher-restricted"
globalConfig:
systemDefaultRegistry: docker.io
cni: canal
docker: false
disable_scheduler: false
disable_cloud_controller: false
disable_kube_proxy: false
etcd_expose_metrics: false
profile: 'cis'
selinux: false
secrets_encryption: true
write_kubeconfig_mode: 0600
use_service_account_credentials: false
protect_kernel_defaults: true
kube_apiserver_arg:
- "service-account-extend-token-expiration=false"
- "anonymous-auth=false"
- "enable-admission-plugins=NodeRestriction,PodSecurity,EventRateLimit,DenyServiceExternalIPs"
- "admission-control-config-file=/etc/rancher/rke2/rke2-admission.yaml"
- "audit-policy-file=/etc/rancher/rke2/audit-policy.yaml"
- "audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log"
- "audit-log-maxage=30"
- "audit-log-maxbackup=10"
- "audit-log-maxsize=100"
kubelet_arg:
# Strong Ciphers (CIS 4.2.12)
- "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
# PID Limit (CIS 4.2.13)
- "pod-max-pids=4096"
# Seccomp Default (CIS 4.2.14)
- "seccomp-default=true"
- "protect-kernel-defaults=true"
- "make-iptables-util-chains=true"
upgradeStrategy:
controlPlaneConcurrency: 10%
controlPlaneDrainOptions:
enabled: false
workerConcurrency: 10%
workerDrainOptions:
enabled: false
# node and nodepool(s) values
nodepools:
- name: control-plane-nodes
displayName: cp-nodes
quantity: 1
etcd: true
controlplane: true
worker: false
paused: false
cpuCount: 4
diskSize: 40
imageName: vanderlande/image-qhtpc
memorySize: 8
networkName: vanderlande/vm-lan
sshUser: rancher
vmNamespace: vanderlande
# ---------------------------------------------------------
# Cloud-Init: Creates the Security Files
# ---------------------------------------------------------
userData: &userData |
#cloud-config
package_update: false
package_upgrade: false
snap:
commands:
00: snap refresh --hold=forever
package_reboot_if_required: true
packages:
- qemu-guest-agent
- yq
- jq
- curl
- wget
bootcmd:
- sysctl -w net.ipv6.conf.all.disable_ipv6=1
- sysctl -w net.ipv6.conf.default.disable_ipv6=1
write_files:
# ----------------------------------------------------------------
# 1. CNI Permission Fix Script & Cron (CIS 1.1.9 Persistence)
# ----------------------------------------------------------------
- path: /usr/local/bin/fix-cni-perms.sh
permissions: '0700'
owner: root:root
content: |
#!/bin/bash
# Wait 60s on boot for RKE2 to write files
[ "$1" == "boot" ] && sleep 60
# Enforce 600 on CNI files (CIS 1.1.9)
if [ -d /etc/cni/net.d ]; then
find /etc/cni/net.d -type f -exec chmod 600 {} \;
fi
if [ -d /var/lib/cni/networks ]; then
find /var/lib/cni/networks -type f -exec chmod 600 {} \;
fi
# Every RKE2 service restart can reset CNI file permissions, so we run
# this script on reboot and daily via cron to maintain CIS compliance.
- path: /etc/cron.d/cis-cni-fix
permissions: '0644'
owner: root:root
content: |
# Run on Reboot (with delay) to fix files created during startup
@reboot root /usr/local/bin/fix-cni-perms.sh boot
# Run once daily at 00:00 to correct any drift
0 0 * * * root /usr/local/bin/fix-cni-perms.sh
# ----------------------------------------------------------------
# 2. RKE2 Admission Config
# ----------------------------------------------------------------
- path: /etc/rancher/rke2/rke2-admission.yaml
permissions: '0600'
owner: root:root
content: |
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
configuration:
apiVersion: pod-security.admission.config.k8s.io/v1beta1
kind: PodSecurityConfiguration
defaults:
enforce: "restricted"
enforce-version: "latest"
audit: "restricted"
audit-version: "latest"
warn: "restricted"
warn-version: "latest"
exemptions:
usernames: []
runtimeClasses: []
namespaces: [compliance-operator-system, kube-system, cis-operator-system, tigera-operator, calico-system, rke2-ingress-nginx, cattle-system, cattle-fleet-system, longhorn-system, cattle-neuvector-system]
- name: EventRateLimit
configuration:
apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
- type: Server
qps: 5000
burst: 20000
# ----------------------------------------------------------------
# 3. RKE2 Audit Policy
# ----------------------------------------------------------------
- path: /etc/rancher/rke2/audit-policy.yaml
permissions: '0600'
owner: root:root
content: |
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: None
users: ["system:kube-controller-manager", "system:kube-scheduler", "system:serviceaccount:kube-system:endpoint-controller"]
verbs: ["get", "update"]
resources:
- group: ""
resources: ["endpoints", "services", "services/status"]
- level: None
verbs: ["get"]
resources:
- group: ""
resources: ["nodes", "nodes/status", "pods", "pods/status"]
- level: None
users: ["kube-proxy"]
verbs: ["watch"]
resources:
- group: ""
resources: ["endpoints", "services", "services/status", "configmaps"]
- level: Metadata
resources:
- group: ""
resources: ["secrets", "configmaps"]
- level: RequestResponse
omitStages:
- RequestReceived
# ----------------------------------------------------------------
# 4. Static NetworkPolicies
# ----------------------------------------------------------------
- path: /var/lib/rancher/rke2/server/manifests/cis-network-policy.yaml
permissions: '0600'
owner: root:root
content: |
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-ingress
namespace: default
spec:
podSelector: {}
policyTypes:
- Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-all-metrics
namespace: kube-public
spec:
podSelector: {}
ingress:
- {}
policyTypes:
- Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-all-system
namespace: kube-system
spec:
podSelector: {}
ingress:
- {}
policyTypes:
- Ingress
# ----------------------------------------------------------------
# 5. Service Account Hardening
# ----------------------------------------------------------------
- path: /var/lib/rancher/rke2/server/manifests/cis-sa-config.yaml
permissions: '0600'
owner: root:root
content: |
apiVersion: v1
kind: ServiceAccount
metadata:
name: default
namespace: default
automountServiceAccountToken: false
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: default
namespace: kube-system
automountServiceAccountToken: false
- path: /var/lib/rancher/rke2/server/manifests/cis-sa-cron.yaml
permissions: '0600'
owner: root:root
content: |
apiVersion: v1
kind: ServiceAccount
metadata: {name: sa-cleaner, namespace: kube-system}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata: {name: sa-cleaner-role}
rules:
- apiGroups: [""]
resources: ["namespaces", "serviceaccounts"]
verbs: ["get", "list", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata: {name: sa-cleaner-binding}
subjects: [{kind: ServiceAccount, name: sa-cleaner, namespace: kube-system}]
roleRef: {kind: ClusterRole, name: sa-cleaner-role, apiGroup: rbac.authorization.k8s.io}
---
apiVersion: batch/v1
kind: CronJob
metadata:
name: sa-cleaner
namespace: kube-system
spec:
schedule: "0 */6 * * *" # Run every 6 hours
jobTemplate:
spec:
template:
spec:
serviceAccountName: sa-cleaner
containers:
- name: cleaner
image: rancher/kubectl:v1.26.0
command:
- /bin/bash
- -c
- |
# Get all namespaces
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
# Check if default SA has automount=true (or null)
automount=$(kubectl get sa default -n $ns -o jsonpath='{.automountServiceAccountToken}')
if [ "$automount" != "false" ]; then
echo "Securing default SA in namespace: $ns"
kubectl patch sa default -n $ns -p '{"automountServiceAccountToken": false}'
fi
done
restartPolicy: OnFailure
# ----------------------------------------------------------------
# 6. OS Sysctls Hardening
# ----------------------------------------------------------------
- path: /etc/sysctl.d/60-rke2-cis.conf
permissions: '0644'
content: |
vm.overcommit_memory=1
vm.max_map_count=65530
vm.panic_on_oom=0
fs.inotify.max_user_watches=1048576
fs.inotify.max_user_instances=8192
kernel.panic=10
kernel.panic_on_oops=1
net.ipv4.conf.all.rp_filter=1
net.ipv4.conf.default.rp_filter=1
net.ipv4.conf.all.accept_source_route=0
net.ipv4.conf.default.accept_source_route=0
net.ipv4.conf.all.accept_redirects=0
net.ipv4.conf.default.accept_redirects=0
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.default.send_redirects=0
net.ipv4.conf.all.log_martians=1
net.ipv4.conf.default.log_martians=1
net.ipv4.icmp_echo_ignore_broadcasts=1
net.ipv4.icmp_ignore_bogus_error_responses=1
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
fs.protected_hardlinks=1
fs.protected_symlinks=1
# ----------------------------------------------------------------
# 7. Environment & Setup Scripts
# ----------------------------------------------------------------
- path: /etc/profile.d/rke2.sh
permissions: '0644'
content: |
export PATH=$PATH:/var/lib/rancher/rke2/bin:/opt/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
- path: /root/updates.sh
permissions: '0550'
content: |
#!/bin/bash
export DEBIAN_FRONTEND=noninteractive
apt-mark hold linux-headers-generic
apt-mark hold linux-headers-virtual
apt-mark hold linux-image-virtual
apt-mark hold linux-virtual
apt-get update
apt-get upgrade -y
apt-get autoremove -y
users:
- name: rancher
gecos: Rancher service account
hashed_passwd: $6$Mas.x2i7B2cefjUy$59363FmEuoU.LiTLNRZmtemlH2W0D0SWsig22KSZ3QzOmfxeZXxdSx5wIw9wO7GXF/M9W.9SHoKVBOYj1HPX3.
lock_passwd: false
shell: /bin/bash
groups: [users, sudo, docker]
sudo: ALL=(ALL:ALL) ALL
ssh_authorized_keys:
- 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEwWnnOTAu0LlAZRczQ0Z0KvNlUdPhGQhpZie+nF1O3s'
- name: etcd
gecos: "etcd user"
shell: /sbin/nologin
system: true
lock_passwd: true
disable_root: true
ssh_pwauth: true
runcmd:
- systemctl enable --now qemu-guest-agent
- sysctl --system
- /root/updates.sh
# Immediate run of fix script
- /usr/local/bin/fix-cni-perms.sh
final_message: |
VI_CNV_CLOUD_INIT has been applied successfully.
Cluster ready for Rancher!
- name: worker-nodes
displayName: wk-nodes
quantity: 2
etcd: false
controlplane: false
worker: true
paused: false
cpuCount: 2
diskSize: 40
imageName: vanderlande/image-qmx5q
memorySize: 8
networkName: vanderlande/vm-lan
sshUser: rancher
vmNamespace: vanderlande
userData: *userData
addons:
monitoring:
enabled: false
logging:
enabled: false
longhorn:
enabled: false
neuvector:
enabled: false

View File

@@ -0,0 +1,13 @@
# The namespace on the Management Cluster where the cluster resources will be created.
# 'fleet-local' is valid for admin-level operations, though 'fleet-default' is also
# common for downstream clusters.
namespace: fleet-local
# Reference the cluster-templates Helm chart bundled in this repository (derived from the rancher-federal chart)
helm:
chart: ./charts/cluster-templates
releaseName: tpinf-1345-test-01
# Pin the chart to the specific version you wish to use (highly recommended)
version: 0.7.2
valuesFiles:
- values.yaml

View File

@@ -0,0 +1,21 @@
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
labels:
name: tpinf-1345
namespace: fleet-local
resourceVersion: '27617825'
spec:
branch: TPINF-1345-inv-cis-hardening
clientSecretName: auth-qvn5p
correctDrift:
enabled: true
pollingInterval: 1m0s
repo: https://devstash.vanderlande.com/scm/ittp/as-vi-cnv.git
targets:
- clusterSelector:
matchExpressions:
- key: provider.cattle.io
operator: NotIn
values:
- harvester

View File

@@ -0,0 +1,365 @@
# cluster values
cluster:
name: tpinf-1345-test-01
# labels:
# key: value
config:
kubernetesVersion: v1.33.5+rke2r1
# node and nodepool(s) values
nodepools:
- name: control-plane-nodes
displayName: cp-nodes
quantity: 1
etcd: true
controlplane: true
worker: false
paused: false
cpuCount: 4
diskSize: 40
imageName: vanderlande/image-qhtpc
memorySize: 8
networkName: vanderlande/vm-lan
sshUser: rancher
vmNamespace: vanderlande
# ---------------------------------------------------------
# Cloud-Init: Creates the Security Files
# ---------------------------------------------------------
userData: &userData |
#cloud-config
package_update: false
package_upgrade: false
snap:
commands:
00: snap refresh --hold=forever
package_reboot_if_required: true
packages:
- qemu-guest-agent
- yq
- jq
- curl
- wget
bootcmd:
- sysctl -w net.ipv6.conf.all.disable_ipv6=1
- sysctl -w net.ipv6.conf.default.disable_ipv6=1
write_files:
# ----------------------------------------------------------------
# 1. CNI Permission Fix Script & Cron (CIS 1.1.9 Persistence)
# ----------------------------------------------------------------
- path: /usr/local/bin/fix-cni-perms.sh
permissions: '0700'
owner: root:root
content: |
#!/bin/bash
# Wait 60s on boot for RKE2 to write files
[ "$1" == "boot" ] && sleep 60
# Enforce 600 on CNI files (CIS 1.1.9)
if [ -d /etc/cni/net.d ]; then
find /etc/cni/net.d -type f -exec chmod 600 {} \;
fi
if [ -d /var/lib/cni/networks ]; then
find /var/lib/cni/networks -type f -exec chmod 600 {} \;
fi
# Every RKE2 service restart can reset CNI file permissions, so we run
# this script on reboot and daily via cron to maintain CIS compliance.
- path: /etc/cron.d/cis-cni-fix
permissions: '0644'
owner: root:root
content: |
# Run on Reboot (with delay) to fix files created during startup
@reboot root /usr/local/bin/fix-cni-perms.sh boot
# Run once daily at 00:00 to correct any drift
0 0 * * * root /usr/local/bin/fix-cni-perms.sh
# ----------------------------------------------------------------
# 2. RKE2 Admission Config
# ----------------------------------------------------------------
- path: /etc/rancher/rke2/rke2-admission.yaml
permissions: '0600'
owner: root:root
content: |
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
configuration:
apiVersion: pod-security.admission.config.k8s.io/v1beta1
kind: PodSecurityConfiguration
defaults:
enforce: "restricted"
enforce-version: "latest"
audit: "restricted"
audit-version: "latest"
warn: "restricted"
warn-version: "latest"
exemptions:
usernames: []
runtimeClasses: []
namespaces: [compliance-operator-system, kube-system, cis-operator-system, tigera-operator, calico-system, rke2-ingress-nginx, cattle-system, cattle-fleet-system, longhorn-system, cattle-neuvector-system]
- name: EventRateLimit
configuration:
apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
- type: Server
qps: 5000
burst: 20000
# ----------------------------------------------------------------
# 3. RKE2 Audit Policy
# ----------------------------------------------------------------
- path: /etc/rancher/rke2/audit-policy.yaml
permissions: '0600'
owner: root:root
content: |
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: None
users: ["system:kube-controller-manager", "system:kube-scheduler", "system:serviceaccount:kube-system:endpoint-controller"]
verbs: ["get", "update"]
resources:
- group: ""
resources: ["endpoints", "services", "services/status"]
- level: None
verbs: ["get"]
resources:
- group: ""
resources: ["nodes", "nodes/status", "pods", "pods/status"]
- level: None
users: ["kube-proxy"]
verbs: ["watch"]
resources:
- group: ""
resources: ["endpoints", "services", "services/status", "configmaps"]
- level: Metadata
resources:
- group: ""
resources: ["secrets", "configmaps"]
- level: RequestResponse
omitStages:
- RequestReceived
# ----------------------------------------------------------------
# 4. Static NetworkPolicies
# ----------------------------------------------------------------
- path: /var/lib/rancher/rke2/server/manifests/cis-network-policy.yaml
permissions: '0600'
owner: root:root
content: |
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-ingress
namespace: default
spec:
podSelector: {}
policyTypes:
- Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-all-metrics
namespace: kube-public
spec:
podSelector: {}
ingress:
- {}
policyTypes:
- Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-all-system
namespace: kube-system
spec:
podSelector: {}
ingress:
- {}
policyTypes:
- Ingress
# ----------------------------------------------------------------
# 5. Service Account Hardening
# ----------------------------------------------------------------
- path: /var/lib/rancher/rke2/server/manifests/cis-sa-config.yaml
permissions: '0600'
owner: root:root
content: |
apiVersion: v1
kind: ServiceAccount
metadata:
name: default
namespace: default
automountServiceAccountToken: false
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: default
namespace: kube-system
automountServiceAccountToken: false
- path: /var/lib/rancher/rke2/server/manifests/cis-sa-cron.yaml
permissions: '0600'
owner: root:root
content: |
apiVersion: v1
kind: ServiceAccount
metadata: {name: sa-cleaner, namespace: kube-system}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata: {name: sa-cleaner-role}
rules:
- apiGroups: [""]
resources: ["namespaces", "serviceaccounts"]
verbs: ["get", "list", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata: {name: sa-cleaner-binding}
subjects: [{kind: ServiceAccount, name: sa-cleaner, namespace: kube-system}]
roleRef: {kind: ClusterRole, name: sa-cleaner-role, apiGroup: rbac.authorization.k8s.io}
---
apiVersion: batch/v1
kind: CronJob
metadata:
name: sa-cleaner
namespace: kube-system
spec:
schedule: "0 */6 * * *" # Run every 6 hours
jobTemplate:
spec:
template:
spec:
serviceAccountName: sa-cleaner
containers:
- name: cleaner
image: rancher/kubectl:v1.26.0
command:
- /bin/bash
- -c
- |
# Get all namespaces
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
# Check if default SA has automount=true (or null)
automount=$(kubectl get sa default -n $ns -o jsonpath='{.automountServiceAccountToken}')
if [ "$automount" != "false" ]; then
echo "Securing default SA in namespace: $ns"
kubectl patch sa default -n $ns -p '{"automountServiceAccountToken": false}'
fi
done
restartPolicy: OnFailure
# ----------------------------------------------------------------
# 6. OS Sysctls Hardening
# ----------------------------------------------------------------
- path: /etc/sysctl.d/60-rke2-cis.conf
permissions: '0644'
content: |
vm.overcommit_memory=1
vm.max_map_count=65530
vm.panic_on_oom=0
fs.inotify.max_user_watches=1048576
fs.inotify.max_user_instances=8192
kernel.panic=10
kernel.panic_on_oops=1
net.ipv4.conf.all.rp_filter=1
net.ipv4.conf.default.rp_filter=1
net.ipv4.conf.all.accept_source_route=0
net.ipv4.conf.default.accept_source_route=0
net.ipv4.conf.all.accept_redirects=0
net.ipv4.conf.default.accept_redirects=0
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.default.send_redirects=0
net.ipv4.conf.all.log_martians=1
net.ipv4.conf.default.log_martians=1
net.ipv4.icmp_echo_ignore_broadcasts=1
net.ipv4.icmp_ignore_bogus_error_responses=1
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
fs.protected_hardlinks=1
fs.protected_symlinks=1
# ----------------------------------------------------------------
# 7. Environment & Setup Scripts
# ----------------------------------------------------------------
- path: /etc/profile.d/rke2.sh
permissions: '0644'
content: |
export PATH=$PATH:/var/lib/rancher/rke2/bin:/opt/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
- path: /root/updates.sh
permissions: '0550'
content: |
#!/bin/bash
export DEBIAN_FRONTEND=noninteractive
apt-mark hold linux-headers-generic
apt-mark hold linux-headers-virtual
apt-mark hold linux-image-virtual
apt-mark hold linux-virtual
apt-get update
apt-get upgrade -y
apt-get autoremove -y
users:
- name: rancher
gecos: Rancher service account
hashed_passwd: $6$Mas.x2i7B2cefjUy$59363FmEuoU.LiTLNRZmtemlH2W0D0SWsig22KSZ3QzOmfxeZXxdSx5wIw9wO7GXF/M9W.9SHoKVBOYj1HPX3.
lock_passwd: false
shell: /bin/bash
groups: [users, sudo, docker]
sudo: ALL=(ALL:ALL) ALL
ssh_authorized_keys:
- 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEwWnnOTAu0LlAZRczQ0Z0KvNlUdPhGQhpZie+nF1O3s'
- name: etcd
gecos: "etcd user"
shell: /sbin/nologin
system: true
lock_passwd: true
disable_root: true
ssh_pwauth: true
runcmd:
- systemctl enable --now qemu-guest-agent
- sysctl --system
- /root/updates.sh
# Immediate run of fix script
- /usr/local/bin/fix-cni-perms.sh
final_message: |
VI_CNV_CLOUD_INIT has been applied successfully.
Cluster ready for Rancher!
- name: worker-nodes
displayName: wk-nodes
quantity: 2
etcd: false
controlplane: false
worker: true
paused: false
cpuCount: 2
diskSize: 40
imageName: vanderlande/image-qmx5q
memorySize: 8
networkName: vanderlande/vm-lan
sshUser: rancher
vmNamespace: vanderlande
userData: *userData

View File

@@ -0,0 +1,25 @@
{
"name": "Kubebuilder DevContainer",
"image": "golang:1.24",
"features": {
"ghcr.io/devcontainers/features/docker-in-docker:2": {},
"ghcr.io/devcontainers/features/git:1": {}
},
"runArgs": ["--network=host"],
"customizations": {
"vscode": {
"settings": {
"terminal.integrated.shell.linux": "/bin/bash"
},
"extensions": [
"ms-kubernetes-tools.vscode-kubernetes-tools",
"ms-azuretools.vscode-docker"
]
}
},
"onCreateCommand": "bash .devcontainer/post-install.sh"
}

View File

@@ -0,0 +1,23 @@
#!/bin/bash
set -x
curl -Lo ./kind https://kind.sigs.k8s.io/dl/latest/kind-linux-$(go env GOARCH)
chmod +x ./kind
mv ./kind /usr/local/bin/kind
curl -L -o kubebuilder https://go.kubebuilder.io/dl/latest/linux/$(go env GOARCH)
chmod +x kubebuilder
mv kubebuilder /usr/local/bin/
KUBECTL_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)
curl -LO "https://dl.k8s.io/release/$KUBECTL_VERSION/bin/linux/$(go env GOARCH)/kubectl"
chmod +x kubectl
mv kubectl /usr/local/bin/kubectl
docker network create -d=bridge --subnet=172.19.0.0/24 kind
kind version
kubebuilder version
docker --version
go version
kubectl version --client

View File

@@ -0,0 +1,11 @@
# More info: https://docs.docker.com/engine/reference/builder/#dockerignore-file
# Ignore everything by default and re-include only needed files
**
# Re-include Go source files (but not *_test.go)
!**/*.go
**/*_test.go
# Re-include Go module files
!go.mod
!go.sum

View File

@@ -0,0 +1,23 @@
name: Lint
on:
push:
pull_request:
jobs:
lint:
name: Run on Ubuntu
runs-on: ubuntu-latest
steps:
- name: Clone the code
uses: actions/checkout@v4
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version-file: go.mod
- name: Run linter
uses: golangci/golangci-lint-action@v8
with:
version: v2.5.0

View File

@@ -0,0 +1,32 @@
name: E2E Tests
on:
push:
pull_request:
jobs:
test-e2e:
name: Run on Ubuntu
runs-on: ubuntu-latest
steps:
- name: Clone the code
uses: actions/checkout@v4
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version-file: go.mod
- name: Install the latest version of kind
run: |
curl -Lo ./kind https://kind.sigs.k8s.io/dl/latest/kind-linux-$(go env GOARCH)
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
- name: Verify kind installation
run: kind version
- name: Running Test e2e
run: |
go mod tidy
make test-e2e

View File

@@ -0,0 +1,23 @@
name: Tests
on:
push:
pull_request:
jobs:
test:
name: Run on Ubuntu
runs-on: ubuntu-latest
steps:
- name: Clone the code
uses: actions/checkout@v4
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version-file: go.mod
- name: Running Tests
run: |
go mod tidy
make test

30
deploy/k8s-provisioner/.gitignore vendored Normal file
View File

@@ -0,0 +1,30 @@
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
bin/*
Dockerfile.cross
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Go workspace file
go.work
# Kubernetes Generated files - skip generated files, except for vendored files
!vendor/**/zz_generated.*
# editor and IDE paraphernalia
.idea
.vscode
*.swp
*.swo
*~
# Kubeconfig might contain secrets
*.kubeconfig

View File

@@ -0,0 +1,52 @@
version: "2"
run:
allow-parallel-runners: true
linters:
default: none
enable:
- copyloopvar
- dupl
- errcheck
- ginkgolinter
- goconst
- gocyclo
- govet
- ineffassign
- lll
- misspell
- nakedret
- prealloc
- revive
- staticcheck
- unconvert
- unparam
- unused
settings:
revive:
rules:
- name: comment-spacings
- name: import-shadowing
exclusions:
generated: lax
rules:
- linters:
- lll
path: api/*
- linters:
- dupl
- lll
path: internal/*
paths:
- third_party$
- builtin$
- examples$
formatters:
enable:
- gofmt
- goimports
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$

View File

@@ -0,0 +1,31 @@
# Build the manager binary
FROM golang:1.24 AS builder
ARG TARGETOS
ARG TARGETARCH
WORKDIR /workspace
# Copy the Go Modules manifests
COPY go.mod go.mod
COPY go.sum go.sum
# cache deps before building and copying source so that we don't need to re-download as much
# and so that source changes don't invalidate our downloaded layer
RUN go mod download
# Copy the Go source (relies on .dockerignore to filter)
COPY . .
# Build
# The GOARCH has no default value so that the binary is built according to the host where the command
# was called. For example, if we call make docker-build in a local env on Apple Silicon (M1),
# the docker BUILDPLATFORM arg will be linux/arm64, while for Apple x86 it will be linux/amd64. Therefore,
# by leaving it empty we can ensure that the container and the binary shipped in it will have the same platform.
RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -a -o manager cmd/main.go
# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=builder /workspace/manager .
USER 65532:65532
ENTRYPOINT ["/manager"]

View File

@@ -0,0 +1,250 @@
# Image URL to use all building/pushing image targets
IMG ?= controller:latest
# Get the currently used golang install path (in GOPATH/bin, unless GOBIN is set)
ifeq (,$(shell go env GOBIN))
GOBIN=$(shell go env GOPATH)/bin
else
GOBIN=$(shell go env GOBIN)
endif
# CONTAINER_TOOL defines the container tool to be used for building images.
# Be aware that the target commands are only tested with Docker which is
# scaffolded by default. However, you might want to replace it with other
# tools (e.g. podman).
CONTAINER_TOOL ?= docker
# Setting SHELL to bash allows bash commands to be executed by recipes.
# Options are set to exit when a recipe line exits non-zero or a piped command fails.
SHELL = /usr/bin/env bash -o pipefail
.SHELLFLAGS = -ec
.PHONY: all
all: build
##@ General
# The help target prints out all targets with their descriptions organized
# beneath their categories. The categories are represented by '##@' and the
# target descriptions by '##'. The awk command is responsible for reading the
# entire set of makefiles included in this invocation, looking for lines of the
# file as xyz: ## something, and then pretty-format the target and help. Then,
# if there's a line with ##@ something, that gets pretty-printed as a category.
# More info on the usage of ANSI control characters for terminal formatting:
# https://en.wikipedia.org/wiki/ANSI_escape_code#SGR_parameters
# More info on the awk command:
# http://linuxcommand.org/lc3_adv_awk.php
.PHONY: help
help: ## Display this help.
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z_0-9-]+:.*?##/ { printf " \033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
##@ Development
.PHONY: manifests
manifests: controller-gen ## Generate WebhookConfiguration, ClusterRole and CustomResourceDefinition objects.
"$(CONTROLLER_GEN)" rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
.PHONY: generate
generate: controller-gen ## Generate code containing DeepCopy, DeepCopyInto, and DeepCopyObject method implementations.
"$(CONTROLLER_GEN)" object:headerFile="hack/boilerplate.go.txt" paths="./..."
.PHONY: fmt
fmt: ## Run go fmt against code.
go fmt ./...
.PHONY: vet
vet: ## Run go vet against code.
go vet ./...
.PHONY: test
test: manifests generate fmt vet setup-envtest ## Run tests.
KUBEBUILDER_ASSETS="$(shell "$(ENVTEST)" use $(ENVTEST_K8S_VERSION) --bin-dir "$(LOCALBIN)" -p path)" go test $$(go list ./... | grep -v /e2e) -coverprofile cover.out
# TODO(user): To use a different vendor for e2e tests, modify the setup under 'tests/e2e'.
# The default setup assumes Kind is pre-installed and builds/loads the Manager Docker image locally.
# CertManager is installed by default; skip with:
# - CERT_MANAGER_INSTALL_SKIP=true
KIND_CLUSTER ?= k8s-provisioner-test-e2e
.PHONY: setup-test-e2e
setup-test-e2e: ## Set up a Kind cluster for e2e tests if it does not exist
@command -v $(KIND) >/dev/null 2>&1 || { \
echo "Kind is not installed. Please install Kind manually."; \
exit 1; \
}
@case "$$($(KIND) get clusters)" in \
*"$(KIND_CLUSTER)"*) \
echo "Kind cluster '$(KIND_CLUSTER)' already exists. Skipping creation." ;; \
*) \
echo "Creating Kind cluster '$(KIND_CLUSTER)'..."; \
$(KIND) create cluster --name $(KIND_CLUSTER) ;; \
esac
.PHONY: test-e2e
test-e2e: setup-test-e2e manifests generate fmt vet ## Run the e2e tests. Expects an isolated environment using Kind.
KIND=$(KIND) KIND_CLUSTER=$(KIND_CLUSTER) go test -tags=e2e ./test/e2e/ -v -ginkgo.v
$(MAKE) cleanup-test-e2e
.PHONY: cleanup-test-e2e
cleanup-test-e2e: ## Tear down the Kind cluster used for e2e tests
@$(KIND) delete cluster --name $(KIND_CLUSTER)
.PHONY: lint
lint: golangci-lint ## Run golangci-lint linter
"$(GOLANGCI_LINT)" run
.PHONY: lint-fix
lint-fix: golangci-lint ## Run golangci-lint linter and perform fixes
"$(GOLANGCI_LINT)" run --fix
.PHONY: lint-config
lint-config: golangci-lint ## Verify golangci-lint linter configuration
"$(GOLANGCI_LINT)" config verify
##@ Build
.PHONY: build
build: manifests generate fmt vet ## Build manager binary.
go build -o bin/manager cmd/main.go
.PHONY: run
run: manifests generate fmt vet ## Run a controller from your host.
go run ./cmd/main.go
# If you wish to build the manager image targeting other platforms you can use the --platform flag.
# (e.g. docker build --platform linux/arm64). However, you must enable Docker BuildKit for it.
# More info: https://docs.docker.com/develop/develop-images/build_enhancements/
.PHONY: docker-build
docker-build: ## Build docker image with the manager.
$(CONTAINER_TOOL) build -t ${IMG} .
.PHONY: docker-push
docker-push: ## Push docker image with the manager.
$(CONTAINER_TOOL) push ${IMG}
# PLATFORMS defines the target platforms for the manager image to be built, to provide support for multiple
# architectures (e.g. make docker-buildx IMG=myregistry/myoperator:0.0.1). To use this option you need to:
# - be able to use docker buildx. More info: https://docs.docker.com/build/buildx/
# - have enabled BuildKit. More info: https://docs.docker.com/develop/develop-images/build_enhancements/
# - be able to push the image to your registry (i.e. if you do not set a valid value via IMG=<myregistry/image:<tag>> then the export will fail)
# To adequately provide solutions that are compatible with multiple platforms, you should consider using this option.
PLATFORMS ?= linux/arm64,linux/amd64,linux/s390x,linux/ppc64le
.PHONY: docker-buildx
docker-buildx: ## Build and push docker image for the manager for cross-platform support
# copy existing Dockerfile and insert --platform=${BUILDPLATFORM} into Dockerfile.cross, and preserve the original Dockerfile
sed -e '1 s/\(^FROM\)/FROM --platform=\$$\{BUILDPLATFORM\}/; t' -e ' 1,// s//FROM --platform=\$$\{BUILDPLATFORM\}/' Dockerfile > Dockerfile.cross
- $(CONTAINER_TOOL) buildx create --name k8s-provisioner-builder
$(CONTAINER_TOOL) buildx use k8s-provisioner-builder
- $(CONTAINER_TOOL) buildx build --push --platform=$(PLATFORMS) --tag ${IMG} -f Dockerfile.cross .
- $(CONTAINER_TOOL) buildx rm k8s-provisioner-builder
rm Dockerfile.cross
.PHONY: build-installer
build-installer: manifests generate kustomize ## Generate a consolidated YAML with CRDs and deployment.
mkdir -p dist
cd config/manager && "$(KUSTOMIZE)" edit set image controller=${IMG}
"$(KUSTOMIZE)" build config/default > dist/install.yaml
##@ Deployment
ifndef ignore-not-found
ignore-not-found = false
endif
.PHONY: install
install: manifests kustomize ## Install CRDs into the K8s cluster specified in ~/.kube/config.
@out="$$( "$(KUSTOMIZE)" build config/crd 2>/dev/null || true )"; \
if [ -n "$$out" ]; then echo "$$out" | "$(KUBECTL)" apply -f -; else echo "No CRDs to install; skipping."; fi
.PHONY: uninstall
uninstall: manifests kustomize ## Uninstall CRDs from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion.
@out="$$( "$(KUSTOMIZE)" build config/crd 2>/dev/null || true )"; \
if [ -n "$$out" ]; then echo "$$out" | "$(KUBECTL)" delete --ignore-not-found=$(ignore-not-found) -f -; else echo "No CRDs to delete; skipping."; fi
.PHONY: deploy
deploy: manifests kustomize ## Deploy controller to the K8s cluster specified in ~/.kube/config.
cd config/manager && "$(KUSTOMIZE)" edit set image controller=${IMG}
"$(KUSTOMIZE)" build config/default | "$(KUBECTL)" apply -f -
.PHONY: undeploy
undeploy: kustomize ## Undeploy controller from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion.
"$(KUSTOMIZE)" build config/default | "$(KUBECTL)" delete --ignore-not-found=$(ignore-not-found) -f -
##@ Dependencies
## Location to install dependencies to
LOCALBIN ?= $(shell pwd)/bin
$(LOCALBIN):
mkdir -p "$(LOCALBIN)"
## Tool Binaries
KUBECTL ?= kubectl
KIND ?= kind
KUSTOMIZE ?= $(LOCALBIN)/kustomize
CONTROLLER_GEN ?= $(LOCALBIN)/controller-gen
ENVTEST ?= $(LOCALBIN)/setup-envtest
GOLANGCI_LINT = $(LOCALBIN)/golangci-lint
## Tool Versions
KUSTOMIZE_VERSION ?= v5.7.1
CONTROLLER_TOOLS_VERSION ?= v0.19.0
#ENVTEST_VERSION is the version of controller-runtime release branch to fetch the envtest setup script (i.e. release-0.20)
ENVTEST_VERSION ?= $(shell v='$(call gomodver,sigs.k8s.io/controller-runtime)'; \
[ -n "$$v" ] || { echo "Set ENVTEST_VERSION manually (controller-runtime replace has no tag)" >&2; exit 1; }; \
printf '%s\n' "$$v" | sed -E 's/^v?([0-9]+)\.([0-9]+).*/release-\1.\2/')
#ENVTEST_K8S_VERSION is the version of Kubernetes to use for setting up ENVTEST binaries (i.e. 1.31)
ENVTEST_K8S_VERSION ?= $(shell v='$(call gomodver,k8s.io/api)'; \
[ -n "$$v" ] || { echo "Set ENVTEST_K8S_VERSION manually (k8s.io/api replace has no tag)" >&2; exit 1; }; \
printf '%s\n' "$$v" | sed -E 's/^v?[0-9]+\.([0-9]+).*/1.\1/')
GOLANGCI_LINT_VERSION ?= v2.5.0
.PHONY: kustomize
kustomize: $(KUSTOMIZE) ## Download kustomize locally if necessary.
$(KUSTOMIZE): $(LOCALBIN)
$(call go-install-tool,$(KUSTOMIZE),sigs.k8s.io/kustomize/kustomize/v5,$(KUSTOMIZE_VERSION))
.PHONY: controller-gen
controller-gen: $(CONTROLLER_GEN) ## Download controller-gen locally if necessary.
$(CONTROLLER_GEN): $(LOCALBIN)
$(call go-install-tool,$(CONTROLLER_GEN),sigs.k8s.io/controller-tools/cmd/controller-gen,$(CONTROLLER_TOOLS_VERSION))
.PHONY: setup-envtest
setup-envtest: envtest ## Download the binaries required for ENVTEST in the local bin directory.
@echo "Setting up envtest binaries for Kubernetes version $(ENVTEST_K8S_VERSION)..."
@"$(ENVTEST)" use $(ENVTEST_K8S_VERSION) --bin-dir "$(LOCALBIN)" -p path || { \
echo "Error: Failed to set up envtest binaries for version $(ENVTEST_K8S_VERSION)."; \
exit 1; \
}
.PHONY: envtest
envtest: $(ENVTEST) ## Download setup-envtest locally if necessary.
$(ENVTEST): $(LOCALBIN)
$(call go-install-tool,$(ENVTEST),sigs.k8s.io/controller-runtime/tools/setup-envtest,$(ENVTEST_VERSION))
.PHONY: golangci-lint
golangci-lint: $(GOLANGCI_LINT) ## Download golangci-lint locally if necessary.
$(GOLANGCI_LINT): $(LOCALBIN)
$(call go-install-tool,$(GOLANGCI_LINT),github.com/golangci/golangci-lint/v2/cmd/golangci-lint,$(GOLANGCI_LINT_VERSION))
# go-install-tool will 'go install' any package with custom target and name of binary, if it doesn't exist
# $1 - target path with name of binary
# $2 - package url which can be installed
# $3 - specific version of package
define go-install-tool
@[ -f "$(1)-$(3)" ] && [ "$$(readlink -- "$(1)" 2>/dev/null)" = "$(1)-$(3)" ] || { \
set -e; \
package=$(2)@$(3) ;\
echo "Downloading $${package}" ;\
rm -f "$(1)" ;\
GOBIN="$(LOCALBIN)" go install $${package} ;\
mv "$(LOCALBIN)/$$(basename "$(1)")" "$(1)-$(3)" ;\
} ;\
ln -sf "$$(realpath "$(1)-$(3)")" "$(1)"
endef
define gomodver
$(shell go list -m -f '{{if .Replace}}{{.Replace.Version}}{{else}}{{.Version}}{{end}}' $(1) 2>/dev/null)
endef

View File

@@ -0,0 +1,29 @@
# Code generated by tool. DO NOT EDIT.
# This file is used to track the info used to scaffold your project
# and allow the plugins properly work.
# More info: https://book.kubebuilder.io/reference/project-config.html
cliVersion: 4.10.1
domain: appstack.io
layout:
- go.kubebuilder.io/v4
projectName: k8s-provisioner
repo: vanderlande.com/appstack/k8s-provisioner
resources:
- api:
crdVersion: v1
namespaced: true
domain: appstack.io
group: k8sprovisioner
kind: Infra
path: vanderlande.com/appstack/k8s-provisioner/api/v1alpha1
version: v1alpha1
- api:
crdVersion: v1
namespaced: true
controller: true
domain: appstack.io
group: k8sprovisioner
kind: Cluster
path: vanderlande.com/appstack/k8s-provisioner/api/v1alpha1
version: v1alpha1
version: "3"

View File

@@ -0,0 +1,135 @@
# k8s-provisioner
// TODO(user): Add simple overview of use/purpose
## Description
// TODO(user): An in-depth paragraph about your project and overview of use
## Getting Started
### Prerequisites
- go version v1.24.6+
- docker version 17.03+
- kubectl version v1.11.3+
- Access to a Kubernetes v1.11.3+ cluster
### To Deploy on the cluster
**Build and push your image to the location specified by `IMG`:**
```sh
make docker-build docker-push IMG=<some-registry>/k8s-provisioner:tag
```
**NOTE:** This image must be published to the (personal) registry you specified,
and the working environment must have access to pull it.
Make sure you have the proper permissions for the registry if the above commands don't work.
**Install the CRDs into the cluster:**
```sh
make install
```
**Deploy the Manager to the cluster with the image specified by `IMG`:**
```sh
make deploy IMG=<some-registry>/k8s-provisioner:tag
```
> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin
privileges or be logged in as admin.
**Create instances of your solution**
You can apply the samples (examples) from config/samples:
```sh
kubectl apply -k config/samples/
```
>**NOTE**: Ensure that the samples have default values to test it out.
### To Uninstall
**Delete the instances (CRs) from the cluster:**
```sh
kubectl delete -k config/samples/
```
**Delete the APIs (CRDs) from the cluster:**
```sh
make uninstall
```
**Undeploy the controller from the cluster:**
```sh
make undeploy
```
## Project Distribution
The following are options to release and provide this solution to users.
### By providing a bundle with all YAML files
1. Build the installer for the image built and published in the registry:
```sh
make build-installer IMG=<some-registry>/k8s-provisioner:tag
```
**NOTE:** The makefile target mentioned above generates an 'install.yaml'
file in the dist directory. This file contains all the resources built
with Kustomize, which are necessary to install this project without its
dependencies.
2. Using the installer
Users can just run 'kubectl apply -f <URL for YAML BUNDLE>' to install
the project, e.g.:
```sh
kubectl apply -f https://raw.githubusercontent.com/<org>/k8s-provisioner/<tag or branch>/dist/install.yaml
```
### By providing a Helm Chart
1. Build the chart using the optional helm plugin
```sh
kubebuilder edit --plugins=helm/v2-alpha
```
2. See that a chart was generated under 'dist/chart', and users
can obtain this solution from there.
**NOTE:** If you change the project, you need to update the Helm Chart
using the same command above to sync the latest changes. Furthermore,
if you create webhooks, you need to use the above command with
the '--force' flag and ensure that any custom configuration
previously added to 'dist/chart/values.yaml' or 'dist/chart/manager/manager.yaml'
is re-applied afterwards.
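Installing the generated chart could then look like the following sketch; the release name and namespace are placeholders, not values fixed by the project:
```sh
# Illustrative only: install the chart generated under dist/chart
helm install k8s-provisioner dist/chart --namespace k8s-provisioner-system --create-namespace
```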
## Contributing
// TODO(user): Add detailed information on how you would like others to contribute to this project
**NOTE:** Run `make help` for more information on all potential `make` targets
More information can be found via the [Kubebuilder Documentation](https://book.kubebuilder.io/introduction.html)
## License
Copyright 2026.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -0,0 +1,56 @@
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type ClusterSpec struct {
InfraRef string `json:"infraRef"`
KubernetesVersion string `json:"kubernetesVersion"`
ControlPlaneHA bool `json:"controlPlaneHA"`
WorkerPools []WorkerPoolRequest `json:"workerPools"`
}
type WorkerPoolRequest struct {
Name string `json:"name"`
Quantity int `json:"quantity"`
CpuCores int `json:"cpuCores"`
MemoryGB int `json:"memoryGb"`
DiskGB int `json:"diskGb"`
}
// [NEW] Struct to track the Harvester Identity
type HarvesterAccountStatus struct {
// The ServiceAccount created on Harvester (e.g. "prov-test-cluster-01")
ServiceAccountName string `json:"serviceAccountName,omitempty"`
// The Secret created in this namespace (e.g. "harvesterconfig-test-cluster-01")
SecretRef string `json:"secretRef,omitempty"`
// Expiry for future rotation logic
TokenExpiresAt *metav1.Time `json:"tokenExpiresAt,omitempty"`
}
type ClusterStatus struct {
Ready bool `json:"ready"`
// +optional
GeneratedAccount *HarvesterAccountStatus `json:"generatedAccount,omitempty"`
}
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
type Cluster struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ClusterSpec `json:"spec,omitempty"`
Status ClusterStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
type ClusterList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Cluster `json:"items"`
}
func init() {
SchemeBuilder.Register(&Cluster{}, &ClusterList{})
}
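// An illustrative Cluster manifest, assuming the CRD generated from these types
// (all names and values below are placeholders, not defaults):
//
//   apiVersion: k8sprovisioner.appstack.io/v1alpha1
//   kind: Cluster
//   metadata:
//     name: example-cluster
//   spec:
//     infraRef: example-infra
//     kubernetesVersion: v1.33.5+rke2r1
//     controlPlaneHA: true
//     workerPools:
//       - name: workers
//         quantity: 2
//         cpuCores: 4
//         memoryGb: 8
//         diskGb: 40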

View File

@@ -0,0 +1,36 @@
/*
Copyright 2026.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package v1alpha1 contains API Schema definitions for the k8sprovisioner v1alpha1 API group.
// +kubebuilder:object:generate=true
// +groupName=k8sprovisioner.appstack.io
package v1alpha1
import (
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/controller-runtime/pkg/scheme"
)
var (
// GroupVersion is group version used to register these objects.
GroupVersion = schema.GroupVersion{Group: "k8sprovisioner.appstack.io", Version: "v1alpha1"}
// SchemeBuilder is used to add go types to the GroupVersionKind scheme.
SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}
// AddToScheme adds the types in this group-version to the given scheme.
AddToScheme = SchemeBuilder.AddToScheme
)

View File

@@ -0,0 +1,50 @@
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type InfraSpec struct {
// 1. Rancher/Cloud Settings
// The "Master" credential name in cattle-global-data
// +required
CloudCredentialSecret string `json:"cloudCredentialSecret"`
RancherURL string `json:"rancherUrl"`
// This removes the need for auto-discovery.
HarvesterURL string `json:"harvesterUrl"`
// 2. Environment Defaults
VmNamespace string `json:"vmNamespace"`
ImageName string `json:"imageName"`
NetworkName string `json:"networkName"`
SshUser string `json:"sshUser"`
// 3. Governance Configs
// +kubebuilder:validation:Optional
RKE2ConfigYAML string `json:"rke2ConfigYaml"`
// +kubebuilder:validation:Optional
UserData string `json:"userData"`
}
type InfraStatus struct {
Ready bool `json:"ready"`
}
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
type Infra struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec InfraSpec `json:"spec,omitempty"`
Status InfraStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
type InfraList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Infra `json:"items"`
}
func init() {
SchemeBuilder.Register(&Infra{}, &InfraList{})
}
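// An illustrative Infra manifest, assuming the CRD generated from these types
// (values are placeholders taken from the example Helm values in this repository,
// except harvesterUrl, which is invented here purely for illustration):
//
//   apiVersion: k8sprovisioner.appstack.io/v1alpha1
//   kind: Infra
//   metadata:
//     name: example-infra
//   spec:
//     cloudCredentialSecret: cc-mrklm
//     rancherUrl: rancher-mgmt.product.lan
//     harvesterUrl: https://harvester.example.lan
//     vmNamespace: vanderlande
//     imageName: vanderlande/image-qhtpc
//     networkName: vanderlande/vm-lan
//     sshUser: rancher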

View File

@@ -0,0 +1,247 @@
//go:build !ignore_autogenerated
/*
Copyright 2026.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by controller-gen. DO NOT EDIT.
package v1alpha1
import (
runtime "k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Cluster) DeepCopyInto(out *Cluster) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Cluster.
func (in *Cluster) DeepCopy() *Cluster {
if in == nil {
return nil
}
out := new(Cluster)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Cluster) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ClusterList) DeepCopyInto(out *ClusterList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]Cluster, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ClusterList.
func (in *ClusterList) DeepCopy() *ClusterList {
if in == nil {
return nil
}
out := new(ClusterList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *ClusterList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ClusterSpec) DeepCopyInto(out *ClusterSpec) {
*out = *in
if in.WorkerPools != nil {
in, out := &in.WorkerPools, &out.WorkerPools
*out = make([]WorkerPoolRequest, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ClusterSpec.
func (in *ClusterSpec) DeepCopy() *ClusterSpec {
if in == nil {
return nil
}
out := new(ClusterSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ClusterStatus) DeepCopyInto(out *ClusterStatus) {
*out = *in
if in.GeneratedAccount != nil {
in, out := &in.GeneratedAccount, &out.GeneratedAccount
*out = new(HarvesterAccountStatus)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ClusterStatus.
func (in *ClusterStatus) DeepCopy() *ClusterStatus {
if in == nil {
return nil
}
out := new(ClusterStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HarvesterAccountStatus) DeepCopyInto(out *HarvesterAccountStatus) {
*out = *in
if in.TokenExpiresAt != nil {
in, out := &in.TokenExpiresAt, &out.TokenExpiresAt
*out = (*in).DeepCopy()
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HarvesterAccountStatus.
func (in *HarvesterAccountStatus) DeepCopy() *HarvesterAccountStatus {
if in == nil {
return nil
}
out := new(HarvesterAccountStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Infra) DeepCopyInto(out *Infra) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
out.Spec = in.Spec
out.Status = in.Status
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Infra.
func (in *Infra) DeepCopy() *Infra {
if in == nil {
return nil
}
out := new(Infra)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Infra) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *InfraList) DeepCopyInto(out *InfraList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]Infra, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new InfraList.
func (in *InfraList) DeepCopy() *InfraList {
if in == nil {
return nil
}
out := new(InfraList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *InfraList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *InfraSpec) DeepCopyInto(out *InfraSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new InfraSpec.
func (in *InfraSpec) DeepCopy() *InfraSpec {
if in == nil {
return nil
}
out := new(InfraSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *InfraStatus) DeepCopyInto(out *InfraStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new InfraStatus.
func (in *InfraStatus) DeepCopy() *InfraStatus {
if in == nil {
return nil
}
out := new(InfraStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkerPoolRequest) DeepCopyInto(out *WorkerPoolRequest) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkerPoolRequest.
func (in *WorkerPoolRequest) DeepCopy() *WorkerPoolRequest {
if in == nil {
return nil
}
out := new(WorkerPoolRequest)
in.DeepCopyInto(out)
return out
}

View File

@@ -0,0 +1,204 @@
/*
Copyright 2026.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"crypto/tls"
"flag"
"os"
// Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)
// to ensure that exec-entrypoint and run can make use of them.
_ "k8s.io/client-go/plugin/pkg/client/auth"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
"sigs.k8s.io/controller-runtime/pkg/metrics/filters"
metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
"sigs.k8s.io/controller-runtime/pkg/webhook"
k8sprovisionerv1alpha1 "vanderlande.com/appstack/k8s-provisioner/api/v1alpha1"
"vanderlande.com/appstack/k8s-provisioner/internal/controller"
// +kubebuilder:scaffold:imports
)
var (
scheme = runtime.NewScheme()
setupLog = ctrl.Log.WithName("setup")
)
func init() {
utilruntime.Must(clientgoscheme.AddToScheme(scheme))
utilruntime.Must(k8sprovisionerv1alpha1.AddToScheme(scheme))
// +kubebuilder:scaffold:scheme
}
// nolint:gocyclo
func main() {
var metricsAddr string
var metricsCertPath, metricsCertName, metricsCertKey string
var webhookCertPath, webhookCertName, webhookCertKey string
var enableLeaderElection bool
var probeAddr string
var secureMetrics bool
var enableHTTP2 bool
var tlsOpts []func(*tls.Config)
flag.StringVar(&metricsAddr, "metrics-bind-address", "0", "The address the metrics endpoint binds to. "+
"Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.")
flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
flag.BoolVar(&enableLeaderElection, "leader-elect", false,
"Enable leader election for controller manager. "+
"Enabling this will ensure there is only one active controller manager.")
flag.BoolVar(&secureMetrics, "metrics-secure", true,
"If set, the metrics endpoint is served securely via HTTPS. Use --metrics-secure=false to use HTTP instead.")
flag.StringVar(&webhookCertPath, "webhook-cert-path", "", "The directory that contains the webhook certificate.")
flag.StringVar(&webhookCertName, "webhook-cert-name", "tls.crt", "The name of the webhook certificate file.")
flag.StringVar(&webhookCertKey, "webhook-cert-key", "tls.key", "The name of the webhook key file.")
flag.StringVar(&metricsCertPath, "metrics-cert-path", "",
"The directory that contains the metrics server certificate.")
flag.StringVar(&metricsCertName, "metrics-cert-name", "tls.crt", "The name of the metrics server certificate file.")
flag.StringVar(&metricsCertKey, "metrics-cert-key", "tls.key", "The name of the metrics server key file.")
flag.BoolVar(&enableHTTP2, "enable-http2", false,
"If set, HTTP/2 will be enabled for the metrics and webhook servers")
opts := zap.Options{
Development: true,
}
opts.BindFlags(flag.CommandLine)
flag.Parse()
ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
// if the enable-http2 flag is false (the default), http/2 should be disabled
// due to its vulnerabilities. More specifically, disabling http/2 will
// prevent the servers from being vulnerable to the HTTP/2 Stream Cancellation and
// Rapid Reset CVEs. For more information see:
// - https://github.com/advisories/GHSA-qppj-fm5r-hxr3
// - https://github.com/advisories/GHSA-4374-p667-p6c8
disableHTTP2 := func(c *tls.Config) {
setupLog.Info("disabling http/2")
c.NextProtos = []string{"http/1.1"}
}
if !enableHTTP2 {
tlsOpts = append(tlsOpts, disableHTTP2)
}
// Initial webhook TLS options
webhookTLSOpts := tlsOpts
webhookServerOptions := webhook.Options{
TLSOpts: webhookTLSOpts,
}
if len(webhookCertPath) > 0 {
setupLog.Info("Initializing webhook certificate watcher using provided certificates",
"webhook-cert-path", webhookCertPath, "webhook-cert-name", webhookCertName, "webhook-cert-key", webhookCertKey)
webhookServerOptions.CertDir = webhookCertPath
webhookServerOptions.CertName = webhookCertName
webhookServerOptions.KeyName = webhookCertKey
}
webhookServer := webhook.NewServer(webhookServerOptions)
	// The metrics endpoint is enabled in 'config/default/kustomization.yaml'. The Metrics options below configure the server.
// More info:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.22.4/pkg/metrics/server
// - https://book.kubebuilder.io/reference/metrics.html
metricsServerOptions := metricsserver.Options{
BindAddress: metricsAddr,
SecureServing: secureMetrics,
TLSOpts: tlsOpts,
}
if secureMetrics {
// FilterProvider is used to protect the metrics endpoint with authn/authz.
// These configurations ensure that only authorized users and service accounts
		// can access the metrics endpoint. The RBAC rules are configured in 'config/rbac/kustomization.yaml'. More info:
// https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.22.4/pkg/metrics/filters#WithAuthenticationAndAuthorization
metricsServerOptions.FilterProvider = filters.WithAuthenticationAndAuthorization
}
// If the certificate is not specified, controller-runtime will automatically
// generate self-signed certificates for the metrics server. While convenient for development and testing,
// this setup is not recommended for production.
//
// TODO(user): If you enable certManager, uncomment the following lines:
// - [METRICS-WITH-CERTS] at config/default/kustomization.yaml to generate and use certificates
// managed by cert-manager for the metrics server.
// - [PROMETHEUS-WITH-CERTS] at config/prometheus/kustomization.yaml for TLS certification.
if len(metricsCertPath) > 0 {
setupLog.Info("Initializing metrics certificate watcher using provided certificates",
"metrics-cert-path", metricsCertPath, "metrics-cert-name", metricsCertName, "metrics-cert-key", metricsCertKey)
metricsServerOptions.CertDir = metricsCertPath
metricsServerOptions.CertName = metricsCertName
metricsServerOptions.KeyName = metricsCertKey
}
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
Metrics: metricsServerOptions,
WebhookServer: webhookServer,
HealthProbeBindAddress: probeAddr,
LeaderElection: enableLeaderElection,
LeaderElectionID: "8a5a6d0a.appstack.io",
		// LeaderElectionReleaseOnCancel defines whether the leader should step down voluntarily
		// when the Manager ends. This requires the binary to exit immediately when the
		// Manager is stopped; otherwise this setting is unsafe. Enabling it significantly
		// speeds up voluntary leader transitions, as the new leader doesn't have to wait
		// for the LeaseDuration to elapse first.
		//
		// In the default scaffold provided, the program ends immediately after
		// the manager stops, so it would be fine to enable this option. However,
		// if you perform (or intend to perform) any operations such as cleanups
		// after the manager stops, then enabling it may be unsafe.
// LeaderElectionReleaseOnCancel: true,
})
if err != nil {
setupLog.Error(err, "unable to start manager")
os.Exit(1)
}
if err := (&controller.ClusterReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "Cluster")
os.Exit(1)
}
// +kubebuilder:scaffold:builder
if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up health check")
os.Exit(1)
}
if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up ready check")
os.Exit(1)
}
setupLog.Info("starting manager")
if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
setupLog.Error(err, "problem running manager")
os.Exit(1)
}
}

View File

@@ -0,0 +1,98 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.19.0
name: clusters.k8sprovisioner.appstack.io
spec:
group: k8sprovisioner.appstack.io
names:
kind: Cluster
listKind: ClusterList
plural: clusters
singular: cluster
scope: Namespaced
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
properties:
controlPlaneHA:
type: boolean
infraRef:
type: string
kubernetesVersion:
type: string
workerPools:
items:
properties:
cpuCores:
type: integer
diskGb:
type: integer
memoryGb:
type: integer
name:
type: string
quantity:
type: integer
required:
- cpuCores
- diskGb
- memoryGb
- name
- quantity
type: object
type: array
required:
- controlPlaneHA
- infraRef
- kubernetesVersion
- workerPools
type: object
status:
properties:
generatedAccount:
description: '[NEW] Struct to track the Harvester Identity'
properties:
secretRef:
description: The Secret created in this namespace (e.g. "harvesterconfig-test-cluster-01")
type: string
serviceAccountName:
description: The ServiceAccount created on Harvester (e.g. "prov-test-cluster-01")
type: string
tokenExpiresAt:
description: Expiry for future rotation logic
format: date-time
type: string
type: object
ready:
type: boolean
required:
- ready
type: object
type: object
served: true
storage: true
subresources:
status: {}
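
An example Cluster manifest conforming to the schema above could look like the sketch below. The object name reuses "test-cluster-01" from the status field descriptions; the infraRef target, Kubernetes version, and worker-pool sizing are illustrative assumptions rather than values taken from this commit.

apiVersion: k8sprovisioner.appstack.io/v1alpha1
kind: Cluster
metadata:
  name: test-cluster-01              # name reused from the secretRef/serviceAccountName examples above
  namespace: default                 # the CRD is namespaced; any namespace works
spec:
  controlPlaneHA: true
  infraRef: harvester-rnd            # assumption: the name of an Infra object (see the next CRD)
  kubernetesVersion: v1.31.4+rke2r1  # assumption: an RKE2 version string
  workerPools:
    - name: default
      quantity: 3
      cpuCores: 4
      memoryGb: 8
      diskGb: 40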

View File

@@ -0,0 +1,84 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.19.0
name: infras.k8sprovisioner.appstack.io
spec:
group: k8sprovisioner.appstack.io
names:
kind: Infra
listKind: InfraList
plural: infras
singular: infra
scope: Namespaced
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
properties:
cloudCredentialSecret:
description: |-
1. Rancher/Cloud Settings
The "Master" credential name in cattle-global-data
type: string
harvesterUrl:
description: This removes the need for auto-discovery.
type: string
imageName:
type: string
networkName:
type: string
rancherUrl:
type: string
rke2ConfigYaml:
description: 3. Governance Configs
type: string
sshUser:
type: string
userData:
type: string
vmNamespace:
description: 2. Environment Defaults
type: string
required:
- cloudCredentialSecret
- harvesterUrl
- imageName
- networkName
- rancherUrl
- sshUser
- vmNamespace
type: object
status:
properties:
ready:
type: boolean
required:
- ready
type: object
type: object
served: true
storage: true
subresources:
status: {}
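
A matching Infra object could be populated as in the sketch below; every value is a placeholder chosen only to illustrate the required fields and their types, not a setting from this repository.

apiVersion: k8sprovisioner.appstack.io/v1alpha1
kind: Infra
metadata:
  name: harvester-rnd                # hypothetical name, referenced by the Cluster sketch earlier
spec:
  cloudCredentialSecret: cc-example  # assumption: the "Master" credential name in cattle-global-data
  rancherUrl: https://rancher.example.internal
  harvesterUrl: https://harvester.example.internal
  imageName: default/ubuntu-24.04    # assumption: a Harvester VM image reference
  networkName: default/vlan-100      # assumption: a Harvester VM network reference
  vmNamespace: default
  sshUser: ubuntu
  # rke2ConfigYaml and userData are optional per the schema and are omitted here.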

View File

@@ -0,0 +1,17 @@
# This kustomization.yaml is not intended to be run by itself,
# since it depends on a service name and namespace that are outside of this kustomize package.
# It should be run by config/default
resources:
- bases/k8sprovisioner.appstack.io_infras.yaml
- bases/k8sprovisioner.appstack.io_clusters.yaml
# +kubebuilder:scaffold:crdkustomizeresource
patches:
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix.
# patches here are for enabling the conversion webhook for each CRD
# +kubebuilder:scaffold:crdkustomizewebhookpatch
# [WEBHOOK] To enable webhook, uncomment the following section
# the following config is for teaching kustomize how to do kustomization for CRDs.
#configurations:
#- kustomizeconfig.yaml

View File

@@ -0,0 +1,19 @@
# This file is for teaching kustomize how to substitute name and namespace reference in CRD
nameReference:
- kind: Service
version: v1
fieldSpecs:
- kind: CustomResourceDefinition
version: v1
group: apiextensions.k8s.io
path: spec/conversion/webhook/clientConfig/service/name
namespace:
- kind: CustomResourceDefinition
version: v1
group: apiextensions.k8s.io
path: spec/conversion/webhook/clientConfig/service/namespace
create: false
varReference:
- path: metadata/annotations

View File

@@ -0,0 +1,30 @@
# This patch adds the args and volumes that allow the manager to use the metrics-server certs.
# Add the volumeMount for the metrics-server certs
- op: add
path: /spec/template/spec/containers/0/volumeMounts/-
value:
mountPath: /tmp/k8s-metrics-server/metrics-certs
name: metrics-certs
readOnly: true
# Add the --metrics-cert-path argument for the metrics server
- op: add
path: /spec/template/spec/containers/0/args/-
value: --metrics-cert-path=/tmp/k8s-metrics-server/metrics-certs
# Add the metrics-server certs volume configuration
- op: add
path: /spec/template/spec/volumes/-
value:
name: metrics-certs
secret:
secretName: metrics-server-cert
optional: false
items:
- key: ca.crt
path: ca.crt
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key

View File

@@ -0,0 +1,234 @@
# Adds namespace to all resources.
namespace: k8s-provisioner-system
# Value of this field is prepended to the
# names of all resources, e.g. a deployment named
# "wordpress" becomes "alices-wordpress".
# Note that it should also match the prefix (text before '-') of the namespace
# field above.
namePrefix: k8s-provisioner-
# Labels to add to all resources and selectors.
#labels:
#- includeSelectors: true
# pairs:
# someName: someValue
resources:
- ../crd
- ../rbac
- ../manager
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
#- ../webhook
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'. 'WEBHOOK' components are required.
#- ../certmanager
# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
#- ../prometheus
# [METRICS] Expose the controller manager metrics service.
- metrics_service.yaml
# [NETWORK POLICY] Protect the /metrics endpoint and Webhook Server with NetworkPolicy.
# Only Pod(s) running a namespace labeled with 'metrics: enabled' will be able to gather the metrics.
# Only CR(s) which requires webhooks and are applied on namespaces labeled with 'webhooks: enabled' will
# be able to communicate with the Webhook Server.
#- ../network-policy
# Uncomment the patches line if you enable Metrics
patches:
# [METRICS] The following patch will enable the metrics endpoint using HTTPS and the port :8443.
# More info: https://book.kubebuilder.io/reference/metrics
- path: manager_metrics_patch.yaml
target:
kind: Deployment
# Uncomment the patches line if you enable Metrics and CertManager
# [METRICS-WITH-CERTS] To enable metrics protected with certManager, uncomment the following line.
# This patch will protect the metrics with certManager self-signed certs.
#- path: cert_metrics_manager_patch.yaml
# target:
# kind: Deployment
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
#- path: manager_webhook_patch.yaml
# target:
# kind: Deployment
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix.
# Uncomment the following replacements to add the cert-manager CA injection annotations
#replacements:
# - source: # Uncomment the following block to enable certificates for metrics
# kind: Service
# version: v1
# name: controller-manager-metrics-service
# fieldPath: metadata.name
# targets:
# - select:
# kind: Certificate
# group: cert-manager.io
# version: v1
# name: metrics-certs
# fieldPaths:
# - spec.dnsNames.0
# - spec.dnsNames.1
# options:
# delimiter: '.'
# index: 0
# create: true
# - select: # Uncomment the following to set the Service name for TLS config in Prometheus ServiceMonitor
# kind: ServiceMonitor
# group: monitoring.coreos.com
# version: v1
# name: controller-manager-metrics-monitor
# fieldPaths:
# - spec.endpoints.0.tlsConfig.serverName
# options:
# delimiter: '.'
# index: 0
# create: true
# - source:
# kind: Service
# version: v1
# name: controller-manager-metrics-service
# fieldPath: metadata.namespace
# targets:
# - select:
# kind: Certificate
# group: cert-manager.io
# version: v1
# name: metrics-certs
# fieldPaths:
# - spec.dnsNames.0
# - spec.dnsNames.1
# options:
# delimiter: '.'
# index: 1
# create: true
# - select: # Uncomment the following to set the Service namespace for TLS in Prometheus ServiceMonitor
# kind: ServiceMonitor
# group: monitoring.coreos.com
# version: v1
# name: controller-manager-metrics-monitor
# fieldPaths:
# - spec.endpoints.0.tlsConfig.serverName
# options:
# delimiter: '.'
# index: 1
# create: true
# - source: # Uncomment the following block if you have any webhook
# kind: Service
# version: v1
# name: webhook-service
# fieldPath: .metadata.name # Name of the service
# targets:
# - select:
# kind: Certificate
# group: cert-manager.io
# version: v1
# name: serving-cert
# fieldPaths:
# - .spec.dnsNames.0
# - .spec.dnsNames.1
# options:
# delimiter: '.'
# index: 0
# create: true
# - source:
# kind: Service
# version: v1
# name: webhook-service
# fieldPath: .metadata.namespace # Namespace of the service
# targets:
# - select:
# kind: Certificate
# group: cert-manager.io
# version: v1
# name: serving-cert
# fieldPaths:
# - .spec.dnsNames.0
# - .spec.dnsNames.1
# options:
# delimiter: '.'
# index: 1
# create: true
# - source: # Uncomment the following block if you have a ValidatingWebhook (--programmatic-validation)
# kind: Certificate
# group: cert-manager.io
# version: v1
# name: serving-cert # This name should match the one in certificate.yaml
# fieldPath: .metadata.namespace # Namespace of the certificate CR
# targets:
# - select:
# kind: ValidatingWebhookConfiguration
# fieldPaths:
# - .metadata.annotations.[cert-manager.io/inject-ca-from]
# options:
# delimiter: '/'
# index: 0
# create: true
# - source:
# kind: Certificate
# group: cert-manager.io
# version: v1
# name: serving-cert
# fieldPath: .metadata.name
# targets:
# - select:
# kind: ValidatingWebhookConfiguration
# fieldPaths:
# - .metadata.annotations.[cert-manager.io/inject-ca-from]
# options:
# delimiter: '/'
# index: 1
# create: true
# - source: # Uncomment the following block if you have a DefaultingWebhook (--defaulting )
# kind: Certificate
# group: cert-manager.io
# version: v1
# name: serving-cert
# fieldPath: .metadata.namespace # Namespace of the certificate CR
# targets:
# - select:
# kind: MutatingWebhookConfiguration
# fieldPaths:
# - .metadata.annotations.[cert-manager.io/inject-ca-from]
# options:
# delimiter: '/'
# index: 0
# create: true
# - source:
# kind: Certificate
# group: cert-manager.io
# version: v1
# name: serving-cert
# fieldPath: .metadata.name
# targets:
# - select:
# kind: MutatingWebhookConfiguration
# fieldPaths:
# - .metadata.annotations.[cert-manager.io/inject-ca-from]
# options:
# delimiter: '/'
# index: 1
# create: true
# - source: # Uncomment the following block if you have a ConversionWebhook (--conversion)
# kind: Certificate
# group: cert-manager.io
# version: v1
# name: serving-cert
# fieldPath: .metadata.namespace # Namespace of the certificate CR
# targets: # Do not remove or uncomment the following scaffold marker; required to generate code for target CRD.
# +kubebuilder:scaffold:crdkustomizecainjectionns
# - source:
# kind: Certificate
# group: cert-manager.io
# version: v1
# name: serving-cert
# fieldPath: .metadata.name
# targets: # Do not remove or uncomment the following scaffold marker; required to generate code for target CRD.
# +kubebuilder:scaffold:crdkustomizecainjectionname

View File

@@ -0,0 +1,4 @@
# This patch adds the args to allow exposing the metrics endpoint using HTTPS
- op: add
path: /spec/template/spec/containers/0/args/0
value: --metrics-bind-address=:8443

View File

@@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
labels:
control-plane: controller-manager
app.kubernetes.io/name: k8s-provisioner
app.kubernetes.io/managed-by: kustomize
name: controller-manager-metrics-service
namespace: system
spec:
ports:
- name: https
port: 8443
protocol: TCP
targetPort: 8443
selector:
control-plane: controller-manager
app.kubernetes.io/name: k8s-provisioner

View File

@@ -0,0 +1,2 @@
resources:
- manager.yaml

View File

@@ -0,0 +1,99 @@
apiVersion: v1
kind: Namespace
metadata:
labels:
control-plane: controller-manager
app.kubernetes.io/name: k8s-provisioner
app.kubernetes.io/managed-by: kustomize
name: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
labels:
control-plane: controller-manager
app.kubernetes.io/name: k8s-provisioner
app.kubernetes.io/managed-by: kustomize
spec:
selector:
matchLabels:
control-plane: controller-manager
app.kubernetes.io/name: k8s-provisioner
replicas: 1
template:
metadata:
annotations:
kubectl.kubernetes.io/default-container: manager
labels:
control-plane: controller-manager
app.kubernetes.io/name: k8s-provisioner
spec:
# TODO(user): Uncomment the following code to configure the nodeAffinity expression
# according to the platforms which are supported by your solution.
# It is considered best practice to support multiple architectures. You can
# build your manager image using the makefile target docker-buildx.
# affinity:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: kubernetes.io/arch
# operator: In
# values:
# - amd64
# - arm64
# - ppc64le
# - s390x
# - key: kubernetes.io/os
# operator: In
# values:
# - linux
securityContext:
# Projects are configured by default to adhere to the "restricted" Pod Security Standards.
# This ensures that deployments meet the highest security requirements for Kubernetes.
# For more details, see: https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
containers:
- command:
- /manager
args:
- --leader-elect
- --health-probe-bind-address=:8081
image: controller:latest
name: manager
ports: []
securityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
capabilities:
drop:
- "ALL"
livenessProbe:
httpGet:
path: /healthz
port: 8081
initialDelaySeconds: 15
periodSeconds: 20
readinessProbe:
httpGet:
path: /readyz
port: 8081
initialDelaySeconds: 5
periodSeconds: 10
# TODO(user): Configure the resources accordingly based on the project requirements.
# More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
resources:
limits:
cpu: 500m
memory: 128Mi
requests:
cpu: 10m
memory: 64Mi
volumeMounts: []
volumes: []
serviceAccountName: controller-manager
terminationGracePeriodSeconds: 10

View File

@@ -0,0 +1,27 @@
# This NetworkPolicy allows ingress traffic
# from Pods running in namespaces labeled with 'metrics: enabled'. Only Pods in those
# namespaces are able to gather data from the metrics endpoint.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
app.kubernetes.io/name: k8s-provisioner
app.kubernetes.io/managed-by: kustomize
name: allow-metrics-traffic
namespace: system
spec:
podSelector:
matchLabels:
control-plane: controller-manager
app.kubernetes.io/name: k8s-provisioner
policyTypes:
- Ingress
ingress:
# This allows ingress traffic from any namespace with the label metrics: enabled
- from:
- namespaceSelector:
matchLabels:
metrics: enabled # Only from namespaces with this label
ports:
- port: 8443
protocol: TCP
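
As the comments above note, only Pods in namespaces labeled metrics: enabled can reach the metrics endpoint once this policy is applied. A minimal sketch of such a namespace (the name "monitoring" is an assumption) is:

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring                   # hypothetical namespace running the metrics scraper
  labels:
    metrics: enabled                 # label required by the NetworkPolicy above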

View File

@@ -0,0 +1,2 @@
resources:
- allow-metrics-traffic.yaml

View File

@@ -0,0 +1,11 @@
resources:
- monitor.yaml
# [PROMETHEUS-WITH-CERTS] The following patch configures the ServiceMonitor in ../prometheus
# to securely reference certificates created and managed by cert-manager.
# Additionally, ensure that you uncomment the [METRICS-WITH-CERTS] patch under config/default/kustomization.yaml
# to mount the "metrics-server-cert" secret in the Manager Deployment.
#patches:
# - path: monitor_tls_patch.yaml
# target:
# kind: ServiceMonitor

View File

@@ -0,0 +1,27 @@
# Prometheus Monitor Service (Metrics)
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
control-plane: controller-manager
app.kubernetes.io/name: k8s-provisioner
app.kubernetes.io/managed-by: kustomize
name: controller-manager-metrics-monitor
namespace: system
spec:
endpoints:
- path: /metrics
port: https # Ensure this is the name of the port that exposes HTTPS metrics
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
# TODO(user): The option insecureSkipVerify: true is not recommended for production since it disables
# certificate verification, exposing the system to potential man-in-the-middle attacks.
# For production environments, it is recommended to use cert-manager for automatic TLS certificate management.
        # To apply this configuration, enable cert-manager and use the patch located at config/prometheus/monitor_tls_patch.yaml,
# which securely references the certificate from the 'metrics-server-cert' secret.
insecureSkipVerify: true
selector:
matchLabels:
control-plane: controller-manager
app.kubernetes.io/name: k8s-provisioner

View File

@@ -0,0 +1,19 @@
# Patch for Prometheus ServiceMonitor to enable secure TLS configuration
# using certificates managed by cert-manager
- op: replace
path: /spec/endpoints/0/tlsConfig
value:
# SERVICE_NAME and SERVICE_NAMESPACE will be substituted by kustomize
serverName: SERVICE_NAME.SERVICE_NAMESPACE.svc
insecureSkipVerify: false
ca:
secret:
name: metrics-server-cert
key: ca.crt
cert:
secret:
name: metrics-server-cert
key: tls.crt
keySecret:
name: metrics-server-cert
key: tls.key

View File

@@ -0,0 +1,27 @@
# This rule is not used by the project k8s-provisioner itself.
# It is provided to allow the cluster admin to help manage permissions for users.
#
# Grants full permissions ('*') over k8sprovisioner.appstack.io.
# This role is intended for users authorized to modify roles and bindings within the cluster,
# enabling them to delegate specific permissions to other users or groups as needed.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/name: k8s-provisioner
app.kubernetes.io/managed-by: kustomize
name: cluster-admin-role
rules:
- apiGroups:
- k8sprovisioner.appstack.io
resources:
- clusters
verbs:
- '*'
- apiGroups:
- k8sprovisioner.appstack.io
resources:
- clusters/status
verbs:
- get

View File

@@ -0,0 +1,33 @@
# This rule is not used by the project k8s-provisioner itself.
# It is provided to allow the cluster admin to help manage permissions for users.
#
# Grants permissions to create, update, and delete resources within the k8sprovisioner.appstack.io API group.
# This role is intended for users who need to manage these resources
# but should not control RBAC or manage permissions for others.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/name: k8s-provisioner
app.kubernetes.io/managed-by: kustomize
name: cluster-editor-role
rules:
- apiGroups:
- k8sprovisioner.appstack.io
resources:
- clusters
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- k8sprovisioner.appstack.io
resources:
- clusters/status
verbs:
- get

View File

@@ -0,0 +1,29 @@
# This rule is not used by the project k8s-provisioner itself.
# It is provided to allow the cluster admin to help manage permissions for users.
#
# Grants read-only access to k8sprovisioner.appstack.io resources.
# This role is intended for users who need visibility into these resources
# without permissions to modify them. It is ideal for monitoring purposes and limited-access viewing.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/name: k8s-provisioner
app.kubernetes.io/managed-by: kustomize
name: cluster-viewer-role
rules:
- apiGroups:
- k8sprovisioner.appstack.io
resources:
- clusters
verbs:
- get
- list
- watch
- apiGroups:
- k8sprovisioner.appstack.io
resources:
- clusters/status
verbs:
- get

View File

@@ -0,0 +1,27 @@
# This rule is not used by the project k8s-provisioner itself.
# It is provided to allow the cluster admin to help manage permissions for users.
#
# Grants full permissions ('*') over k8sprovisioner.appstack.io.
# This role is intended for users authorized to modify roles and bindings within the cluster,
# enabling them to delegate specific permissions to other users or groups as needed.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/name: k8s-provisioner
app.kubernetes.io/managed-by: kustomize
name: infra-admin-role
rules:
- apiGroups:
- k8sprovisioner.appstack.io
resources:
- infras
verbs:
- '*'
- apiGroups:
- k8sprovisioner.appstack.io
resources:
- infras/status
verbs:
- get

View File

@@ -0,0 +1,33 @@
# This rule is not used by the project k8s-provisioner itself.
# It is provided to allow the cluster admin to help manage permissions for users.
#
# Grants permissions to create, update, and delete resources within the k8sprovisioner.appstack.io API group.
# This role is intended for users who need to manage these resources
# but should not control RBAC or manage permissions for others.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/name: k8s-provisioner
app.kubernetes.io/managed-by: kustomize
name: infra-editor-role
rules:
- apiGroups:
- k8sprovisioner.appstack.io
resources:
- infras
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- k8sprovisioner.appstack.io
resources:
- infras/status
verbs:
- get

View File

@@ -0,0 +1,29 @@
# This rule is not used by the project k8s-provisioner itself.
# It is provided to allow the cluster admin to help manage permissions for users.
#
# Grants read-only access to k8sprovisioner.appstack.io resources.
# This role is intended for users who need visibility into these resources
# without permissions to modify them. It is ideal for monitoring purposes and limited-access viewing.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/name: k8s-provisioner
app.kubernetes.io/managed-by: kustomize
name: infra-viewer-role
rules:
- apiGroups:
- k8sprovisioner.appstack.io
resources:
- infras
verbs:
- get
- list
- watch
- apiGroups:
- k8sprovisioner.appstack.io
resources:
- infras/status
verbs:
- get

View File

@@ -0,0 +1,31 @@
resources:
# All RBAC will be applied under this service account in
# the deployment namespace. You may comment out this resource
# if your manager will use a service account that exists at
# runtime. Be sure to update RoleBinding and ClusterRoleBinding
# subjects if changing service account names.
- service_account.yaml
- role.yaml
- role_binding.yaml
- leader_election_role.yaml
- leader_election_role_binding.yaml
# The following RBAC configurations are used to protect
# the metrics endpoint with authn/authz. These configurations
# ensure that only authorized users and service accounts
# can access the metrics endpoint. Comment the following
# permissions if you want to disable this protection.
# More info: https://book.kubebuilder.io/reference/metrics.html
- metrics_auth_role.yaml
- metrics_auth_role_binding.yaml
- metrics_reader_role.yaml
# For each CRD, "Admin", "Editor" and "Viewer" roles are scaffolded by
# default, aiding admins in cluster management. Those roles are
# not used by the k8s-provisioner itself. You can comment out the following lines
# if you do not want those helpers to be installed with your project.
- cluster_admin_role.yaml
- cluster_editor_role.yaml
- cluster_viewer_role.yaml
- infra_admin_role.yaml
- infra_editor_role.yaml
- infra_viewer_role.yaml

View File

@@ -0,0 +1,40 @@
# permissions to do leader election.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/name: k8s-provisioner
app.kubernetes.io/managed-by: kustomize
name: leader-election-role
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch

View File

@@ -0,0 +1,15 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/name: k8s-provisioner
app.kubernetes.io/managed-by: kustomize
name: leader-election-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: leader-election-role
subjects:
- kind: ServiceAccount
name: controller-manager
namespace: system

View File

@@ -0,0 +1,17 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: metrics-auth-role
rules:
- apiGroups:
- authentication.k8s.io
resources:
- tokenreviews
verbs:
- create
- apiGroups:
- authorization.k8s.io
resources:
- subjectaccessreviews
verbs:
- create

View File

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metrics-auth-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: metrics-auth-role
subjects:
- kind: ServiceAccount
name: controller-manager
namespace: system

View File

@@ -0,0 +1,9 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: metrics-reader
rules:
- nonResourceURLs:
- "/metrics"
verbs:
- get

View File

@@ -0,0 +1,46 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: manager-role
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- k8sprovisioner.appstack.io
resources:
- clusters
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- k8sprovisioner.appstack.io
resources:
- clusters/status
verbs:
- get
- patch
- update
- apiGroups:
- k8sprovisioner.appstack.io
resources:
- infras
verbs:
- get
- list
- watch

View File

@@ -0,0 +1,15 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/name: k8s-provisioner
app.kubernetes.io/managed-by: kustomize
name: manager-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: manager-role
subjects:
- kind: ServiceAccount
name: controller-manager
namespace: system

View File

@@ -0,0 +1,8 @@
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/name: k8s-provisioner
app.kubernetes.io/managed-by: kustomize
name: controller-manager
namespace: system

View File

@@ -0,0 +1,9 @@
apiVersion: k8sprovisioner.appstack.io/v1alpha1
kind: Cluster
metadata:
labels:
app.kubernetes.io/name: k8s-provisioner
app.kubernetes.io/managed-by: kustomize
name: cluster-sample
spec:
# TODO(user): Add fields here

View File

@@ -0,0 +1,9 @@
apiVersion: k8sprovisioner.appstack.io/v1alpha1
kind: Infra
metadata:
labels:
app.kubernetes.io/name: k8s-provisioner
app.kubernetes.io/managed-by: kustomize
name: infra-sample
spec:
# TODO(user): Add fields here

View File

@@ -0,0 +1,5 @@
## Append samples of your project ##
resources:
- k8sprovisioner_v1alpha1_infra.yaml
- k8sprovisioner_v1alpha1_cluster.yaml
# +kubebuilder:scaffold:manifestskustomizesamples

View File

@@ -0,0 +1,161 @@
module vanderlande.com/appstack/k8s-provisioner
go 1.25.0
require (
github.com/onsi/ginkgo/v2 v2.27.2
github.com/onsi/gomega v1.38.2
gopkg.in/yaml.v3 v3.0.1
helm.sh/helm/v3 v3.19.4
k8s.io/api v0.35.0
k8s.io/apimachinery v0.35.0
k8s.io/cli-runtime v0.35.0
k8s.io/client-go v0.35.0
sigs.k8s.io/controller-runtime v0.22.4
)
require (
cel.dev/expr v0.24.0 // indirect
dario.cat/mergo v1.0.1 // indirect
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
github.com/BurntSushi/toml v1.5.0 // indirect
github.com/MakeNowJust/heredoc v1.0.0 // indirect
github.com/Masterminds/goutils v1.1.1 // indirect
github.com/Masterminds/semver/v3 v3.4.0 // indirect
github.com/Masterminds/sprig/v3 v3.3.0 // indirect
github.com/Masterminds/squirrel v1.5.4 // indirect
github.com/antlr4-go/antlr/v4 v4.13.0 // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/chai2010/gettext-go v1.0.2 // indirect
github.com/containerd/containerd v1.7.29 // indirect
github.com/containerd/errdefs v0.3.0 // indirect
github.com/containerd/log v0.1.0 // indirect
github.com/containerd/platforms v0.2.1 // indirect
github.com/cyphar/filepath-securejoin v0.6.1 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/emicklei/go-restful/v3 v3.12.2 // indirect
github.com/evanphx/json-patch v5.9.11+incompatible // indirect
github.com/evanphx/json-patch/v5 v5.9.11 // indirect
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.9.0 // indirect
github.com/fxamacker/cbor/v2 v2.9.0 // indirect
github.com/go-errors/errors v1.4.2 // indirect
github.com/go-gorp/gorp/v3 v3.1.0 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-logr/zapr v1.3.0 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/go-task/slim-sprig/v3 v3.0.0 // indirect
github.com/gobwas/glob v0.2.3 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/google/btree v1.1.3 // indirect
github.com/google/cel-go v0.26.0 // indirect
github.com/google/gnostic-models v0.7.0 // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect
github.com/gosuri/uitable v0.0.4 // indirect
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/huandu/xstrings v1.5.0 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jmoiron/sqlx v1.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 // indirect
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 // indirect
github.com/lib/pq v1.10.9 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.17 // indirect
github.com/mattn/go-runewidth v0.0.9 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/moby/spdystream v0.5.0 // indirect
github.com/moby/term v0.5.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.1 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_golang v1.22.0 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.62.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/rubenv/sql-migrate v1.8.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/santhosh-tekuri/jsonschema/v6 v6.0.2 // indirect
github.com/shopspring/decimal v1.4.0 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
github.com/spf13/cast v1.7.0 // indirect
github.com/spf13/cobra v1.10.1 // indirect
github.com/spf13/pflag v1.0.10 // indirect
github.com/stoewer/go-strcase v1.3.0 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/xlab/treeprint v1.2.0 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0 // indirect
go.opentelemetry.io/otel v1.35.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.34.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.34.0 // indirect
go.opentelemetry.io/otel/metric v1.35.0 // indirect
go.opentelemetry.io/otel/sdk v1.34.0 // indirect
go.opentelemetry.io/otel/trace v1.35.0 // indirect
go.opentelemetry.io/proto/otlp v1.5.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/crypto v0.45.0 // indirect
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
golang.org/x/mod v0.29.0 // indirect
golang.org/x/net v0.47.0 // indirect
golang.org/x/oauth2 v0.30.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/term v0.37.0 // indirect
golang.org/x/text v0.31.0 // indirect
golang.org/x/time v0.12.0 // indirect
golang.org/x/tools v0.38.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250303144028-a0af3efb3deb // indirect
google.golang.org/grpc v1.72.1 // indirect
google.golang.org/protobuf v1.36.8 // indirect
gopkg.in/evanphx/json-patch.v4 v4.13.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
k8s.io/apiextensions-apiserver v0.34.2 // indirect
k8s.io/apiserver v0.34.2 // indirect
k8s.io/component-base v0.34.2 // indirect
k8s.io/klog/v2 v2.130.1 // indirect
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912 // indirect
k8s.io/kubectl v0.34.2 // indirect
k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 // indirect
oras.land/oras-go/v2 v2.6.0 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.2 // indirect
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect
sigs.k8s.io/kustomize/api v0.20.1 // indirect
sigs.k8s.io/kustomize/kyaml v0.20.1 // indirect
sigs.k8s.io/randfill v1.0.0 // indirect
sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect
sigs.k8s.io/yaml v1.6.0 // indirect
)

Some files were not shown because too many files have changed in this diff.