Compare commits

...

32 Commits

Author SHA1 Message Date
Benjamin A. Petersen
96c094c901
remove 2023-09-25 12:42:01 -04:00
Benjamin A. Petersen
6a847aaec7
backup prepare-for-integration-tests.sh hacks, not sure this is all correct 2023-09-13 12:57:43 -04:00
Benjamin A. Petersen
65a54d39a5
Update deploy_packages.sh 2023-09-11 16:31:03 -04:00
Benjamin A. Petersen
d99a43bd87
revisions to make work with prepare-supervisor-on-kind.sh script 2023-09-06 14:08:04 -04:00
Benjamin A. Petersen
3a71252167
split up build.sh and deploy.sh - revised 2023-09-06 14:08:04 -04:00
Benjamin A. Petersen
ead2b3ce08
split up build.sh and deploy.sh 2023-09-06 14:08:04 -04:00
Benjamin A. Petersen
1ec75ba8ca
remove namespace from PackageRepository,Package,PackageMetadata resources 2023-09-06 14:08:04 -04:00
Benjamin A. Petersen
e297f05603
remove old unused scripts 2023-09-06 14:08:03 -04:00
Benjamin A. Petersen
233e382579
simple cleaup script 2023-09-06 14:08:03 -04:00
Benjamin A. Petersen
c82b14179a
supervisor values & schema update 2023-09-06 14:08:03 -04:00
Benjamin A. Petersen
1688210630
metadata.yml applied to package repository packages 2023-09-06 14:08:03 -04:00
Benjamin A. Petersen
296f09ed85
metadata.yml files support namespace templating 2023-09-06 14:08:03 -04:00
Benjamin A. Petersen
e873f11284
Fix: default image_repo for supervisor,concierge, packages installed in
global kapp-controller-packaging-global
2023-09-06 14:08:03 -04:00
Benjamin A. Petersen
3097a263cd
build.sh refactor: variables for dir names, ensure consistency 2023-09-06 14:08:03 -04:00
Benjamin A. Petersen
8bea59139d
fix ytt templating error 2023-09-06 14:08:02 -04:00
Benjamin A. Petersen
02ac4d26af
fix RBAC in build.sh script 2023-09-06 14:08:02 -04:00
Benjamin A. Petersen
af04f331f3
add build.sh which outlines all steps to build & generate resources 2023-09-06 14:08:02 -04:00
Benjamin A. Petersen
9b8addef00
add package-repository directory and resources 2023-09-06 14:08:02 -04:00
Benjamin A. Petersen
c8ec432eef
add package_image to values.yaml files (sibling to image) 2023-09-06 14:08:02 -04:00
Benjamin A. Petersen
109cf0cd28
commit generated schema-openapi.yml 2023-09-06 14:08:02 -04:00
Benjamin A. Petersen
562f11d034
use package-template.yml to generate packages 2023-09-06 14:08:02 -04:00
Benjamin A. Petersen
1938b2df73
metadata.yml fix 2023-09-06 14:08:02 -04:00
Benjamin A. Petersen
c1a9fb60d0
build.yaml kbld config: pass in image_repo 2023-09-06 14:08:01 -04:00
Benjamin A. Petersen
a801a93411
Update supervisor values.yaml to a schema doc. Make @nullable work
- see build.sh for documented script to run to generate:
   ytt --file concierge/config/values.yaml --data-values-schema-inspect --output openapi-v3 > concierge/schema-openapi.yml
2023-09-06 14:08:01 -04:00
Benjamin A. Petersen
b861fdc148
WIP: update the schema gen for supervisor
the ./build.sh for the ytt invocation for this.
there is more work to do here, this gets us started.
many of our multiline descriptions need to be assessed.
do we want both? the description and also the schema text?
2023-09-06 14:08:01 -04:00
Benjamin A. Petersen
0c60e31d86
Update supervisor values.yaml to a schema doc. Make @nullable work
- see build.sh for documented script to run to generate:
   ytt --file supervisor/config/values.yaml --data-values-schema-inspect --output openapi-v3 > supervisor/schema-openapi.yml
2023-09-06 14:08:01 -04:00
Benjamin A. Petersen
1d1b98f9a1
add some hacky scripts for generating things 2023-09-06 14:08:01 -04:00
Benjamin A. Petersen
9657719f9f
add concierge package, schema, metadata files 2023-09-06 14:08:01 -04:00
Benjamin A. Petersen
5c75734112
add supervisor package, schema, metadata files 2023-09-06 14:08:01 -04:00
Benjamin A. Petersen
aceef06873
WIP: add .imgpkg directories 2023-09-06 14:08:00 -04:00
Benjamin A. Petersen
550673b8dd
WIP: hack in a deploy_carvel/concierge directory, but strip out the deployment for simplicity 2023-09-06 14:08:00 -04:00
Benjamin A. Petersen
715a93d64a
WIP: hack in a deploy_carvel/supervisor directory, but strip out the deployment for simplicity 2023-09-06 14:07:53 -04:00
48 changed files with 5547 additions and 0 deletions

190
deploy_carvel/build.sh Executable file
View File

@@ -0,0 +1,190 @@
#!/usr/bin/env bash
set -e # immediately exit
set -u # error if variables undefined
set -o pipefail # prevent masking errors in a pipeline
# set -x # print all executed commands to terminal
#
# Helper functions
#
function log_note() {
GREEN='\033[0;32m'
NC='\033[0m'
if [[ ${COLORTERM:-unknown} =~ ^(truecolor|24bit)$ ]]; then
echo -e "${GREEN}$*${NC}"
else
echo "$*"
fi
}
function log_error() {
RED='\033[0;31m'
NC='\033[0m'
if [[ ${COLORTERM:-unknown} =~ ^(truecolor|24bit)$ ]]; then
echo -e "🙁${RED} Error: $* ${NC}"
else
echo ":( Error: $*"
fi
}
function check_dependency() {
if ! command -v "$1" >/dev/null; then
log_error "Missing dependency..."
log_error "$2"
exit 1
fi
}
# NOTES:
# - on images
# prepare-for-integration-tests.sh simply creates images with a "docker build" call.
# then it loads them into the kind cluster via kind load docker-image ....
# nothing fancy here.
# so, in this script, we ought to be able to do the same: build images and imgpkg bundles
# and push or load them into a kind cluster.
# ./prepare-for-integration-tests.sh will pass each of these arguments;
# they could optionally be user-defined.
# app is currently ignored; tag is required.
app="${1:-undefined}"
tag="${2:-$(uuidgen)}" # always a new tag to force K8s to reload the image on redeploy
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
# TODO: final resting place for these images (PackageRepository, Package) will need to be in the same place as our regular images:
# https://github.com/vmware-tanzu/pinniped/releases/tag/v0.25.0
#
# from prepare-for-integration-tests.sh
# registry="pinniped.local"
# repo="test/build"
registry="docker.io/benjaminapetersen" # docker.io default
repo="pinniped-package-repo"
registry_repo="$registry/$repo"
tag=$(uuidgen) # always a new tag to force K8s to reload the image on redeploy
# can't use this straight from prepare-for-integration-tests.sh as we build several images in this file
# registry_repo_tag="${registry_repo}:${tag}"
PACKAGE_REPO_HOST="${registry_repo}"
# TODO: this variable is currently a little quirky as our values.yaml files do NOT pin pinniped to a specific
# hard-coded version. Rather, Pinniped's values.yaml allows for a passed-in version.
PINNIPED_PACKAGE_VERSION="0.25.0"
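# With the defaults above, the bundles pushed below resolve to (illustrative):
#   docker.io/benjaminapetersen/pinniped-package-repo-package-supervisor:0.25.0  (supervisor Package bundle)
#   docker.io/benjaminapetersen/pinniped-package-repo-package-concierge:0.25.0   (concierge Package bundle)
#   docker.io/benjaminapetersen/pinniped-package-repo:0.25.0                     (PackageRepository bundle)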
# TODO: should we copy these directories:
# - ../deploy/supervisor/config/*
# - ../deploy/concierge/config/*
# rather than duplicating the files?
# in this exercise, I have transformed the values.yaml into a "values schema", so this would have to be
# migrated up. There are some incompatibilities here, in that a values schema infers the type of each value
# from its default; currently many of the values have no actual default.
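# For example (illustrative), under #@data/values-schema a line like `log_level: info` declares a
# string-typed value defaulting to "info"; a value with no default (e.g. a bare `image_digest:`)
# needs an annotation such as #@schema/nullable before the schema will accept it.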
log_note "Cleaning ./package-repository to generate new..."
PACKAGE_REPOSITORY_DIR="package-repository"
rm -rf "./${PACKAGE_REPOSITORY_DIR}"
mkdir -p "./${PACKAGE_REPOSITORY_DIR}/.imgpkg"
mkdir -p "./${PACKAGE_REPOSITORY_DIR}/packages/concierge.pinniped.dev"
mkdir -p "./${PACKAGE_REPOSITORY_DIR}/packages/supervisor.pinniped.dev"
log_note "Generating PackageRepository and Packages for Pinniped version ${PINNIPED_PACKAGE_VERSION}"
declare -a arr=("supervisor" "concierge")
for resource_name in "${arr[@]}"
do
log_note "Generating for ${resource_name}..."
RESOURCE_DIR="${SCRIPT_DIR}/${resource_name}"
CONFIG_DIR="${RESOURCE_DIR}/config"
VALUES_FILE="${CONFIG_DIR}/values.yaml"
IMGPKG_IMAGES_FILE="${RESOURCE_DIR}/.imgpkg/images.yml"
SCHEMA_OPENAPI_FILE="${RESOURCE_DIR}/schema-openapi.yml"
log_note "Generating ${resource_name} imgpkg lock file... ${IMGPKG_IMAGES_FILE}"
kbld --file "${CONFIG_DIR}" --imgpkg-lock-output "${IMGPKG_IMAGES_FILE}"
# generate a schema in each package directory
log_note "Generating ${resource_name} OpenAPIv3 Schema... ${SCHEMA_OPENAPI_FILE}"
ytt \
--file "${VALUES_FILE}" \
--data-values-schema-inspect --output openapi-v3 > "${SCHEMA_OPENAPI_FILE}"
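# The inspect output is (roughly) an OpenAPI v3 document whose components.schemas section lists each
# data value with the type and default inferred from values.yaml, e.g. app_name -> {type: string,
# default: pinniped-concierge}. package-template.yml below presumably embeds this document in the
# generated Package's spec.valuesSchema.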
# TODO: this is not the pattern we want.
# final resting place should be with our primary Pinniped image at:
# - projects.registry.vmware.com/pinniped/pinniped-server:v0.25.0 VMware Harbor
# - docker.io/getpinniped/pinniped-server:v0.25.0 DockerHub
package_push_repo_location="${PACKAGE_REPO_HOST}-package-${resource_name}:${PINNIPED_PACKAGE_VERSION}"
package_repo_pull_location=""
log_note "Pushing ${resource_name} package image: ${package_push_repo_location} ..."
# need to push and then pull in order to kind load it later:
# imgpkg does not support building a bundle locally and keeping the image in the local docker daemon
imgpkg push --bundle "${package_push_repo_location}" --file "${RESOURCE_DIR}"
docker pull "${package_push_repo_location}"
# TODO:
# - match prepare-for-integration-tests.sh
# - do we want this in this script, or do we want to split things apart into several scripts?
log_note "Loading Package image ${resource_name} into kind from repo ${package_push_repo_location}..."
kind load docker-image "${package_push_repo_location}" --name "${resource_name}"
resource_package_version="${resource_name}.pinniped.dev"
PACKAGE_REPO_PACKAGE_FILE="./${PACKAGE_REPOSITORY_DIR}/packages/${resource_package_version}/${PINNIPED_PACKAGE_VERSION}.yml"
log_note "Generating ${resource_name} Package yaml..."
log_note "Generating ${PACKAGE_REPO_PACKAGE_FILE}"
ytt \
--file "${resource_name}/package-template.yml" \
--data-value-file openapi="${SCHEMA_OPENAPI_FILE}" \
--data-value package_version="${PINNIPED_PACKAGE_VERSION}" \
--data-value package_image_repo="${package_push_repo_location}" > "${PACKAGE_REPO_PACKAGE_FILE}"
PACKAGE_METADATA_FILE="./${PACKAGE_REPOSITORY_DIR}/packages/${resource_package_version}/metadata.yml"
log_note "Generating ${PACKAGE_METADATA_FILE}"
ytt \
--file "${resource_name}/metadata.yml" \
--data-value-file openapi="${SCHEMA_OPENAPI_FILE}" \
--data-value package_version="${PINNIPED_PACKAGE_VERSION}" \
--data-value package_image_repo="${package_push_repo_location}" > "${PACKAGE_METADATA_FILE}"
done
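# After the loop, the local package repository bundle looks roughly like this (illustrative):
#   package-repository/
#     .imgpkg/images.yml   <- regenerated just below by kbld
#     packages/supervisor.pinniped.dev/{0.25.0.yml, metadata.yml}
#     packages/concierge.pinniped.dev/{0.25.0.yml, metadata.yml}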
log_note "Generating Pinniped PackageRepository..."
log_note "Generating ${RESOURCE_DIR}/${PACKAGE_REPOSITORY_DIR}/.imgpkg/images.yml"
kbld --file "${RESOURCE_DIR}/${PACKAGE_REPOSITORY_DIR}/packages/" --imgpkg-lock-output "${PACKAGE_REPOSITORY_DIR}/.imgpkg/images.yml"
package_repository_push_repo_location="${PACKAGE_REPO_HOST}:${PINNIPED_PACKAGE_VERSION}"
log_note "Pushing Pinniped package repository image: ${PACKAGE_REPO_HOST}:${PINNIPED_PACKAGE_VERSION}..."
imgpkg push --bundle "${package_repository_push_repo_location}" --file "./${PACKAGE_REPOSITORY_DIR}"
docker pull "${package_repository_push_repo_location}"
# handy for a quick debug
# log_note "Validating imgpkg package bundle contents..."
# imgpkg pull --bundle "${PACKAGE_REPO_HOST}:${PINNIPED_PACKAGE_VERSION}" --output "/tmp/${PACKAGE_REPO_HOST}:${PINNIPED_PACKAGE_VERSION}"
# ls -la "/tmp/${PACKAGE_REPO_HOST}:${PINNIPED_PACKAGE_VERSION}"
log_note "Generating PackageRepository yaml file..."
PINNIPED_PACKGE_REPOSITORY_NAME="pinniped-package-repository"
PINNIPED_PACKGE_REPOSITORY_FILE="packagerepository.${PINNIPED_PACKAGE_VERSION}.yml"
# TODO:
# - match prepare-for-integration-tests.sh
# - do we want this in this script, or do we want to split things apart into several scripts?
log_note "Loading PackageRepository PINNIPED_PACKGE_REPOSITORY_NAME into kind from repo ${package_repository_push_repo_location}..."
kind load docker-image "${package_repository_push_repo_location}" --name "${PINNIPED_PACKGE_REPOSITORY_NAME}"
echo -n "" > "${PINNIPED_PACKGE_REPOSITORY_FILE}"
cat <<EOT >> "${PINNIPED_PACKGE_REPOSITORY_FILE}"
---
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageRepository
metadata:
name: "${PINNIPED_PACKGE_REPOSITORY_NAME}"
spec:
fetch:
imgpkgBundle:
image: "${PACKAGE_REPO_HOST}:${PINNIPED_PACKAGE_VERSION}"
EOT
log_note "To deploy the PackageRepository, run 'kapp deploy --app pinniped-repo --file ${PINNIPED_PACKGE_REPOSITORY_FILE}'"
log_note "Or use the sibling deploy.sh script"

View File

@@ -0,0 +1,3 @@
---
apiVersion: imgpkg.carvel.dev/v1alpha1
kind: ImagesLock
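# NOTE: this lock file appears to be a placeholder; build.sh regenerates it with
# `kbld ... --imgpkg-lock-output` so that the bundle pins exact image digests.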

View File

@@ -0,0 +1,23 @@
#@ load("@ytt:data", "data")
---
apiVersion: kbld.k14s.io/v1alpha1
kind: Config
minimumRequiredVersion: 0.31.0 #! minimum version of kbld. We probably don't need to specify this.
overrides:
#! TODO: in the pinniped yamls, this is provided by values.yaml, not declared in the deployment.
#! we should assess if we want to leave it there or move it to this form of configuration.
- image: projects.registry.vmware.com/pinniped/pinniped-server
newImage: #@ data.values.image_repo
#! I don't think we need any of these (until we need them 😊), i.e., don't use them prematurely.
#! searchRules: ... # for searching input files to find container images
#! overrides: ... # overrides to apply to container images before resolving or building
#! sources: ... # source/content of a container image
#! destinations: ... # where to push built images
#!
#!
#! source: TODO: we may need this at least to specify that we want kbld to build
#! a set of container images that are found in our package config yaml files.
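#! Illustrative example of the override above: with data.values.image_repo set to
#! "docker.io/getpinniped/pinniped-server", kbld rewrites any reference to
#! projects.registry.vmware.com/pinniped/pinniped-server in the rendered config to that repo,
#! and the resolved digest lands in the imgpkg lock file that build.sh generates.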

View File

@@ -0,0 +1,3 @@
# Pinniped Concierge Deployment
See [the how-to guide for details](https://pinniped.dev/docs/howto/install-concierge/).

View File

@@ -0,0 +1,176 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
creationTimestamp: null
name: jwtauthenticators.authentication.concierge.pinniped.dev
spec:
group: authentication.concierge.pinniped.dev
names:
categories:
- pinniped
- pinniped-authenticator
- pinniped-authenticators
kind: JWTAuthenticator
listKind: JWTAuthenticatorList
plural: jwtauthenticators
singular: jwtauthenticator
scope: Cluster
versions:
- additionalPrinterColumns:
- jsonPath: .spec.issuer
name: Issuer
type: string
- jsonPath: .spec.audience
name: Audience
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: "JWTAuthenticator describes the configuration of a JWT authenticator.
\n Upon receiving a signed JWT, a JWTAuthenticator will performs some validation
on it (e.g., valid signature, existence of claims, etc.) and extract the
username and groups from the token."
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: Spec for configuring the authenticator.
properties:
audience:
description: Audience is the required value of the "aud" JWT claim.
minLength: 1
type: string
claims:
description: Claims allows customization of the claims that will be
mapped to user identity for Kubernetes access.
properties:
groups:
description: Groups is the name of the claim which should be read
to extract the user's group membership from the JWT token. When
not specified, it will default to "groups".
type: string
username:
description: Username is the name of the claim which should be
read to extract the username from the JWT token. When not specified,
it will default to "username".
type: string
type: object
issuer:
description: Issuer is the OIDC issuer URL that will be used to discover
public signing keys. Issuer is also used to validate the "iss" JWT
claim.
minLength: 1
pattern: ^https://
type: string
tls:
description: TLS configuration for communicating with the OIDC provider.
properties:
certificateAuthorityData:
description: X.509 Certificate Authority (base64-encoded PEM bundle).
If omitted, a default set of system roots will be trusted.
type: string
type: object
required:
- audience
- issuer
type: object
status:
description: Status of the authenticator.
properties:
conditions:
description: Represents the observations of the authenticator's current
state.
items:
description: Condition status of a resource (mirrored from the metav1.Condition
type added in Kubernetes 1.19). In a future API version we can
switch to using the upstream type. See https://github.com/kubernetes/apimachinery/blob/v0.19.0/pkg/apis/meta/v1/types.go#L1353-L1413.
properties:
lastTransitionTime:
description: lastTransitionTime is the last time the condition
transitioned from one status to another. This should be when
the underlying condition changed. If that is not known, then
using the time when the API field changed is acceptable.
format: date-time
type: string
message:
description: message is a human readable message indicating
details about the transition. This may be an empty string.
maxLength: 32768
type: string
observedGeneration:
description: observedGeneration represents the .metadata.generation
that the condition was set based upon. For instance, if .metadata.generation
is currently 12, but the .status.conditions[x].observedGeneration
is 9, the condition is out of date with respect to the current
state of the instance.
format: int64
minimum: 0
type: integer
reason:
description: reason contains a programmatic identifier indicating
the reason for the condition's last transition. Producers
of specific condition types may define expected values and
meanings for this field, and whether the values are considered
a guaranteed API. The value should be a CamelCase string.
This field may not be empty.
maxLength: 1024
minLength: 1
pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
type: string
status:
description: status of the condition, one of True, False, Unknown.
enum:
- "True"
- "False"
- Unknown
type: string
type:
description: type of condition in CamelCase or in foo.example.com/CamelCase.
--- Many .condition.type values are consistent across resources
like Available, but because arbitrary conditions can be useful
(see .node.status.conditions), the ability to deconflict is
important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)
maxLength: 316
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
type: string
required:
- lastTransitionTime
- message
- reason
- status
- type
type: object
type: array
x-kubernetes-list-map-keys:
- type
x-kubernetes-list-type: map
type: object
required:
- spec
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@@ -0,0 +1,149 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
creationTimestamp: null
name: webhookauthenticators.authentication.concierge.pinniped.dev
spec:
group: authentication.concierge.pinniped.dev
names:
categories:
- pinniped
- pinniped-authenticator
- pinniped-authenticators
kind: WebhookAuthenticator
listKind: WebhookAuthenticatorList
plural: webhookauthenticators
singular: webhookauthenticator
scope: Cluster
versions:
- additionalPrinterColumns:
- jsonPath: .spec.endpoint
name: Endpoint
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: WebhookAuthenticator describes the configuration of a webhook
authenticator.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: Spec for configuring the authenticator.
properties:
endpoint:
description: Webhook server endpoint URL.
minLength: 1
pattern: ^https://
type: string
tls:
description: TLS configuration.
properties:
certificateAuthorityData:
description: X.509 Certificate Authority (base64-encoded PEM bundle).
If omitted, a default set of system roots will be trusted.
type: string
type: object
required:
- endpoint
type: object
status:
description: Status of the authenticator.
properties:
conditions:
description: Represents the observations of the authenticator's current
state.
items:
description: Condition status of a resource (mirrored from the metav1.Condition
type added in Kubernetes 1.19). In a future API version we can
switch to using the upstream type. See https://github.com/kubernetes/apimachinery/blob/v0.19.0/pkg/apis/meta/v1/types.go#L1353-L1413.
properties:
lastTransitionTime:
description: lastTransitionTime is the last time the condition
transitioned from one status to another. This should be when
the underlying condition changed. If that is not known, then
using the time when the API field changed is acceptable.
format: date-time
type: string
message:
description: message is a human readable message indicating
details about the transition. This may be an empty string.
maxLength: 32768
type: string
observedGeneration:
description: observedGeneration represents the .metadata.generation
that the condition was set based upon. For instance, if .metadata.generation
is currently 12, but the .status.conditions[x].observedGeneration
is 9, the condition is out of date with respect to the current
state of the instance.
format: int64
minimum: 0
type: integer
reason:
description: reason contains a programmatic identifier indicating
the reason for the condition's last transition. Producers
of specific condition types may define expected values and
meanings for this field, and whether the values are considered
a guaranteed API. The value should be a CamelCase string.
This field may not be empty.
maxLength: 1024
minLength: 1
pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
type: string
status:
description: status of the condition, one of True, False, Unknown.
enum:
- "True"
- "False"
- Unknown
type: string
type:
description: type of condition in CamelCase or in foo.example.com/CamelCase.
--- Many .condition.type values are consistent across resources
like Available, but because arbitrary conditions can be useful
(see .node.status.conditions), the ability to deconflict is
important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)
maxLength: 316
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
type: string
required:
- lastTransitionTime
- message
- reason
- status
- type
type: object
type: array
x-kubernetes-list-map-keys:
- type
x-kubernetes-list-type: map
type: object
required:
- spec
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@@ -0,0 +1,264 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
creationTimestamp: null
name: credentialissuers.config.concierge.pinniped.dev
spec:
group: config.concierge.pinniped.dev
names:
categories:
- pinniped
kind: CredentialIssuer
listKind: CredentialIssuerList
plural: credentialissuers
singular: credentialissuer
scope: Cluster
versions:
- additionalPrinterColumns:
- jsonPath: .spec.impersonationProxy.mode
name: ProxyMode
type: string
- jsonPath: .status.strategies[?(@.status == "Success")].type
name: DefaultStrategy
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: CredentialIssuer describes the configuration and status of the
Pinniped Concierge credential issuer.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: Spec describes the intended configuration of the Concierge.
properties:
impersonationProxy:
description: ImpersonationProxy describes the intended configuration
of the Concierge impersonation proxy.
properties:
externalEndpoint:
description: "ExternalEndpoint describes the HTTPS endpoint where
the proxy will be exposed. If not set, the proxy will be served
using the external name of the LoadBalancer service or the cluster
service DNS name. \n This field must be non-empty when spec.impersonationProxy.service.type
is \"None\"."
type: string
mode:
description: 'Mode configures whether the impersonation proxy
should be started: - "disabled" explicitly disables the impersonation
proxy. This is the default. - "enabled" explicitly enables the
impersonation proxy. - "auto" enables or disables the impersonation
proxy based upon the cluster in which it is running.'
enum:
- auto
- enabled
- disabled
type: string
service:
default:
type: LoadBalancer
description: Service describes the configuration of the Service
provisioned to expose the impersonation proxy to clients.
properties:
annotations:
additionalProperties:
type: string
description: Annotations specifies zero or more key/value
pairs to set as annotations on the provisioned Service.
type: object
loadBalancerIP:
description: LoadBalancerIP specifies the IP address to set
in the spec.loadBalancerIP field of the provisioned Service.
This is not supported on all cloud providers.
maxLength: 255
minLength: 1
type: string
type:
default: LoadBalancer
description: "Type specifies the type of Service to provision
for the impersonation proxy. \n If the type is \"None\",
then the \"spec.impersonationProxy.externalEndpoint\" field
must be set to a non-empty value so that the Concierge can
properly advertise the endpoint in the CredentialIssuer's
status."
enum:
- LoadBalancer
- ClusterIP
- None
type: string
type: object
tls:
description: "TLS contains information about how the Concierge
impersonation proxy should serve TLS. \n If this field is empty,
the impersonation proxy will generate its own TLS certificate."
properties:
certificateAuthorityData:
description: X.509 Certificate Authority (base64-encoded PEM
bundle). Used to advertise the CA bundle for the impersonation
proxy endpoint.
type: string
secretName:
description: SecretName is the name of a Secret in the same
namespace, of type `kubernetes.io/tls`, which contains the
TLS serving certificate for the Concierge impersonation
proxy endpoint.
minLength: 1
type: string
type: object
required:
- mode
- service
type: object
required:
- impersonationProxy
type: object
status:
description: CredentialIssuerStatus describes the status of the Concierge.
properties:
kubeConfigInfo:
description: Information needed to form a valid Pinniped-based kubeconfig
using this credential issuer. This field is deprecated and will
be removed in a future version.
properties:
certificateAuthorityData:
description: The K8s API server CA bundle.
minLength: 1
type: string
server:
description: The K8s API server URL.
minLength: 1
pattern: ^https://|^http://
type: string
required:
- certificateAuthorityData
- server
type: object
strategies:
description: List of integration strategies that were attempted by
Pinniped.
items:
description: CredentialIssuerStrategy describes the status of an
integration strategy that was attempted by Pinniped.
properties:
frontend:
description: Frontend describes how clients can connect using
this strategy.
properties:
impersonationProxyInfo:
description: ImpersonationProxyInfo describes the parameters
for the impersonation proxy on this Concierge. This field
is only set when Type is "ImpersonationProxy".
properties:
certificateAuthorityData:
description: CertificateAuthorityData is the base64-encoded
PEM CA bundle of the impersonation proxy.
minLength: 1
type: string
endpoint:
description: Endpoint is the HTTPS endpoint of the impersonation
proxy.
minLength: 1
pattern: ^https://
type: string
required:
- certificateAuthorityData
- endpoint
type: object
tokenCredentialRequestInfo:
description: TokenCredentialRequestAPIInfo describes the
parameters for the TokenCredentialRequest API on this
Concierge. This field is only set when Type is "TokenCredentialRequestAPI".
properties:
certificateAuthorityData:
description: CertificateAuthorityData is the base64-encoded
Kubernetes API server CA bundle.
minLength: 1
type: string
server:
description: Server is the Kubernetes API server URL.
minLength: 1
pattern: ^https://|^http://
type: string
required:
- certificateAuthorityData
- server
type: object
type:
description: Type describes which frontend mechanism clients
can use with a strategy.
enum:
- TokenCredentialRequestAPI
- ImpersonationProxy
type: string
required:
- type
type: object
lastUpdateTime:
description: When the status was last checked.
format: date-time
type: string
message:
description: Human-readable description of the current status.
minLength: 1
type: string
reason:
description: Reason for the current status.
enum:
- Listening
- Pending
- Disabled
- ErrorDuringSetup
- CouldNotFetchKey
- CouldNotGetClusterInfo
- FetchedKey
type: string
status:
description: Status of the attempted integration strategy.
enum:
- Success
- Error
type: string
type:
description: Type of integration attempted.
enum:
- KubeClusterSigningCertificate
- ImpersonationProxy
type: string
required:
- lastUpdateTime
- message
- reason
- status
- type
type: object
type: array
required:
- strategies
type: object
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@@ -0,0 +1,360 @@
#! Copyright 2020-2022 the Pinniped contributors. All Rights Reserved.
#! SPDX-License-Identifier: Apache-2.0
#@ load("@ytt:data", "data")
#@ load("@ytt:json", "json")
#@ load("helpers.lib.yaml", "defaultLabel", "labels", "deploymentPodLabel", "namespace", "defaultResourceName", "defaultResourceNameWithSuffix", "getAndValidateLogLevel", "pinnipedDevAPIGroupWithPrefix")
#@ load("@ytt:template", "template")
#@ if not data.values.into_namespace:
---
apiVersion: v1
kind: Namespace
metadata:
name: #@ data.values.namespace
labels:
_: #@ template.replace(labels())
#! When deploying onto a cluster which has PSAs enabled by default for namespaces,
#! effectively disable them for this namespace. The kube-cert-agent Deployment's pod
#! created by the Concierge in this namespace needs to be able to perform privileged
#! actions. The regular Concierge pod containers created by the Deployment below do
#! not need special privileges and are marked as such in their securityContext settings.
pod-security.kubernetes.io/enforce: privileged
#@ end
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: #@ defaultResourceName()
namespace: #@ namespace()
labels: #@ labels()
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: #@ defaultResourceNameWithSuffix("kube-cert-agent")
namespace: #@ namespace()
labels: #@ labels()
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: #@ defaultResourceNameWithSuffix("impersonation-proxy")
namespace: #@ namespace()
labels: #@ labels()
annotations:
#! we need to create this service account before we create the secret
kapp.k14s.io/change-group: "impersonation-proxy.concierge.pinniped.dev/serviceaccount"
secrets: #! make sure the token controller does not create any other secrets
- name: #@ defaultResourceNameWithSuffix("impersonation-proxy")
---
apiVersion: v1
kind: ConfigMap
metadata:
name: #@ defaultResourceNameWithSuffix("config")
namespace: #@ namespace()
labels: #@ labels()
data:
#! If names.apiService is changed in this ConfigMap, must also change name of the ClusterIP Service resource below.
#@yaml/text-templated-strings
pinniped.yaml: |
discovery:
url: (@= data.values.discovery_url or "null" @)
api:
servingCertificate:
durationSeconds: (@= str(data.values.api_serving_certificate_duration_seconds) @)
renewBeforeSeconds: (@= str(data.values.api_serving_certificate_renew_before_seconds) @)
apiGroupSuffix: (@= data.values.api_group_suffix @)
# aggregatedAPIServerPort may be set here, although other YAML references to the default port (10250) may also need to be updated
# impersonationProxyServerPort may be set here, although other YAML references to the default port (8444) may also need to be updated
names:
servingCertificateSecret: (@= defaultResourceNameWithSuffix("api-tls-serving-certificate") @)
credentialIssuer: (@= defaultResourceNameWithSuffix("config") @)
apiService: (@= defaultResourceNameWithSuffix("api") @)
impersonationLoadBalancerService: (@= defaultResourceNameWithSuffix("impersonation-proxy-load-balancer") @)
impersonationClusterIPService: (@= defaultResourceNameWithSuffix("impersonation-proxy-cluster-ip") @)
impersonationTLSCertificateSecret: (@= defaultResourceNameWithSuffix("impersonation-proxy-tls-serving-certificate") @)
impersonationCACertificateSecret: (@= defaultResourceNameWithSuffix("impersonation-proxy-ca-certificate") @)
impersonationSignerSecret: (@= defaultResourceNameWithSuffix("impersonation-proxy-signer-ca-certificate") @)
agentServiceAccount: (@= defaultResourceNameWithSuffix("kube-cert-agent") @)
labels: (@= json.encode(labels()).rstrip() @)
kubeCertAgent:
namePrefix: (@= defaultResourceNameWithSuffix("kube-cert-agent-") @)
(@ if data.values.kube_cert_agent_image: @)
image: (@= data.values.kube_cert_agent_image @)
(@ else: @)
(@ if data.values.image_digest: @)
image: (@= data.values.image_repo + "@" + data.values.image_digest @)
(@ else: @)
image: (@= data.values.image_repo + ":" + data.values.image_tag @)
(@ end @)
(@ end @)
(@ if data.values.image_pull_dockerconfigjson: @)
imagePullSecrets:
- image-pull-secret
(@ end @)
(@ if data.values.log_level or data.values.deprecated_log_format: @)
log:
(@ if data.values.log_level: @)
level: (@= getAndValidateLogLevel() @)
(@ end @)
(@ if data.values.deprecated_log_format: @)
format: (@= data.values.deprecated_log_format @)
(@ end @)
(@ end @)
---
#@ if data.values.image_pull_dockerconfigjson and data.values.image_pull_dockerconfigjson != "":
apiVersion: v1
kind: Secret
metadata:
name: image-pull-secret
namespace: #@ namespace()
labels: #@ labels()
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: #@ data.values.image_pull_dockerconfigjson
#@ end
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: #@ defaultResourceName()
namespace: #@ namespace()
labels: #@ labels()
spec:
replicas: #@ data.values.replicas
selector:
#! In hindsight, this should have been deploymentPodLabel(), but this field is immutable so changing it would break upgrades.
matchLabels: #@ defaultLabel()
template:
metadata:
labels:
#! This has always included defaultLabel(), which is used by this Deployment's selector.
_: #@ template.replace(defaultLabel())
#! More recently added the more unique deploymentPodLabel() so Services can select these Pods more specifically
#! without accidentally selecting any other Deployment's Pods, especially the kube cert agent Deployment's Pods.
_: #@ template.replace(deploymentPodLabel())
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
securityContext:
runAsUser: #@ data.values.run_as_user
runAsGroup: #@ data.values.run_as_group
serviceAccountName: #@ defaultResourceName()
#@ if data.values.image_pull_dockerconfigjson and data.values.image_pull_dockerconfigjson != "":
imagePullSecrets:
- name: image-pull-secret
#@ end
containers:
- name: #@ defaultResourceName()
#@ if data.values.image_digest:
image: #@ data.values.image_repo + "@" + data.values.image_digest
#@ else:
image: #@ data.values.image_repo + ":" + data.values.image_tag
#@ end
imagePullPolicy: IfNotPresent
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
allowPrivilegeEscalation: false
capabilities:
drop: [ "ALL" ]
#! seccompProfile was introduced in Kube v1.19. Using it on an older Kube version will result in a
#! kubectl validation error when installing via `kubectl apply`, which can be ignored using kubectl's
#! `--validate=false` flag. Note that installing via `kapp` does not complain about this validation error.
seccompProfile:
type: "RuntimeDefault"
resources:
requests:
cpu: "100m"
memory: "128Mi"
limits:
cpu: "100m"
memory: "128Mi"
command:
- pinniped-concierge
- --config=/etc/config/pinniped.yaml
- --downward-api-path=/etc/podinfo
volumeMounts:
- name: tmp
mountPath: /tmp
- name: config-volume
mountPath: /etc/config
readOnly: true
- name: podinfo
mountPath: /etc/podinfo
readOnly: true
- name: impersonation-proxy
mountPath: /var/run/secrets/impersonation-proxy.concierge.pinniped.dev/serviceaccount
readOnly: true
env:
#@ if data.values.https_proxy:
- name: HTTPS_PROXY
value: #@ data.values.https_proxy
#@ end
#@ if data.values.https_proxy and data.values.no_proxy:
- name: NO_PROXY
value: #@ data.values.no_proxy
#@ end
livenessProbe:
httpGet:
path: /healthz
port: 10250
scheme: HTTPS
initialDelaySeconds: 2
timeoutSeconds: 15
periodSeconds: 10
failureThreshold: 5
readinessProbe:
httpGet:
path: /healthz
port: 10250
scheme: HTTPS
initialDelaySeconds: 2
timeoutSeconds: 3
periodSeconds: 10
failureThreshold: 3
volumes:
- name: tmp
emptyDir:
medium: Memory
sizeLimit: 100Mi
- name: config-volume
configMap:
name: #@ defaultResourceNameWithSuffix("config")
- name: impersonation-proxy
secret:
secretName: #@ defaultResourceNameWithSuffix("impersonation-proxy")
items: #! make sure our pod does not start until the token controller has a chance to populate the secret
- key: token
path: token
- name: podinfo
downwardAPI:
items:
- path: "labels"
fieldRef:
fieldPath: metadata.labels
- path: "name"
fieldRef:
fieldPath: metadata.name
- path: "namespace"
fieldRef:
fieldPath: metadata.namespace
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master #! Allow running on master nodes too (name deprecated by kubernetes 1.20).
effect: NoSchedule
- key: node-role.kubernetes.io/control-plane #! The new name for these nodes as of Kubernetes 1.24.
effect: NoSchedule
#! "system-cluster-critical" cannot be used outside the kube-system namespace until Kubernetes >= 1.17,
#! so we skip setting this for now (see https://github.com/kubernetes/kubernetes/issues/60596).
#!priorityClassName: system-cluster-critical
#! This will help make sure our multiple pods run on different nodes, making
#! our deployment "more" "HA".
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 50
podAffinityTerm:
labelSelector:
matchLabels: #@ deploymentPodLabel()
topologyKey: kubernetes.io/hostname
---
apiVersion: v1
kind: Service
metadata:
#! If name is changed, must also change names.apiService in the ConfigMap above and spec.service.name in the APIService below.
name: #@ defaultResourceNameWithSuffix("api")
namespace: #@ namespace()
labels: #@ labels()
#! prevent kapp from altering the selector of our services to match kubectl behavior
annotations:
kapp.k14s.io/disable-default-label-scoping-rules: ""
spec:
type: ClusterIP
selector: #@ deploymentPodLabel()
ports:
- protocol: TCP
port: 443
targetPort: 10250
---
apiVersion: v1
kind: Service
metadata:
name: #@ defaultResourceNameWithSuffix("proxy")
namespace: #@ namespace()
labels: #@ labels()
#! prevent kapp from altering the selector of our services to match kubectl behavior
annotations:
kapp.k14s.io/disable-default-label-scoping-rules: ""
spec:
type: ClusterIP
selector: #@ deploymentPodLabel()
ports:
- protocol: TCP
port: 443
targetPort: 8444
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
name: #@ pinnipedDevAPIGroupWithPrefix("v1alpha1.login.concierge")
labels: #@ labels()
spec:
version: v1alpha1
group: #@ pinnipedDevAPIGroupWithPrefix("login.concierge")
groupPriorityMinimum: 9900
versionPriority: 15
#! caBundle: Do not include this key here. Starts out null, will be updated/owned by the golang code.
service:
name: #@ defaultResourceNameWithSuffix("api")
namespace: #@ namespace()
port: 443
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
name: #@ pinnipedDevAPIGroupWithPrefix("v1alpha1.identity.concierge")
labels: #@ labels()
spec:
version: v1alpha1
group: #@ pinnipedDevAPIGroupWithPrefix("identity.concierge")
groupPriorityMinimum: 9900
versionPriority: 15
#! caBundle: Do not include this key here. Starts out null, will be updated/owned by the golang code.
service:
name: #@ defaultResourceNameWithSuffix("api")
namespace: #@ namespace()
port: 443
---
apiVersion: #@ pinnipedDevAPIGroupWithPrefix("config.concierge") + "/v1alpha1"
kind: CredentialIssuer
metadata:
name: #@ defaultResourceNameWithSuffix("config")
labels: #@ labels()
spec:
impersonationProxy:
mode: #@ data.values.impersonation_proxy_spec.mode
#@ if data.values.impersonation_proxy_spec.external_endpoint:
externalEndpoint: #@ data.values.impersonation_proxy_spec.external_endpoint
#@ end
service:
type: #@ data.values.impersonation_proxy_spec.service.type
#@ if data.values.impersonation_proxy_spec.service.load_balancer_ip:
loadBalancerIP: #@ data.values.impersonation_proxy_spec.service.load_balancer_ip
#@ end
annotations: #@ data.values.impersonation_proxy_spec.service.annotations
---
apiVersion: v1
kind: Secret
metadata:
name: #@ defaultResourceNameWithSuffix("impersonation-proxy")
namespace: #@ namespace()
labels: #@ labels()
annotations:
#! wait until the SA exists to create this secret so that the token controller does not delete it
#! we have this secret at the end so that kubectl will create the service account first
kapp.k14s.io/change-rule: "upsert after upserting impersonation-proxy.concierge.pinniped.dev/serviceaccount"
kubernetes.io/service-account.name: #@ defaultResourceNameWithSuffix("impersonation-proxy")
type: kubernetes.io/service-account-token
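This template is normally packaged up by build.sh above, but the same Carvel tools can render and apply it directly for a quick local check. A minimal sketch, assuming it is run from the deploy_carvel directory and that concierge/config contains this deployment.yaml along with helpers.lib.yaml and values.yaml:

ytt --file concierge/config \
  --data-value image_repo=docker.io/getpinniped/pinniped-server \
  --data-value image_tag=v0.25.0 \
  | kapp deploy --app pinniped-concierge --file - --yes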

View File

@@ -0,0 +1,47 @@
#! Copyright 2020-2021 the Pinniped contributors. All Rights Reserved.
#! SPDX-License-Identifier: Apache-2.0
#@ load("@ytt:data", "data")
#@ load("@ytt:template", "template")
#@ def defaultResourceName():
#@ return data.values.app_name
#@ end
#@ def defaultResourceNameWithSuffix(suffix):
#@ return data.values.app_name + "-" + suffix
#@ end
#@ def pinnipedDevAPIGroupWithPrefix(prefix):
#@ return prefix + "." + data.values.api_group_suffix
#@ end
#@ def namespace():
#@ if data.values.into_namespace:
#@ return data.values.into_namespace
#@ else:
#@ return data.values.namespace
#@ end
#@ end
#@ def defaultLabel():
#! Note that the name of this label's key is also assumed by kubecertagent.go and impersonator_config.go
app: #@ data.values.app_name
#@ end
#@ def deploymentPodLabel():
deployment.pinniped.dev: concierge
#@ end
#@ def labels():
_: #@ template.replace(defaultLabel())
_: #@ template.replace(data.values.custom_labels)
#@ end
#@ def getAndValidateLogLevel():
#@ log_level = data.values.log_level
#@ if log_level != "info" and log_level != "debug" and log_level != "trace" and log_level != "all":
#@ fail("log_level '" + log_level + "' is invalid")
#@ end
#@ return log_level
#@ end
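#! Illustrative expansion with the default data values (app_name=pinniped-concierge, custom_labels={}):
#! labels() renders as {app: pinniped-concierge}, defaultResourceNameWithSuffix("config") returns
#! "pinniped-concierge-config", and namespace() returns into_namespace when it is set, otherwise namespace.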

View File

@@ -0,0 +1,296 @@
#! Copyright 2020-2021 the Pinniped contributors. All Rights Reserved.
#! SPDX-License-Identifier: Apache-2.0
#@ load("@ytt:data", "data")
#@ load("helpers.lib.yaml", "labels", "namespace", "defaultResourceName", "defaultResourceNameWithSuffix", "pinnipedDevAPIGroupWithPrefix")
#! Give permission to various cluster-scoped objects
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: #@ defaultResourceNameWithSuffix("aggregated-api-server")
labels: #@ labels()
rules:
- apiGroups: [ "" ]
resources: [ namespaces ]
verbs: [ get, list, watch ]
- apiGroups: [ apiregistration.k8s.io ]
resources: [ apiservices ]
verbs: [ get, list, patch, update, watch ]
- apiGroups: [ admissionregistration.k8s.io ]
resources: [ validatingwebhookconfigurations, mutatingwebhookconfigurations ]
verbs: [ get, list, watch ]
- apiGroups: [ flowcontrol.apiserver.k8s.io ]
resources: [ flowschemas, prioritylevelconfigurations ]
verbs: [ get, list, watch ]
- apiGroups: [ security.openshift.io ]
resources: [ securitycontextconstraints ]
verbs: [ use ]
resourceNames: [ nonroot ]
- apiGroups: [ "" ]
resources: [ nodes ]
verbs: [ list ]
- apiGroups:
- #@ pinnipedDevAPIGroupWithPrefix("config.concierge")
resources: [ credentialissuers ]
verbs: [ get, list, watch, create ]
- apiGroups:
- #@ pinnipedDevAPIGroupWithPrefix("config.concierge")
resources: [ credentialissuers/status ]
verbs: [ get, patch, update ]
- apiGroups:
- #@ pinnipedDevAPIGroupWithPrefix("authentication.concierge")
resources: [ jwtauthenticators, webhookauthenticators ]
verbs: [ get, list, watch ]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: #@ defaultResourceNameWithSuffix("aggregated-api-server")
labels: #@ labels()
subjects:
- kind: ServiceAccount
name: #@ defaultResourceName()
namespace: #@ namespace()
roleRef:
kind: ClusterRole
name: #@ defaultResourceNameWithSuffix("aggregated-api-server")
apiGroup: rbac.authorization.k8s.io
#! Give minimal permissions to impersonation proxy service account
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: #@ defaultResourceNameWithSuffix("impersonation-proxy")
labels: #@ labels()
rules:
- apiGroups: [ "" ]
resources: [ "users", "groups", "serviceaccounts" ]
verbs: [ "impersonate" ]
- apiGroups: [ "authentication.k8s.io" ]
resources: [ "*" ] #! What we really want is userextras/* but the RBAC authorizer only supports */subresource, not resource/*
verbs: [ "impersonate" ]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: #@ defaultResourceNameWithSuffix("impersonation-proxy")
labels: #@ labels()
subjects:
- kind: ServiceAccount
name: #@ defaultResourceNameWithSuffix("impersonation-proxy")
namespace: #@ namespace()
roleRef:
kind: ClusterRole
name: #@ defaultResourceNameWithSuffix("impersonation-proxy")
apiGroup: rbac.authorization.k8s.io
#! Give permission to the kube-cert-agent Pod to run privileged.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: #@ defaultResourceNameWithSuffix("kube-cert-agent")
namespace: #@ namespace()
labels: #@ labels()
rules:
- apiGroups: [ policy ]
resources: [ podsecuritypolicies ]
verbs: [ use ]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: #@ defaultResourceNameWithSuffix("kube-cert-agent")
namespace: #@ namespace()
labels: #@ labels()
subjects:
- kind: ServiceAccount
name: #@ defaultResourceNameWithSuffix("kube-cert-agent")
namespace: #@ namespace()
roleRef:
kind: Role
name: #@ defaultResourceNameWithSuffix("kube-cert-agent")
apiGroup: rbac.authorization.k8s.io
#! Give permission to various objects within the app's own namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: #@ defaultResourceNameWithSuffix("aggregated-api-server")
namespace: #@ namespace()
labels: #@ labels()
rules:
- apiGroups: [ "" ]
resources: [ services ]
verbs: [ create, get, list, patch, update, watch, delete ]
- apiGroups: [ "" ]
resources: [ secrets ]
verbs: [ create, get, list, patch, update, watch, delete ]
#! We need to be able to watch pods in our namespace so we can find the kube-cert-agent pods.
- apiGroups: [ "" ]
resources: [ pods ]
verbs: [ get, list, watch ]
#! We need to be able to exec into pods in our namespace so we can grab the API server's private key
- apiGroups: [ "" ]
resources: [ pods/exec ]
verbs: [ create ]
#! We need to be able to delete pods in our namespace so we can clean up legacy kube-cert-agent pods.
- apiGroups: [ "" ]
resources: [ pods ]
verbs: [ delete ]
#! We need to be able to create and update deployments in our namespace so we can manage the kube-cert-agent Deployment.
- apiGroups: [ apps ]
resources: [ deployments ]
verbs: [ create, get, list, patch, update, watch, delete ]
#! We need to be able to get replicasets so we can form the correct owner references on our generated objects.
- apiGroups: [ apps ]
resources: [ replicasets ]
verbs: [ get ]
- apiGroups: [ "" ]
resources: [ configmaps ]
verbs: [ list, get, watch ]
- apiGroups: [ coordination.k8s.io ]
resources: [ leases ]
verbs: [ create, get, update ]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: #@ defaultResourceNameWithSuffix("aggregated-api-server")
namespace: #@ namespace()
labels: #@ labels()
subjects:
- kind: ServiceAccount
name: #@ defaultResourceName()
namespace: #@ namespace()
roleRef:
kind: Role
name: #@ defaultResourceNameWithSuffix("aggregated-api-server")
apiGroup: rbac.authorization.k8s.io
#! Give permission to read pods in the kube-system namespace so we can find the API server's private key
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: #@ defaultResourceNameWithSuffix("kube-system-pod-read")
namespace: kube-system
labels: #@ labels()
rules:
- apiGroups: [ "" ]
resources: [ pods ]
verbs: [ get, list, watch ]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: #@ defaultResourceNameWithSuffix("kube-system-pod-read")
namespace: kube-system
labels: #@ labels()
subjects:
- kind: ServiceAccount
name: #@ defaultResourceName()
namespace: #@ namespace()
roleRef:
kind: Role
name: #@ defaultResourceNameWithSuffix("kube-system-pod-read")
apiGroup: rbac.authorization.k8s.io
#! Allow both authenticated and unauthenticated TokenCredentialRequests (i.e. allow all requests)
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: #@ defaultResourceNameWithSuffix("pre-authn-apis")
labels: #@ labels()
rules:
- apiGroups:
- #@ pinnipedDevAPIGroupWithPrefix("login.concierge")
resources: [ tokencredentialrequests ]
verbs: [ create, list ]
- apiGroups:
- #@ pinnipedDevAPIGroupWithPrefix("identity.concierge")
resources: [ whoamirequests ]
verbs: [ create, list ]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: #@ defaultResourceNameWithSuffix("pre-authn-apis")
labels: #@ labels()
subjects:
- kind: Group
name: system:authenticated
apiGroup: rbac.authorization.k8s.io
- kind: Group
name: system:unauthenticated
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: #@ defaultResourceNameWithSuffix("pre-authn-apis")
apiGroup: rbac.authorization.k8s.io
#! Give permissions for subjectaccessreviews, tokenreview that is needed by aggregated api servers
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: #@ defaultResourceName()
labels: #@ labels()
subjects:
- kind: ServiceAccount
name: #@ defaultResourceName()
namespace: #@ namespace()
roleRef:
kind: ClusterRole
name: system:auth-delegator
apiGroup: rbac.authorization.k8s.io
#! Give permissions for a special configmap of CA bundles that is needed by aggregated api servers
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: #@ defaultResourceNameWithSuffix("extension-apiserver-authentication-reader")
namespace: kube-system
labels: #@ labels()
subjects:
- kind: ServiceAccount
name: #@ defaultResourceName()
namespace: #@ namespace()
roleRef:
kind: Role
name: extension-apiserver-authentication-reader
apiGroup: rbac.authorization.k8s.io
#! Give permission to list and watch ConfigMaps in kube-public
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: #@ defaultResourceNameWithSuffix("cluster-info-lister-watcher")
namespace: kube-public
labels: #@ labels()
rules:
- apiGroups: [ "" ]
resources: [ configmaps ]
verbs: [ list, watch ]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: #@ defaultResourceNameWithSuffix("cluster-info-lister-watcher")
namespace: kube-public
labels: #@ labels()
subjects:
- kind: ServiceAccount
name: #@ defaultResourceName()
namespace: #@ namespace()
roleRef:
kind: Role
name: #@ defaultResourceNameWithSuffix("cluster-info-lister-watcher")
apiGroup: rbac.authorization.k8s.io

View File

@@ -0,0 +1,123 @@
#! Copyright 2020-2021 the Pinniped contributors. All Rights Reserved.
#! SPDX-License-Identifier: Apache-2.0
#@data/values-schema
---
#@schema/desc "Namespace of pinniped-concierge"
app_name: pinniped-concierge
#@schema/desc "Creates a new namespace statically in yaml with the given name and installs the app into that namespace."
namespace: pinniped-concierge
#! If specified, assumes that a namespace of the given name already exists and installs the app into that namespace.
#! If both `namespace` and `into_namespace` are specified, then only `into_namespace` is used.
#@schema/desc "Overrides namespace. This is actually confusingly worded. TODO: CAN WE REWRITE THIS ONE???"
#@schema/nullable
into_namespace: my-preexisting-namespace
#! All resources created statically by yaml at install-time and all resources created dynamically
#! by controllers at runtime will be labelled with `app: $app_name` and also with the labels
#! specified here. The value of `custom_labels` must be a map of string keys to string values.
#! The app can be uninstalled either by:
#! 1. Deleting the static install-time yaml resources including the static namespace, which will cascade and also delete
#! resources that were dynamically created by controllers at runtime
#! 2. Or, deleting all resources by label, which does not assume that there was a static install-time yaml namespace.
#@schema/desc "All resources created statically by yaml at install-time and all resources created dynamically by controllers at runtime will be labelled with `app: $app_name` and also with the labels specified here."
custom_labels: {} #! e.g. {myCustomLabelName: myCustomLabelValue, otherCustomLabelName: otherCustomLabelValue}
#! Specify how many replicas of the Pinniped server to run.
replicas: 2
#@schema/desc "Specify either an image_digest or an image_tag. If both are given, only image_digest will be used."
image_repo: projects.registry.vmware.com/pinniped/pinniped-server
#@schema/desc "Specify either an image_digest or an image_tag. If both are given, only image_digest will be used."
#@schema/nullable
image_digest: sha256:f3c4fdfd3ef865d4b97a1fd295d94acc3f0c654c46b6f27ffad5cf80216903c8
#@schema/desc "Specify either an image_digest or an image_tag. If both are given, only image_digest will be used."
image_tag: latest
#! TODO: preserve even if this file is revised!
#@schema/nullable
package_image_repo: docker.io/benjaminapetersen/some-pinniped-concierge-package
#@schema/nullable
package_image_digest: sha256:123456
#@schema/nullable
package_image_tag: latest
#! prob should not be nullable
#@schema/nullable
package_version: 1.2.3
#@schema/desc 'Optionally specify a different image for the "kube-cert-agent" pod which is scheduled on the control plane. This image needs only to include `sleep` and `cat` binaries. By default, the same image specified for image_repo/image_digest/image_tag will be re-used.'
kube_cert_agent_image: projects.registry.vmware.com/pinniped/pinniped-server
#! Specifies a secret to be used when pulling the above `image_repo` container image.
#! Can be used when the above image_repo is a private registry.
#! Typically the value would be the output of: kubectl create secret docker-registry x --docker-server=https://example.io --docker-username="USERNAME" --docker-password="PASSWORD" --dry-run=client -o json | jq -r '.data[".dockerconfigjson"]'
#! Optional.
#@schema/desc "Specifies a secret to be used when pulling the above `image_repo` container image. Can be used when the image_repo is a private registry."
#@schema/nullable
image_pull_dockerconfigjson: {"auths":{"https://registry.example.com":{"username":"USERNAME","password":"PASSWORD","auth":"BASE64_ENCODED_USERNAME_COLON_PASSWORD"}}}
#! Optional.
#@schema/desc "Pinniped will try to guess the right K8s API URL for sharing that information with potential clients. This setting allows the guess to be overridden."
#@schema/nullable
discovery_url: https://example.com
#@schema/desc "Specify the duration and renewal interval for the API serving certificate. The defaults are set to expire the cert about every 30 days, and to rotate it about every 25 days."
api_serving_certificate_duration_seconds: 2592000
api_serving_certificate_renew_before_seconds: 2160000
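#! (2592000 seconds = 30 days and 2160000 seconds = 25 days.)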
#@schema/desc 'Specify the verbosity of logging: info ("nice to know" information), debug (developer information), trace (timing information), or all (kitchen sink). Do not use trace or all on production systems, as credentials may get logged.'
#@schema/nullable
log_level: info #! By default, when this value is left unset, only warnings and errors are printed. There is no way to suppress warning and error logs.
#@schema/desc "Specify the format of logging: json (for machine parsable logs) and text (for legacy klog formatted logs). By default, when this value is left unset, logs are formatted in json. This configuration is deprecated and will be removed in a future release at which point logs will always be formatted as json."
#@schema/nullable
deprecated_log_format: json
#@schema/desc "run_as_user specifies the user ID that will own the process, see the Dockerfile for the reasoning behind this choice"
run_as_user: 65532
#@schema/desc "run_as_group specifies the group ID that will own the process, see the Dockerfile for the reasoning behind this choice"
run_as_group: 65532
#@schema/desc "Specify the API group suffix for all Pinniped API groups. By default, this is set to pinniped.dev, so Pinniped API groups will look like foo.pinniped.dev, authentication.concierge.pinniped.dev, etc. As an example, if this is set to tuna.io, then Pinniped API groups will look like foo.tuna.io. authentication.concierge.tuna.io, etc."
api_group_suffix: pinniped.dev
#@schema/desc "Customize CredentialIssuer.spec.impersonationProxy to change how the concierge handles impersonation."
impersonation_proxy_spec:
#! options are "auto", "disabled" or "enabled".
#! If auto, the impersonation proxy will run only if the cluster signing key is not available
#! and the other strategy does not work.
#! If disabled, the impersonation proxy will never run, which could mean that the concierge
#! doesn't work at all.
#! If enabled, the impersonation proxy will always run regardless of other strategies available.
#@schema/desc 'options are "auto", "disabled" or "enabled".'
mode: auto
#! The endpoint which the client should use to connect to the impersonation proxy.
#! If left unset, the client will default to connecting based on the ClusterIP or LoadBalancer
#! endpoint.
#@schema/desc "The endpoint which the client should use to connect to the impersonation proxy."
external_endpoint: http://example.com
service:
#! Options are "LoadBalancer", "ClusterIP" and "None".
#! LoadBalancer automatically provisions a Service of type LoadBalancer pointing at
#! the impersonation proxy. Some cloud providers will allocate
#! a public IP address by default even on private clusters.
#! ClusterIP automatically provisions a Service of type ClusterIP pointing at the
#! impersonation proxy.
#! None does not provision either and assumes that you have set the external_endpoint
#! and set up your own ingress to connect to the impersonation proxy.
#@schema/desc 'Options are "LoadBalancer", "ClusterIP" and "None".'
type: LoadBalancer
#@schema/desc "The annotations that should be set on the ClusterIP or LoadBalancer Service."
annotations:
{service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "4000"}
#@schema/desc "When mode LoadBalancer is set, this will set the LoadBalancer Service's Spec.LoadBalancerIP."
load_balancer_ip: 1.2.3.4
#! Optional.
#@schema/desc "Set the standard golang HTTPS_PROXY and NO_PROXY environment variables on the Concierge containers. These will be used when the Concierge makes backend-to-backend calls to authenticators using HTTPS, e.g. when the Concierge fetches discovery documents, JWKS keys, and POSTs to token webhooks. The Concierge never makes insecure HTTP calls, so there is no reason to set HTTP_PROXY."
#@schema/nullable
https_proxy: http://proxy.example.com
#@schema/desc "NO_PROXY environment variable. do not proxy Kubernetes endpoints"
no_proxy: "$(KUBERNETES_SERVICE_HOST),169.254.169.254,127.0.0.1,localhost,.svc,.cluster.local"


@ -0,0 +1,33 @@
#! Copyright 2020-2021 the Pinniped contributors. All Rights Reserved.
#! SPDX-License-Identifier: Apache-2.0
#@ load("@ytt:overlay", "overlay")
#@ load("helpers.lib.yaml", "labels", "pinnipedDevAPIGroupWithPrefix")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind": "CustomResourceDefinition", "metadata":{"name":"credentialissuers.config.concierge.pinniped.dev"}}), expects=1
---
metadata:
#@overlay/match missing_ok=True
labels: #@ labels()
name: #@ pinnipedDevAPIGroupWithPrefix("credentialissuers.config.concierge")
spec:
group: #@ pinnipedDevAPIGroupWithPrefix("config.concierge")
#@overlay/match by=overlay.subset({"kind": "CustomResourceDefinition", "metadata":{"name":"webhookauthenticators.authentication.concierge.pinniped.dev"}}), expects=1
---
metadata:
#@overlay/match missing_ok=True
labels: #@ labels()
name: #@ pinnipedDevAPIGroupWithPrefix("webhookauthenticators.authentication.concierge")
spec:
group: #@ pinnipedDevAPIGroupWithPrefix("authentication.concierge")
#@overlay/match by=overlay.subset({"kind": "CustomResourceDefinition", "metadata":{"name":"jwtauthenticators.authentication.concierge.pinniped.dev"}}), expects=1
---
metadata:
#@overlay/match missing_ok=True
labels: #@ labels()
name: #@ pinnipedDevAPIGroupWithPrefix("jwtauthenticators.authentication.concierge")
spec:
group: #@ pinnipedDevAPIGroupWithPrefix("authentication.concierge")


@ -0,0 +1,13 @@
#@ load("@ytt:data", "data") # for reading data values (generated via ytt's data-values-schema-inspect mode).
#@ load("@ytt:yaml", "yaml") # for dynamically decoding the output of ytt's data-values-schema-inspect
---
apiVersion: data.packaging.carvel.dev/v1alpha1
kind: PackageMetadata
metadata:
name: concierge.pinniped.dev
spec:
displayName: "Pinniped Concierge"
longDescription: "Pinniped concierge enables consistent login across Kubernetes clusters on public cloud providers such as AKS, EKS and GKE"
shortDescription: "Pinniped concierge enables consistent login across public clouds"
categories:
- auth


@ -0,0 +1,30 @@
#@ load("@ytt:data", "data") # for reading data values (generated via ytt's data-values-schema-inspect mode).
#@ load("@ytt:yaml", "yaml") # for dynamically decoding the output of ytt's data-values-schema-inspect
---
apiVersion: data.packaging.carvel.dev/v1alpha1
kind: Package
metadata:
name: #@ "concierge.pinniped.dev." + data.values.package_version
spec:
refName: concierge.pinniped.dev
version: #@ data.values.package_version
releaseNotes: |
Initial release of the pinniped concierge package, TODO: AUTOMATE THIS??
valuesSchema:
openAPIv3: #@ yaml.decode(data.values.openapi)["components"]["schemas"]["dataValues"]
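#! Note: data.values.openapi is expected to hold the generated OpenAPI v3 schema for this
#! package's data values (i.e. the output of ytt's --data-values-schema-inspect --output openapi-v3
#! mode), presumably supplied as a ytt data value when this template is rendered (e.g. by build.sh).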
template:
spec:
fetch:
- imgpkgBundle:
#! image: #@ data.values.package_image_repo + "/packages/:" + data.values.package_version
image: #@ data.values.package_image_repo
template:
- ytt:
paths:
- "config/"
- kbld:
paths:
- ".imgpkg/images.yml"
- "-"
deploy:
- kapp: {}


@ -0,0 +1,164 @@
openapi: 3.0.0
info:
version: 0.1.0
title: Schema for data values, generated by ytt
paths: {}
components:
schemas:
dataValues:
type: object
additionalProperties: false
properties:
app_name:
type: string
description: Namespace of pinniped-concierge
default: pinniped-concierge
namespace:
type: string
description: Creates a new namespace statically in yaml with the given name and installs the app into that namespace.
default: pinniped-concierge
into_namespace:
type: string
nullable: true
description: If specified, install the app into this preexisting namespace instead of creating and using the namespace named above. If both `namespace` and `into_namespace` are specified, then only `into_namespace` is used.
default: null
custom_labels:
type: object
additionalProperties: false
description: 'All resources created statically by yaml at install-time and all resources created dynamically by controllers at runtime will be labelled with `app: $app_name` and also with the labels specified here.'
properties: {}
replicas:
type: integer
default: 2
image_repo:
type: string
description: Specify either an image_digest or an image_tag. If both are given, only image_digest will be used.
default: projects.registry.vmware.com/pinniped/pinniped-server
image_digest:
type: string
nullable: true
description: Specify either an image_digest or an image_tag. If both are given, only image_digest will be used.
default: null
image_tag:
type: string
description: Specify either an image_digest or an image_tag. If both are given, only image_digest will be used.
default: latest
package_image_repo:
type: string
nullable: true
default: null
package_image_digest:
type: string
nullable: true
default: null
package_image_tag:
type: string
nullable: true
default: null
package_version:
type: string
nullable: true
default: null
kube_cert_agent_image:
type: string
description: Optionally specify a different image for the "kube-cert-agent" pod which is scheduled on the control plane. This image needs only to include `sleep` and `cat` binaries. By default, the same image specified for image_repo/image_digest/image_tag will be re-used.
default: projects.registry.vmware.com/pinniped/pinniped-server
image_pull_dockerconfigjson:
type: object
additionalProperties: false
nullable: true
description: Specifies a secret to be used when pulling the above `image_repo` container image. Can be used when the image_repo is a private registry.
properties:
auths:
type: object
additionalProperties: false
properties:
https://registry.example.com:
type: object
additionalProperties: false
properties:
username:
type: string
default: USERNAME
password:
type: string
default: PASSWORD
auth:
type: string
default: BASE64_ENCODED_USERNAME_COLON_PASSWORD
discovery_url:
type: string
nullable: true
description: Pinniped will try to guess the right K8s API URL for sharing that information with potential clients. This setting allows the guess to be overridden.
default: null
api_serving_certificate_duration_seconds:
type: integer
description: Specify the duration and renewal interval for the API serving certificate. The defaults are set to expire the cert about every 30 days, and to rotate it about every 25 days.
default: 2592000
api_serving_certificate_renew_before_seconds:
type: integer
default: 2160000
log_level:
type: string
nullable: true
description: 'Specify the verbosity of logging: info ("nice to know" information), debug (developer information), trace (timing information), or all (kitchen sink). Do not use trace or all on production systems, as credentials may get logged.'
default: null
deprecated_log_format:
type: string
nullable: true
description: 'Specify the format of logging: json (for machine parsable logs) and text (for legacy klog formatted logs). By default, when this value is left unset, logs are formatted in json. This configuration is deprecated and will be removed in a future release at which point logs will always be formatted as json.'
default: null
run_as_user:
type: integer
description: run_as_user specifies the user ID that will own the process, see the Dockerfile for the reasoning behind this choice
default: 65532
run_as_group:
type: integer
description: run_as_group specifies the group ID that will own the process, see the Dockerfile for the reasoning behind this choice
default: 65532
api_group_suffix:
type: string
description: Specify the API group suffix for all Pinniped API groups. By default, this is set to pinniped.dev, so Pinniped API groups will look like foo.pinniped.dev, authentication.concierge.pinniped.dev, etc. As an example, if this is set to tuna.io, then Pinniped API groups will look like foo.tuna.io, authentication.concierge.tuna.io, etc.
default: pinniped.dev
impersonation_proxy_spec:
type: object
additionalProperties: false
description: Customize CredentialIssuer.spec.impersonationProxy to change how the concierge handles impersonation.
properties:
mode:
type: string
description: options are "auto", "disabled" or "enabled".
default: auto
external_endpoint:
type: string
description: The endpoint which the client should use to connect to the impersonation proxy.
default: http://example.com
service:
type: object
additionalProperties: false
properties:
type:
type: string
description: Options are "LoadBalancer", "ClusterIP" and "None".
default: LoadBalancer
annotations:
type: object
additionalProperties: false
description: The annotations that should be set on the ClusterIP or LoadBalancer Service.
properties:
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout:
type: string
default: "4000"
load_balancer_ip:
type: string
description: When mode LoadBalancer is set, this will set the LoadBalancer Service's Spec.LoadBalancerIP.
default: 1.2.3.4
https_proxy:
type: string
nullable: true
description: Set the standard golang HTTPS_PROXY and NO_PROXY environment variables on the Concierge containers. These will be used when the Concierge makes backend-to-backend calls to authenticators using HTTPS, e.g. when the Concierge fetches discovery documents, JWKS keys, and POSTs to token webhooks. The Concierge never makes insecure HTTP calls, so there is no reason to set HTTP_PROXY.
default: null
no_proxy:
type: string
description: NO_PROXY environment variable. Do not proxy Kubernetes endpoints
default: $(KUBERNETES_SERVICE_HOST),169.254.169.254,127.0.0.1,localhost,.svc,.cluster.local

deploy_carvel/delete.sh Executable file

@ -0,0 +1,62 @@
#!/usr/bin/env bash
# https://gist.github.com/mohanpedala/1e2ff5661761d3abd0385e8223e16425
set -e # immediately exit
set -u # error if variables undefined
set -o pipefail # prevent masking errors in a pipeline
# set -x # print all executed commands to terminal
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
DEFAULT='\033[0m'
echo_yellow() {
echo -e "${YELLOW}>> $@${DEFAULT}\n"
# printf "${GREEN}$@${DEFAULT}"
}
echo_green() {
echo -e "${GREEN}>> $@${DEFAULT}\n"
# printf "${BLUE}$@${DEFAULT}"
}
echo_red() {
echo -e "${RED}>> $@${DEFAULT}\n"
# printf "${BLUE}$@${DEFAULT}"
}
echo_blue() {
echo -e "${BLUE}>> $@${DEFAULT}\n"
# printf "${BLUE}$@${DEFAULT}"
}
echo_yellow "creating fresh kind cluster - my-pinniped-package-repo-cluster"
kind delete cluster --name my-pinniped-package-repo-cluster
kind create cluster --name my-pinniped-package-repo-cluster
#
#
#echo_yellow "check before install..."
#echo_yellow "kubectl get pkgr -A && kubectl get pkg -A && kubectl get pkgi -A"
#kubectl get pkgr -A && kubectl get pkg -A && kubectl get pkgi -A
## NAMESPACE NAME AGE DESCRIPTION
## default pinniped-package-repository 18h Reconcile succeeded
## NAMESPACE NAME PACKAGEMETADATA NAME VERSION AGE
## default concierge.pinniped.dev.0.25.0 concierge.pinniped.dev 0.25.0 18h16m28s
## default supervisor.pinniped.dev.0.25.0 supervisor.pinniped.dev 0.25.0 18h16m28s
## NAMESPACE NAME PACKAGE NAME PACKAGE VERSION DESCRIPTION AGE
## default supervisor-package-install supervisor.pinniped.dev 0.25.0 Delete failed: Error (see .status.usefulErrorMessage for details) 92m
## supervisor-ns supervisor-package-install supervisor.pinniped.dev Reconcile failed: Package supervisor.pinniped.dev not found 18h
#
#
#kubectl delete pkgr pinniped-package-repository
## should be automatic
## kubectl delete pkg concierge.pinniped.dev.0.25.0
## kubectl delete pkg supervisor.pinniped.dev.0.25.0
#kubectl delete pkgi supervisor-package-install
#kubectl delete pkgi concierge-package-install
#
#echo_yellow "check after install..."
#echo_yellow "kubectl get pkgr -A && kubectl get pkg -A && kubectl get pkgi -A"
#kubectl get pkgr -A && kubectl get pkg -A && kubectl get pkgi -A

deploy_carvel/deploy-packages.sh Executable file

@ -0,0 +1,402 @@
#!/usr/bin/env bash
# Copyright 2020-2023 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
#
# This script can be used to prepare a kind cluster and deploy the app.
# You can call this script again to redeploy the app.
# It will also output instructions on how to run the integration tests.
#
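# Example invocation (illustrative; normally the app name and image tag are passed in by
# hack/prepare-for-integration-tests.sh when this script is wired up as an alternate deploy hook):
#   ./deploy-packages.sh pinniped-supervisor "$(uuidgen)"
#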
set -e # immediately exit
set -u # error if variables undefined
set -o pipefail # prevent masking errors in a pipeline
# set -x # print all executed commands to terminal
#
# Helper functions
#
function log_note() {
GREEN='\033[0;32m'
NC='\033[0m'
if [[ ${COLORTERM:-unknown} =~ ^(truecolor|24bit)$ ]]; then
echo -e "${GREEN}$*${NC}"
else
echo "$*"
fi
}
function log_error() {
RED='\033[0;31m'
NC='\033[0m'
if [[ ${COLORTERM:-unknown} =~ ^(truecolor|24bit)$ ]]; then
echo -e "🙁${RED} Error: $* ${NC}"
else
echo ":( Error: $*"
fi
}
function check_dependency() {
if ! command -v "$1" >/dev/null; then
log_error "Missing dependency..."
log_error "$2"
exit 1
fi
}
# TODO: add support for reading the env vars output by hack/prepare-for-integration-tests.sh:
# source /tmp/integration-test-env
#
#
# Deploy the PackageRepository and Package resources
# Requires a running kind cluster
# Does not configure Pinniped
#
app="${1:-undefined}"
tag="${2:-$(uuidgen)}" # always a new tag to force K8s to reload the image on redeploy
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
log_note "log-args.sh >>> script dir: ${SCRIPT_DIR} 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄"
log_note "log-args.sh >>> app: ${app} tag: ${tag} 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄"
# from prepare-for-integration-tests.sh
api_group_suffix="pinniped.dev" # same default as in the values.yaml ytt file
registry="pinniped.local"
repo="test/build"
registry_repo="$registry/$repo"
log_note "Deploying kapp-controller on kind cluster..."
kapp deploy --app kapp-controller --file https://github.com/vmware-tanzu/carvel-kapp-controller/releases/latest/download/release.yml -y
kubectl get customresourcedefinitions
# Global kapp-controller packaging namespace:
# -packaging-global-namespace=kapp-controller-packaging-global
# kapp-controller resources like PackageRepository and Package are namespaced.
# However, this namespace, provided via a flag to kapp-controller in the yaml above,
# defines a "global" namespace. That is, resources installed in this namespace
# are available in every namespace, as kapp-controller always pays attention to its
# pseudo-global namespace.
KAPP_CONTROLLER_GLOBAL_NAMESPACE="kapp-controller-packaging-global"
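# For example (illustrative, read-only check): once the PackageRepository below has reconciled,
# the Package resources it provides in this pseudo-global namespace should be listable from any
# namespace, e.g.:
#   kubectl get pkg --namespace default
#   kubectl get pkg --all-namespaces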
# deploy the Carvel packages for the Pinniped Supervisor & Concierge.
# - Deploy the PackageRepository
# - Create PackageInstalls for Supervisor, Concierge
# - deploy these
# - then after, run hack/prepare-supervisor-on-kind.sh
# - ideally this configures the Supervisor
# need a directory for our yamls for deployment
log_note "Clean previous PackageInstalls in order to create new ones..."
PACKAGE_INSTALL_DIR="temp_actual_deploy_resources"
rm -rf "${SCRIPT_DIR}/${PACKAGE_INSTALL_DIR}"
mkdir "${SCRIPT_DIR}/${PACKAGE_INSTALL_DIR}"
# this is built via the build.sh script
# build.sh must be run first.
# TODO: since the ytt values.yaml takes in a version="x.y.z"
# for Pinniped, our packages are currently not meaningfully versioned.
# This is one of the questions we must answer: do we deviate in the
# "./deploy_carvel" directory by hard-coding this version in the packages?
log_note "Deploying Pinniped PackageRepository on kind cluster..."
PINNIPED_PACKAGE_VERSION="0.25.0"
PINNIPED_PACKGE_REPOSITORY_NAME="pinniped-package-repository"
PINNIPED_PACKGE_REPOSITORY_FILE_NAME="packagerepository.${PINNIPED_PACKAGE_VERSION}.yml"
PINNIPED_PACKGE_REPOSITORY_FILE_PATH="${SCRIPT_DIR}/${PINNIPED_PACKGE_REPOSITORY_FILE_NAME}"
# Now, gotta make this work. It'll be interesting if we can...
kapp deploy \
--namespace "${KAPP_CONTROLLER_GLOBAL_NAMESPACE}" \
--app "${PINNIPED_PACKGE_REPOSITORY_NAME}" \
--file "${PINNIPED_PACKGE_REPOSITORY_FILE_PATH}" -y
kapp inspect \
--namespace "${KAPP_CONTROLLER_GLOBAL_NAMESPACE}" \
--app "${PINNIPED_PACKGE_REPOSITORY_NAME}" \
--tree
# TODO: obviously a mega-role that can do everything is not good. We need to scope this down to appropriate permissions.
declare -a arr=("supervisor" "concierge")
for resource_name in "${arr[@]}"
do
log_note "Generating RBAC for use with ${resource_name} PackageInstall..."
# We want the install namespace to not be "default";
# it should be a unique namespace,
# but it should not be kapp-controller's global namespace,
# nor should it be any Pinniped resource namespace:
# - PackageRepository,Package = global kapp-controller namespace
# - PackageInstall,RBAC = *-install namespace
# - App = (supervisor, concierge) generated via ytt namespace
INSTALL_NAMESPACE="${resource_name}-install-ns"
PINNIPED_PACKAGE_RBAC_PREFIX="pinniped-package-rbac-${resource_name}"
PINNIPED_PACKAGE_RBAC_FILE_NAME="${PINNIPED_PACKAGE_RBAC_PREFIX}-${resource_name}-rbac.yml"
PINNIPED_PACKAGE_RBAC_FILE_PATH="${SCRIPT_DIR}/${PACKAGE_INSTALL_DIR}/${PINNIPED_PACKAGE_RBAC_FILE_NAME}"
# empty and regenerate
echo -n "" > "${PINNIPED_PACKAGE_RBAC_FILE_PATH}"
cat <<EOF >> "${PINNIPED_PACKAGE_RBAC_FILE_PATH}"
---
apiVersion: v1
kind: Namespace
metadata:
name: "${INSTALL_NAMESPACE}"
---
# ServiceAccount that the generated PackageInstall will use via its serviceAccountName
apiVersion: v1
kind: ServiceAccount
metadata:
name: "${PINNIPED_PACKAGE_RBAC_PREFIX}-sa-superadmin-dangerous"
namespace: "${INSTALL_NAMESPACE}"
# namespace: default # --> sticking to default for everything for now.
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: "${PINNIPED_PACKAGE_RBAC_PREFIX}-role-superadmin-dangerous"
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: "${PINNIPED_PACKAGE_RBAC_PREFIX}-role-binding-superadmin-dangerous"
subjects:
- kind: ServiceAccount
name: "${PINNIPED_PACKAGE_RBAC_PREFIX}-sa-superadmin-dangerous"
namespace: "${INSTALL_NAMESPACE}"
# namespace: default # --> sticking to default for everything for now.
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: "${PINNIPED_PACKAGE_RBAC_PREFIX}-role-superadmin-dangerous"
EOF
kapp deploy --app "${PINNIPED_PACKAGE_RBAC_PREFIX}" --file "${PINNIPED_PACKAGE_RBAC_FILE_PATH}" -y
done
# IF SUPERVISOR ........
if [ "${app}" = "pinniped-supervisor" ]; then
resource_name="supervisor"
# matching the hack/prepare-for-integration-tests.sh variables
supervisor_app_name="pinniped-supervisor"
supervisor_namespace="supervisor"
supervisor_custom_labels="{mySupervisorCustomLabelName: mySupervisorCustomLabelValue}"
log_level="debug"
service_https_nodeport_port="443"
service_https_nodeport_nodeport="31243"
service_https_clusterip_port="443"
# package install variables
INSTALL_NAME="${resource_name}-install"
INSTALL_NAMESPACE="${INSTALL_NAME}-ns"
PINNIPED_PACKAGE_RBAC_PREFIX="pinniped-package-rbac-${resource_name}"
RESOURCE_PACKGE_VERSION="${resource_name}.pinniped.dev"
PACKAGE_INSTALL_FILE_NAME="./${PACKAGE_INSTALL_DIR}/${resource_name}-pkginstall.yml"
PACKAGE_INSTALL_FILE_PATH="${SCRIPT_DIR}/${PACKAGE_INSTALL_FILE_NAME}"
SECRET_NAME="${resource_name}-package-install-secret"
log_note "🦄${resource_name}🦄: Creating PackageInstall resources for ${resource_name}..."
# generate an install file to use
cat > "${PACKAGE_INSTALL_FILE_PATH}" << EOF
---
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
# name, does not have to be versioned, versionSelection.constraints below will handle
name: ${INSTALL_NAME}
namespace: ${INSTALL_NAMESPACE}
spec:
serviceAccountName: "${PINNIPED_PACKAGE_RBAC_PREFIX}-sa-superadmin-dangerous"
packageRef:
refName: "${RESOURCE_PACKGE_VERSION}"
versionSelection:
constraints: "${PINNIPED_PACKAGE_VERSION}"
values:
- secretRef:
name: "${SECRET_NAME}"
---
apiVersion: v1
kind: Secret
metadata:
name: "${SECRET_NAME}"
namespace: ${INSTALL_NAMESPACE}
stringData:
values.yml: |
---
app_name: $supervisor_app_name
namespace: $supervisor_namespace
api_group_suffix: $api_group_suffix
image_repo: $registry_repo
image_tag: $tag
log_level: $log_level
service_https_nodeport_port: $service_https_nodeport_port
service_https_nodeport_nodeport: $service_https_nodeport_nodeport
service_https_clusterip_port: $service_https_clusterip_port
EOF
# removed from above:
# custom_labels: $supervisor_custom_labels
KAPP_CONTROLLER_APP_NAME="${resource_name}-pkginstall"
log_note "🦄${resource_name}🦄: Deploying ${KAPP_CONTROLLER_APP_NAME}..."
kapp deploy --yes --app "$supervisor_app_name" --diff-changes --file "${PACKAGE_INSTALL_FILE_PATH}"
kubectl apply --dry-run=client -f "${PACKAGE_INSTALL_FILE_PATH}" # Validate manifest schema.
fi
# IF CONCIERGE ........
if [ "${app}" = "pinniped-concierge" ]; then
resource_name="concierge"
# matching the hack/prepare-for-integration-tests.sh variables
concierge_app_name="pinniped-concierge"
concierge_namespace="concierge"
webhook_url="https://local-user-authenticator.local-user-authenticator.svc/authenticate"
webhook_ca_bundle="$(kubectl get secret local-user-authenticator-tls-serving-certificate --namespace local-user-authenticator -o 'jsonpath={.data.caCertificate}')"
discovery_url="$(TERM=dumb kubectl cluster-info | awk '/master|control plane/ {print $NF}')"
concierge_custom_labels="{myConciergeCustomLabelName: myConciergeCustomLabelValue}"
log_level="debug"
# package install variables
RESOURCE_NAMESPACE="${resource_name}" # to match the hack/prepare-for-integration-tests.sh file
INSTALL_NAME="${resource_name}-install"
INSTALL_NAMESPACE="${INSTALL_NAME}-ns"
PINNIPED_PACKAGE_RBAC_PREFIX="pinniped-package-rbac-${resource_name}"
RESOURCE_PACKGE_VERSION="${resource_name}.pinniped.dev"
PACKAGE_INSTALL_FILE_NAME="./${PACKAGE_INSTALL_DIR}/${resource_name}-pkginstall.yml"
PACKAGE_INSTALL_FILE_PATH="${SCRIPT_DIR}/${PACKAGE_INSTALL_FILE_NAME}"
SECRET_NAME="${resource_name}-package-install-secret"
log_note "🦄${resource_name}🦄: Creating PackageInstall resources for ${resource_name}..." # concierge version
cat > "${PACKAGE_INSTALL_FILE_PATH}" << EOF
---
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
# name, does not have to be versioned, versionSelection.constraints below will handle
name: ${INSTALL_NAME}
namespace: ${INSTALL_NAMESPACE}
spec:
serviceAccountName: "${PINNIPED_PACKAGE_RBAC_PREFIX}-sa-superadmin-dangerous"
packageRef:
refName: "${RESOURCE_PACKGE_VERSION}"
versionSelection:
constraints: "${PINNIPED_PACKAGE_VERSION}"
values:
- secretRef:
name: "${SECRET_NAME}"
---
apiVersion: v1
kind: Secret
metadata:
name: "${SECRET_NAME}"
namespace: ${INSTALL_NAMESPACE}
stringData:
values.yml: |
---
app_name: $concierge_app_name
namespace: $concierge_namespace
api_group_suffix: $api_group_suffix
log_level: $log_level
custom_labels: $concierge_custom_labels
image_repo: $registry_repo
image_tag: $tag
discovery_url: $discovery_url
EOF
KAPP_CONTROLLER_APP_NAME="${resource_name}-pkginstall"
log_note "🦄${resource_name}🦄: Deploying ${KAPP_CONTROLLER_APP_NAME}..."
# kapp deploy --app "${KAPP_CONTROLLER_APP_NAME}" --file "${PACKAGE_INSTALL_FILE_PATH}" -y
kapp deploy --yes --app "$concierge_app_name" --diff-changes --file "${PACKAGE_INSTALL_FILE_PATH}"
kubectl apply --dry-run=client -f "${PACKAGE_INSTALL_FILE_PATH}" # Validate manifest schema.
fi
log_note "Available Packages:"
kubectl get pkgr -A && kubectl get pkg -A && kubectl get pkgi -A
log_note "Pinniped Supervisor Package Deployed"
log_note "Pinniped Concierge Package Deployed"
kubectl get namespace -A | grep pinniped
kubectl get deploy -n supervisor
kubectl get deploy -n concierge
# FLOW:
# kind delete cluster --name pinniped
# ./hack/prepare-for-integration-tests.sh --alternate-deploy-supervisor $(pwd)/deploy_carvel/deploy-packages.sh --alternate-deploy-concierge $(pwd)/deploy_carvel/deploy-packages.sh
# ./hack/prepare-supervisor-on-kind.sh --oidc
#
# TODO:
# - change the namespace to whatever it is in ./hack/prepare-for-integration-tests.sh
# - make a script that can work for $alternate-deploy
# - then run ./hack/prepare-supervisor-on-kind.sh and make sure it works
#
#
# openssl x509 -text -noout -in ./root_ca.crt
#curl --insecure https://127.0.0.1:61759/live
#{
# "kind": "Status",
# "apiVersion": "v1",
# "metadata": {},
# "status": "Failure",
# "message": "forbidden: User \"system:anonymous\" cannot get path \"/live\"",
# "reason": "Forbidden",
# "details": {},
# "code": 403
#}%
#curl --insecure https://127.0.0.1:61759/readyz
#ok%
#
#
#log_note "verifying PackageInstall resources..."
#kubectl get PackageInstall -A | grep pinniped
#kubectl get secret -A | grep pinniped
#
#log_note "listing all package resources (PackageRepository, Package, PackageInstall)..."
#kubectl get pkgi && kubectl get pkgr && kubectl get pkg
#
#log_note "listing all kapp cli apps..."
## list again what is installed so we can ensure we have everything
#kapp ls --all-namespaces
#
## these are fundamentally different than what kapp cli understands, unfortunately.
## the term "app" is overloaded in Carvel and can mean two different things, based on
## the use of kapp cli and kapp-controller on cluster
#log_note "listing all kapp-controller apps..."
#kubectl get app --all-namespaces
#
## TODO:
## update the deployment.yaml and remove the deployment-HACKED.yaml files
## both are probably hacked a bit, so delete them and just get fresh from the ./deploy directory
## then make sure REAL PINNIPED actually deploys.
#
#
## In the end we should have:
## docker pull benjaminapetersen/pinniped-package-repo:latest
## docker pull benjaminapetersen/pinniped-package-repo-package-supervisor:0.25.0
## docker pull benjaminapetersen/pinniped-package-repo-package-concierge:0.25.0
#
## log_note "verifying RBAC resources created (namespace, serviceaccount, clusterrole, clusterrolebinding)..."
## kubectl get ns -A | grep pinniped
## kubectl get sa -A | grep pinniped
## kubectl get ClusterRole -A | grep pinniped
## kubectl get clusterrolebinding -A | grep pinniped
#
#
## stuff
#kubectl get PackageRepository -A
#kubectl get Package -A
#kubectl get PackageInstall -A


@ -0,0 +1,32 @@
#!/usr/bin/env bash
# https://gist.github.com/mohanpedala/1e2ff5661761d3abd0385e8223e16425
set -e # immediately exit
set -u # error if variables undefined
set -o pipefail # prevent masking errors in a pipeline
# set -x # print all executed commands to terminal
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
DEFAULT='\033[0m'
echo_yellow() {
echo -e "${YELLOW}>> $@${DEFAULT}\n"
# printf "${GREEN}$@${DEFAULT}"
}
echo_green() {
echo -e "${GREEN}>> $@${DEFAULT}\n"
# printf "${BLUE}$@${DEFAULT}"
}
echo_red() {
echo -e "${RED}>> $@${DEFAULT}\n"
# printf "${BLUE}$@${DEFAULT}"
}
echo_blue() {
echo -e "${BLUE}>> $@${DEFAULT}\n"
# printf "${BLUE}$@${DEFAULT}"
}


@ -0,0 +1,85 @@
#!/usr/bin/env bash
#
# This script is intended to be used with:
# - $repo_root/hack/prepare-for-integration-tests.sh --alternate-deploy $(pwd)/deploy_carvel/hack/log-args.sh
# and originated with the following:
# - https://github.com/jvanzyl/pinniped-charts/blob/main/alternate-deploy-helm
# along with this PR to pinniped:
# - https://github.com/vmware-tanzu/pinniped/pull/1028
set -euo pipefail
#
# Helper functions
#
function log_note() {
GREEN='\033[0;32m'
NC='\033[0m'
if [[ ${COLORTERM:-unknown} =~ ^(truecolor|24bit)$ ]]; then
echo -e "${GREEN}$*${NC}"
else
echo "$*"
fi
}
function log_error() {
RED='\033[0;31m'
NC='\033[0m'
if [[ ${COLORTERM:-unknown} =~ ^(truecolor|24bit)$ ]]; then
echo -e "🙁${RED} Error: $* ${NC}"
else
echo ":( Error: $*"
fi
}
function check_dependency() {
if ! command -v "$1" >/dev/null; then
log_error "Missing dependency..."
log_error "$2"
exit 1
fi
}
log_note "log-args.sh 🐳 🐳 🐳"
# two vars will be received by this script:
# Received: local-user-authenticator
# Received: D00A4537-80F1-4AF2-A3B3-5F20BDBB9AEB
log_note "passed this invocation:"
app=${1}
# tag is fed in from the prepare-for-integration-tests.sh script, just uuidgen to identify a
# specific docker build of the pinniped-server image.
tag=${2}
registry="pinniped.local"
repo="test/build"
registry_repo="$registry/$repo"
if [ "${app}" = "local-user-authenticator" ]; then
log_note "deploy-pachage.sh: local-user-authenticator 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠"
log_note "deploy-pachage.sh: local-user-authenticator 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠"
log_note "deploy-pachage.sh: local-user-authenticator 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠 🐠"
pushd deploy/local-user-authenticator >/dev/null
manifest=/tmp/pinniped-local-user-authenticator.yaml
ytt --file . \
--data-value "image_repo=$registry_repo" \
--data-value "image_tag=$tag" >"$manifest"
kapp deploy --yes --app local-user-authenticator --diff-changes --file "$manifest"
kubectl apply --dry-run=client -f "$manifest" # Validate manifest schema.
popd >/dev/null
fi
if [ "${app}" = "pinniped-supervisor" ]; then
log_note "deploy-pachage.sh: supervisor 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡"
log_note "deploy-pachage.sh: supervisor 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡"
log_note "deploy-pachage.sh: supervisor 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡 🐡"
fi
if [ "${app}" = "pinniped-concierge" ]; then
log_note "deploy-pachage.sh: concierge 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼"
log_note "deploy-pachage.sh: concierge 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼"
log_note "deploy-pachage.sh: concierge 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼 🪼"
fi


@ -0,0 +1,69 @@
#!/bin/sh
set -o errexit
# default name if not provided
# KIND_CLUSTER_NAME=my-kind-of-cluster ./kind-with-registry.sh
KIND_CLUSTER_NAME="${KIND_CLUSTER_NAME:=my-kind-cluster}"
# 1. Create registry container unless it already exists
reg_name='kind-registry'
reg_port='5001'
if [ "$(docker inspect -f '{{.State.Running}}' "${reg_name}" 2>/dev/null || true)" != 'true' ]; then
docker run \
-d --restart=always -p "127.0.0.1:${reg_port}:5000" --name "${reg_name}" \
registry:2
fi
# 2. Create kind cluster with containerd registry config dir enabled
# TODO: kind will eventually enable this by default and this patch will
# be unnecessary.
#
# See:
# https://github.com/kubernetes-sigs/kind/issues/2875
# https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration
# See: https://github.com/containerd/containerd/blob/main/docs/hosts.md
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: ${KIND_CLUSTER_NAME}
containerdConfigPatches:
- |-
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = "/etc/containerd/certs.d"
EOF
# 3. Add the registry config to the nodes
#
# This is necessary because localhost resolves to loopback addresses that are
# network-namespace local.
# In other words: localhost in the container is not localhost on the host.
#
# We want a consistent name that works from both ends, so we tell containerd to
# alias localhost:${reg_port} to the registry container when pulling images
REGISTRY_DIR="/etc/containerd/certs.d/localhost:${reg_port}"
for node in $(kind get nodes); do
docker exec "${node}" mkdir -p "${REGISTRY_DIR}"
cat <<EOF | docker exec -i "${node}" cp /dev/stdin "${REGISTRY_DIR}/hosts.toml"
[host."http://${reg_name}:5000"]
EOF
done
# 4. Connect the registry to the cluster network if not already connected
# This allows kind to bootstrap the network but ensures they're on the same network
if [ "$(docker inspect -f='{{json .NetworkSettings.Networks.kind}}' "${reg_name}")" = 'null' ]; then
docker network connect "kind" "${reg_name}"
fi
# 5. Document the local registry
# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
name: local-registry-hosting
namespace: kube-public
data:
localRegistryHosting.v1: |
host: "localhost:${reg_port}"
help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
EOF
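# Example usage of the local registry (illustrative; "my-image" is a hypothetical image name and
# 5001 is the default reg_port configured above):
#   docker tag my-image:dev localhost:5001/my-image:dev
#   docker push localhost:5001/my-image:dev
#   kubectl create deployment my-test --image=localhost:5001/my-image:dev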

deploy_carvel/hack/log-args.sh Executable file

@ -0,0 +1,55 @@
#!/usr/bin/env bash
#
# This script is intended to be used with:
# - $repo_root/hack/prepare-for-integration-tests.sh --alternate-deploy $(pwd)/deploy_carvel/hack/log-args.sh
# and originated with the following:
# - https://github.com/jvanzyl/pinniped-charts/blob/main/alternate-deploy-helm
# along with this PR to pinniped:
# - https://github.com/vmware-tanzu/pinniped/pull/1028
set -euo pipefail
#
# Helper functions
#
function log_note() {
GREEN='\033[0;32m'
NC='\033[0m'
if [[ ${COLORTERM:-unknown} =~ ^(truecolor|24bit)$ ]]; then
echo -e "${GREEN}$*${NC}"
else
echo "$*"
fi
}
function log_error() {
RED='\033[0;31m'
NC='\033[0m'
if [[ ${COLORTERM:-unknown} =~ ^(truecolor|24bit)$ ]]; then
echo -e "🙁${RED} Error: $* ${NC}"
else
echo ":( Error: $*"
fi
}
function check_dependency() {
if ! command -v "$1" >/dev/null; then
log_error "Missing dependency..."
log_error "$2"
exit 1
fi
}
# two vars will be received by this script:
# Received: local-user-authenticator
# Received: D00A4537-80F1-4AF2-A3B3-5F20BDBB9AEB
app=${1}
# tag is fed in from the prepare-for-integration-tests.sh script, just uuidgen to identify a
# specific docker build of the pinniped-server image.
tag=${2}
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
log_note "log-args.sh >>> script dir: ${SCRIPT_DIR}"
log_note "log-args.sh >>> app: ${app} tag: ${tag}"
# nothing else, this is a test.

deploy_carvel/hack/log-args2.sh Executable file

@ -0,0 +1,53 @@
#!/usr/bin/env bash
#
# This script is intended to be used with:
# - $repo_root/hack/prepare-for-integration-tests.sh --alternate-deploy $(pwd)/deploy_carvel/hack/log-args.sh
# and originated with the following:
# - https://github.com/jvanzyl/pinniped-charts/blob/main/alternate-deploy-helm
# along with this PR to pinniped:
# - https://github.com/vmware-tanzu/pinniped/pull/1028
set -euo pipefail
#
# Helper functions
#
function log_note() {
GREEN='\033[0;32m'
NC='\033[0m'
if [[ ${COLORTERM:-unknown} =~ ^(truecolor|24bit)$ ]]; then
echo -e "${GREEN}$*${NC}"
else
echo "$*"
fi
}
function log_error() {
RED='\033[0;31m'
NC='\033[0m'
if [[ ${COLORTERM:-unknown} =~ ^(truecolor|24bit)$ ]]; then
echo -e "🙁${RED} Error: $* ${NC}"
else
echo ":( Error: $*"
fi
}
function check_dependency() {
if ! command -v "$1" >/dev/null; then
log_error "Missing dependency..."
log_error "$2"
exit 1
fi
}
# two vars will be received by this script:
# Received: local-user-authenticator
# Received: D00A4537-80F1-4AF2-A3B3-5F20BDBB9AEB
app=${1}
# tag is fed in from the prepare-for-integration-tests.sh script, just uuidgen to identify a
# specific docker build of the pinniped-server image.
tag=${2}
log_note "🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄"
log_note "log-args2.sh >>> app: ${app} tag: ${tag} 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄"
log_note "🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄 🦄"


@ -0,0 +1,99 @@
#!/usr/bin/env bash
#
# This script is intended to be used with:
# - $repo_root/hack/prepare-for-integration-tests.sh --alternate-deploy deploy_carvel/prepare-alt-deploy-with-package.sh
# and originated with the following:
# - https://github.com/jvanzyl/pinniped-charts/blob/main/alternate-deploy-helm
# along with this PR to pinniped:
# - https://github.com/vmware-tanzu/pinniped/pull/1028
set -euo pipefail
#
# Helper functions
#
function log_note() {
GREEN='\033[0;32m'
NC='\033[0m'
if [[ ${COLORTERM:-unknown} =~ ^(truecolor|24bit)$ ]]; then
echo -e "${GREEN}$*${NC}"
else
echo "$*"
fi
}
function log_error() {
RED='\033[0;31m'
NC='\033[0m'
if [[ ${COLORTERM:-unknown} =~ ^(truecolor|24bit)$ ]]; then
echo -e "🙁${RED} Error: $* ${NC}"
else
echo ":( Error: $*"
fi
}
function check_dependency() {
if ! command -v "$1" >/dev/null; then
log_error "Missing dependency..."
log_error "$2"
exit 1
fi
}
# two vars will be received by this script:
# Received: local-user-authenticator
# Received: D00A4537-80F1-4AF2-A3B3-5F20BDBB9AEB
log_note "passed this invocation:"
app=${1}
# tag is fed in from the prepare-for-integration-tests.sh script, just uuidgen to identify a
# specific docker build of the pinniped-server image.
tag=${2}
if [ "${app}" = "local-user-authenticator" ]; then
#
# TODO: continue on from here.
# get this to install correctly, exactly as it did before
# and then do the rest?
# OR TODO: correct the $alternate_deploy issue by creating 3 new flags:
# $alternate_deploy-supervisor
# $alternate_deploy-concierge
# $alternate_deploy-local-user-authenticator
#
# TODO step 1: test to ensure current change did not break the script!
#
log_note "🦄 🦄 🦄 where are we?!?!?"
pwd
log_note "Deploying the local-user-authenticator app to the cluster using kapp..."
ytt --file . \
--data-value "image_repo=$registry_repo" \
--data-value "image_tag=$tag" >"$manifest"
kapp deploy --yes --app local-user-authenticator --diff-changes --file "$manifest"
kubectl apply --dry-run=client -f "$manifest" # Validate manifest schema.
fi
if [ "${app}" = "pinniped-supervisor" ]; then
# helm upgrade pinniped-supervisor charts/pinniped-supervisor \
# --install \
# --values source/pinniped-supervisor/values-lit.yaml \
# --set image.version=${tag} \
# --namespace supervisor \
# --create-namespace \
# --atomic
# --atomic
log_note "ignoring supervisor, so sad........."
fi
if [ "${app}" = "pinniped-concierge" ]; then
# discovery_url="$(TERM=dumb kubectl cluster-info | awk '/master|control plane/ {print $NF}')"
# helm upgrade pinniped-concierge charts/pinniped-concierge \
# --install \
# --values source/pinniped-concierge/values-lit.yaml \
# --set image.version=${tag} \
# --set config.discovery.url=${discovery_url} \
# --set config.logLevel="debug" \
# --namespace concierge \
# --create-namespace \
# --atomic
log_note "ignoring concierge, so sad........."
fi


@ -0,0 +1,9 @@
---
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageRepository
metadata:
name: "pinniped-package-repository"
spec:
fetch:
imgpkgBundle:
image: "benjaminapetersen/pinniped-package-repo:0.25.0"


@ -0,0 +1,3 @@
---
apiVersion: imgpkg.carvel.dev/v1alpha1
kind: ImagesLock


@ -0,0 +1,23 @@
#@ load("@ytt:data", "data")
---
apiVersion: kbld.k14s.io/v1alpha1
kind: Config
minimumRequiredVersion: 0.31.0 #! minimum required version of kbld; we probably don't need to specify this.
overrides:
#! TODO: in the pinniped yamls, this is provided by values.yaml, not declared in the deployment.
#! we should assess if we want to leave it there or move it to this form of configuration.
- image: projects.registry.vmware.com/pinniped/pinniped-server
newImage: #@ data.values.image_repo
#! I don't think we need any of these (until we need them 😊), i.e., don't use them prematurely.
#! searchRules: ... # for searching input files to find container images
#! overrides: ... # overrides to apply to container images before resolving or building
#! sources: ... # source/content of a container image
#! destinations: ... # where to push built images
#!
#!
#! source: TODO: we may need this at least to specify that we want kbld to build
#! a set of container images that are found in our package config yaml files.
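#! A rough sketch of what that could look like (illustrative only, not enabled here; the path and
#! registry values below are placeholder assumptions):
#! sources:
#! - image: projects.registry.vmware.com/pinniped/pinniped-server
#!   path: .   #! directory containing the Dockerfile to build from
#! destinations:
#! - image: projects.registry.vmware.com/pinniped/pinniped-server
#!   newImage: registry.example.com/pinniped/pinniped-server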


@ -0,0 +1,3 @@
# Pinniped Supervisor Deployment
See [the how-to guide for details](https://pinniped.dev/docs/howto/install-supervisor/).


@ -0,0 +1,170 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
creationTimestamp: null
name: federationdomains.config.supervisor.pinniped.dev
spec:
group: config.supervisor.pinniped.dev
names:
categories:
- pinniped
kind: FederationDomain
listKind: FederationDomainList
plural: federationdomains
singular: federationdomain
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.issuer
name: Issuer
type: string
- jsonPath: .status.status
name: Status
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: FederationDomain describes the configuration of an OIDC provider.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: Spec of the OIDC provider.
properties:
issuer:
description: "Issuer is the OIDC Provider's issuer, per the OIDC Discovery
Metadata document, as well as the identifier that it will use for
the iss claim in issued JWTs. This field will also be used as the
base URL for any endpoints used by the OIDC Provider (e.g., if your
issuer is https://example.com/foo, then your authorization endpoint
will look like https://example.com/foo/some/path/to/auth/endpoint).
\n See https://openid.net/specs/openid-connect-discovery-1_0.html#rfc.section.3
for more information."
minLength: 1
type: string
tls:
description: TLS configures how this FederationDomain is served over
Transport Layer Security (TLS).
properties:
secretName:
description: "SecretName is an optional name of a Secret in the
same namespace, of type `kubernetes.io/tls`, which contains
the TLS serving certificate for the HTTPS endpoints served by
this FederationDomain. When provided, the TLS Secret named here
must contain keys named `tls.crt` and `tls.key` that contain
the certificate and private key to use for TLS. \n Server Name
Indication (SNI) is an extension to the Transport Layer Security
(TLS) supported by all major browsers. \n SecretName is required
if you would like to use different TLS certificates for issuers
of different hostnames. SNI requests do not include port numbers,
so all issuers with the same DNS hostname must use the same
SecretName value even if they have different port numbers. \n
SecretName is not required when you would like to use only the
HTTP endpoints (e.g. when the HTTP listener is configured to
listen on loopback interfaces or UNIX domain sockets for traffic
from a service mesh sidecar). It is also not required when you
would like all requests to this OIDC Provider's HTTPS endpoints
to use the default TLS certificate, which is configured elsewhere.
\n When your Issuer URL's host is an IP address, then this field
is ignored. SNI does not work for IP addresses."
type: string
type: object
required:
- issuer
type: object
status:
description: Status of the OIDC provider.
properties:
lastUpdateTime:
description: LastUpdateTime holds the time at which the Status was
last updated. It is a pointer to get around some undesirable behavior
with respect to the empty metav1.Time value (see https://github.com/kubernetes/kubernetes/issues/86811).
format: date-time
type: string
message:
description: Message provides human-readable details about the Status.
type: string
secrets:
description: Secrets contains information about this OIDC Provider's
secrets.
properties:
jwks:
description: JWKS holds the name of the corev1.Secret in which
this OIDC Provider's signing/verification keys are stored. If
it is empty, then the signing/verification keys are either unknown
or they don't exist.
properties:
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
stateEncryptionKey:
description: StateSigningKey holds the name of the corev1.Secret
in which this OIDC Provider's key for encrypting state parameters
is stored.
properties:
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
stateSigningKey:
description: StateSigningKey holds the name of the corev1.Secret
in which this OIDC Provider's key for signing state parameters
is stored.
properties:
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
tokenSigningKey:
description: TokenSigningKey holds the name of the corev1.Secret
in which this OIDC Provider's key for signing tokens is stored.
properties:
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
type: object
status:
description: Status holds an enum that describes the state of this
OIDC Provider. Note that this Status can represent success or failure.
enum:
- Success
- Duplicate
- Invalid
- SameIssuerHostMustUseSameSecret
type: string
type: object
required:
- spec
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []


@ -0,0 +1,221 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
creationTimestamp: null
name: oidcclients.config.supervisor.pinniped.dev
spec:
group: config.supervisor.pinniped.dev
names:
categories:
- pinniped
kind: OIDCClient
listKind: OIDCClientList
plural: oidcclients
singular: oidcclient
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.allowedScopes[?(@ == "pinniped:request-audience")]
name: Privileged Scopes
type: string
- jsonPath: .status.totalClientSecrets
name: Client Secrets
type: integer
- jsonPath: .status.phase
name: Status
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: OIDCClient describes the configuration of an OIDC client.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: Spec of the OIDC client.
properties:
allowedGrantTypes:
description: "allowedGrantTypes is a list of the allowed grant_type
param values that should be accepted during OIDC flows with this
client. \n Must only contain the following values: - authorization_code:
allows the client to perform the authorization code grant flow,
i.e. allows the webapp to authenticate users. This grant must always
be listed. - refresh_token: allows the client to perform refresh
grants for the user to extend the user's session. This grant must
be listed if allowedScopes lists offline_access. - urn:ietf:params:oauth:grant-type:token-exchange:
allows the client to perform RFC8693 token exchange, which is a
step in the process to be able to get a cluster credential for the
user. This grant must be listed if allowedScopes lists pinniped:request-audience."
items:
enum:
- authorization_code
- refresh_token
- urn:ietf:params:oauth:grant-type:token-exchange
type: string
minItems: 1
type: array
x-kubernetes-list-type: set
allowedRedirectURIs:
description: allowedRedirectURIs is a list of the allowed redirect_uri
param values that should be accepted during OIDC flows with this
client. Any other uris will be rejected. Must be a URI with the
https scheme, unless the hostname is 127.0.0.1 or ::1 which may
use the http scheme. Port numbers are not required for 127.0.0.1
or ::1 and are ignored when checking for a matching redirect_uri.
items:
pattern: ^https://.+|^http://(127\.0\.0\.1|\[::1\])(:\d+)?/
type: string
minItems: 1
type: array
x-kubernetes-list-type: set
allowedScopes:
description: "allowedScopes is a list of the allowed scopes param
values that should be accepted during OIDC flows with this client.
\n Must only contain the following values: - openid: The client
is allowed to request ID tokens. ID tokens only include the required
claims by default (iss, sub, aud, exp, iat). This scope must always
be listed. - offline_access: The client is allowed to request an
initial refresh token during the authorization code grant flow.
This scope must be listed if allowedGrantTypes lists refresh_token.
- pinniped:request-audience: The client is allowed to request a
new audience value during a RFC8693 token exchange, which is a step
in the process to be able to get a cluster credential for the user.
openid, username and groups scopes must be listed when this scope
is present. This scope must be listed if allowedGrantTypes lists
urn:ietf:params:oauth:grant-type:token-exchange. - username: The
client is allowed to request that ID tokens contain the user's username.
Without the username scope being requested and allowed, the ID token
will not contain the user's username. - groups: The client is allowed
to request that ID tokens contain the user's group membership, if
their group membership is discoverable by the Supervisor. Without
the groups scope being requested and allowed, the ID token will
not contain groups."
items:
enum:
- openid
- offline_access
- username
- groups
- pinniped:request-audience
type: string
minItems: 1
type: array
x-kubernetes-list-type: set
required:
- allowedGrantTypes
- allowedRedirectURIs
- allowedScopes
type: object
status:
description: Status of the OIDC client.
properties:
conditions:
description: conditions represent the observations of an OIDCClient's
current state.
items:
description: Condition status of a resource (mirrored from the metav1.Condition
type added in Kubernetes 1.19). In a future API version we can
switch to using the upstream type. See https://github.com/kubernetes/apimachinery/blob/v0.19.0/pkg/apis/meta/v1/types.go#L1353-L1413.
properties:
lastTransitionTime:
description: lastTransitionTime is the last time the condition
transitioned from one status to another. This should be when
the underlying condition changed. If that is not known, then
using the time when the API field changed is acceptable.
format: date-time
type: string
message:
description: message is a human readable message indicating
details about the transition. This may be an empty string.
maxLength: 32768
type: string
observedGeneration:
description: observedGeneration represents the .metadata.generation
that the condition was set based upon. For instance, if .metadata.generation
is currently 12, but the .status.conditions[x].observedGeneration
is 9, the condition is out of date with respect to the current
state of the instance.
format: int64
minimum: 0
type: integer
reason:
description: reason contains a programmatic identifier indicating
the reason for the condition's last transition. Producers
of specific condition types may define expected values and
meanings for this field, and whether the values are considered
a guaranteed API. The value should be a CamelCase string.
This field may not be empty.
maxLength: 1024
minLength: 1
pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
type: string
status:
description: status of the condition, one of True, False, Unknown.
enum:
- "True"
- "False"
- Unknown
type: string
type:
description: type of condition in CamelCase or in foo.example.com/CamelCase.
--- Many .condition.type values are consistent across resources
like Available, but because arbitrary conditions can be useful
(see .node.status.conditions), the ability to deconflict is
important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)
maxLength: 316
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
type: string
required:
- lastTransitionTime
- message
- reason
- status
- type
type: object
type: array
x-kubernetes-list-map-keys:
- type
x-kubernetes-list-type: map
phase:
default: Pending
description: phase summarizes the overall status of the OIDCClient.
enum:
- Pending
- Ready
- Error
type: string
totalClientSecrets:
description: totalClientSecrets is the current number of client secrets
that are detected for this OIDCClient.
format: int32
type: integer
type: object
required:
- spec
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []


@ -0,0 +1,236 @@
#! Copyright 2020-2022 the Pinniped contributors. All Rights Reserved.
#! SPDX-License-Identifier: Apache-2.0
#@ load("@ytt:data", "data")
#@ load("@ytt:yaml", "yaml")
#@ load("helpers.lib.yaml",
#@ "defaultLabel",
#@ "labels",
#@ "deploymentPodLabel",
#@ "namespace",
#@ "defaultResourceName",
#@ "defaultResourceNameWithSuffix",
#@ "pinnipedDevAPIGroupWithPrefix",
#@ "getPinnipedConfigMapData",
#@ "hasUnixNetworkEndpoint",
#@ )
#@ load("@ytt:template", "template")
#@ if not data.values.into_namespace:
---
apiVersion: v1
kind: Namespace
metadata:
name: #@ data.values.namespace
labels: #@ labels()
#@ end
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: #@ defaultResourceName()
namespace: #@ namespace()
labels: #@ labels()
---
apiVersion: v1
kind: ConfigMap
metadata:
name: #@ defaultResourceNameWithSuffix("static-config")
namespace: #@ namespace()
labels: #@ labels()
data:
#@yaml/text-templated-strings
pinniped.yaml: #@ yaml.encode(getPinnipedConfigMapData())
---
#@ if data.values.image_pull_dockerconfigjson and data.values.image_pull_dockerconfigjson != "":
apiVersion: v1
kind: Secret
metadata:
name: image-pull-secret
namespace: #@ namespace()
labels: #@ labels()
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: #@ data.values.image_pull_dockerconfigjson
#@ end
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: #@ defaultResourceName()
namespace: #@ namespace()
labels: #@ labels()
spec:
replicas: #@ data.values.replicas
selector:
#! In hindsight, this should have been deploymentPodLabel(), but this field is immutable so changing it would break upgrades.
matchLabels: #@ defaultLabel()
template:
metadata:
labels:
#! This has always included defaultLabel(), which is used by this Deployment's selector.
_: #@ template.replace(defaultLabel())
#! More recently added the more unique deploymentPodLabel() so Services can select these Pods more specifically
#! without accidentally selecting pods from any future Deployments which might also want to use the defaultLabel().
_: #@ template.replace(deploymentPodLabel())
spec:
securityContext:
runAsUser: #@ data.values.run_as_user
runAsGroup: #@ data.values.run_as_group
serviceAccountName: #@ defaultResourceName()
#@ if data.values.image_pull_dockerconfigjson and data.values.image_pull_dockerconfigjson != "":
imagePullSecrets:
- name: image-pull-secret
#@ end
containers:
- name: #@ defaultResourceName()
#@ if data.values.image_digest:
image: #@ data.values.image_repo + "@" + data.values.image_digest
#@ else:
image: #@ data.values.image_repo + ":" + data.values.image_tag
#@ end
imagePullPolicy: IfNotPresent
command:
- pinniped-supervisor
- /etc/podinfo
- /etc/config/pinniped.yaml
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
allowPrivilegeEscalation: false
capabilities:
drop: [ "ALL" ]
#! seccompProfile was introduced in Kube v1.19. Using it on an older Kube version will result in a
#! kubectl validation error when installing via `kubectl apply`, which can be ignored using kubectl's
#! `--validate=false` flag. Note that installing via `kapp` does not complain about this validation error.
seccompProfile:
type: "RuntimeDefault"
resources:
requests:
#! If OIDCClient CRs are being used, then the Supervisor needs enough CPU to run expensive bcrypt
#! operations inside the implementation of the token endpoint for any authcode flows performed by those
#! clients, so for that use case administrators may wish to increase the requests.cpu value to more
#! closely align with their anticipated needs. Increasing this value will cause Kubernetes to give more
#! available CPU to this process during times of high CPU contention. By default, don't ask for too much
#! because that would make it impossible to install the Pinniped Supervisor on small clusters.
#! Aside from performing bcrypts at the token endpoint for those clients, the Supervisor is not a
#! particularly CPU-intensive process.
cpu: "100m" #! by default, request one-tenth of a CPU
memory: "128Mi"
limits:
#! By declaring a CPU limit that is not equal to the CPU request value, the Supervisor will be classified
#! by Kubernetes to have "burstable" quality of service.
#! See https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-burstable
#! If OIDCClient CRs are being used, and lots of simultaneous users have active sessions, then it is hard
#! to pre-determine what the CPU limit should be for that use case. Guessing too low would cause the
#! pod's CPU usage to be throttled, resulting in poor performance. Guessing too high would allow clients
#! to cause the usage of lots of CPU resources. Administrators who have a good sense of anticipated usage
#! patterns may choose to set the requests.cpu and limits.cpu differently from these defaults.
cpu: "1000m" #! by default, throttle each pod's usage at 1 CPU
memory: "128Mi"
volumeMounts:
- name: config-volume
mountPath: /etc/config
readOnly: true
- name: podinfo
mountPath: /etc/podinfo
readOnly: true
#@ if hasUnixNetworkEndpoint():
- name: socket
mountPath: /pinniped_socket
readOnly: false #! writable to allow for socket use
#@ end
ports:
- containerPort: 8443
protocol: TCP
env:
#@ if data.values.https_proxy:
- name: HTTPS_PROXY
value: #@ data.values.https_proxy
#@ end
#@ if data.values.https_proxy and data.values.no_proxy:
- name: NO_PROXY
value: #@ data.values.no_proxy
#@ end
livenessProbe:
httpGet:
path: /healthz
port: 8443
scheme: HTTPS
initialDelaySeconds: 2
timeoutSeconds: 15
periodSeconds: 10
failureThreshold: 5
readinessProbe:
httpGet:
path: /healthz
port: 8443
scheme: HTTPS
initialDelaySeconds: 2
timeoutSeconds: 3
periodSeconds: 10
failureThreshold: 3
volumes:
- name: config-volume
configMap:
name: #@ defaultResourceNameWithSuffix("static-config")
- name: podinfo
downwardAPI:
items:
- path: "labels"
fieldRef:
fieldPath: metadata.labels
- path: "namespace"
fieldRef:
fieldPath: metadata.namespace
- path: "name"
fieldRef:
fieldPath: metadata.name
#@ if hasUnixNetworkEndpoint():
- name: socket
emptyDir: {}
#@ end
#! This preferred anti-affinity helps ensure that our multiple pods run on different nodes, making
#! our deployment more highly available (a soft scheduling preference, not a guarantee).
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 50
podAffinityTerm:
labelSelector:
matchLabels: #@ deploymentPodLabel()
topologyKey: kubernetes.io/hostname
---
apiVersion: v1
kind: Service
metadata:
#! If name is changed, must also change names.apiService in the ConfigMap above and spec.service.name in the APIService below.
name: #@ defaultResourceNameWithSuffix("api")
namespace: #@ namespace()
labels: #@ labels()
#! prevent kapp from altering the selector of our services to match kubectl behavior
annotations:
kapp.k14s.io/disable-default-label-scoping-rules: ""
spec:
type: ClusterIP
selector: #@ deploymentPodLabel()
ports:
- protocol: TCP
port: 443
targetPort: 10250
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
name: #@ pinnipedDevAPIGroupWithPrefix("v1alpha1.clientsecret.supervisor")
labels: #@ labels()
spec:
version: v1alpha1
group: #@ pinnipedDevAPIGroupWithPrefix("clientsecret.supervisor")
groupPriorityMinimum: 9900
versionPriority: 15
#! caBundle: Do not include this key here. Starts out null, will be updated/owned by the golang code.
service:
name: #@ defaultResourceNameWithSuffix("api")
namespace: #@ namespace()
port: 443
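As a rough local sketch (assuming the supervisor/config directory layout used by this package, and a hypothetical image tag), the Namespace, Deployment, Services, and APIService above can be rendered with ytt and reviewed before any kbld/kapp steps:
# Render the supervisor templates to plain YAML, overriding the image_tag data value.
ytt --file supervisor/config --data-value image_tag=my-local-tag > rendered-supervisor.yaml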


@ -0,0 +1,88 @@
#! Copyright 2020-2022 the Pinniped contributors. All Rights Reserved.
#! SPDX-License-Identifier: Apache-2.0
#@ load("@ytt:data", "data")
#@ load("@ytt:template", "template")
#@ def defaultResourceName():
#@ return data.values.app_name
#@ end
#@ def defaultResourceNameWithSuffix(suffix):
#@ return data.values.app_name + "-" + suffix
#@ end
#@ def pinnipedDevAPIGroupWithPrefix(prefix):
#@ return prefix + "." + data.values.api_group_suffix
#@ end
#@ def namespace():
#@ if data.values.into_namespace:
#@ return data.values.into_namespace
#@ else:
#@ return data.values.namespace
#@ end
#@ end
#@ def defaultLabel():
app: #@ data.values.app_name
#@ end
#@ def deploymentPodLabel():
deployment.pinniped.dev: supervisor
#@ end
#@ def labels():
_: #@ template.replace(defaultLabel())
_: #@ template.replace(data.values.custom_labels)
#@ end
#@ def getAndValidateLogLevel():
#@ log_level = data.values.log_level
#@ if log_level != "info" and log_level != "debug" and log_level != "trace" and log_level != "all":
#@ fail("log_level '" + log_level + "' is invalid")
#@ end
#@ return log_level
#@ end
#@ def getPinnipedConfigMapData():
#@ config = {
#@ "apiGroupSuffix": data.values.api_group_suffix,
#@ "names": {
#@ "defaultTLSCertificateSecret": defaultResourceNameWithSuffix("default-tls-certificate"),
#@ "apiService": defaultResourceNameWithSuffix("api"),
#@ },
#@ "labels": labels(),
#@ "insecureAcceptExternalUnencryptedHttpRequests": data.values.deprecated_insecure_accept_external_unencrypted_http_requests
#@ }
#@ if data.values.log_level or data.values.deprecated_log_format:
#@ config["log"] = {}
#@ end
#@ if data.values.log_level:
#@ config["log"]["level"] = getAndValidateLogLevel()
#@ end
#@ if data.values.deprecated_log_format:
#@ config["log"]["format"] = data.values.deprecated_log_format
#@ end
#@ if data.values.endpoints:
#@ config["endpoints"] = data.values.endpoints
#@ end
#@ return config
#@ end
#@ def getattr_safe(val, *args):
#@ out = None
#@ for arg in args:
#@ if not hasattr(val, arg):
#@ return None
#@ end
#@ out = getattr(val, arg)
#@ val = out
#@ end
#@ return out
#@ end
#@ def hasUnixNetworkEndpoint():
#@ return getattr_safe(data.values.endpoints, "http", "network") == "unix" or \
#@ getattr_safe(data.values.endpoints, "https", "network") == "unix"
#@ end
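hasUnixNetworkEndpoint() above is what enables the optional socket volume and /pinniped_socket mount in deployment.yaml; a minimal data-values sketch that would make it return True (the endpoints value itself is defined in values.yaml, and any other endpoint keys are omitted here):
endpoints:
  https:
    network: unix   # getattr_safe(endpoints, "https", "network") == "unix"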


@ -0,0 +1,319 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
creationTimestamp: null
name: activedirectoryidentityproviders.idp.supervisor.pinniped.dev
spec:
group: idp.supervisor.pinniped.dev
names:
categories:
- pinniped
- pinniped-idp
- pinniped-idps
kind: ActiveDirectoryIdentityProvider
listKind: ActiveDirectoryIdentityProviderList
plural: activedirectoryidentityproviders
singular: activedirectoryidentityprovider
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.host
name: Host
type: string
- jsonPath: .status.phase
name: Status
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: ActiveDirectoryIdentityProvider describes the configuration of
an upstream Microsoft Active Directory identity provider.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: Spec for configuring the identity provider.
properties:
bind:
description: Bind contains the configuration for how to provide access
credentials during an initial bind to the ActiveDirectory server
to be allowed to perform searches and binds to validate a user's
credentials during a user's authentication attempt.
properties:
secretName:
description: SecretName contains the name of a namespace-local
Secret object that provides the username and password for an
Active Directory bind user. This account will be used to perform
LDAP searches. The Secret should be of type "kubernetes.io/basic-auth"
which includes "username" and "password" keys. The username
value should be the full dn (distinguished name) of your bind
account, e.g. "cn=bind-account,ou=users,dc=example,dc=com".
The password must be non-empty.
minLength: 1
type: string
required:
- secretName
type: object
groupSearch:
description: GroupSearch contains the configuration for searching
for a user's group membership in ActiveDirectory.
properties:
attributes:
description: Attributes specifies how the group's information
should be read from each ActiveDirectory entry which was found
as the result of the group search.
properties:
groupName:
description: GroupName specifies the name of the attribute
in the Active Directory entries whose value shall become
a group name in the user's list of groups after a successful
authentication. The value of this field is case-sensitive
and must match the case of the attribute name returned by
the ActiveDirectory server in the user's entry. E.g. "cn"
for common name. Distinguished names can be used by specifying
lower-case "dn". Optional. When not specified, this defaults
to a custom field that looks like "sAMAccountName@domain",
where domain is constructed from the domain components of
the group DN.
type: string
type: object
base:
description: Base is the dn (distinguished name) that should be
used as the search base when searching for groups. E.g. "ou=groups,dc=example,dc=com".
Optional, when not specified it will be based on the result
of a query for the defaultNamingContext (see https://docs.microsoft.com/en-us/windows/win32/adschema/rootdse).
The default behavior searches your entire domain for groups.
It may make sense to specify a subtree as a search base if you
wish to exclude some groups for security reasons or to make
searches faster.
type: string
filter:
description: Filter is the ActiveDirectory search filter which
should be applied when searching for groups for a user. The
pattern "{}" must occur in the filter at least once and will
be dynamically replaced by the value of an attribute of the
user entry found as a result of the user search. Which attribute's
value is used to replace the placeholder(s) depends on the value
of UserAttributeForFilter. E.g. "member={}" or "&(objectClass=groupOfNames)(member={})".
For more information about ActiveDirectory filters, see https://ldap.com/ldap-filters.
Note that the dn (distinguished name) is not an attribute of
an entry, so "dn={}" cannot be used. Optional. When not specified,
the default will act as if the filter were specified as "(&(objectClass=group)(member:1.2.840.113556.1.4.1941:={})".
This searches nested groups by default. Note that nested group
search can be slow for some Active Directory servers. To disable
it, you can set the filter to "(&(objectClass=group)(member={})"
type: string
skipGroupRefresh:
description: "The user's group membership is refreshed as they
interact with the supervisor to obtain new credentials (as their
old credentials expire). This allows group membership changes
to be quickly reflected into Kubernetes clusters. Since group
membership is often used to bind authorization policies, it
is important to keep the groups observed in Kubernetes clusters
in-sync with the identity provider. \n In some environments,
frequent group membership queries may result in a significant
performance impact on the identity provider and/or the supervisor.
The best approach to handle performance impacts is to tweak
the group query to be more performant, for example by disabling
nested group search or by using a more targeted group search
base. \n If the group search query cannot be made performant
and you are willing to have group memberships remain static
for approximately a day, then set skipGroupRefresh to true.
\ This is an insecure configuration as authorization policies
that are bound to group membership will not notice if a user
has been removed from a particular group until their next login.
\n This is an experimental feature that may be removed or significantly
altered in the future. Consumers of this configuration should
carefully read all release notes before upgrading to ensure
that the meaning of this field has not changed."
type: boolean
userAttributeForFilter:
description: UserAttributeForFilter specifies which attribute's
value from the user entry found as a result of the user search
will be used to replace the "{}" placeholder(s) in the group
search Filter. For example, specifying "uid" as the UserAttributeForFilter
while specifying "&(objectClass=posixGroup)(memberUid={})" as
the Filter would search for groups by replacing the "{}" placeholder
in the Filter with the value of the user's "uid" attribute.
Optional. When not specified, the default will act as if "dn"
were specified. For example, leaving UserAttributeForFilter
unspecified while specifying "&(objectClass=groupOfNames)(member={})"
as the Filter would search for groups by replacing the "{}"
placeholder(s) with the dn (distinguished name) of the user.
type: string
type: object
host:
description: 'Host is the hostname of this Active Directory identity
provider, i.e., where to connect. For example: ldap.example.com:636.'
minLength: 1
type: string
tls:
description: TLS contains the connection settings for how to establish
the connection to the Host.
properties:
certificateAuthorityData:
description: X.509 Certificate Authority (base64-encoded PEM bundle).
If omitted, a default set of system roots will be trusted.
type: string
type: object
userSearch:
description: UserSearch contains the configuration for searching for
a user by name in Active Directory.
properties:
attributes:
description: Attributes specifies how the user's information should
be read from the ActiveDirectory entry which was found as the
result of the user search.
properties:
uid:
description: UID specifies the name of the attribute in the
ActiveDirectory entry whose value shall be used to
uniquely identify the user within this ActiveDirectory provider
after a successful authentication. Optional, when empty
this defaults to "objectGUID".
type: string
username:
description: Username specifies the name of the attribute
in Active Directory entry whose value shall become the username
of the user after a successful authentication. Optional,
when empty this defaults to "userPrincipalName".
type: string
type: object
base:
description: Base is the dn (distinguished name) that should be
used as the search base when searching for users. E.g. "ou=users,dc=example,dc=com".
Optional, when not specified it will be based on the result
of a query for the defaultNamingContext (see https://docs.microsoft.com/en-us/windows/win32/adschema/rootdse).
The default behavior searches your entire domain for users.
It may make sense to specify a subtree as a search base if you
wish to exclude some users or to make searches faster.
type: string
filter:
description: Filter is the search filter which should be applied
when searching for users. The pattern "{}" must occur in the
filter at least once and will be dynamically replaced by the
username for which the search is being run. E.g. "mail={}" or
"&(objectClass=person)(uid={})". For more information about
LDAP filters, see https://ldap.com/ldap-filters. Note that the
dn (distinguished name) is not an attribute of an entry, so
"dn={}" cannot be used. Optional. When not specified, the default
will be '(&(objectClass=person)(!(objectClass=computer))(!(showInAdvancedViewOnly=TRUE))(|(sAMAccountName={})(mail={})(userPrincipalName={})(sAMAccountType=805306368))'
This means that the user is a person, is not a computer, the
sAMAccountType is for a normal user account, and is not shown
in advanced view only (which would likely mean it's a system
created service account with advanced permissions). Also, either
the sAMAccountName, the userPrincipalName, or the mail attribute
matches the input username.
type: string
type: object
required:
- host
type: object
status:
description: Status of the identity provider.
properties:
conditions:
description: Represents the observations of an identity provider's
current state.
items:
description: Condition status of a resource (mirrored from the metav1.Condition
type added in Kubernetes 1.19). In a future API version we can
switch to using the upstream type. See https://github.com/kubernetes/apimachinery/blob/v0.19.0/pkg/apis/meta/v1/types.go#L1353-L1413.
properties:
lastTransitionTime:
description: lastTransitionTime is the last time the condition
transitioned from one status to another. This should be when
the underlying condition changed. If that is not known, then
using the time when the API field changed is acceptable.
format: date-time
type: string
message:
description: message is a human readable message indicating
details about the transition. This may be an empty string.
maxLength: 32768
type: string
observedGeneration:
description: observedGeneration represents the .metadata.generation
that the condition was set based upon. For instance, if .metadata.generation
is currently 12, but the .status.conditions[x].observedGeneration
is 9, the condition is out of date with respect to the current
state of the instance.
format: int64
minimum: 0
type: integer
reason:
description: reason contains a programmatic identifier indicating
the reason for the condition's last transition. Producers
of specific condition types may define expected values and
meanings for this field, and whether the values are considered
a guaranteed API. The value should be a CamelCase string.
This field may not be empty.
maxLength: 1024
minLength: 1
pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
type: string
status:
description: status of the condition, one of True, False, Unknown.
enum:
- "True"
- "False"
- Unknown
type: string
type:
description: type of condition in CamelCase or in foo.example.com/CamelCase.
--- Many .condition.type values are consistent across resources
like Available, but because arbitrary conditions can be useful
(see .node.status.conditions), the ability to deconflict is
important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)
maxLength: 316
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
type: string
required:
- lastTransitionTime
- message
- reason
- status
- type
type: object
type: array
x-kubernetes-list-map-keys:
- type
x-kubernetes-list-type: map
phase:
default: Pending
description: Phase summarizes the overall status of the ActiveDirectoryIdentityProvider.
enum:
- Pending
- Ready
- Error
type: string
type: object
required:
- spec
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
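Pulling the fields above together, a minimal ActiveDirectoryIdentityProvider could look like the sketch below; the resource name, host, and Secret name are hypothetical, and the referenced Secret must be a namespace-local kubernetes.io/basic-auth Secret as described under spec.bind.secretName:
apiVersion: idp.supervisor.pinniped.dev/v1alpha1
kind: ActiveDirectoryIdentityProvider
metadata:
  name: my-active-directory         # hypothetical name
  namespace: pinniped-supervisor
spec:
  host: ad.example.com:636          # the only required spec field
  bind:
    secretName: my-ad-bind-account  # kubernetes.io/basic-auth Secret holding the bind dn and password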


@ -0,0 +1,316 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
creationTimestamp: null
name: ldapidentityproviders.idp.supervisor.pinniped.dev
spec:
group: idp.supervisor.pinniped.dev
names:
categories:
- pinniped
- pinniped-idp
- pinniped-idps
kind: LDAPIdentityProvider
listKind: LDAPIdentityProviderList
plural: ldapidentityproviders
singular: ldapidentityprovider
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.host
name: Host
type: string
- jsonPath: .status.phase
name: Status
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: LDAPIdentityProvider describes the configuration of an upstream
Lightweight Directory Access Protocol (LDAP) identity provider.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: Spec for configuring the identity provider.
properties:
bind:
description: Bind contains the configuration for how to provide access
credentials during an initial bind to the LDAP server to be allowed
to perform searches and binds to validate a user's credentials during
a user's authentication attempt.
properties:
secretName:
description: SecretName contains the name of a namespace-local
Secret object that provides the username and password for an
LDAP bind user. This account will be used to perform LDAP searches.
The Secret should be of type "kubernetes.io/basic-auth" which
includes "username" and "password" keys. The username value
should be the full dn (distinguished name) of your bind account,
e.g. "cn=bind-account,ou=users,dc=example,dc=com". The password
must be non-empty.
minLength: 1
type: string
required:
- secretName
type: object
groupSearch:
description: GroupSearch contains the configuration for searching
for a user's group membership in the LDAP provider.
properties:
attributes:
description: Attributes specifies how the group's information
should be read from each LDAP entry which was found as the result
of the group search.
properties:
groupName:
description: GroupName specifies the name of the attribute
in the LDAP entries whose value shall become a group name
in the user's list of groups after a successful authentication.
The value of this field is case-sensitive and must match
the case of the attribute name returned by the LDAP server
in the user's entry. E.g. "cn" for common name. Distinguished
names can be used by specifying lower-case "dn". Optional.
When not specified, the default will act as if the GroupName
were specified as "dn" (distinguished name).
type: string
type: object
base:
description: Base is the dn (distinguished name) that should be
used as the search base when searching for groups. E.g. "ou=groups,dc=example,dc=com".
When not specified, no group search will be performed and authenticated
users will not belong to any groups from the LDAP provider.
Also, when not specified, the values of Filter, UserAttributeForFilter,
Attributes, and SkipGroupRefresh are ignored.
type: string
filter:
description: Filter is the LDAP search filter which should be
applied when searching for groups for a user. The pattern "{}"
must occur in the filter at least once and will be dynamically
replaced by the value of an attribute of the user entry found
as a result of the user search. Which attribute's value is used
to replace the placeholder(s) depends on the value of UserAttributeForFilter.
For more information about LDAP filters, see https://ldap.com/ldap-filters.
Note that the dn (distinguished name) is not an attribute of
an entry, so "dn={}" cannot be used. Optional. When not specified,
the default will act as if the Filter were specified as "member={}".
type: string
skipGroupRefresh:
description: "The user's group membership is refreshed as they
interact with the supervisor to obtain new credentials (as their
old credentials expire). This allows group membership changes
to be quickly reflected into Kubernetes clusters. Since group
membership is often used to bind authorization policies, it
is important to keep the groups observed in Kubernetes clusters
in-sync with the identity provider. \n In some environments,
frequent group membership queries may result in a significant
performance impact on the identity provider and/or the supervisor.
The best approach to handle performance impacts is to tweak
the group query to be more performant, for example by disabling
nested group search or by using a more targeted group search
base. \n If the group search query cannot be made performant
and you are willing to have group memberships remain static
for approximately a day, then set skipGroupRefresh to true.
\ This is an insecure configuration as authorization policies
that are bound to group membership will not notice if a user
has been removed from a particular group until their next login.
\n This is an experimental feature that may be removed or significantly
altered in the future. Consumers of this configuration should
carefully read all release notes before upgrading to ensure
that the meaning of this field has not changed."
type: boolean
userAttributeForFilter:
description: UserAttributeForFilter specifies which attribute's
value from the user entry found as a result of the user search
will be used to replace the "{}" placeholder(s) in the group
search Filter. For example, specifying "uid" as the UserAttributeForFilter
while specifying "&(objectClass=posixGroup)(memberUid={})" as
the Filter would search for groups by replacing the "{}" placeholder
in the Filter with the value of the user's "uid" attribute.
Optional. When not specified, the default will act as if "dn"
were specified. For example, leaving UserAttributeForFilter
unspecified while specifying "&(objectClass=groupOfNames)(member={})"
as the Filter would search for groups by replacing the "{}"
placeholder(s) with the dn (distinguished name) of the user.
type: string
type: object
host:
description: 'Host is the hostname of this LDAP identity provider,
i.e., where to connect. For example: ldap.example.com:636.'
minLength: 1
type: string
tls:
description: TLS contains the connection settings for how to establish
the connection to the Host.
properties:
certificateAuthorityData:
description: X.509 Certificate Authority (base64-encoded PEM bundle).
If omitted, a default set of system roots will be trusted.
type: string
type: object
userSearch:
description: UserSearch contains the configuration for searching for
a user by name in the LDAP provider.
properties:
attributes:
description: Attributes specifies how the user's information should
be read from the LDAP entry which was found as the result of
the user search.
properties:
uid:
description: UID specifies the name of the attribute in the
LDAP entry whose value shall be used to uniquely identify
the user within this LDAP provider after a successful authentication.
E.g. "uidNumber" or "objectGUID". The value of this field
is case-sensitive and must match the case of the attribute
name returned by the LDAP server in the user's entry. Distinguished
names can be used by specifying lower-case "dn".
minLength: 1
type: string
username:
description: Username specifies the name of the attribute
in the LDAP entry whose value shall become the username
of the user after a successful authentication. This would
typically be the same attribute name used in the user search
filter, although it can be different. E.g. "mail" or "uid"
or "userPrincipalName". The value of this field is case-sensitive
and must match the case of the attribute name returned by
the LDAP server in the user's entry. Distinguished names
can be used by specifying lower-case "dn". When this field
is set to "dn" then the LDAPIdentityProviderUserSearch's
Filter field cannot be blank, since the default value of
"dn={}" would not work.
minLength: 1
type: string
type: object
base:
description: Base is the dn (distinguished name) that should be
used as the search base when searching for users. E.g. "ou=users,dc=example,dc=com".
minLength: 1
type: string
filter:
description: Filter is the LDAP search filter which should be
applied when searching for users. The pattern "{}" must occur
in the filter at least once and will be dynamically replaced
by the username for which the search is being run. E.g. "mail={}"
or "&(objectClass=person)(uid={})". For more information about
LDAP filters, see https://ldap.com/ldap-filters. Note that the
dn (distinguished name) is not an attribute of an entry, so
"dn={}" cannot be used. Optional. When not specified, the default
will act as if the Filter were specified as the value from Attributes.Username
appended by "={}". When the Attributes.Username is set to "dn"
then the Filter must be explicitly specified, since the default
value of "dn={}" would not work.
type: string
type: object
required:
- host
type: object
status:
description: Status of the identity provider.
properties:
conditions:
description: Represents the observations of an identity provider's
current state.
items:
description: Condition status of a resource (mirrored from the metav1.Condition
type added in Kubernetes 1.19). In a future API version we can
switch to using the upstream type. See https://github.com/kubernetes/apimachinery/blob/v0.19.0/pkg/apis/meta/v1/types.go#L1353-L1413.
properties:
lastTransitionTime:
description: lastTransitionTime is the last time the condition
transitioned from one status to another. This should be when
the underlying condition changed. If that is not known, then
using the time when the API field changed is acceptable.
format: date-time
type: string
message:
description: message is a human readable message indicating
details about the transition. This may be an empty string.
maxLength: 32768
type: string
observedGeneration:
description: observedGeneration represents the .metadata.generation
that the condition was set based upon. For instance, if .metadata.generation
is currently 12, but the .status.conditions[x].observedGeneration
is 9, the condition is out of date with respect to the current
state of the instance.
format: int64
minimum: 0
type: integer
reason:
description: reason contains a programmatic identifier indicating
the reason for the condition's last transition. Producers
of specific condition types may define expected values and
meanings for this field, and whether the values are considered
a guaranteed API. The value should be a CamelCase string.
This field may not be empty.
maxLength: 1024
minLength: 1
pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
type: string
status:
description: status of the condition, one of True, False, Unknown.
enum:
- "True"
- "False"
- Unknown
type: string
type:
description: type of condition in CamelCase or in foo.example.com/CamelCase.
--- Many .condition.type values are consistent across resources
like Available, but because arbitrary conditions can be useful
(see .node.status.conditions), the ability to deconflict is
important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)
maxLength: 316
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
type: string
required:
- lastTransitionTime
- message
- reason
- status
- type
type: object
type: array
x-kubernetes-list-map-keys:
- type
x-kubernetes-list-type: map
phase:
default: Pending
description: Phase summarizes the overall status of the LDAPIdentityProvider.
enum:
- Pending
- Ready
- Error
type: string
type: object
required:
- spec
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
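A corresponding minimal LDAPIdentityProvider sketch, reusing the example attribute names from the descriptions above (all names and hosts are hypothetical); note that groupSearch.base must be set for users to receive any groups from this provider:
apiVersion: idp.supervisor.pinniped.dev/v1alpha1
kind: LDAPIdentityProvider
metadata:
  name: my-ldap
  namespace: pinniped-supervisor
spec:
  host: ldap.example.com:636
  bind:
    secretName: my-ldap-bind-account
  userSearch:
    base: ou=users,dc=example,dc=com
    attributes:
      username: mail       # becomes the user's username after authentication
      uid: uidNumber       # uniquely identifies the user within this provider
  groupSearch:
    base: ou=groups,dc=example,dc=com   # omit to skip group search entirely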


@ -0,0 +1,346 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
creationTimestamp: null
name: oidcidentityproviders.idp.supervisor.pinniped.dev
spec:
group: idp.supervisor.pinniped.dev
names:
categories:
- pinniped
- pinniped-idp
- pinniped-idps
kind: OIDCIdentityProvider
listKind: OIDCIdentityProviderList
plural: oidcidentityproviders
singular: oidcidentityprovider
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.issuer
name: Issuer
type: string
- jsonPath: .status.phase
name: Status
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: OIDCIdentityProvider describes the configuration of an upstream
OpenID Connect identity provider.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: Spec for configuring the identity provider.
properties:
authorizationConfig:
description: AuthorizationConfig holds information about how to form
the OAuth2 authorization request parameters to be used with this
OIDC identity provider.
properties:
additionalAuthorizeParameters:
description: additionalAuthorizeParameters are extra query parameters
that should be included in the authorize request to your OIDC
provider in the authorization request during an OIDC Authorization
Code Flow. By default, no extra parameters are sent. The standard
parameters that will be sent are "response_type", "scope", "client_id",
"state", "nonce", "code_challenge", "code_challenge_method",
and "redirect_uri". These parameters cannot be included in this
setting. Additionally, the "hd" parameter cannot be included
in this setting at this time. The "hd" parameter is used by
Google's OIDC provider to provide a hint as to which "hosted
domain" the user should use during login. However, Pinniped
does not yet support validating the hosted domain in the resulting
ID token, so it is not yet safe to use this feature of Google's
OIDC provider with Pinniped. This setting does not influence
the parameters sent to the token endpoint in the Resource Owner
Password Credentials Grant. The Pinniped Supervisor requires
that your OIDC provider returns refresh tokens to the Supervisor
from the authorization flows. Some OIDC providers may require
a certain value for the "prompt" parameter in order to properly
request refresh tokens. See the documentation of your OIDC provider's
authorization endpoint for its requirements for what to include
in the request in order to receive a refresh token in the response,
if anything. If your provider requires the prompt parameter
to request a refresh token, then include it here. Also note
that most providers also require a certain scope to be requested
in order to receive refresh tokens. See the additionalScopes
setting for more information about using scopes to request refresh
tokens.
items:
description: Parameter is a key/value pair which represents
a parameter in an HTTP request.
properties:
name:
description: The name of the parameter. Required.
minLength: 1
type: string
value:
description: The value of the parameter.
type: string
required:
- name
type: object
type: array
x-kubernetes-list-map-keys:
- name
x-kubernetes-list-type: map
additionalScopes:
description: 'additionalScopes are the additional scopes that
will be requested from your OIDC provider in the authorization
request during an OIDC Authorization Code Flow and in the token
request during a Resource Owner Password Credentials Grant.
Note that the "openid" scope will always be requested regardless
of the value in this setting, since it is always required according
to the OIDC spec. By default, when this field is not set, the
Supervisor will request the following scopes: "openid", "offline_access",
"email", and "profile". See https://openid.net/specs/openid-connect-core-1_0.html#ScopeClaims
for a description of the "profile" and "email" scopes. See https://openid.net/specs/openid-connect-core-1_0.html#OfflineAccess
for a description of the "offline_access" scope. This default
value may change in future versions of Pinniped as the standard
evolves, or as common patterns used by providers who implement
the standard in the ecosystem evolve. By setting this list to
anything other than an empty list, you are overriding the default
value, so you may wish to include some of "offline_access",
"email", and "profile" in your override list. If you do not
want any of these scopes to be requested, you may set this list
to contain only "openid". Some OIDC providers may also require
a scope to get access to the user''s group membership, in which
case you may wish to include it in this list. Sometimes the
scope to request the user''s group membership is called "groups",
but unfortunately this is not specified in the OIDC standard.
Generally speaking, you should include any scopes required to
cause the appropriate claims to be returned by your OIDC
provider in the ID token or userinfo endpoint results for those
claims which you would like to use in the oidcClaims settings
to determine the usernames and group memberships of your Kubernetes
users. See your OIDC provider''s documentation for more information
about what scopes are available to request claims. Additionally,
the Pinniped Supervisor requires that your OIDC provider returns
refresh tokens to the Supervisor from these authorization flows.
For most OIDC providers, the scope required to receive refresh
tokens will be "offline_access". See the documentation of your
OIDC provider''s authorization and token endpoints for its requirements
for what to include in the request in order to receive a refresh
token in the response, if anything. Note that it may be safe
to send "offline_access" even to providers which do not require
it, since the provider may ignore scopes that it does not understand
or require (see https://datatracker.ietf.org/doc/html/rfc6749#section-3.3).
In the unusual case that you must avoid sending the "offline_access"
scope, then you must override the default value of this setting.
This is required if your OIDC provider will reject the request
when it includes "offline_access" (e.g. GitLab''s OIDC provider).'
items:
type: string
type: array
allowPasswordGrant:
description: allowPasswordGrant, when true, will allow the use
of OAuth 2.0's Resource Owner Password Credentials Grant (see
https://datatracker.ietf.org/doc/html/rfc6749#section-4.3) to
authenticate to the OIDC provider using a username and password
without a web browser, in addition to the usual browser-based
OIDC Authorization Code Flow. The Resource Owner Password Credentials
Grant is not officially part of the OIDC specification, so it
may not be supported by your OIDC provider. If your OIDC provider
supports returning ID tokens from a Resource Owner Password
Credentials Grant token request, then you can choose to set
this field to true. This will allow end users to choose to present
their username and password to the kubectl CLI (using the Pinniped
plugin) to authenticate to the cluster, without using a web
browser to log in as is customary in OIDC Authorization Code
Flow. This may be convenient for users, especially for identities
from your OIDC provider which are not intended to represent
a human actor, such as service accounts performing actions in
a CI/CD environment. Even if your OIDC provider supports it,
you may wish to disable this behavior by setting this field
to false when you prefer to only allow users of this OIDCIdentityProvider
to log in via the browser-based OIDC Authorization Code Flow.
Using the Resource Owner Password Credentials Grant means that
the Pinniped CLI and Pinniped Supervisor will directly handle
your end users' passwords (similar to LDAPIdentityProvider),
and you will not be able to require multi-factor authentication
or use the other web-based login features of your OIDC provider
during Resource Owner Password Credentials Grant logins. allowPasswordGrant
defaults to false.
type: boolean
type: object
claims:
description: Claims provides the names of token claims that will be
used when inspecting an identity from this OIDC identity provider.
properties:
additionalClaimMappings:
additionalProperties:
type: string
description: AdditionalClaimMappings allows for additional arbitrary
upstream claim values to be mapped into the "additionalClaims"
claim of the ID tokens generated by the Supervisor. This should
be specified as a map of new claim names as the keys, and upstream
claim names as the values. These new claim names will be nested
under the top-level "additionalClaims" claim in ID tokens generated
by the Supervisor when this OIDCIdentityProvider was used for
user authentication. These claims will be made available to
all clients. This feature is not required to use the Supervisor
to provide authentication for Kubernetes clusters, but can be
used when using the Supervisor for other authentication purposes.
When this map is empty or the upstream claims are not available,
the "additionalClaims" claim will be excluded from the ID tokens
generated by the Supervisor.
type: object
groups:
description: Groups provides the name of the ID token claim or
userinfo endpoint response claim that will be used to ascertain
the groups to which an identity belongs. By default, the identities
will not include any group memberships when this setting is
not configured.
type: string
username:
description: Username provides the name of the ID token claim
or userinfo endpoint response claim that will be used to ascertain
an identity's username. When not set, the username will be an
automatically constructed unique string which will include the
issuer URL of your OIDC provider along with the value of the
"sub" (subject) claim from the ID token.
type: string
type: object
client:
description: OIDCClient contains OIDC client information to be used
with this OIDC identity provider.
properties:
secretName:
description: SecretName contains the name of a namespace-local
Secret object that provides the clientID and clientSecret for
an OIDC client. If only the SecretName is specified in an OIDCClient
struct, then it is expected that the Secret is of type "secrets.pinniped.dev/oidc-client"
with keys "clientID" and "clientSecret".
type: string
required:
- secretName
type: object
issuer:
description: Issuer is the issuer URL of this OIDC identity provider,
i.e., where to fetch /.well-known/openid-configuration.
minLength: 1
pattern: ^https://
type: string
tls:
description: TLS configuration for discovery/JWKS requests to the
issuer.
properties:
certificateAuthorityData:
description: X.509 Certificate Authority (base64-encoded PEM bundle).
If omitted, a default set of system roots will be trusted.
type: string
type: object
required:
- client
- issuer
type: object
status:
description: Status of the identity provider.
properties:
conditions:
description: Represents the observations of an identity provider's
current state.
items:
description: Condition status of a resource (mirrored from the metav1.Condition
type added in Kubernetes 1.19). In a future API version we can
switch to using the upstream type. See https://github.com/kubernetes/apimachinery/blob/v0.19.0/pkg/apis/meta/v1/types.go#L1353-L1413.
properties:
lastTransitionTime:
description: lastTransitionTime is the last time the condition
transitioned from one status to another. This should be when
the underlying condition changed. If that is not known, then
using the time when the API field changed is acceptable.
format: date-time
type: string
message:
description: message is a human readable message indicating
details about the transition. This may be an empty string.
maxLength: 32768
type: string
observedGeneration:
description: observedGeneration represents the .metadata.generation
that the condition was set based upon. For instance, if .metadata.generation
is currently 12, but the .status.conditions[x].observedGeneration
is 9, the condition is out of date with respect to the current
state of the instance.
format: int64
minimum: 0
type: integer
reason:
description: reason contains a programmatic identifier indicating
the reason for the condition's last transition. Producers
of specific condition types may define expected values and
meanings for this field, and whether the values are considered
a guaranteed API. The value should be a CamelCase string.
This field may not be empty.
maxLength: 1024
minLength: 1
pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
type: string
status:
description: status of the condition, one of True, False, Unknown.
enum:
- "True"
- "False"
- Unknown
type: string
type:
description: type of condition in CamelCase or in foo.example.com/CamelCase.
--- Many .condition.type values are consistent across resources
like Available, but because arbitrary conditions can be useful
(see .node.status.conditions), the ability to deconflict is
important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)
maxLength: 316
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
type: string
required:
- lastTransitionTime
- message
- reason
- status
- type
type: object
type: array
x-kubernetes-list-map-keys:
- type
x-kubernetes-list-type: map
phase:
default: Pending
description: Phase summarizes the overall status of the OIDCIdentityProvider.
enum:
- Pending
- Ready
- Error
type: string
type: object
required:
- spec
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
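A minimal OIDCIdentityProvider sketch based on the schema above; the issuer, claim names, and Secret name are hypothetical, and the Secret must be of type secrets.pinniped.dev/oidc-client with clientID and clientSecret keys as described under spec.client.secretName:
apiVersion: idp.supervisor.pinniped.dev/v1alpha1
kind: OIDCIdentityProvider
metadata:
  name: my-oidc-provider
  namespace: pinniped-supervisor
spec:
  issuer: https://login.example.com/my-realm   # must match the ^https:// pattern
  client:
    secretName: my-oidc-client-credentials
  authorizationConfig:
    additionalScopes: [openid, offline_access, email, profile, groups]  # overrides the default scope list
  claims:
    username: email
    groups: groups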


@ -0,0 +1,152 @@
#! Copyright 2020-2022 the Pinniped contributors. All Rights Reserved.
#! SPDX-License-Identifier: Apache-2.0
#@ load("@ytt:data", "data")
#@ load("helpers.lib.yaml", "labels", "namespace", "defaultResourceName", "defaultResourceNameWithSuffix", "pinnipedDevAPIGroupWithPrefix")
#! Give permission to various objects within the app's own namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: #@ defaultResourceName()
namespace: #@ namespace()
labels: #@ labels()
rules:
- apiGroups: [""]
resources: [secrets]
verbs: [create, get, list, patch, update, watch, delete]
- apiGroups:
- #@ pinnipedDevAPIGroupWithPrefix("config.supervisor")
resources: [federationdomains]
verbs: [get, list, watch]
- apiGroups:
- #@ pinnipedDevAPIGroupWithPrefix("config.supervisor")
resources: [federationdomains/status]
verbs: [get, patch, update]
- apiGroups:
- #@ pinnipedDevAPIGroupWithPrefix("config.supervisor")
resources: [oidcclients]
verbs: [get, list, watch]
- apiGroups:
- #@ pinnipedDevAPIGroupWithPrefix("config.supervisor")
resources: [oidcclients/status]
verbs: [get, patch, update]
- apiGroups:
- #@ pinnipedDevAPIGroupWithPrefix("idp.supervisor")
resources: [oidcidentityproviders]
verbs: [get, list, watch]
- apiGroups:
- #@ pinnipedDevAPIGroupWithPrefix("idp.supervisor")
resources: [oidcidentityproviders/status]
verbs: [get, patch, update]
- apiGroups:
- #@ pinnipedDevAPIGroupWithPrefix("idp.supervisor")
resources: [ldapidentityproviders]
verbs: [get, list, watch]
- apiGroups:
- #@ pinnipedDevAPIGroupWithPrefix("idp.supervisor")
resources: [ldapidentityproviders/status]
verbs: [get, patch, update]
- apiGroups:
- #@ pinnipedDevAPIGroupWithPrefix("idp.supervisor")
resources: [activedirectoryidentityproviders]
verbs: [get, list, watch]
- apiGroups:
- #@ pinnipedDevAPIGroupWithPrefix("idp.supervisor")
resources: [activedirectoryidentityproviders/status]
verbs: [get, patch, update]
#! We want to be able to read pods/replicasets/deployments so that we can learn which Deployment we belong to, in order to set it
#! as an owner reference.
- apiGroups: [""]
resources: [pods]
verbs: [get]
- apiGroups: [apps]
resources: [replicasets,deployments]
verbs: [get]
- apiGroups: [ coordination.k8s.io ]
resources: [ leases ]
verbs: [ create, get, update ]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: #@ defaultResourceName()
namespace: #@ namespace()
labels: #@ labels()
subjects:
- kind: ServiceAccount
name: #@ defaultResourceName()
namespace: #@ namespace()
roleRef:
kind: Role
name: #@ defaultResourceName()
apiGroup: rbac.authorization.k8s.io
#! Give permissions for a special configmap of CA bundles that is needed by aggregated api servers
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: #@ defaultResourceNameWithSuffix("extension-apiserver-authentication-reader")
namespace: kube-system
labels: #@ labels()
subjects:
- kind: ServiceAccount
name: #@ defaultResourceName()
namespace: #@ namespace()
roleRef:
kind: Role
name: extension-apiserver-authentication-reader
apiGroup: rbac.authorization.k8s.io
#! Give permissions for subjectaccessreviews, tokenreview that is needed by aggregated api servers
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: #@ defaultResourceName()
labels: #@ labels()
subjects:
- kind: ServiceAccount
name: #@ defaultResourceName()
namespace: #@ namespace()
roleRef:
kind: ClusterRole
name: system:auth-delegator
apiGroup: rbac.authorization.k8s.io
#! Give permission to various cluster-scoped objects
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: #@ defaultResourceNameWithSuffix("aggregated-api-server")
labels: #@ labels()
rules:
- apiGroups: [ "" ]
resources: [ namespaces ]
verbs: [ get, list, watch ]
- apiGroups: [ apiregistration.k8s.io ]
resources: [ apiservices ]
verbs: [ get, list, patch, update, watch ]
- apiGroups: [ admissionregistration.k8s.io ]
resources: [ validatingwebhookconfigurations, mutatingwebhookconfigurations ]
verbs: [ get, list, watch ]
- apiGroups: [ flowcontrol.apiserver.k8s.io ]
resources: [ flowschemas, prioritylevelconfigurations ]
verbs: [ get, list, watch ]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: #@ defaultResourceNameWithSuffix("aggregated-api-server")
labels: #@ labels()
subjects:
- kind: ServiceAccount
name: #@ defaultResourceName()
namespace: #@ namespace()
roleRef:
kind: ClusterRole
name: #@ defaultResourceNameWithSuffix("aggregated-api-server")
apiGroup: rbac.authorization.k8s.io
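One way to sanity-check the RBAC above after deployment is to impersonate the ServiceAccount with kubectl auth can-i; a sketch assuming the default app_name and namespace of pinniped-supervisor and the default pinniped.dev API group suffix:
# Each command should print "yes" if the Role/RoleBinding above were applied correctly.
kubectl auth can-i list federationdomains.config.supervisor.pinniped.dev \
  --as=system:serviceaccount:pinniped-supervisor:pinniped-supervisor -n pinniped-supervisor
kubectl auth can-i create leases.coordination.k8s.io \
  --as=system:serviceaccount:pinniped-supervisor:pinniped-supervisor -n pinniped-supervisor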


@ -0,0 +1,115 @@
#! Copyright 2020-2022 the Pinniped contributors. All Rights Reserved.
#! SPDX-License-Identifier: Apache-2.0
#@ load("@ytt:data", "data")
#@ load("@ytt:assert", "assert")
#@ load("helpers.lib.yaml", "labels", "deploymentPodLabel", "namespace", "defaultResourceName", "defaultResourceNameWithSuffix")
#@ if hasattr(data.values, "service_http_nodeport_port"):
#@ assert.fail('value "service_http_nodeport_port" has been renamed to "deprecated_service_http_nodeport_port" and will be removed in a future release')
#@ end
#@ if hasattr(data.values, "service_http_nodeport_nodeport"):
#@ assert.fail('value "service_http_nodeport_nodeport" has been renamed to "deprecated_service_http_nodeport_nodeport" and will be removed in a future release')
#@ end
#@ if hasattr(data.values, "service_http_loadbalancer_port"):
#@ assert.fail('value "service_http_loadbalancer_port" has been renamed to "deprecated_service_http_loadbalancer_port" and will be removed in a future release')
#@ end
#@ if hasattr(data.values, "service_http_clusterip_port"):
#@ assert.fail('value "service_http_clusterip_port" has been renamed to "deprecated_service_http_clusterip_port" and will be removed in a future release')
#@ end
#@ if data.values.deprecated_service_http_nodeport_port or data.values.service_https_nodeport_port:
---
apiVersion: v1
kind: Service
metadata:
name: #@ defaultResourceNameWithSuffix("nodeport")
namespace: #@ namespace()
labels: #@ labels()
#! prevent kapp from altering the selector of our services to match kubectl behavior
annotations:
kapp.k14s.io/disable-default-label-scoping-rules: ""
spec:
type: NodePort
selector: #@ deploymentPodLabel()
ports:
#@ if data.values.deprecated_service_http_nodeport_port:
- name: http
protocol: TCP
port: #@ data.values.deprecated_service_http_nodeport_port
targetPort: 8080
#@ if data.values.deprecated_service_http_nodeport_nodeport:
nodePort: #@ data.values.deprecated_service_http_nodeport_nodeport
#@ end
#@ end
#@ if data.values.service_https_nodeport_port:
- name: https
protocol: TCP
port: #@ data.values.service_https_nodeport_port
targetPort: 8443
#@ if data.values.service_https_nodeport_nodeport:
nodePort: #@ data.values.service_https_nodeport_nodeport
#@ end
#@ end
#@ end
#@ if data.values.deprecated_service_http_clusterip_port or data.values.service_https_clusterip_port:
---
apiVersion: v1
kind: Service
metadata:
name: #@ defaultResourceNameWithSuffix("clusterip")
namespace: #@ namespace()
labels: #@ labels()
#! prevent kapp from altering the selector of our services to match kubectl behavior
annotations:
kapp.k14s.io/disable-default-label-scoping-rules: ""
spec:
type: ClusterIP
selector: #@ deploymentPodLabel()
ports:
#@ if data.values.deprecated_service_http_clusterip_port:
- name: http
protocol: TCP
port: #@ data.values.deprecated_service_http_clusterip_port
targetPort: 8080
#@ end
#@ if data.values.service_https_clusterip_port:
- name: https
protocol: TCP
port: #@ data.values.service_https_clusterip_port
targetPort: 8443
#@ end
#@ end
#@ if data.values.deprecated_service_http_loadbalancer_port or data.values.service_https_loadbalancer_port:
---
apiVersion: v1
kind: Service
metadata:
name: #@ defaultResourceNameWithSuffix("loadbalancer")
namespace: #@ namespace()
labels: #@ labels()
#! prevent kapp from altering the selector of our services to match kubectl behavior
annotations:
kapp.k14s.io/disable-default-label-scoping-rules: ""
spec:
type: LoadBalancer
selector: #@ deploymentPodLabel()
#@ if data.values.service_loadbalancer_ip:
loadBalancerIP: #@ data.values.service_loadbalancer_ip
#@ end
ports:
#@ if data.values.deprecated_service_http_loadbalancer_port:
- name: http
protocol: TCP
port: #@ data.values.deprecated_service_http_loadbalancer_port
targetPort: 8080
#@ end
#@ if data.values.service_https_loadbalancer_port:
- name: https
protocol: TCP
port: #@ data.values.service_https_loadbalancer_port
targetPort: 8443
#@ end
#@ end
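#! Illustrative data values (example port and IP, not defaults): rendering with
#!   service_https_loadbalancer_port: 443
#!   service_loadbalancer_ip: 203.0.113.10
#! produces only the LoadBalancer Service above, forwarding port 443 to targetPort 8443 on the pods
#! matched by the deployment pod label selector.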

@ -0,0 +1,170 @@
#! Copyright 2020-2022 the Pinniped contributors. All Rights Reserved.
#! SPDX-License-Identifier: Apache-2.0
#@data/values-schema
---
#@schema/desc "Namespace of pinniped-supervisor"
app_name: pinniped-supervisor
#@schema/desc "Creates a new namespace statically in yaml with the given name and installs the app into that namespace."
namespace: pinniped-supervisor
#! If specified, assumes that a namespace of the given name already exists and installs the app into that namespace.
#! If both `namespace` and `into_namespace` are specified, then only `into_namespace` is used.
#@schema/desc "Overrides namespace. This is actually confusingly worded. TODO: CAN WE REWRITE THIS ONE???"
#@schema/nullable
into_namespace: my-preexisting-namespace
#! All resources created statically by yaml at install-time and all resources created dynamically
#! by controllers at runtime will be labelled with `app: $app_name` and also with the labels
#! specified here. The value of `custom_labels` must be a map of string keys to string values.
#! The app can be uninstalled either by:
#! 1. Deleting the static install-time yaml resources including the static namespace, which will cascade and also delete
#! resources that were dynamically created by controllers at runtime
#! 2. Or, deleting all resources by label, which does not assume that there was a static install-time yaml namespace.
#@schema/desc "All resources created statically by yaml at install-time and all resources created dynamically by controllers at runtime will be labelled with `app: $app_name` and also with the labels specified here."
custom_labels: {} #! e.g. {myCustomLabelName: myCustomLabelValue, otherCustomLabelName: otherCustomLabelValue}
#@schema/desc "Specify how many replicas of the Pinniped server to run."
replicas: 2
#@schema/desc "Specify either an image_digest or an image_tag. If both are given, only image_digest will be used."
image_repo: projects.registry.vmware.com/pinniped/pinniped-server
#@schema/desc "Specify either an image_digest or an image_tag. If both are given, only image_digest will be used."
#@schema/nullable
image_digest: sha256:f3c4fdfd3ef865d4b97a1fd295d94acc3f0c654c46b6f27ffad5cf80216903c8
#@schema/desc "Specify either an image_digest or an image_tag. If both are given, only image_digest will be used."
image_tag: latest
#! TODO: preserve these package_* values even if this file is revised.
#@schema/nullable
package_image_repo: docker.io/benjaminapetersen/some-pinniped-supervisor-package
#@schema/nullable
package_image_digest: sha256:123456
#@schema/nullable
package_image_tag: latest #! TODO: since the deployment already accepts an image tag, it is unclear whether versioned packages need this value.
#! package_version probably should not be nullable.
#@schema/nullable
package_version: 1.2.3
#! Specifies a secret to be used when pulling the above `image_repo` container image.
#! Can be used when the above image_repo is a private registry.
#! Typically the value would be the output of: kubectl create secret docker-registry x --docker-server=https://example.io --docker-username="USERNAME" --docker-password="PASSWORD" --dry-run=client -o json | jq -r '.data[".dockerconfigjson"]'
#! Optional.
#@schema/desc "Specifies a secret to be used when pulling the above `image_repo` container image. Can be used when the image_repo is a private registry."
#@schema/nullable
image_pull_dockerconfigjson: {"auths":{"https://registry.example.com":{"username":"USERNAME","password":"PASSWORD","auth":"BASE64_ENCODED_USERNAME_COLON_PASSWORD"}}}
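#! Illustrative data-values entry (placeholder credentials, matching the schema above): when supplied,
#! the value is a map of the same shape as the example default, e.g.
#!   image_pull_dockerconfigjson:
#!     auths:
#!       https://registry.example.com:
#!         username: USERNAME
#!         password: PASSWORD
#!         auth: BASE64_ENCODED_USERNAME_COLON_PASSWORD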
#! Specify how to expose the Supervisor app's HTTPS port as a Service.
#! Typically, you would set a value for only one of the following service types.
#! Setting any of these values means that a Service of that type will be created. They are all optional.
#! Note that all port numbers should be numbers (not strings), i.e. use ytt's `--data-value-yaml` instead of `--data-value`.
#! Several of these values have been deprecated and will be removed in a future release. Their names have been changed to
#! mark them as deprecated and to make it obvious upon upgrade to anyone who was using them that they have been deprecated.
#@schema/desc "will be removed in a future release; when specified, creates a NodePort Service with this `port` value, with port 8080 as its `targetPort`; e.g. 31234"
#@schema/nullable
deprecated_service_http_nodeport_port: 31234
#@schema/desc "will be removed in a future release; the `nodePort` value of the NodePort Service, optional when `deprecated_service_http_nodeport_port` is specified; e.g. 31234"
#@schema/nullable
deprecated_service_http_nodeport_nodeport: 31234
#@schema/desc "will be removed in a future release; when specified, creates a LoadBalancer Service with this `port` value, with port 8080 as its `targetPort`; e.g. 8443"
#@schema/nullable
deprecated_service_http_loadbalancer_port: 8443
#@schema/desc "will be removed in a future release; when specified, creates a ClusterIP Service with this `port` value, with port 8080 as its `targetPort`; e.g. 8443"
#@schema/nullable
deprecated_service_http_clusterip_port: 8443
#@schema/desc "when specified, creates a NodePort Service with this `port` value, with port 8443 as its `targetPort`; e.g. 31243"
#@schema/nullable
service_https_nodeport_port: 31243
#@schema/desc "the `nodePort` value of the NodePort Service, optional when `service_https_nodeport_port` is specified; e.g. 31243"
#@schema/nullable
service_https_nodeport_nodeport: 31243
#@schema/desc "when specified, creates a LoadBalancer Service with this `port` value, with port 8443 as its `targetPort`; e.g. 8443"
#@schema/nullable
service_https_loadbalancer_port: 8443
#@schema/desc "when specified, creates a ClusterIP Service with this `port` value, with port 8443 as its `targetPort`; e.g. 8443"
#@schema/nullable
service_https_clusterip_port: 8443
#! Optional.
#@schema/desc "The `loadBalancerIP` value of the LoadBalancer Service. Ignored unless service_https_loadbalancer_port is provided. e.g. 1.2.3.4"
#@schema/nullable
service_loadbalancer_ip: 1.2.3.4
#@schema/desc 'Specify the verbosity of logging: info ("nice to know" information), debug (developer information), trace (timing information), or all (kitchen sink). Do not use trace or all on production systems, as credentials may get logged.'
#@schema/nullable
log_level: info #! By default, when this value is left unset, only warnings and errors are printed. There is no way to suppress warning and error logs.
#@schema/desc "Specify the format of logging: json (for machine parsable logs) and text (for legacy klog formatted logs). By default, when this value is left unset, logs are formatted in json. This configuration is deprecated and will be removed in a future release at which point logs will always be formatted as json."
#@schema/nullable
deprecated_log_format: json
#@schema/desc "run_as_user specifies the user ID that will own the process, see the Dockerfile for the reasoning behind this choice"
run_as_user: 65532
#@schema/desc "run_as_group specifies the group ID that will own the process, see the Dockerfile for the reasoning behind this choice"
run_as_group: 65532
#@schema/desc "Specify the API group suffix for all Pinniped API groups. By default, this is set to pinniped.dev, so Pinniped API groups will look like foo.pinniped.dev, authentication.concierge.pinniped.dev, etc. As an example, if this is set to tuna.io, then Pinniped API groups will look like foo.tuna.io. authentication.concierge.tuna.io, etc."
api_group_suffix: pinniped.dev
#! Optional.
#@schema/desc "Set the standard golang HTTPS_PROXY and NO_PROXY environment variables on the Supervisor containers. These will be used when the Supervisor makes backend-to-backend calls to upstream identity providers using HTTPS, e.g. when the Supervisor fetches discovery documents, JWKS keys, and tokens from an upstream OIDC Provider. The Supervisor never makes insecure HTTP calls, so there is no reason to set HTTP_PROXY."
#@schema/nullable
https_proxy: http://proxy.example.com
#@schema/desc "NO_PROXY environment variable. do not proxy Kubernetes endpoints"
no_proxy: "$(KUBERNETES_SERVICE_HOST),169.254.169.254,127.0.0.1,localhost,.svc,.cluster.local"
#! Control the HTTP and HTTPS listeners of the Supervisor.
#!
#! The schema of this config is as follows:
#!
#! endpoints:
#! https:
#! network: tcp | unix | disabled
#! address: host:port when network=tcp or /pinniped_socket/socketfile.sock when network=unix
#! http:
#! network: same as above
#! address: same as above, except that when network=tcp then the address is only allowed to bind to loopback interfaces
#!
#! Setting network to disabled turns off that particular listener.
#! See https://pkg.go.dev/net#Listen and https://pkg.go.dev/net#Dial for a description of what can be
#! specified in the address parameter based on the given network parameter. To aid in the use of unix
#! domain sockets, a writable empty dir volume is mounted at /pinniped_socket when network is set to "unix."
#!
#! The current defaults are:
#!
#! endpoints:
#! https:
#! network: tcp
#! address: :8443
#! http:
#! network: disabled
#!
#! These defaults mean: For HTTPS listening, bind to all interfaces using TCP on port 8443.
#! Disable HTTP listening by default.
#!
#! The HTTP listener can only be bound to loopback interfaces. This allows the listener to accept
#! traffic from within the pod, e.g. from a service mesh sidecar. The HTTP listener should not be
#! used to accept traffic from outside the pod, since that would mean that the network traffic could be
#! transmitted unencrypted. The HTTPS listener should be used instead to accept traffic from outside the pod.
#! Ingresses and load balancers that terminate TLS connections should re-encrypt the data and route traffic
#! to the HTTPS listener. Unix domain sockets may also be used for integrations with service meshes.
#!
#! Changing the HTTPS port number must be accompanied by matching changes to the service and deployment
#! manifests. Changes to the HTTPS listener must be coordinated with the deployment health checks.
#!
#! Optional.
#@schema/desc "Control the HTTP and HTTPS listeners of the Supervisor."
#@schema/nullable
endpoints:
https:
network: tcp | unix | disabled
address: host:port when network=tcp or /pinniped_socket/socketfile.sock when network=unix
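#! Illustrative override (socket path matches the mount described above): to serve HTTPS only on a unix
#! domain socket, e.g. behind a service mesh sidecar, a data-values file could set:
#!   endpoints:
#!     https:
#!       network: unix
#!       address: /pinniped_socket/socketfile.sock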
#! Optionally override the validation on the endpoints.http value which checks that only loopback interfaces are used.
#! When deprecated_insecure_accept_external_unencrypted_http_requests is true, the HTTP listener is allowed to bind to any
#! interface, including interfaces that are listening for traffic from outside the pod. This value is being introduced
#! to ease the transition to the new loopback interface validation for the HTTP port for any users who need more time
#! to change their ingress strategy to avoid using plain HTTP into the Supervisor pods.
#! This value is immediately deprecated upon its introduction. It will be removed in some future release, at which time
#! traffic from outside the pod will need to be sent to the HTTPS listener instead, with no simple workaround available.
#! Allowed values are true (boolean), "true" (string), false (boolean), and "false" (string). The default is false.
#! Optional.
#@schema/desc "Optionally override the validation on the endpoints.http value which checks that only loopback interfaces are used."
deprecated_insecure_accept_external_unencrypted_http_requests: false

@ -0,0 +1,63 @@
#! Copyright 2020-2022 the Pinniped contributors. All Rights Reserved.
#! SPDX-License-Identifier: Apache-2.0
#@ load("@ytt:overlay", "overlay")
#@ load("helpers.lib.yaml", "labels", "pinnipedDevAPIGroupWithPrefix")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"kind": "CustomResourceDefinition", "metadata":{"name":"federationdomains.config.supervisor.pinniped.dev"}}), expects=1
---
metadata:
#@overlay/match missing_ok=True
labels: #@ labels()
name: #@ pinnipedDevAPIGroupWithPrefix("federationdomains.config.supervisor")
spec:
group: #@ pinnipedDevAPIGroupWithPrefix("config.supervisor")
#@overlay/match by=overlay.subset({"kind": "CustomResourceDefinition", "metadata":{"name":"oidcidentityproviders.idp.supervisor.pinniped.dev"}}), expects=1
---
metadata:
#@overlay/match missing_ok=True
labels: #@ labels()
name: #@ pinnipedDevAPIGroupWithPrefix("oidcidentityproviders.idp.supervisor")
spec:
group: #@ pinnipedDevAPIGroupWithPrefix("idp.supervisor")
#@overlay/match by=overlay.subset({"kind": "CustomResourceDefinition", "metadata":{"name":"ldapidentityproviders.idp.supervisor.pinniped.dev"}}), expects=1
---
metadata:
#@overlay/match missing_ok=True
labels: #@ labels()
name: #@ pinnipedDevAPIGroupWithPrefix("ldapidentityproviders.idp.supervisor")
spec:
group: #@ pinnipedDevAPIGroupWithPrefix("idp.supervisor")
#@overlay/match by=overlay.subset({"kind": "CustomResourceDefinition", "metadata":{"name":"activedirectoryidentityproviders.idp.supervisor.pinniped.dev"}}), expects=1
---
metadata:
#@overlay/match missing_ok=True
labels: #@ labels()
name: #@ pinnipedDevAPIGroupWithPrefix("activedirectoryidentityproviders.idp.supervisor")
spec:
group: #@ pinnipedDevAPIGroupWithPrefix("idp.supervisor")
#@overlay/match by=overlay.subset({"kind": "CustomResourceDefinition", "metadata":{"name":"oidcclients.config.supervisor.pinniped.dev"}}), expects=1
---
metadata:
#@overlay/match missing_ok=True
labels: #@ labels()
name: #@ pinnipedDevAPIGroupWithPrefix("oidcclients.config.supervisor")
spec:
group: #@ pinnipedDevAPIGroupWithPrefix("config.supervisor")
versions:
#@overlay/match by=overlay.all, expects="1+"
- schema:
openAPIV3Schema:
#@overlay/match by=overlay.subset({"metadata":{"type":"object"}}), expects=1
properties:
metadata:
#@overlay/match missing_ok=True
properties:
name:
pattern: ^client\.oauth\.pinniped\.dev-
type: string
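#! Worked example (assuming api_group_suffix is set to tuna.io): the first overlay above would rewrite
#!   metadata.name: federationdomains.config.supervisor.pinniped.dev
#! to
#!   metadata.name: federationdomains.config.supervisor.tuna.io
#! and set spec.group to config.supervisor.tuna.io.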

@ -0,0 +1,13 @@
#@ load("@ytt:data", "data") # for reading data values (generated via ytt's data-values-schema-inspect mode).
#@ load("@ytt:yaml", "yaml") # for dynamically decoding the output of ytt's data-values-schema-inspect
---
apiVersion: data.packaging.carvel.dev/v1alpha1
kind: PackageMetadata
metadata:
name: supervisor.pinniped.dev
spec:
displayName: "Pinniped Supervisor"
longDescription: "Pinniped Supervisor allows seamless login across one or many Kubernetes clusters, including AKS, EKS, and GKE"
shortDescription: "Pinniped Supervisor provides login capabilities"
categories:
- auth

@ -0,0 +1,32 @@
#@ load("@ytt:data", "data") # for reading data values (generated via ytt's data-values-schema-inspect mode).
#@ load("@ytt:yaml", "yaml") # for dynamically decoding the output of ytt's data-values-schema-inspect
---
apiVersion: data.packaging.carvel.dev/v1alpha1
kind: Package
metadata:
name: #@ "supervisor.pinniped.dev." + data.values.package_version
spec:
refName: supervisor.pinniped.dev
version: #@ data.values.package_version
releaseNotes: |
Initial release of the pinniped supervisor package
licenses:
- "Apache-2.0"
valuesSchema:
openAPIv3: #@ yaml.decode(data.values.openapi)["components"]["schemas"]["dataValues"]
template:
spec:
fetch:
- imgpkgBundle:
#! image: #@ data.values.package_image_repo + "/packages/:" + data.values.package_version
image: #@ data.values.package_image_repo
template:
- ytt:
paths:
- "config/"
- kbld:
paths:
- ".imgpkg/images.yml"
- "-"
deploy:
- kapp: {}
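#! Illustrative rendering (file names, version, and registry are assumptions): this template expects the
#! generated OpenAPI schema to be passed in as the `openapi` data value alongside the package values, e.g.
#!   ytt -f <this-package-template> \
#!     --data-value-file openapi=schema-openapi.yml \
#!     --data-value package_version=0.25.0 \
#!     --data-value package_image_repo=registry.example.com/pinniped-supervisor-package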

@ -0,0 +1,180 @@
openapi: 3.0.0
info:
version: 0.1.0
title: Schema for data values, generated by ytt
paths: {}
components:
schemas:
dataValues:
type: object
additionalProperties: false
properties:
app_name:
type: string
description: Name of the pinniped-supervisor app
default: pinniped-supervisor
namespace:
type: string
description: Creates a new namespace statically in yaml with the given name and installs the app into that namespace.
default: pinniped-supervisor
into_namespace:
type: string
nullable: true
description: If specified, assumes that a namespace of the given name already exists and installs the app into that namespace. Overrides the namespace value.
default: null
custom_labels:
type: object
additionalProperties: false
description: 'All resources created statically by yaml at install-time and all resources created dynamically by controllers at runtime will be labelled with `app: $app_name` and also with the labels specified here.'
properties: {}
replicas:
type: integer
description: Specify how many replicas of the Pinniped server to run.
default: 2
image_repo:
type: string
description: Specify either an image_digest or an image_tag. If both are given, only image_digest will be used.
default: projects.registry.vmware.com/pinniped/pinniped-server
image_digest:
type: string
nullable: true
description: Specify either an image_digest or an image_tag. If both are given, only image_digest will be used.
default: null
image_tag:
type: string
description: Specify either an image_digest or an image_tag. If both are given, only image_digest will be used.
default: latest
package_image_repo:
type: string
nullable: true
default: null
package_image_digest:
type: string
nullable: true
default: null
package_image_tag:
type: string
nullable: true
default: null
package_version:
type: string
nullable: true
default: null
image_pull_dockerconfigjson:
type: object
additionalProperties: false
nullable: true
properties:
auths:
type: object
additionalProperties: false
properties:
https://registry.example.com:
type: object
additionalProperties: false
properties:
username:
type: string
default: USERNAME
password:
type: string
default: PASSWORD
auth:
type: string
default: BASE64_ENCODED_USERNAME_COLON_PASSWORD
deprecated_service_http_nodeport_port:
type: integer
nullable: true
description: will be removed in a future release; when specified, creates a NodePort Service with this `port` value, with port 8080 as its `targetPort`; e.g. 31234
default: null
deprecated_service_http_nodeport_nodeport:
type: integer
nullable: true
description: will be removed in a future release; the `nodePort` value of the NodePort Service, optional when `deprecated_service_http_nodeport_port` is specified; e.g. 31234
default: null
deprecated_service_http_loadbalancer_port:
type: integer
nullable: true
description: will be removed in a future release; when specified, creates a LoadBalancer Service with this `port` value, with port 8080 as its `targetPort`; e.g. 8443
default: null
deprecated_service_http_clusterip_port:
type: integer
nullable: true
description: will be removed in a future release; when specified, creates a ClusterIP Service with this `port` value, with port 8080 as its `targetPort`; e.g. 8443
default: null
service_https_nodeport_port:
type: integer
nullable: true
description: when specified, creates a NodePort Service with this `port` value, with port 8443 as its `targetPort`; e.g. 31243
default: null
service_https_nodeport_nodeport:
type: integer
nullable: true
description: the `nodePort` value of the NodePort Service, optional when `service_https_nodeport_port` is specified; e.g. 31243
default: null
service_https_loadbalancer_port:
type: integer
nullable: true
description: when specified, creates a LoadBalancer Service with this `port` value, with port 8443 as its `targetPort`; e.g. 8443
default: null
service_https_clusterip_port:
type: integer
nullable: true
description: when specified, creates a ClusterIP Service with this `port` value, with port 8443 as its `targetPort`; e.g. 8443
default: null
service_loadbalancer_ip:
type: string
nullable: true
description: The `loadBalancerIP` value of the LoadBalancer Service. Ignored unless service_https_loadbalancer_port is provided. e.g. 1.2.3.4
default: null
log_level:
type: string
nullable: true
description: 'Specify the verbosity of logging: info ("nice to know" information), debug (developer information), trace (timing information), or all (kitchen sink). Do not use trace or all on production systems, as credentials may get logged.'
default: null
deprecated_log_format:
type: string
nullable: true
description: 'Specify the format of logging: json (for machine parsable logs) and text (for legacy klog formatted logs). By default, when this value is left unset, logs are formatted in json. This configuration is deprecated and will be removed in a future release at which point logs will always be formatted as json.'
default: null
run_as_user:
type: integer
description: run_as_user specifies the user ID that will own the process, see the Dockerfile for the reasoning behind this choice
default: 65532
run_as_group:
type: integer
description: run_as_group specifies the group ID that will own the process, see the Dockerfile for the reasoning behind this choice
default: 65532
api_group_suffix:
type: string
description: Specify the API group suffix for all Pinniped API groups. By default, this is set to pinniped.dev, so Pinniped API groups will look like foo.pinniped.dev, authentication.concierge.pinniped.dev, etc. As an example, if this is set to tuna.io, then Pinniped API groups will look like foo.tuna.io, authentication.concierge.tuna.io, etc.
default: pinniped.dev
https_proxy:
type: string
nullable: true
description: Set the standard golang HTTPS_PROXY and NO_PROXY environment variables on the Supervisor containers. These will be used when the Supervisor makes backend-to-backend calls to upstream identity providers using HTTPS, e.g. when the Supervisor fetches discovery documents, JWKS keys, and tokens from an upstream OIDC Provider. The Supervisor never makes insecure HTTP calls, so there is no reason to set HTTP_PROXY.
default: null
no_proxy:
type: string
description: NO_PROXY environment variable. Do not proxy Kubernetes endpoints
default: $(KUBERNETES_SERVICE_HOST),169.254.169.254,127.0.0.1,localhost,.svc,.cluster.local
endpoints:
type: object
additionalProperties: false
nullable: true
description: Control the HTTP and HTTPS listeners of the Supervisor.
properties:
https:
type: object
additionalProperties: false
properties:
network:
type: string
default: tcp | unix | disabled
address:
type: string
default: host:port when network=tcp or /pinniped_socket/socketfile.sock when network=unix
deprecated_insecure_accept_external_unencrypted_http_requests:
type: boolean
description: Optionally override the validation on the endpoints.http value which checks that only loopback interfaces are used.
default: false

@ -0,0 +1,180 @@
openapi: 3.0.0
info:
version: 0.1.0
title: Schema for data values, generated by ytt
paths: {}
components:
schemas:
dataValues:
type: object
additionalProperties: false
properties:
app_name:
type: string
description: Name of the pinniped-supervisor app
default: pinniped-supervisor
namespace:
type: string
description: Creates a new namespace statically in yaml with the given name and installs the app into that namespace.
default: pinniped-supervisor
into_namespace:
type: string
nullable: true
description: If specified, assumes that a namespace of the given name already exists and installs the app into that namespace. Overrides the namespace value.
default: null
custom_labels:
type: object
additionalProperties: false
description: 'All resources created statically by yaml at install-time and all resources created dynamically by controllers at runtime will be labelled with `app: $app_name` and also with the labels specified here.'
properties: {}
replicas:
type: integer
description: Specify how many replicas of the Pinniped server to run.
default: 2
image_repo:
type: string
description: Specify either an image_digest or an image_tag. If both are given, only image_digest will be used.
default: projects.registry.vmware.com/pinniped/pinniped-server
image_digest:
type: string
nullable: true
description: Specify either an image_digest or an image_tag. If both are given, only image_digest will be used.
default: null
image_tag:
type: string
description: Specify either an image_digest or an image_tag. If both are given, only image_digest will be used.
default: latest
package_image_repo:
type: string
nullable: true
default: null
package_image_digest:
type: string
nullable: true
default: null
package_image_tag:
type: string
nullable: true
default: null
package_version:
type: string
nullable: true
default: null
image_pull_dockerconfigjson:
type: object
additionalProperties: false
nullable: true
properties:
auths:
type: object
additionalProperties: false
properties:
https://registry.example.com:
type: object
additionalProperties: false
properties:
username:
type: string
default: USERNAME
password:
type: string
default: PASSWORD
auth:
type: string
default: BASE64_ENCODED_USERNAME_COLON_PASSWORD
deprecated_service_http_nodeport_port:
type: integer
nullable: true
description: will be removed in a future release; when specified, creates a NodePort Service with this `port` value, with port 8080 as its `targetPort`; e.g. 31234
default: null
deprecated_service_http_nodeport_nodeport:
type: integer
nullable: true
description: will be removed in a future release; the `nodePort` value of the NodePort Service, optional when `deprecated_service_http_nodeport_port` is specified; e.g. 31234
default: null
deprecated_service_http_loadbalancer_port:
type: integer
nullable: true
description: will be removed in a future release; when specified, creates a LoadBalancer Service with this `port` value, with port 8080 as its `targetPort`; e.g. 8443
default: null
deprecated_service_http_clusterip_port:
type: integer
nullable: true
description: will be removed in a future release; when specified, creates a ClusterIP Service with this `port` value, with port 8080 as its `targetPort`; e.g. 8443
default: null
service_https_nodeport_port:
type: integer
nullable: true
description: when specified, creates a NodePort Service with this `port` value, with port 8443 as its `targetPort`; e.g. 31243
default: null
service_https_nodeport_nodeport:
type: integer
nullable: true
description: the `nodePort` value of the NodePort Service, optional when `service_https_nodeport_port` is specified; e.g. 31243
default: null
service_https_loadbalancer_port:
type: integer
nullable: true
description: when specified, creates a LoadBalancer Service with this `port` value, with port 8443 as its `targetPort`; e.g. 8443
default: null
service_https_clusterip_port:
type: integer
nullable: true
description: when specified, creates a ClusterIP Service with this `port` value, with port 8443 as its `targetPort`; e.g. 8443
default: null
service_loadbalancer_ip:
type: string
nullable: true
description: The `loadBalancerIP` value of the LoadBalancer Service. Ignored unless service_https_loadbalancer_port is provided. e.g. 1.2.3.4
default: null
log_level:
type: string
nullable: true
description: 'Specify the verbosity of logging: info ("nice to know" information), debug (developer information), trace (timing information), or all (kitchen sink). Do not use trace or all on production systems, as credentials may get logged.'
default: null
deprecated_log_format:
type: string
nullable: true
description: 'Specify the format of logging: json (for machine parsable logs) and text (for legacy klog formatted logs). By default, when this value is left unset, logs are formatted in json. This configuration is deprecated and will be removed in a future release at which point logs will always be formatted as json.'
default: null
run_as_user:
type: integer
description: run_as_user specifies the user ID that will own the process, see the Dockerfile for the reasoning behind this choice
default: 65532
run_as_group:
type: integer
description: run_as_group specifies the group ID that will own the process, see the Dockerfile for the reasoning behind this choice
default: 65532
api_group_suffix:
type: string
description: Specify the API group suffix for all Pinniped API groups. By default, this is set to pinniped.dev, so Pinniped API groups will look like foo.pinniped.dev, authentication.concierge.pinniped.dev, etc. As an example, if this is set to tuna.io, then Pinniped API groups will look like foo.tuna.io, authentication.concierge.tuna.io, etc.
default: pinniped.dev
https_proxy:
type: string
nullable: true
description: Set the standard golang HTTPS_PROXY and NO_PROXY environment variables on the Supervisor containers. These will be used when the Supervisor makes backend-to-backend calls to upstream identity providers using HTTPS, e.g. when the Supervisor fetches discovery documents, JWKS keys, and tokens from an upstream OIDC Provider. The Supervisor never makes insecure HTTP calls, so there is no reason to set HTTP_PROXY.
default: null
no_proxy:
type: string
description: NO_PROXY environment variable. Do not proxy Kubernetes endpoints
default: $(KUBERNETES_SERVICE_HOST),169.254.169.254,127.0.0.1,localhost,.svc,.cluster.local
endpoints:
type: object
additionalProperties: false
nullable: true
description: Control the HTTP and HTTPS listeners of the Supervisor.
properties:
https:
type: object
additionalProperties: false
properties:
network:
type: string
default: tcp | unix | disabled
address:
type: string
default: host:port when network=tcp or /pinniped_socket/socketfile.sock when network=unix
deprecated_insecure_accept_external_unencrypted_http_requests:
type: boolean
description: Optionally override the validation on the endpoints.http value which checks that only loopback interfaces are used.
default: false

@ -0,0 +1,37 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: "concierge-install-ns"
---
# ServiceAccount details from the file linked above
apiVersion: v1
kind: ServiceAccount
metadata:
name: "pinniped-package-rbac-concierge-sa-superadmin-dangerous"
namespace: "concierge-install-ns"
# namespace: default # --> sticking to default for everything for now.
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: "pinniped-package-rbac-concierge-role-superadmin-dangerous"
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: "pinniped-package-rbac-concierge-role-binding-superadmin-dangerous"
subjects:
- kind: ServiceAccount
name: "pinniped-package-rbac-concierge-sa-superadmin-dangerous"
namespace: "concierge-install-ns"
# namespace: default # --> sticking to default for everything for now.
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: "pinniped-package-rbac-concierge-role-superadmin-dangerous"

@ -0,0 +1,37 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: "supervisor-install-ns"
---
# ServiceAccount details from the file linked above
apiVersion: v1
kind: ServiceAccount
metadata:
name: "pinniped-package-rbac-supervisor-sa-superadmin-dangerous"
namespace: "supervisor-install-ns"
# namespace: default # --> sticking to default for everything for now.
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: "pinniped-package-rbac-supervisor-role-superadmin-dangerous"
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: "pinniped-package-rbac-supervisor-role-binding-superadmin-dangerous"
subjects:
- kind: ServiceAccount
name: "pinniped-package-rbac-supervisor-sa-superadmin-dangerous"
namespace: "supervisor-install-ns"
# namespace: default # --> sticking to default for everything for now.
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: "pinniped-package-rbac-supervisor-role-superadmin-dangerous"

@ -0,0 +1,35 @@
---
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
# The name does not have to be versioned; versionSelection.constraints below handles version selection.
name: supervisor-install
namespace: supervisor-install-ns
spec:
serviceAccountName: "pinniped-package-rbac-supervisor-sa-superadmin-dangerous"
packageRef:
refName: "supervisor.pinniped.dev"
versionSelection:
constraints: "0.25.0"
values:
- secretRef:
name: "supervisor-package-install-secret"
---
apiVersion: v1
kind: Secret
metadata:
name: "supervisor-package-install-secret"
namespace: supervisor-install-ns
stringData:
values.yml: |
---
app_name: pinniped-supervisor
namespace: supervisor
api_group_suffix: pinniped.dev
image_repo: pinniped.local/test/build
image_tag: 9EA78F67-5129-45B4-96E1-597073E864F9
log_level: debug
service_https_nodeport_port: 443
service_https_nodeport_nodeport: 31243
service_https_clusterip_port: 443
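# Illustrative usage (assumes the package repository and the RBAC resources above are already installed):
#   kubectl apply -f <this-file>.yaml
#   kubectl get packageinstall supervisor-install -n supervisor-install-ns
# The PackageInstall should reconcile once a package version matching the "0.25.0" constraint is available.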

@ -0,0 +1,3 @@
# README
These are hand-crafted files, not generated.

@ -0,0 +1,60 @@
apiVersion: idp.supervisor.pinniped.dev/v1alpha1
kind: OIDCIdentityProvider
metadata:
# namespace: pinniped-supervisor
namespace: supervisor-ns # for this install, this is the namespace that I've been using.
name: gitlab
spec:
# Specify the upstream issuer URL.
issuer: https://gitlab.eng.vmware.com
# Specify how to form authorization requests to GitLab.
authorizationConfig:
# GitLab is unusual among OIDC providers in that it returns an
# error if you request the "offline_access" scope during an
# authorization flow, so ask Pinniped to avoid requesting that
# scope when using GitLab by excluding it from this list.
# By specifying only "openid" and "email" here, Pinniped will request
# only those scopes and will not request "offline_access".
additionalScopes: [openid,email]
# If you would also like to allow your end users to authenticate using
# a password grant, then change this to true. See
# https://docs.gitlab.com/ee/api/oauth2.html#resource-owner-password-credentials-flow
# for more information about using the password grant with GitLab.
allowPasswordGrant: false
# Specify how GitLab claims are mapped to Kubernetes identities.
claims:
# Specify the name of the claim in your GitLab token that will be mapped
# to the "username" claim in downstream tokens minted by the Supervisor.
username: email
# Specify the name of the claim in GitLab that represents the groups
# that the user belongs to. Note that GitLab's "groups" claim comes from
# their "/userinfo" endpoint, not the token.
groups: groups
# Specify the name of the Kubernetes Secret that contains your GitLab
# application's client credentials (created below).
client:
secretName: gitlab-client-credentials
---
apiVersion: v1
kind: Secret
metadata:
# namespace: pinniped-supervisor
namespace: supervisor-ns # for this install, this is the namespace that I've been using.
name: gitlab-client-credentials
type: secrets.pinniped.dev/oidc-client
stringData:
# The "Application ID" that you got from GitLab.
clientID: "bbf1c9e13b38642adec54d47a112159549c2de10ae3506086c5af2ff4beb32d6"
# The "Secret" that you got from GitLab.
clientSecret: "16a92c0fdbba5f87a7ea61d6c64a526b5fb838bf436825c98af95459c7c5eeb8"
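# Illustrative check (assumes the Supervisor is running and watching supervisor-ns): after applying this
# file, the identity provider should eventually report a Ready status, e.g.
#   kubectl get oidcidentityprovider gitlab -n supervisor-ns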

@ -137,6 +137,11 @@ sleep 5
# Test that the federation domain is working before we proceed.
echo "Fetching FederationDomain discovery info..."
echo "proxy: ${PINNIPED_TEST_PROXY}"
echo "cacert: ${root_ca_crt_path}"
echo "issuer: ${issuer}"
echo "curl via:"
echo "https_proxy='$PINNIPED_TEST_PROXY' curl -fLsS --cacert '$root_ca_crt_path' '$issuer/.well-known/openid-configuration' | jq ."
https_proxy="$PINNIPED_TEST_PROXY" curl -fLsS --cacert "$root_ca_crt_path" "$issuer/.well-known/openid-configuration" | jq .
if [[ "$use_oidc_upstream" == "yes" ]]; then