commit e4c49c37b9
Merge branch 'main' into impersonation-proxy
@@ -3,7 +3,7 @@
 # Copyright 2020-2021 the Pinniped contributors. All Rights Reserved.
 # SPDX-License-Identifier: Apache-2.0

-FROM golang:1.15.7 as build-env
+FROM golang:1.15.8 as build-env

 WORKDIR /work
 COPY . .
SECURITY.md
@@ -1,12 +1,92 @@
-# Reporting a Vulnerability
+# Security Release Process

-Pinniped development is sponsored by VMware, and the Pinniped team encourages users
-who become aware of a security vulnerability in Pinniped to report any potential
-vulnerabilities found to security@vmware.com. If possible, please include a description
-of the effects of the vulnerability, reproduction steps, and a description of in which
-version of Pinniped or its dependencies the vulnerability was discovered.
-The use of encrypted email is encouraged. The public PGP key can be found at https://kb.vmware.com/kb/1055.
-
-The Pinniped team hopes that users encountering a new vulnerability will contact
-us privately as it is in the best interests of our users that the Pinniped team has
-an opportunity to investigate and confirm a suspected vulnerability before it becomes public knowledge.
+Pinniped provides identity services for Kubernetes clusters. The community has adopted this security disclosure and response policy to ensure we responsibly handle critical issues.
+
+## Supported Versions
+
+As of right now, only the latest version of Pinniped is supported.
+## Reporting a Vulnerability - Private Disclosure Process
+
+Security is of the highest importance and all security vulnerabilities or suspected security vulnerabilities should be reported to Pinniped privately, to minimize attacks against current users of Pinniped before they are fixed. Vulnerabilities will be investigated and patched in the next patch (or minor) release as soon as possible. This information could be kept entirely internal to the project.
+
+If you know of a publicly disclosed security vulnerability for Pinniped, please **IMMEDIATELY** contact the VMware Security Team (security@vmware.com). The use of encrypted email is encouraged. The public PGP key can be found at https://kb.vmware.com/kb/1055.
+
+**IMPORTANT: Do not file public issues on GitHub for security vulnerabilities.**
+
+To report a vulnerability or a security-related issue, please email the VMware address above with the details of the vulnerability. The email will be fielded by the VMware Security Team and then shared with the Pinniped maintainers who have committer and release permissions. Emails will be addressed within 3 business days, including a detailed plan to investigate the issue and any potential workarounds to perform in the meantime. Do not report non-security-impacting bugs through this channel. Use [GitHub issues](https://github.com/vmware-tanzu/pinniped/issues/new/choose) instead.
+
+## Proposed Email Content
+
+Provide a descriptive subject line and in the body of the email include the following information:
+
+* Basic identity information, such as your name and your affiliation or company.
+* Detailed steps to reproduce the vulnerability (POC scripts, screenshots, and logs are all helpful to us).
+* Description of the effects of the vulnerability on Pinniped and the related hardware and software configurations, so that the VMware Security Team can reproduce it.
+* How the vulnerability affects Pinniped usage and an estimation of the attack surface, if there is one.
+* List other projects or dependencies that were used in conjunction with Pinniped to produce the vulnerability.
+
+## When to report a vulnerability
+
+* When you think Pinniped has a potential security vulnerability.
+* When you suspect a potential vulnerability but you are unsure whether it impacts Pinniped.
+* When you know of or suspect a potential vulnerability in another project that is used by Pinniped.
+
+## Patch, Release, and Disclosure
+
+The VMware Security Team will respond to vulnerability reports as follows:
+
+1. The Security Team will investigate the vulnerability and determine its effects and criticality.
+2. If the issue is not deemed to be a vulnerability, the Security Team will follow up with a detailed reason for rejection.
+3. The Security Team will initiate a conversation with the reporter within 3 business days.
+4. If a vulnerability is acknowledged and the timeline for a fix is determined, the Security Team will work on a plan to communicate with the appropriate community, including identifying mitigating steps that affected users can take to protect themselves until the fix is rolled out.
+5. The Security Team will also create a [CVSS](https://www.first.org/cvss/specification-document) score using the [CVSS Calculator](https://www.first.org/cvss/calculator/3.0). The Security Team makes the final call on the calculated CVSS; it is better to move quickly than to make the CVSS perfect. Issues may also be reported to [Mitre](https://cve.mitre.org/) using this [scoring calculator](https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator). The CVE will initially be set to private.
+6. The Security Team will work on fixing the vulnerability and perform internal testing before preparing to roll out the fix.
+7. The Security Team will provide early disclosure of the vulnerability by emailing the [Pinniped Distributors](https://groups.google.com/g/project-pinniped-distributors) mailing list. Distributors can initially plan for the vulnerability patch ahead of the fix, and later can test the fix and provide feedback to the Pinniped team. See the section **Early Disclosure to Pinniped Distributors List** for details about how to join this mailing list.
+8. A public disclosure date is negotiated by the VMware Security Team, the bug submitter, and the distributors list. We prefer to fully disclose the bug as soon as possible once a user mitigation or patch is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for distributor coordination. The timeframe for disclosure ranges from immediate (especially if the issue is already publicly known) to a few weeks. For a critical vulnerability with a straightforward mitigation, we expect the time from report to public disclosure to be on the order of 14 business days. The VMware Security Team holds the final say when setting a public disclosure date.
+9. Once the fix is confirmed, the Security Team will patch the vulnerability in the next patch or minor release, and backport a patch release into all earlier supported releases. Upon release of the patched version of Pinniped, we will follow the **Public Disclosure Process**.
+
+## Public Disclosure Process
+
+The Security Team publishes a [public advisory](https://github.com/vmware-tanzu/pinniped/security/advisories) to the Pinniped community via GitHub. In most cases, additional communication via Slack, Twitter, mailing lists, blogs, and other channels will assist in educating Pinniped users and rolling out the patched release to affected users.
+
+The Security Team will also publish any mitigating steps users can take until the fix can be applied to their Pinniped instances. Pinniped distributors will handle creating and publishing their own security advisories.
+
+## Mailing lists
+
+* Use security@vmware.com to report security concerns to the VMware Security Team, which uses the list to privately discuss security issues and fixes prior to disclosure. The use of encrypted email is encouraged. The public PGP key can be found at https://kb.vmware.com/kb/1055.
+* Join the [Pinniped Distributors](https://groups.google.com/g/project-pinniped-distributors) mailing list for early private information and vulnerability disclosure. Early disclosure may include mitigating steps and additional information on security patch releases. See below for information on how Pinniped distributors or vendors can apply to join this list.
+
+## Early Disclosure to Pinniped Distributors List
+
+The private list is intended to be used primarily to provide actionable information to multiple distributor projects at once. This list is not intended to inform individuals about security issues.
+
+## Membership Criteria
+
+To be eligible to join the [Pinniped Distributors](https://groups.google.com/g/project-pinniped-distributors) mailing list, you should:
+
+1. Be an active distributor of Pinniped.
+2. Have a user base that is not limited to your own organization.
+3. Have a publicly verifiable track record up to the present day of fixing security issues.
+4. Not be a downstream or rebuild of another distributor.
+5. Be a participant and active contributor in the Pinniped community.
+6. Accept the Embargo Policy that is outlined below.
+7. Have someone who is already on the list vouch for the person requesting membership on behalf of your distribution.
+
+**The terms and conditions of the Embargo Policy apply to all members of this mailing list. A request for membership represents your acceptance of the terms and conditions of the Embargo Policy.**
+
+## Embargo Policy
+
+The information that members receive on the Pinniped Distributors mailing list must not be made public, shared, or even hinted at anywhere beyond those who need to know within your specific team, unless you receive explicit approval to do so from the VMware Security Team. This remains true until the public disclosure date/time agreed upon by the list. Members of the list and others cannot use the information for any reason other than to get the issue fixed for your respective distribution's users.

+Before you share any information from the list with members of your team who are required to fix the issue, these team members must agree to the same terms, and only be provided with information on a need-to-know basis.
+
+In the unfortunate event that you share information beyond what is permitted by this policy, you must urgently inform the VMware Security Team (security@vmware.com) of exactly what information was leaked and to whom. If you continue to leak information and break the policy outlined here, you will be permanently removed from the list.
+
+## Requesting to Join
+
+Send new membership requests to https://groups.google.com/g/project-pinniped-distributors. In the body of your request please specify how you qualify for membership and fulfill each criterion listed in the Membership Criteria section above.
+
+## Confidentiality, integrity and availability
+
+We consider vulnerabilities leading to the compromise of data confidentiality, elevation of privilege, or integrity to be our highest priority concerns. Availability, in particular in areas relating to DoS and resource exhaustion, is also a serious security concern. The VMware Security Team takes all vulnerabilities, potential vulnerabilities, and suspected vulnerabilities seriously and will investigate them in an urgent and expeditious manner.
@@ -307,8 +307,7 @@ func TestGetKubeconfig(t *testing.T) {
             },
             wantError: true,
             wantStderr: here.Doc(`
-                Error: invalid api group suffix: 1 error(s):
-                - a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
+                Error: invalid api group suffix: a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
             `),
         },
         {
@@ -139,8 +139,7 @@ func TestLoginOIDCCommand(t *testing.T) {
            },
            wantError: true,
            wantStderr: here.Doc(`
-               Error: invalid concierge parameters: invalid api group suffix: 1 error(s):
-               - a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
+               Error: invalid concierge parameters: invalid api group suffix: a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
            `),
        },
        {
@@ -143,8 +143,7 @@ func TestLoginStaticCommand(t *testing.T) {
            },
            wantError: true,
            wantStderr: here.Doc(`
-               Error: invalid concierge parameters: invalid api group suffix: 1 error(s):
-               - a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
+               Error: invalid concierge parameters: invalid api group suffix: a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
            `),
        },
        {
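The same RFC 1123 subdomain rule appears in all three expected error messages above. As a standalone illustration (not part of the diff), the quoted regex can be applied directly; note that Kubernetes also enforces a 253-character length limit that the pattern alone does not capture:

```go
package main

import (
	"fmt"
	"regexp"
)

// The exact pattern quoted in the error messages above, anchored so the
// entire candidate string must match.
var dns1123Subdomain = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)

func isValidAPIGroupSuffix(s string) bool {
	return dns1123Subdomain.MatchString(s)
}

func main() {
	for _, s := range []string{"pinniped.dev", "walrus.tld", "-bad.example", "Bad.Example"} {
		// The last two fail: one starts with '-', one contains uppercase letters.
		fmt.Printf("%s: %v\n", s, isValidAPIGroupSuffix(s))
	}
}
```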
@@ -21,6 +21,9 @@ rules:
 - apiGroups: [ admissionregistration.k8s.io ]
   resources: [ validatingwebhookconfigurations, mutatingwebhookconfigurations ]
   verbs: [ get, list, watch ]
+- apiGroups: [ flowcontrol.apiserver.k8s.io ]
+  resources: [ flowschemas, prioritylevelconfigurations ]
+  verbs: [ get, list, watch ]
 - apiGroups: [ policy ]
   resources: [ podsecuritypolicies ]
   verbs: [ use ]
@@ -51,6 +51,7 @@ function check_dependency() {
 help=no
 skip_build=no
 clean_kind=no
+api_group_suffix="pinniped.dev" # same default as in the values.yaml ytt file

 while (("$#")); do
   case "$1" in
@@ -66,6 +67,16 @@ while (("$#")); do
     clean_kind=yes
     shift
     ;;
+  -g | --api-group-suffix)
+    shift
+    # If there are no more command line arguments, or the next argument starts with a dash, then error
+    if [[ "$#" == "0" || "$1" == -* ]]; then
+      log_error "-g|--api-group-suffix requires a group name to be specified"
+      exit 1
+    fi
+    api_group_suffix=$1
+    shift
+    ;;
   -*)
     log_error "Unsupported flag $1" >&2
     exit 1
@@ -84,6 +95,8 @@ if [[ "$help" == "yes" ]]; then
   log_note
   log_note "Flags:"
   log_note "   -h, --help:             print this usage"
+  log_note "   -c, --clean:            destroy the current kind cluster and make a new one"
+  log_note "   -g, --api-group-suffix: deploy Pinniped with an alternate API group suffix"
   log_note "   -s, --skip-build:       reuse the most recently built image of the app instead of building"
   exit 1
 fi
@@ -226,6 +239,7 @@ if ! tilt_mode; then
   ytt --file . \
     --data-value "app_name=$supervisor_app_name" \
     --data-value "namespace=$supervisor_namespace" \
+    --data-value "api_group_suffix=$api_group_suffix" \
     --data-value "image_repo=$registry_repo" \
     --data-value "image_tag=$tag" \
     --data-value "log_level=debug" \
@@ -259,6 +273,7 @@ if ! tilt_mode; then
   ytt --file . \
     --data-value "app_name=$concierge_app_name" \
     --data-value "namespace=$concierge_namespace" \
+    --data-value "api_group_suffix=$api_group_suffix" \
    --data-value "log_level=debug" \
     --data-value-yaml "custom_labels=$concierge_custom_labels" \
     --data-value "image_repo=$registry_repo" \
@@ -314,6 +329,7 @@ export PINNIPED_TEST_SUPERVISOR_UPSTREAM_OIDC_CALLBACK_URL=https://pinniped-supe
 export PINNIPED_TEST_SUPERVISOR_UPSTREAM_OIDC_USERNAME=pinny@example.com
 export PINNIPED_TEST_SUPERVISOR_UPSTREAM_OIDC_PASSWORD=password
 export PINNIPED_TEST_SUPERVISOR_UPSTREAM_OIDC_EXPECTED_GROUPS= # Dex's local user store does not let us configure groups.
+export PINNIPED_TEST_API_GROUP_SUFFIX='${api_group_suffix}'

 read -r -d '' PINNIPED_TEST_CLUSTER_CAPABILITY_YAML << PINNIPED_TEST_CLUSTER_CAPABILITY_YAML_EOF || true
 ${pinniped_cluster_capability_file_content}
@@ -71,7 +71,7 @@ func (c completedConfig) New() (*PinnipedServer, error) {
    }

    gvr := c.ExtraConfig.GroupVersion.WithResource("tokencredentialrequests")
-   storage := credentialrequest.NewREST(c.ExtraConfig.Authenticator, c.ExtraConfig.Issuer)
+   storage := credentialrequest.NewREST(c.ExtraConfig.Authenticator, c.ExtraConfig.Issuer, gvr.GroupResource())
    if err := s.GenericAPIServer.InstallAPIGroup(&genericapiserver.APIGroupInfo{
        PrioritizedVersions:          []schema.GroupVersion{gvr.GroupVersion()},
        VersionedResourcesStorageMap: map[string]map[string]rest.Storage{gvr.Version: {gvr.Resource: storage}},
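One plausible reading of this signature change (an assumption on my part, the diff does not state the rationale): passing `gvr.GroupResource()` into `NewREST` lets the storage layer build API errors qualified by the resource name under whatever API group suffix is active. A dependency-free sketch of that kind of qualification, with a local `GroupResource` type mirroring the shape of apimachinery's `schema.GroupResource`:

```go
package main

import "fmt"

// GroupResource mirrors the shape and String() behavior of
// k8s.io/apimachinery/pkg/runtime/schema.GroupResource, redeclared here so
// the sketch has no external dependencies.
type GroupResource struct {
	Group    string
	Resource string
}

func (gr GroupResource) String() string {
	if gr.Group == "" {
		return gr.Resource
	}
	return gr.Resource + "." + gr.Group
}

func main() {
	// With the default suffix, errors can name
	// tokencredentialrequests.login.concierge.pinniped.dev; with a custom
	// suffix, the same code names the customized group instead.
	for _, group := range []string{"login.concierge.pinniped.dev", "login.concierge.walrus.tld"} {
		gr := GroupResource{Group: group, Resource: "tokencredentialrequests"}
		fmt.Printf("%s is invalid\n", gr)
	}
}
```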
|
@ -245,68 +245,17 @@ func getAggregatedAPIServerConfig(
|
|||||||
startControllersPostStartHook func(context.Context),
|
startControllersPostStartHook func(context.Context),
|
||||||
apiGroupSuffix string,
|
apiGroupSuffix string,
|
||||||
) (*apiserver.Config, error) {
|
) (*apiserver.Config, error) {
|
||||||
apiGroup, ok := groupsuffix.Replace(loginv1alpha1.GroupName, apiGroupSuffix)
|
loginConciergeAPIGroup, ok := groupsuffix.Replace(loginv1alpha1.GroupName, apiGroupSuffix)
|
||||||
if !ok {
|
if !ok {
|
||||||
return nil, fmt.Errorf("cannot make api group from %s/%s", loginv1alpha1.GroupName, apiGroupSuffix)
|
return nil, fmt.Errorf("cannot make api group from %s/%s", loginv1alpha1.GroupName, apiGroupSuffix)
|
||||||
}
|
}
|
||||||
|
|
||||||
// standard set up of the server side scheme
|
scheme := getAggregatedAPIServerScheme(loginConciergeAPIGroup, apiGroupSuffix)
|
||||||
scheme := runtime.NewScheme()
|
|
||||||
codecs := serializer.NewCodecFactory(scheme)
|
codecs := serializer.NewCodecFactory(scheme)
|
||||||
|
|
||||||
utilruntime.Must(loginv1alpha1.AddToScheme(scheme))
|
defaultEtcdPathPrefix := fmt.Sprintf("/registry/%s", loginConciergeAPIGroup)
|
||||||
utilruntime.Must(loginapi.AddToScheme(scheme))
|
|
||||||
|
|
||||||
// add the options to empty v1
|
|
||||||
metav1.AddToGroupVersion(scheme, schema.GroupVersion{Version: "v1"})
|
|
||||||
|
|
||||||
unversioned := schema.GroupVersion{Group: "", Version: "v1"}
|
|
||||||
scheme.AddUnversionedTypes(unversioned,
|
|
||||||
&metav1.Status{},
|
|
||||||
&metav1.APIVersions{},
|
|
||||||
&metav1.APIGroupList{},
|
|
||||||
&metav1.APIGroup{},
|
|
||||||
&metav1.APIResourceList{},
|
|
||||||
)
|
|
||||||
|
|
||||||
// use closure to avoid mutating scheme during iteration
|
|
||||||
var addPinnipedTypeToAPIGroup []func() //nolint: prealloc // expected slice size is unknown
|
|
||||||
for gvk := range scheme.AllKnownTypes() {
|
|
||||||
gvk := gvk
|
|
||||||
|
|
||||||
if apiGroup == loginv1alpha1.GroupName {
|
|
||||||
break // bail out early if using the standard group
|
|
||||||
}
|
|
||||||
|
|
||||||
if gvk.Group != loginv1alpha1.GroupName {
|
|
||||||
continue // ignore types that are not in the aggregated API group
|
|
||||||
}
|
|
||||||
|
|
||||||
// re-register the existing type but with the new group
|
|
||||||
f := func() {
|
|
||||||
obj, err := scheme.New(gvk)
|
|
||||||
if err != nil {
|
|
||||||
panic(err) // programmer error, scheme internal code is broken
|
|
||||||
}
|
|
||||||
newGVK := schema.GroupVersionKind{
|
|
||||||
Group: apiGroup,
|
|
||||||
Version: gvk.Version,
|
|
||||||
Kind: gvk.Kind,
|
|
||||||
}
|
|
||||||
scheme.AddKnownTypeWithName(newGVK, obj)
|
|
||||||
}
|
|
||||||
|
|
||||||
addPinnipedTypeToAPIGroup = append(addPinnipedTypeToAPIGroup, f)
|
|
||||||
}
|
|
||||||
|
|
||||||
// run the closures to mutate the scheme to understand the types at the new group
|
|
||||||
for _, f := range addPinnipedTypeToAPIGroup {
|
|
||||||
f()
|
|
||||||
}
|
|
||||||
|
|
||||||
defaultEtcdPathPrefix := fmt.Sprintf("/registry/%s", apiGroup)
|
|
||||||
groupVersion := schema.GroupVersion{
|
groupVersion := schema.GroupVersion{
|
||||||
Group: apiGroup,
|
Group: loginConciergeAPIGroup,
|
||||||
Version: loginv1alpha1.SchemeGroupVersion.Version,
|
Version: loginv1alpha1.SchemeGroupVersion.Version,
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -345,3 +294,91 @@ func getAggregatedAPIServerConfig(
    }
    return apiServerConfig, nil
 }
+
+func getAggregatedAPIServerScheme(loginConciergeAPIGroup, apiGroupSuffix string) *runtime.Scheme {
+   // standard set up of the server side scheme
+   scheme := runtime.NewScheme()
+
+   // add the options to empty v1
+   metav1.AddToGroupVersion(scheme, metav1.Unversioned)
+
+   // nothing fancy is required if using the standard group
+   if loginConciergeAPIGroup == loginv1alpha1.GroupName {
+       utilruntime.Must(loginv1alpha1.AddToScheme(scheme))
+       utilruntime.Must(loginapi.AddToScheme(scheme))
+       return scheme
+   }
+
+   // we need a temporary place to register our types to avoid double registering them
+   tmpScheme := runtime.NewScheme()
+   utilruntime.Must(loginv1alpha1.AddToScheme(tmpScheme))
+   utilruntime.Must(loginapi.AddToScheme(tmpScheme))
+
+   for gvk := range tmpScheme.AllKnownTypes() {
+       if gvk.GroupVersion() == metav1.Unversioned {
+           continue // metav1.AddToGroupVersion registers types outside of our aggregated API group that we need to ignore
+       }
+
+       if gvk.Group != loginv1alpha1.GroupName {
+           panic("tmp scheme has types not in the aggregated API group: " + gvk.Group) // programmer error
+       }
+
+       obj, err := tmpScheme.New(gvk)
+       if err != nil {
+           panic(err) // programmer error, scheme internal code is broken
+       }
+       newGVK := schema.GroupVersionKind{
+           Group:   loginConciergeAPIGroup,
+           Version: gvk.Version,
+           Kind:    gvk.Kind,
+       }
+
+       // register the existing type but with the new group in the correct scheme
+       scheme.AddKnownTypeWithName(newGVK, obj)
+   }
+
+   // manually register conversions and defaulting into the correct scheme since we cannot directly call loginv1alpha1.AddToScheme
+   utilruntime.Must(loginv1alpha1.RegisterConversions(scheme))
+   utilruntime.Must(loginv1alpha1.RegisterDefaults(scheme))
+
+   // we do not want to return errors from the scheme and instead would prefer to defer
+   // to the REST storage layer for consistency. The simplest way to do this is to force
+   // a cache miss from the authenticator cache. Kube API groups are validated via the
+   // IsDNS1123Subdomain func thus we can easily create a group that is guaranteed never
+   // to be in the authenticator cache. Add a timestamp just to be extra sure.
+   const authenticatorCacheMissPrefix = "_INVALID_API_GROUP_"
+   authenticatorCacheMiss := authenticatorCacheMissPrefix + time.Now().UTC().String()
+
+   // we do not have any defaulting functions for *loginv1alpha1.TokenCredentialRequest
+   // today, but we may have some in the future. Calling AddTypeDefaultingFunc overwrites
+   // any previously registered defaulting function. Thus to make sure that we catch
+   // a situation where we add a defaulting func, we attempt to call it here with a nil
+   // *loginv1alpha1.TokenCredentialRequest. This will do nothing when there is no
+   // defaulting func registered, but it will almost certainly panic if one is added.
+   scheme.Default((*loginv1alpha1.TokenCredentialRequest)(nil))
+
+   // on incoming requests, restore the authenticator API group to the standard group
+   // note that we are responsible for duplicating this logic for every external API version
+   scheme.AddTypeDefaultingFunc(&loginv1alpha1.TokenCredentialRequest{}, func(obj interface{}) {
+       credentialRequest := obj.(*loginv1alpha1.TokenCredentialRequest)
+
+       if credentialRequest.Spec.Authenticator.APIGroup == nil {
+           // force a cache miss because this is an invalid request
+           plog.Debug("invalid token credential request, nil group", "authenticator", credentialRequest.Spec.Authenticator)
+           credentialRequest.Spec.Authenticator.APIGroup = &authenticatorCacheMiss
+           return
+       }
+
+       restoredGroup, ok := groupsuffix.Unreplace(*credentialRequest.Spec.Authenticator.APIGroup, apiGroupSuffix)
+       if !ok {
+           // force a cache miss because this is an invalid request
+           plog.Debug("invalid token credential request, wrong group", "authenticator", credentialRequest.Spec.Authenticator)
+           credentialRequest.Spec.Authenticator.APIGroup = &authenticatorCacheMiss
+           return
+       }
+
+       credentialRequest.Spec.Authenticator.APIGroup = &restoredGroup
+   })
+
+   return scheme
+}
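The `groupsuffix.Replace`/`groupsuffix.Unreplace` pair that this code leans on swaps the default `pinniped.dev` suffix for a custom one (and back again on incoming requests). A simplified, dependency-free sketch of that behavior; the real package in `go.pinniped.dev/internal/groupsuffix` also validates its inputs, so treat this as an illustration only:

```go
package main

import (
	"fmt"
	"strings"
)

const defaultSuffix = "pinniped.dev"

// replace swaps the default API group suffix for newSuffix, e.g.
// login.concierge.pinniped.dev -> login.concierge.walrus.tld.
func replace(apiGroup, newSuffix string) (string, bool) {
	if apiGroup != defaultSuffix && !strings.HasSuffix(apiGroup, "."+defaultSuffix) {
		return "", false // not a group we manage
	}
	return strings.TrimSuffix(apiGroup, defaultSuffix) + newSuffix, true
}

// unreplace restores the default suffix on incoming requests, the inverse of replace.
func unreplace(apiGroup, customSuffix string) (string, bool) {
	if apiGroup != customSuffix && !strings.HasSuffix(apiGroup, "."+customSuffix) {
		return "", false // wrong group: the caller forces an authenticator cache miss
	}
	return strings.TrimSuffix(apiGroup, customSuffix) + defaultSuffix, true
}

func main() {
	g, ok := replace("login.concierge.pinniped.dev", "walrus.tld")
	fmt.Println(g, ok) // login.concierge.walrus.tld true

	g, ok = unreplace("login.concierge.walrus.tld", "walrus.tld")
	fmt.Println(g, ok) // login.concierge.pinniped.dev true
}
```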
@@ -1,4 +1,4 @@
-// Copyright 2020 the Pinniped contributors. All Rights Reserved.
+// Copyright 2020-2021 the Pinniped contributors. All Rights Reserved.
 // SPDX-License-Identifier: Apache-2.0

 package server
@@ -6,12 +6,21 @@ package server
 import (
    "bytes"
    "context"
+   "reflect"
    "strings"
    "testing"

    "github.com/google/go-cmp/cmp"
    "github.com/spf13/cobra"
    "github.com/stretchr/testify/require"
+   corev1 "k8s.io/api/core/v1"
+   metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+   "k8s.io/apimachinery/pkg/runtime"
+   "k8s.io/apimachinery/pkg/runtime/schema"
+
+   loginapi "go.pinniped.dev/generated/1.20/apis/concierge/login"
+   loginv1alpha1 "go.pinniped.dev/generated/1.20/apis/concierge/login/v1alpha1"
+   "go.pinniped.dev/internal/groupsuffix"
 )

 const knownGoodUsage = `
@@ -88,3 +97,167 @@ func TestCommand(t *testing.T) {
        })
    }
 }
+
+func Test_getAggregatedAPIServerScheme(t *testing.T) {
+   // the standard group
+   regularGV := schema.GroupVersion{
+       Group:   "login.concierge.pinniped.dev",
+       Version: "v1alpha1",
+   }
+   regularGVInternal := schema.GroupVersion{
+       Group:   "login.concierge.pinniped.dev",
+       Version: runtime.APIVersionInternal,
+   }
+
+   // the canonical other group
+   otherGV := schema.GroupVersion{
+       Group:   "login.concierge.walrus.tld",
+       Version: "v1alpha1",
+   }
+   otherGVInternal := schema.GroupVersion{
+       Group:   "login.concierge.walrus.tld",
+       Version: runtime.APIVersionInternal,
+   }
+
+   // kube's core internal
+   internalGV := schema.GroupVersion{
+       Group:   "",
+       Version: runtime.APIVersionInternal,
+   }
+
+   tests := []struct {
+       name           string
+       apiGroupSuffix string
+       want           map[schema.GroupVersionKind]reflect.Type
+   }{
+       {
+           name:           "regular api group",
+           apiGroupSuffix: "pinniped.dev",
+           want: map[schema.GroupVersionKind]reflect.Type{
+               // all the types that are in the aggregated API group
+
+               regularGV.WithKind("TokenCredentialRequest"):     reflect.TypeOf(&loginv1alpha1.TokenCredentialRequest{}).Elem(),
+               regularGV.WithKind("TokenCredentialRequestList"): reflect.TypeOf(&loginv1alpha1.TokenCredentialRequestList{}).Elem(),
+
+               regularGVInternal.WithKind("TokenCredentialRequest"):     reflect.TypeOf(&loginapi.TokenCredentialRequest{}).Elem(),
+               regularGVInternal.WithKind("TokenCredentialRequestList"): reflect.TypeOf(&loginapi.TokenCredentialRequestList{}).Elem(),
+
+               regularGV.WithKind("CreateOptions"): reflect.TypeOf(&metav1.CreateOptions{}).Elem(),
+               regularGV.WithKind("DeleteOptions"): reflect.TypeOf(&metav1.DeleteOptions{}).Elem(),
+               regularGV.WithKind("ExportOptions"): reflect.TypeOf(&metav1.ExportOptions{}).Elem(),
+               regularGV.WithKind("GetOptions"):    reflect.TypeOf(&metav1.GetOptions{}).Elem(),
+               regularGV.WithKind("ListOptions"):   reflect.TypeOf(&metav1.ListOptions{}).Elem(),
+               regularGV.WithKind("PatchOptions"):  reflect.TypeOf(&metav1.PatchOptions{}).Elem(),
+               regularGV.WithKind("UpdateOptions"): reflect.TypeOf(&metav1.UpdateOptions{}).Elem(),
+               regularGV.WithKind("WatchEvent"):    reflect.TypeOf(&metav1.WatchEvent{}).Elem(),
+
+               regularGVInternal.WithKind("WatchEvent"): reflect.TypeOf(&metav1.InternalEvent{}).Elem(),
+
+               // the types below this line do not really matter to us because they are in the core group
+
+               internalGV.WithKind("WatchEvent"): reflect.TypeOf(&metav1.InternalEvent{}).Elem(),
+
+               metav1.Unversioned.WithKind("APIGroup"):        reflect.TypeOf(&metav1.APIGroup{}).Elem(),
+               metav1.Unversioned.WithKind("APIGroupList"):    reflect.TypeOf(&metav1.APIGroupList{}).Elem(),
+               metav1.Unversioned.WithKind("APIResourceList"): reflect.TypeOf(&metav1.APIResourceList{}).Elem(),
+               metav1.Unversioned.WithKind("APIVersions"):     reflect.TypeOf(&metav1.APIVersions{}).Elem(),
+               metav1.Unversioned.WithKind("CreateOptions"):   reflect.TypeOf(&metav1.CreateOptions{}).Elem(),
+               metav1.Unversioned.WithKind("DeleteOptions"):   reflect.TypeOf(&metav1.DeleteOptions{}).Elem(),
+               metav1.Unversioned.WithKind("ExportOptions"):   reflect.TypeOf(&metav1.ExportOptions{}).Elem(),
+               metav1.Unversioned.WithKind("GetOptions"):      reflect.TypeOf(&metav1.GetOptions{}).Elem(),
|
||||||
|
metav1.Unversioned.WithKind("ListOptions"): reflect.TypeOf(&metav1.ListOptions{}).Elem(),
|
||||||
|
metav1.Unversioned.WithKind("PatchOptions"): reflect.TypeOf(&metav1.PatchOptions{}).Elem(),
|
||||||
|
metav1.Unversioned.WithKind("Status"): reflect.TypeOf(&metav1.Status{}).Elem(),
|
||||||
|
metav1.Unversioned.WithKind("UpdateOptions"): reflect.TypeOf(&metav1.UpdateOptions{}).Elem(),
|
||||||
|
metav1.Unversioned.WithKind("WatchEvent"): reflect.TypeOf(&metav1.WatchEvent{}).Elem(),
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "other api group",
|
||||||
|
apiGroupSuffix: "walrus.tld",
|
||||||
|
want: map[schema.GroupVersionKind]reflect.Type{
|
||||||
|
// all the types that are in the aggregated API group
|
||||||
|
|
||||||
|
otherGV.WithKind("TokenCredentialRequest"): reflect.TypeOf(&loginv1alpha1.TokenCredentialRequest{}).Elem(),
|
||||||
|
otherGV.WithKind("TokenCredentialRequestList"): reflect.TypeOf(&loginv1alpha1.TokenCredentialRequestList{}).Elem(),
|
||||||
|
|
||||||
|
otherGVInternal.WithKind("TokenCredentialRequest"): reflect.TypeOf(&loginapi.TokenCredentialRequest{}).Elem(),
|
||||||
|
otherGVInternal.WithKind("TokenCredentialRequestList"): reflect.TypeOf(&loginapi.TokenCredentialRequestList{}).Elem(),
|
||||||
|
|
||||||
|
otherGV.WithKind("CreateOptions"): reflect.TypeOf(&metav1.CreateOptions{}).Elem(),
|
||||||
|
otherGV.WithKind("DeleteOptions"): reflect.TypeOf(&metav1.DeleteOptions{}).Elem(),
|
||||||
|
otherGV.WithKind("ExportOptions"): reflect.TypeOf(&metav1.ExportOptions{}).Elem(),
|
||||||
|
otherGV.WithKind("GetOptions"): reflect.TypeOf(&metav1.GetOptions{}).Elem(),
|
||||||
|
otherGV.WithKind("ListOptions"): reflect.TypeOf(&metav1.ListOptions{}).Elem(),
|
||||||
|
otherGV.WithKind("PatchOptions"): reflect.TypeOf(&metav1.PatchOptions{}).Elem(),
|
||||||
|
otherGV.WithKind("UpdateOptions"): reflect.TypeOf(&metav1.UpdateOptions{}).Elem(),
|
||||||
|
otherGV.WithKind("WatchEvent"): reflect.TypeOf(&metav1.WatchEvent{}).Elem(),
|
||||||
|
|
||||||
|
otherGVInternal.WithKind("WatchEvent"): reflect.TypeOf(&metav1.InternalEvent{}).Elem(),
|
||||||
|
|
||||||
|
// the types below this line do not really matter to us because they are in the core group
|
||||||
|
|
||||||
|
internalGV.WithKind("WatchEvent"): reflect.TypeOf(&metav1.InternalEvent{}).Elem(),
|
||||||
|
|
||||||
|
metav1.Unversioned.WithKind("APIGroup"): reflect.TypeOf(&metav1.APIGroup{}).Elem(),
|
||||||
|
metav1.Unversioned.WithKind("APIGroupList"): reflect.TypeOf(&metav1.APIGroupList{}).Elem(),
|
||||||
|
metav1.Unversioned.WithKind("APIResourceList"): reflect.TypeOf(&metav1.APIResourceList{}).Elem(),
|
||||||
|
metav1.Unversioned.WithKind("APIVersions"): reflect.TypeOf(&metav1.APIVersions{}).Elem(),
|
||||||
|
metav1.Unversioned.WithKind("CreateOptions"): reflect.TypeOf(&metav1.CreateOptions{}).Elem(),
|
||||||
|
metav1.Unversioned.WithKind("DeleteOptions"): reflect.TypeOf(&metav1.DeleteOptions{}).Elem(),
|
||||||
|
metav1.Unversioned.WithKind("ExportOptions"): reflect.TypeOf(&metav1.ExportOptions{}).Elem(),
|
||||||
|
metav1.Unversioned.WithKind("GetOptions"): reflect.TypeOf(&metav1.GetOptions{}).Elem(),
|
||||||
|
metav1.Unversioned.WithKind("ListOptions"): reflect.TypeOf(&metav1.ListOptions{}).Elem(),
|
||||||
|
metav1.Unversioned.WithKind("PatchOptions"): reflect.TypeOf(&metav1.PatchOptions{}).Elem(),
|
||||||
|
metav1.Unversioned.WithKind("Status"): reflect.TypeOf(&metav1.Status{}).Elem(),
|
||||||
|
metav1.Unversioned.WithKind("UpdateOptions"): reflect.TypeOf(&metav1.UpdateOptions{}).Elem(),
|
||||||
|
metav1.Unversioned.WithKind("WatchEvent"): reflect.TypeOf(&metav1.WatchEvent{}).Elem(),
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for _, tt := range tests {
|
||||||
|
tt := tt
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
loginConciergeAPIGroup, ok := groupsuffix.Replace("login.concierge.pinniped.dev", tt.apiGroupSuffix)
|
||||||
|
require.True(t, ok)
|
||||||
|
|
||||||
|
scheme := getAggregatedAPIServerScheme(loginConciergeAPIGroup, tt.apiGroupSuffix)
|
||||||
|
require.Equal(t, tt.want, scheme.AllKnownTypes())
|
||||||
|
|
||||||
|
// make a credential request like a client would send
|
||||||
|
authenticationConciergeAPIGroup := "authentication.concierge." + tt.apiGroupSuffix
|
||||||
|
credentialRequest := &loginv1alpha1.TokenCredentialRequest{
|
||||||
|
Spec: loginv1alpha1.TokenCredentialRequestSpec{
|
||||||
|
Authenticator: corev1.TypedLocalObjectReference{
|
||||||
|
APIGroup: &authenticationConciergeAPIGroup,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
// run defaulting on it
|
||||||
|
scheme.Default(credentialRequest)
|
||||||
|
|
||||||
|
// make sure the group is restored if needed
|
||||||
|
require.Equal(t, "authentication.concierge.pinniped.dev", *credentialRequest.Spec.Authenticator.APIGroup)
|
||||||
|
|
||||||
|
// make a credential request in the standard group
|
||||||
|
defaultAuthenticationConciergeAPIGroup := "authentication.concierge.pinniped.dev"
|
||||||
|
defaultCredentialRequest := &loginv1alpha1.TokenCredentialRequest{
|
||||||
|
Spec: loginv1alpha1.TokenCredentialRequestSpec{
|
||||||
|
Authenticator: corev1.TypedLocalObjectReference{
|
||||||
|
APIGroup: &defaultAuthenticationConciergeAPIGroup,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
// run defaulting on it
|
||||||
|
scheme.Default(defaultCredentialRequest)
|
||||||
|
|
||||||
|
if tt.apiGroupSuffix == "pinniped.dev" { // when using the standard group, this should just work
|
||||||
|
require.Equal(t, "authentication.concierge.pinniped.dev", *defaultCredentialRequest.Spec.Authenticator.APIGroup)
|
||||||
|
} else { // when using any other group, this should always be a cache miss
|
||||||
|
require.True(t, strings.HasPrefix(*defaultCredentialRequest.Spec.Authenticator.APIGroup, "_INVALID_API_GROUP_2"))
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
@@ -197,7 +197,7 @@ func TestFromPath(t *testing.T) {
 				credentialIssuer: pinniped-config
 				apiService: pinniped-api
 			`),
-			wantError: "validate apiGroupSuffix: 1 error(s):\n- a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')",
+			wantError: "validate apiGroupSuffix: a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')",
 		},
 	}
 	for _, test := range tests {
@@ -72,7 +72,7 @@ func TestFromPath(t *testing.T) {
 				names:
 				  defaultTLSCertificateSecret: my-secret-name
 			`),
-			wantError: "validate apiGroupSuffix: 1 error(s):\n- a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')",
+			wantError: "validate apiGroupSuffix: a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')",
 		},
 	}
 	for _, test := range tests {
@@ -6,7 +6,6 @@ package authncache
 
 import (
 	"context"
-	"fmt"
 	"sort"
 	"sync"
 
@@ -15,13 +14,12 @@ import (
 	"k8s.io/klog/v2"
 
 	loginapi "go.pinniped.dev/generated/1.20/apis/concierge/login"
+	"go.pinniped.dev/internal/constable"
 	"go.pinniped.dev/internal/plog"
 )
 
-var (
 // ErrNoSuchAuthenticator is returned by Cache.AuthenticateTokenCredentialRequest() when the requested authenticator is not configured.
-	ErrNoSuchAuthenticator = fmt.Errorf("no such authenticator")
+const ErrNoSuchAuthenticator = constable.Error("no such authenticator")
-)
 
 // Cache implements the authenticator.Token interface by multiplexing across a dynamic set of authenticators
 // loaded from authenticator resources.
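The hunk above swaps a `var` sentinel built with `fmt.Errorf` for a `const` of Pinniped's internal `constable.Error` type. A minimal sketch of why a string-based error type makes constant sentinels possible (the `Error` type definition here is an assumption about `constable`'s shape, not copied from its source):

```go
package main

import "fmt"

// Error is a string type that implements the error interface. Because its
// underlying type is a constant-expression-friendly string, sentinel errors
// built from it can be declared const. (Assumed shape of
// go.pinniped.dev/internal/constable.Error.)
type Error string

func (e Error) Error() string { return string(e) }

// A const sentinel can never be reassigned at runtime, unlike
// `var ErrNoSuchAuthenticator = fmt.Errorf("no such authenticator")`.
const ErrNoSuchAuthenticator = Error("no such authenticator")

func main() {
	var err error = ErrNoSuchAuthenticator
	fmt.Println(err)                           // no such authenticator
	fmt.Println(err == ErrNoSuchAuthenticator) // true: comparable like any sentinel
}
```

A `const` sentinel also avoids the (small) risk that another package mutates a package-level `var` error, which is why linters such as `gochecknoglobals` prefer it.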
@@ -12,6 +12,7 @@ import (
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/labels"
 	"k8s.io/apimachinery/pkg/util/clock"
+	"k8s.io/apimachinery/pkg/util/errors"
 	"k8s.io/client-go/util/retry"
 	"k8s.io/klog/v2"
 
@@ -20,7 +21,6 @@ import (
 	configinformers "go.pinniped.dev/generated/1.20/client/supervisor/informers/externalversions/config/v1alpha1"
 	pinnipedcontroller "go.pinniped.dev/internal/controller"
 	"go.pinniped.dev/internal/controllerlib"
-	"go.pinniped.dev/internal/multierror"
 	"go.pinniped.dev/internal/oidc/provider"
 	"go.pinniped.dev/internal/plog"
 )
@@ -107,7 +107,7 @@ func (c *federationDomainWatcherController) Sync(ctx controllerlib.Context) erro
 		}
 	}
 
-	errs := multierror.New()
+	var errs []error
 
 	federationDomainIssuers := make([]*provider.FederationDomainIssuer, 0)
 	for _, federationDomain := range federationDomains {
@@ -123,7 +123,7 @@ func (c *federationDomainWatcherController) Sync(ctx controllerlib.Context) erro
 				configv1alpha1.DuplicateFederationDomainStatusCondition,
 				"Duplicate issuer: "+federationDomain.Spec.Issuer,
 			); err != nil {
-				errs.Add(fmt.Errorf("could not update status: %w", err))
+				errs = append(errs, fmt.Errorf("could not update status: %w", err))
 			}
 			continue
 		}
@@ -138,7 +138,7 @@ func (c *federationDomainWatcherController) Sync(ctx controllerlib.Context) erro
 				configv1alpha1.SameIssuerHostMustUseSameSecretFederationDomainStatusCondition,
 				"Issuers with the same DNS hostname (address not including port) must use the same secretName: "+issuerURLToHostnameKey(issuerURL),
 			); err != nil {
-				errs.Add(fmt.Errorf("could not update status: %w", err))
+				errs = append(errs, fmt.Errorf("could not update status: %w", err))
 			}
 			continue
 		}
@@ -152,7 +152,7 @@ func (c *federationDomainWatcherController) Sync(ctx controllerlib.Context) erro
 				configv1alpha1.InvalidFederationDomainStatusCondition,
 				"Invalid: "+err.Error(),
 			); err != nil {
-				errs.Add(fmt.Errorf("could not update status: %w", err))
+				errs = append(errs, fmt.Errorf("could not update status: %w", err))
 			}
 			continue
 		}
@@ -164,7 +164,7 @@ func (c *federationDomainWatcherController) Sync(ctx controllerlib.Context) erro
 				configv1alpha1.SuccessFederationDomainStatusCondition,
 				"Provider successfully created",
 			); err != nil {
-				errs.Add(fmt.Errorf("could not update status: %w", err))
+				errs = append(errs, fmt.Errorf("could not update status: %w", err))
 				continue
 			}
 
@@ -173,7 +173,7 @@ func (c *federationDomainWatcherController) Sync(ctx controllerlib.Context) erro
 
 	c.providerSetter.SetProviders(federationDomainIssuers...)
 
-	return errs.ErrOrNil()
+	return errors.NewAggregate(errs)
 }
 
 func (c *federationDomainWatcherController) updateStatus(
@@ -6,6 +6,7 @@ package supervisorconfig
 import (
 	"context"
 	"errors"
+	"fmt"
 	"net/url"
 	"reflect"
 	"sync"
@@ -320,7 +321,7 @@ func TestSync(t *testing.T) {
 			it("sets the provider that it could actually update in the API", func() {
 				startInformersAndController()
 				err := controllerlib.TestSync(t, subject, *syncContext)
-				r.EqualError(err, "1 error(s):\n- could not update status: some update error")
+				r.EqualError(err, "could not update status: some update error")
 
 				provider1, err := provider.NewFederationDomainIssuer(federationDomain1.Spec.Issuer)
 				r.NoError(err)
@@ -339,7 +340,7 @@ func TestSync(t *testing.T) {
 			it("returns an error", func() {
 				startInformersAndController()
 				err := controllerlib.TestSync(t, subject, *syncContext)
-				r.EqualError(err, "1 error(s):\n- could not update status: some update error")
+				r.EqualError(err, "could not update status: some update error")
 
 				federationDomain1.Status.Status = v1alpha1.SuccessFederationDomainStatusCondition
 				federationDomain1.Status.Message = "Provider successfully created"
@@ -455,7 +456,7 @@ func TestSync(t *testing.T) {
 			it("returns an error", func() {
 				startInformersAndController()
 				err := controllerlib.TestSync(t, subject, *syncContext)
-				r.EqualError(err, "1 error(s):\n- could not update status: some update error")
+				r.EqualError(err, "could not update status: some update error")
 
 				federationDomain.Status.Status = v1alpha1.SuccessFederationDomainStatusCondition
 				federationDomain.Status.Message = "Provider successfully created"
@@ -491,7 +492,7 @@ func TestSync(t *testing.T) {
 			it("returns the get error", func() {
 				startInformersAndController()
 				err := controllerlib.TestSync(t, subject, *syncContext)
-				r.EqualError(err, "1 error(s):\n- could not update status: get failed: some get error")
+				r.EqualError(err, "could not update status: get failed: some get error")
 
 				federationDomain.Status.Status = v1alpha1.SuccessFederationDomainStatusCondition
 				federationDomain.Status.Message = "Provider successfully created"
@@ -606,7 +607,7 @@ func TestSync(t *testing.T) {
 			it("sets the provider that it could actually update in the API", func() {
 				startInformersAndController()
 				err := controllerlib.TestSync(t, subject, *syncContext)
-				r.EqualError(err, "1 error(s):\n- could not update status: some update error")
+				r.EqualError(err, "could not update status: some update error")
 
 				validProvider, err := provider.NewFederationDomainIssuer(validFederationDomain.Spec.Issuer)
 				r.NoError(err)
@@ -623,7 +624,7 @@ func TestSync(t *testing.T) {
 			it("returns an error", func() {
 				startInformersAndController()
 				err := controllerlib.TestSync(t, subject, *syncContext)
-				r.EqualError(err, "1 error(s):\n- could not update status: some update error")
+				r.EqualError(err, "could not update status: some update error")
 
 				validFederationDomain.Status.Status = v1alpha1.SuccessFederationDomainStatusCondition
 				validFederationDomain.Status.Message = "Provider successfully created"
@@ -761,22 +762,20 @@ func TestSync(t *testing.T) {
 			})
 
 			when("we cannot talk to the API", func() {
+				var count int
 				it.Before(func() {
 					pinnipedAPIClient.PrependReactor(
 						"get",
 						"federationdomains",
 						func(_ coretesting.Action) (bool, runtime.Object, error) {
-							return true, nil, errors.New("some get error")
+							count++
+							return true, nil, fmt.Errorf("some get error %d", count)
 						},
 					)
 				})
 
 				it("returns the get errors", func() {
-					expectedError := here.Doc(`
-						3 error(s):
-						- could not update status: get failed: some get error
-						- could not update status: get failed: some get error
-						- could not update status: get failed: some get error`)
+					expectedError := here.Doc(`[could not update status: get failed: some get error 1, could not update status: get failed: some get error 2, could not update status: get failed: some get error 3]`)
 					startInformersAndController()
 					err := controllerlib.TestSync(t, subject, *syncContext)
 					r.EqualError(err, expectedError)
@@ -947,23 +946,20 @@ func TestSync(t *testing.T) {
 			})
 
 			when("we cannot talk to the API", func() {
+				var count int
 				it.Before(func() {
 					pinnipedAPIClient.PrependReactor(
 						"get",
 						"federationdomains",
 						func(_ coretesting.Action) (bool, runtime.Object, error) {
-							return true, nil, errors.New("some get error")
+							count++
+							return true, nil, fmt.Errorf("some get error %d", count)
 						},
 					)
 				})
 
 				it("returns the get errors", func() {
-					expectedError := here.Doc(`
-						4 error(s):
-						- could not update status: get failed: some get error
-						- could not update status: get failed: some get error
-						- could not update status: get failed: some get error
-						- could not update status: get failed: some get error`)
+					expectedError := here.Doc(`[could not update status: get failed: some get error 1, could not update status: get failed: some get error 2, could not update status: get failed: some get error 3, could not update status: get failed: some get error 4]`)
 					startInformersAndController()
 					err := controllerlib.TestSync(t, subject, *syncContext)
 					r.EqualError(err, expectedError)
@@ -9,11 +9,11 @@ import (
 
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/runtime/schema"
+	"k8s.io/apimachinery/pkg/util/errors"
 	"k8s.io/apimachinery/pkg/util/validation"
 
 	"go.pinniped.dev/internal/constable"
 	"go.pinniped.dev/internal/kubeclient"
-	"go.pinniped.dev/internal/multierror"
 )
 
 const (
@@ -63,7 +63,7 @@ func New(apiGroupSuffix string) kubeclient.Middleware {
 
 		kubeclient.MiddlewareFunc(func(_ context.Context, rt kubeclient.RoundTrip) {
 			// always unreplace owner refs with apiGroupSuffix because we can consume those objects across all verbs
-			rt.MutateResponse(mutateOwnerRefs(unreplace, apiGroupSuffix))
+			rt.MutateResponse(mutateOwnerRefs(Unreplace, apiGroupSuffix))
 		}),
 	}
 }
@@ -115,7 +115,8 @@ func Replace(baseAPIGroup, apiGroupSuffix string) (string, bool) {
 	return strings.TrimSuffix(baseAPIGroup, pinnipedDefaultSuffix) + apiGroupSuffix, true
 }
 
-func unreplace(baseAPIGroup, apiGroupSuffix string) (string, bool) {
+// Unreplace is like performing an undo of Replace().
+func Unreplace(baseAPIGroup, apiGroupSuffix string) (string, bool) {
 	if !strings.HasSuffix(baseAPIGroup, "."+apiGroupSuffix) {
 		return "", false
 	}
@@ -126,17 +127,17 @@ func Unreplace(baseAPIGroup, apiGroupSuffix string) (string, bool) {
 // makes sure that the provided apiGroupSuffix is a valid DNS-1123 subdomain with at least one dot,
 // to match Kubernetes behavior.
 func Validate(apiGroupSuffix string) error {
-	err := multierror.New()
+	var errs []error //nolint: prealloc
 
 	if len(strings.Split(apiGroupSuffix, ".")) < 2 {
-		err.Add(constable.Error("must contain '.'"))
+		errs = append(errs, constable.Error("must contain '.'"))
 	}
 
 	errorStrings := validation.IsDNS1123Subdomain(apiGroupSuffix)
 	for _, errorString := range errorStrings {
 		errorString := errorString
-		err.Add(constable.Error(errorString))
+		errs = append(errs, constable.Error(errorString))
 	}
 
-	return err.ErrOrNil()
+	return errors.NewAggregate(errs)
 }
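For context on the newly exported `Unreplace`, here is a stdlib sketch of the suffix round-trip reconstructed from the hunks and test expectations in this commit. Only `Replace`'s return statement and `Unreplace`'s guard appear in the diff, so the guard clause in `replace` below is an assumption inferred from `TestReplaceError`, and the lower-case names mark this as illustrative rather than the package's real code:

```go
package main

import (
	"fmt"
	"strings"
)

const pinnipedDefaultSuffix = "pinniped.dev"

// replace swaps the default "pinniped.dev" API group suffix for a custom one.
// The suffix check is an assumption based on TestReplaceError, which expects
// ("", false) for groups that do not end in ".pinniped.dev".
func replace(baseAPIGroup, apiGroupSuffix string) (string, bool) {
	if !strings.HasSuffix(baseAPIGroup, "."+pinnipedDefaultSuffix) {
		return "", false
	}
	return strings.TrimSuffix(baseAPIGroup, pinnipedDefaultSuffix) + apiGroupSuffix, true
}

// unreplace undoes replace: it restores the default suffix, or reports false
// when the group does not end in the custom suffix.
func unreplace(baseAPIGroup, apiGroupSuffix string) (string, bool) {
	if !strings.HasSuffix(baseAPIGroup, "."+apiGroupSuffix) {
		return "", false
	}
	return strings.TrimSuffix(baseAPIGroup, apiGroupSuffix) + pinnipedDefaultSuffix, true
}

func main() {
	s, _ := replace("login.concierge.pinniped.dev", "walrus.tld")
	fmt.Println(s) // login.concierge.walrus.tld

	s, _ = unreplace(s, "walrus.tld")
	fmt.Println(s) // login.concierge.pinniped.dev
}
```

The middleware uses the pair symmetrically: `Replace` on requests going to the API server, `Unreplace` on owner references coming back in responses.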
@@ -441,10 +441,12 @@ func TestMiddlware(t *testing.T) {
 }
 
 func TestReplaceError(t *testing.T) {
-	_, ok := Replace("bad-suffix-that-doesnt-end-in-pinniped-dot-dev", "shouldnt-matter.com")
+	s, ok := Replace("bad-suffix-that-doesnt-end-in-pinniped-dot-dev", "shouldnt-matter.com")
+	require.Equal(t, "", s)
 	require.False(t, ok)
 
-	_, ok = Replace("bad-suffix-that-end-in.prefixed-pinniped.dev", "shouldnt-matter.com")
+	s, ok = Replace("bad-suffix-that-end-in.prefixed-pinniped.dev", "shouldnt-matter.com")
+	require.Equal(t, "", s)
 	require.False(t, ok)
 }
 
@@ -452,6 +454,27 @@ func TestReplaceSuffix(t *testing.T) {
 	s, ok := Replace("something.pinniped.dev.something-else.pinniped.dev", "tuna.io")
 	require.Equal(t, "something.pinniped.dev.something-else.tuna.io", s)
 	require.True(t, ok)
+
+	// When the replace wasn't actually needed, it still returns true.
+	s, ok = Unreplace("something.pinniped.dev", "pinniped.dev")
+	require.Equal(t, "something.pinniped.dev", s)
+	require.True(t, ok)
+}
+
+func TestUnreplaceSuffix(t *testing.T) {
+	s, ok := Unreplace("something.pinniped.dev.something-else.tuna.io", "tuna.io")
+	require.Equal(t, "something.pinniped.dev.something-else.pinniped.dev", s)
+	require.True(t, ok)
+
+	// When the unreplace wasn't actually needed, it still returns true.
+	s, ok = Unreplace("something.pinniped.dev", "pinniped.dev")
+	require.Equal(t, "something.pinniped.dev", s)
+	require.True(t, ok)
+
+	// When the unreplace was needed but did not work, return false.
+	s, ok = Unreplace("something.pinniped.dev.something-else.tuna.io", "salmon.io")
+	require.Equal(t, "", s)
+	require.False(t, ok)
 }
 
 func TestValidate(t *testing.T) {
@@ -464,19 +487,19 @@ func TestValidate(t *testing.T) {
 		},
 		{
 			apiGroupSuffix:  "no-dots",
-			wantErrorPrefix: "1 error(s):\n- must contain '.'",
+			wantErrorPrefix: "must contain '.'",
 		},
 		{
 			apiGroupSuffix:  ".starts.with.dot",
-			wantErrorPrefix: "1 error(s):\n- a lowercase RFC 1123 subdomain must consist",
+			wantErrorPrefix: "a lowercase RFC 1123 subdomain must consist",
 		},
 		{
 			apiGroupSuffix:  "ends.with.dot.",
-			wantErrorPrefix: "1 error(s):\n- a lowercase RFC 1123 subdomain must consist",
+			wantErrorPrefix: "a lowercase RFC 1123 subdomain must consist",
 		},
 		{
 			apiGroupSuffix:  ".multiple-issues.because-this-string-is-longer-than-the-253-character-limit-of-a-dns-1123-identifier-" + chars(253),
-			wantErrorPrefix: "2 error(s):\n- must be no more than 253 characters\n- a lowercase RFC 1123 subdomain must consist",
+			wantErrorPrefix: "[must be no more than 253 characters, a lowercase RFC 1123 subdomain must consist",
 		},
 	}
 	for _, test := range tests {
@@ -1,63 +0,0 @@
-// Copyright 2020 the Pinniped contributors. All Rights Reserved.
-// SPDX-License-Identifier: Apache-2.0
-
-// Package multierror provides a type that can translate multiple errors into a Go error interface.
-//
-// A common use of this package is as follows.
-//   errs := multierror.New()
-//   for i := range stuff {
-//     err := doThing(i)
-//     errs.Add(err)
-//   }
-//   return errs.ErrOrNil()
-package multierror
-
-import (
-	"fmt"
-	"strings"
-)
-
-// formatFunc is a function used to format the string representing of a MultiError. It is used in the
-// Error() function.
-//
-// It is marked out here to indicate how we could potentially extend MultiError in the future to
-// support more styles of converting from a list of error's to a string.
-//nolint: gochecknoglobals
-var formatFunc func(errs MultiError, sb *strings.Builder) = defaultFormat
-
-// MultiError holds a list of error's, that could potentially be empty.
-//
-// Use New() to create a MultiError.
-type MultiError []error
-
-// New returns an empty MultiError.
-func New() MultiError {
-	return make([]error, 0)
-}
-
-// Add adds an error to the MultiError. The provided err must not be nil.
-func (m *MultiError) Add(err error) {
-	*m = append(*m, err)
-}
-
-// Error implements the error.Error() interface method.
-func (m MultiError) Error() string {
-	sb := strings.Builder{}
-	formatFunc(m, &sb)
-	return sb.String()
-}
-
-// ErrOrNil returns either nil, if there are no errors in this MultiError, or an error, otherwise.
-func (m MultiError) ErrOrNil() error {
-	if len(m) > 0 {
-		return m
-	}
-	return nil
-}
-
-func defaultFormat(errs MultiError, sb *strings.Builder) {
-	_, _ = fmt.Fprintf(sb, "%d error(s):", len(errs))
-	for _, err := range errs {
-		_, _ = fmt.Fprintf(sb, "\n- %s", err.Error())
-	}
-}
@@ -1,24 +0,0 @@
-// Copyright 2020 the Pinniped contributors. All Rights Reserved.
-// SPDX-License-Identifier: Apache-2.0
-
-package multierror
-
-import (
-	"errors"
-	"testing"
-
-	"github.com/stretchr/testify/require"
-)
-
-func TestMultierror(t *testing.T) {
-	errs := New()
-
-	require.Nil(t, errs.ErrOrNil())
-
-	errs.Add(errors.New("some error 1"))
-	require.EqualError(t, errs.ErrOrNil(), "1 error(s):\n- some error 1")
-
-	errs.Add(errors.New("some error 2"))
-	errs.Add(errors.New("some error 3"))
-	require.EqualError(t, errs.ErrOrNil(), "3 error(s):\n- some error 1\n- some error 2\n- some error 3")
-}
@@ -11,8 +11,10 @@ import (
 	"time"

 	apierrors "k8s.io/apimachinery/pkg/api/errors"
+	metainternalversion "k8s.io/apimachinery/pkg/apis/meta/internalversion"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/runtime"
+	"k8s.io/apimachinery/pkg/runtime/schema"
 	"k8s.io/apimachinery/pkg/util/validation/field"
 	"k8s.io/apiserver/pkg/authentication/user"
 	"k8s.io/apiserver/pkg/registry/rest"
@@ -32,16 +34,18 @@ type TokenCredentialRequestAuthenticator interface {
 	AuthenticateTokenCredentialRequest(ctx context.Context, req *loginapi.TokenCredentialRequest) (user.Info, error)
 }

-func NewREST(authenticator TokenCredentialRequestAuthenticator, issuer CertIssuer) *REST {
+func NewREST(authenticator TokenCredentialRequestAuthenticator, issuer CertIssuer, resource schema.GroupResource) *REST {
 	return &REST{
 		authenticator: authenticator,
 		issuer:        issuer,
+		tableConvertor: rest.NewDefaultTableConvertor(resource),
 	}
 }

 type REST struct {
 	authenticator TokenCredentialRequestAuthenticator
 	issuer        CertIssuer
+	tableConvertor rest.TableConvertor
 }

 // Assert that our *REST implements all the optional interfaces that we expect it to implement.
@@ -51,12 +55,30 @@ var _ interface {
 	rest.Scoper
 	rest.Storage
 	rest.CategoriesProvider
+	rest.Lister
 } = (*REST)(nil)

 func (*REST) New() runtime.Object {
 	return &loginapi.TokenCredentialRequest{}
 }

+func (*REST) NewList() runtime.Object {
+	return &loginapi.TokenCredentialRequestList{}
+}
+
+func (*REST) List(_ context.Context, _ *metainternalversion.ListOptions) (runtime.Object, error) {
+	return &loginapi.TokenCredentialRequestList{
+		ListMeta: metav1.ListMeta{
+			ResourceVersion: "0", // this resource version means "from the API server cache"
+		},
+		Items: []loginapi.TokenCredentialRequest{}, // avoid sending nil items list
+	}, nil
+}
+
+func (r *REST) ConvertToTable(ctx context.Context, obj runtime.Object, tableOptions runtime.Object) (*metav1.Table, error) {
+	return r.tableConvertor.ConvertToTable(ctx, obj, tableOptions)
+}
+
 func (*REST) NamespaceScoped() bool {
 	return true
 }
@@ -17,6 +17,7 @@ import (
 	apierrors "k8s.io/apimachinery/pkg/api/errors"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/runtime"
+	"k8s.io/apimachinery/pkg/runtime/schema"
 	"k8s.io/apiserver/pkg/authentication/user"
 	genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
 	"k8s.io/apiserver/pkg/registry/rest"
@@ -28,11 +29,34 @@ import (
 )

 func TestNew(t *testing.T) {
-	r := NewREST(nil, nil)
+	r := NewREST(nil, nil, schema.GroupResource{Group: "bears", Resource: "panda"})
 	require.NotNil(t, r)
 	require.True(t, r.NamespaceScoped())
 	require.Equal(t, []string{"pinniped"}, r.Categories())
 	require.IsType(t, &loginapi.TokenCredentialRequest{}, r.New())
+	require.IsType(t, &loginapi.TokenCredentialRequestList{}, r.NewList())
+
+	ctx := context.Background()
+
+	// check the simple invariants of our no-op list
+	list, err := r.List(ctx, nil)
+	require.NoError(t, err)
+	require.NotNil(t, list)
+	require.IsType(t, &loginapi.TokenCredentialRequestList{}, list)
+	require.Equal(t, "0", list.(*loginapi.TokenCredentialRequestList).ResourceVersion)
+	require.NotNil(t, list.(*loginapi.TokenCredentialRequestList).Items)
+	require.Len(t, list.(*loginapi.TokenCredentialRequestList).Items, 0)
+
+	// make sure we can turn lists into tables if needed
+	table, err := r.ConvertToTable(ctx, list, nil)
+	require.NoError(t, err)
+	require.NotNil(t, table)
+	require.Equal(t, "0", table.ResourceVersion)
+	require.Nil(t, table.Rows)
+
+	// exercise group resource - force error by passing a runtime.Object that does not have an embedded object meta
+	_, err = r.ConvertToTable(ctx, &metav1.APIGroup{}, nil)
+	require.Error(t, err, "the resource panda.bears does not support being converted to a Table")
 }

 func TestCreate(t *testing.T) {
@@ -73,7 +97,7 @@ func TestCreate(t *testing.T) {
 		5*time.Minute,
 	).Return([]byte("test-cert"), []byte("test-key"), nil)

-	storage := NewREST(requestAuthenticator, issuer)
+	storage := NewREST(requestAuthenticator, issuer, schema.GroupResource{})

 	response, err := callCreate(context.Background(), storage, req)

@@ -112,7 +136,7 @@ func TestCreate(t *testing.T) {
 		IssuePEM(gomock.Any(), gomock.Any(), gomock.Any()).
 		Return(nil, nil, fmt.Errorf("some certificate authority error"))

-	storage := NewREST(requestAuthenticator, issuer)
+	storage := NewREST(requestAuthenticator, issuer, schema.GroupResource{})

 	response, err := callCreate(context.Background(), storage, req)
 	requireSuccessfulResponseWithAuthenticationFailureMessage(t, err, response)
@@ -125,7 +149,7 @@ func TestCreate(t *testing.T) {
 	requestAuthenticator := credentialrequestmocks.NewMockTokenCredentialRequestAuthenticator(ctrl)
 	requestAuthenticator.EXPECT().AuthenticateTokenCredentialRequest(gomock.Any(), req).Return(nil, nil)

-	storage := NewREST(requestAuthenticator, nil)
+	storage := NewREST(requestAuthenticator, nil, schema.GroupResource{})

 	response, err := callCreate(context.Background(), storage, req)

@@ -140,7 +164,7 @@ func TestCreate(t *testing.T) {
 	requestAuthenticator.EXPECT().AuthenticateTokenCredentialRequest(gomock.Any(), req).
 		Return(nil, errors.New("some webhook error"))

-	storage := NewREST(requestAuthenticator, nil)
+	storage := NewREST(requestAuthenticator, nil, schema.GroupResource{})

 	response, err := callCreate(context.Background(), storage, req)

@@ -155,7 +179,7 @@ func TestCreate(t *testing.T) {
 	requestAuthenticator.EXPECT().AuthenticateTokenCredentialRequest(gomock.Any(), req).
 		Return(&user.DefaultInfo{Name: ""}, nil)

-	storage := NewREST(requestAuthenticator, nil)
+	storage := NewREST(requestAuthenticator, nil, schema.GroupResource{})

 	response, err := callCreate(context.Background(), storage, req)

@@ -165,7 +189,7 @@ func TestCreate(t *testing.T) {

 	it("CreateFailsWhenGivenTheWrongInputType", func() {
 		notACredentialRequest := runtime.Unknown{}
-		response, err := NewREST(nil, nil).Create(
+		response, err := NewREST(nil, nil, schema.GroupResource{}).Create(
 			genericapirequest.NewContext(),
 			&notACredentialRequest,
 			rest.ValidateAllObjectFunc,
@@ -176,7 +200,7 @@ func TestCreate(t *testing.T) {
 	})

 	it("CreateFailsWhenTokenValueIsEmptyInRequest", func() {
-		storage := NewREST(nil, nil)
+		storage := NewREST(nil, nil, schema.GroupResource{})
 		response, err := callCreate(context.Background(), storage, credentialRequest(loginapi.TokenCredentialRequestSpec{
 			Token: "",
 		}))
@@ -187,7 +211,7 @@ func TestCreate(t *testing.T) {
 	})

 	it("CreateFailsWhenValidationFails", func() {
-		storage := NewREST(nil, nil)
+		storage := NewREST(nil, nil, schema.GroupResource{})
 		response, err := storage.Create(
 			context.Background(),
 			validCredentialRequest(),
@@ -207,7 +231,7 @@ func TestCreate(t *testing.T) {
 	requestAuthenticator.EXPECT().AuthenticateTokenCredentialRequest(gomock.Any(), req.DeepCopy()).
 		Return(&user.DefaultInfo{Name: "test-user"}, nil)

-	storage := NewREST(requestAuthenticator, successfulIssuer(ctrl))
+	storage := NewREST(requestAuthenticator, successfulIssuer(ctrl), schema.GroupResource{})
 	response, err := storage.Create(
 		context.Background(),
 		req,
@@ -228,7 +252,7 @@ func TestCreate(t *testing.T) {
 	requestAuthenticator.EXPECT().AuthenticateTokenCredentialRequest(gomock.Any(), req.DeepCopy()).
 		Return(&user.DefaultInfo{Name: "test-user"}, nil)

-	storage := NewREST(requestAuthenticator, successfulIssuer(ctrl))
+	storage := NewREST(requestAuthenticator, successfulIssuer(ctrl), schema.GroupResource{})
 	validationFunctionWasCalled := false
 	var validationFunctionSawTokenValue string
 	response, err := storage.Create(
@@ -248,7 +272,7 @@ func TestCreate(t *testing.T) {
 	})

 	it("CreateFailsWhenRequestOptionsDryRunIsNotEmpty", func() {
-		response, err := NewREST(nil, nil).Create(
+		response, err := NewREST(nil, nil, schema.GroupResource{}).Create(
 			genericapirequest.NewContext(),
 			validCredentialRequest(),
 			rest.ValidateAllObjectFunc,
@@ -30,6 +30,7 @@ import (
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/runtime"
+	"k8s.io/apimachinery/pkg/util/errors"
 	kubescheme "k8s.io/client-go/kubernetes/scheme"
 	restclient "k8s.io/client-go/rest"
 	aggregatorclientscheme "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset/scheme"
@@ -37,7 +38,6 @@ import (
 	pinnipedconciergeclientsetscheme "go.pinniped.dev/generated/1.20/client/concierge/clientset/versioned/scheme"
 	pinnipedsupervisorclientsetscheme "go.pinniped.dev/generated/1.20/client/supervisor/clientset/versioned/scheme"
 	"go.pinniped.dev/internal/httputil/httperr"
-	"go.pinniped.dev/internal/multierror"
 )

 // Start starts an httptest.Server (with TLS) that pretends to be a Kube API server.
@@ -107,7 +107,7 @@ func decodeObj(r *http.Request) (runtime.Object, error) {
 	}

 	var obj runtime.Object
-	multiErr := multierror.New()
+	var errs []error //nolint: prealloc
 	codecsThatWeUseInOurCode := []runtime.NegotiatedSerializer{
 		kubescheme.Codecs,
 		aggregatorclientscheme.Codecs,
@@ -119,9 +119,9 @@ func decodeObj(r *http.Request) (runtime.Object, error) {
 		if err == nil {
 			return obj, nil
 		}
-		multiErr.Add(err)
+		errs = append(errs, err)
 	}
-	return nil, multiErr.ErrOrNil()
+	return nil, errors.NewAggregate(errs)
 }

 func tryDecodeObj(
@@ -12,7 +12,7 @@ import (
 	"net/url"
 	"strings"

-	corev1 "k8s.io/api/core/v1"
+	v1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	clientauthenticationv1beta1 "k8s.io/client-go/pkg/apis/clientauthentication/v1beta1"
 	"k8s.io/client-go/tools/clientcmd"
@@ -35,7 +35,8 @@ type Option func(*Client) error
 // Client is a configuration for talking to the Pinniped concierge.
 type Client struct {
 	namespace         string
-	authenticator     *corev1.TypedLocalObjectReference
+	authenticatorName string
+	authenticatorKind string
 	caBundle          string
 	endpoint          *url.URL
 	apiGroupSuffix    string
@@ -55,18 +56,15 @@ func WithAuthenticator(authType, authName string) Option {
 		if authName == "" {
 			return fmt.Errorf("authenticator name must not be empty")
 		}
-		authenticator := corev1.TypedLocalObjectReference{Name: authName}
+		c.authenticatorName = authName
 		switch strings.ToLower(authType) {
 		case "webhook":
-			authenticator.APIGroup = &auth1alpha1.SchemeGroupVersion.Group
-			authenticator.Kind = "WebhookAuthenticator"
+			c.authenticatorKind = "WebhookAuthenticator"
 		case "jwt":
-			authenticator.APIGroup = &auth1alpha1.SchemeGroupVersion.Group
-			authenticator.Kind = "JWTAuthenticator"
+			c.authenticatorKind = "JWTAuthenticator"
 		default:
 			return fmt.Errorf(`invalid authenticator type: %q, supported values are "webhook" and "jwt"`, authType)
 		}
-		c.authenticator = &authenticator
 		return nil
 	}
 }
@@ -133,7 +131,7 @@ func New(opts ...Option) (*Client, error) {
 			return nil, err
 		}
 	}
-	if c.authenticator == nil {
+	if c.authenticatorName == "" {
 		return nil, fmt.Errorf("WithAuthenticator must be specified")
 	}
 	if c.endpoint == nil {
@@ -180,13 +178,18 @@ func (c *Client) ExchangeToken(ctx context.Context, token string) (*clientauthen
 	if err != nil {
 		return nil, err
 	}
+	replacedAPIGroupName, _ := groupsuffix.Replace(auth1alpha1.SchemeGroupVersion.Group, c.apiGroupSuffix)
 	resp, err := clientset.LoginV1alpha1().TokenCredentialRequests(c.namespace).Create(ctx, &loginv1alpha1.TokenCredentialRequest{
 		ObjectMeta: metav1.ObjectMeta{
 			Namespace: c.namespace,
 		},
 		Spec: loginv1alpha1.TokenCredentialRequestSpec{
 			Token: token,
-			Authenticator: *c.authenticator,
+			Authenticator: v1.TypedLocalObjectReference{
+				APIGroup: &replacedAPIGroupName,
+				Kind:     c.authenticatorKind,
+				Name:     c.authenticatorName,
+			},
 		},
 	}, metav1.CreateOptions{})
 	if err != nil {
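The groupsuffix.Replace call introduced above rewrites the default API group name to carry the configured suffix. A rough approximation with plain string handling (a sketch under the assumption that Replace swaps a trailing "pinniped.dev" for the custom suffix; the real helper lives in the project's groupsuffix package and does more validation):

```go
package main

import (
	"fmt"
	"strings"
)

// defaultSuffix is Pinniped's default API group suffix.
const defaultSuffix = "pinniped.dev"

// replaceGroupSuffix swaps the trailing default suffix of an API group name
// for a custom one, e.g. "authentication.concierge.pinniped.dev" with suffix
// "suffix.com" becomes "authentication.concierge.suffix.com". It reports
// whether a replacement happened. This only approximates groupsuffix.Replace.
func replaceGroupSuffix(group, suffix string) (string, bool) {
	if !strings.HasSuffix(group, defaultSuffix) {
		return group, false
	}
	return strings.TrimSuffix(group, defaultSuffix) + suffix, true
}

func main() {
	g, ok := replaceGroupSuffix("authentication.concierge.pinniped.dev", "suffix.com")
	fmt.Println(g, ok)
}
```

With this shape, the client can keep compiled-in group constants and translate them once, just before building the request, which is what ExchangeToken does for the authenticator reference.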
@@ -21,6 +21,7 @@ import (
 	loginv1alpha1 "go.pinniped.dev/generated/1.20/apis/concierge/login/v1alpha1"
 	"go.pinniped.dev/internal/certauthority"
+	"go.pinniped.dev/internal/here"
 	"go.pinniped.dev/internal/testutil"
 )

@@ -111,7 +112,7 @@ func TestNew(t *testing.T) {
 				WithEndpoint("https://example.com"),
 				WithAPIGroupSuffix(""),
 			},
-			wantErr: "invalid api group suffix: 2 error(s):\n- must contain '.'\n- a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')",
+			wantErr: "invalid api group suffix: [must contain '.', a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')]",
 		},
 		{
 			name: "invalid api group suffix",
@@ -120,7 +121,7 @@ func TestNew(t *testing.T) {
 				WithEndpoint("https://example.com"),
 				WithAPIGroupSuffix(".starts.with.dot"),
 			},
-			wantErr: "invalid api group suffix: 1 error(s):\n- a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')",
+			wantErr: "invalid api group suffix: a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')",
 		},
 		{
 			name: "valid",
@@ -220,47 +221,7 @@ func TestExchangeToken(t *testing.T) {
 		t.Parallel()
 		expires := metav1.NewTime(time.Now().Truncate(time.Second))

-		// Start a test server that returns successfully and asserts various properties of the request.
-		caBundle, endpoint := testutil.TLSTestServer(t, func(w http.ResponseWriter, r *http.Request) {
-			require.Equal(t, http.MethodPost, r.Method)
-			require.Equal(t, "/apis/login.concierge.pinniped.dev/v1alpha1/namespaces/test-namespace/tokencredentialrequests", r.URL.Path)
-			require.Equal(t, "application/json", r.Header.Get("content-type"))
-
-			body, err := ioutil.ReadAll(r.Body)
-			require.NoError(t, err)
-			require.JSONEq(t,
-				`{
-				  "kind": "TokenCredentialRequest",
-				  "apiVersion": "login.concierge.pinniped.dev/v1alpha1",
-				  "metadata": {
-					"creationTimestamp": null,
-					"namespace": "test-namespace"
-				  },
-				  "spec": {
-					"token": "test-token",
-					"authenticator": {
-					  "apiGroup": "authentication.concierge.pinniped.dev",
-					  "kind": "WebhookAuthenticator",
-					  "name": "test-webhook"
-					}
-				  },
-				  "status": {}
-				}`,
-				string(body),
-			)
-
-			w.Header().Set("content-type", "application/json")
-			_ = json.NewEncoder(w).Encode(&loginv1alpha1.TokenCredentialRequest{
-				TypeMeta: metav1.TypeMeta{APIVersion: "login.concierge.pinniped.dev/v1alpha1", Kind: "TokenCredentialRequest"},
-				Status: loginv1alpha1.TokenCredentialRequestStatus{
-					Credential: &loginv1alpha1.ClusterCredential{
-						ExpirationTimestamp:   expires,
-						ClientCertificateData: "test-certificate",
-						ClientKeyData:         "test-key",
-					},
-				},
-			})
-		})
+		caBundle, endpoint := runFakeServer(t, expires, "pinniped.dev")

 		client, err := New(WithNamespace("test-namespace"), WithEndpoint(endpoint), WithCABundle(caBundle), WithAuthenticator("webhook", "test-webhook"))
 		require.NoError(t, err)
@@ -279,4 +240,78 @@ func TestExchangeToken(t *testing.T) {
 			},
 		}, got)
 	})
+
+	t.Run("changing the API group suffix for the client sends the custom suffix on the CredentialRequest's APIGroup and on its spec.Authenticator.APIGroup", func(t *testing.T) {
+		t.Parallel()
+		expires := metav1.NewTime(time.Now().Truncate(time.Second))
+
+		caBundle, endpoint := runFakeServer(t, expires, "suffix.com")
+
+		client, err := New(WithAPIGroupSuffix("suffix.com"), WithNamespace("test-namespace"), WithEndpoint(endpoint), WithCABundle(caBundle), WithAuthenticator("webhook", "test-webhook"))
+		require.NoError(t, err)
+
+		got, err := client.ExchangeToken(ctx, "test-token")
+		require.NoError(t, err)
+		require.Equal(t, &clientauthenticationv1beta1.ExecCredential{
+			TypeMeta: metav1.TypeMeta{
+				Kind:       "ExecCredential",
+				APIVersion: "client.authentication.k8s.io/v1beta1",
+			},
+			Status: &clientauthenticationv1beta1.ExecCredentialStatus{
+				ClientCertificateData: "test-certificate",
+				ClientKeyData:         "test-key",
+				ExpirationTimestamp:   &expires,
+			},
+		}, got)
+	})
+}
+
+// Start a test server that returns successfully and asserts various properties of the request.
+func runFakeServer(t *testing.T, expires metav1.Time, pinnipedAPIGroupSuffix string) (string, string) {
+	caBundle, endpoint := testutil.TLSTestServer(t, func(w http.ResponseWriter, r *http.Request) {
+		require.Equal(t, http.MethodPost, r.Method)
+		require.Equal(t,
+			fmt.Sprintf("/apis/login.concierge.%s/v1alpha1/namespaces/test-namespace/tokencredentialrequests", pinnipedAPIGroupSuffix),
+			r.URL.Path)
+		require.Equal(t, "application/json", r.Header.Get("content-type"))
+
+		body, err := ioutil.ReadAll(r.Body)
+		require.NoError(t, err)
+		require.JSONEq(t, here.Docf(
+			`{
+			  "kind": "TokenCredentialRequest",
+			  "apiVersion": "login.concierge.%s/v1alpha1",
+			  "metadata": {
+				"creationTimestamp": null,
+				"namespace": "test-namespace"
+			  },
+			  "spec": {
+				"token": "test-token",
+				"authenticator": {
+				  "apiGroup": "authentication.concierge.%s",
+				  "kind": "WebhookAuthenticator",
+				  "name": "test-webhook"
+				}
+			  },
+			  "status": {}
+			}`, pinnipedAPIGroupSuffix, pinnipedAPIGroupSuffix),
+			string(body),
+		)
+
+		w.Header().Set("content-type", "application/json")
+		_ = json.NewEncoder(w).Encode(&loginv1alpha1.TokenCredentialRequest{
+			TypeMeta: metav1.TypeMeta{
+				APIVersion: fmt.Sprintf("login.concierge.%s/v1alpha1", pinnipedAPIGroupSuffix),
+				Kind:       "TokenCredentialRequest",
+			},
+			Status: loginv1alpha1.TokenCredentialRequestStatus{
+				Credential: &loginv1alpha1.ClusterCredential{
+					ExpirationTimestamp:   expires,
+					ClientCertificateData: "test-certificate",
+					ClientKeyData:         "test-key",
+				},
+			},
+		})
+	})
+	return caBundle, endpoint
 }
site/content/posts/multiple-pinnipeds.md (new file, 146 lines)
@@ -0,0 +1,146 @@
|
|||||||
|
---
|
||||||
|
title: "Pinniped v0.5.0: Now With Even More Pinnipeds"
|
||||||
|
slug: multiple-pinnipeds
|
||||||
|
date: 2021-02-04
|
||||||
|
author: Matt Moyer
|
||||||
|
image: https://images.unsplash.com/photo-1558060370-d644479cb6f7?ixid=MXwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHw%3D&ixlib=rb-1.2.1&auto=format&fit=crop&w=2000&q=80
|
||||||
|
excerpt: "We encountered a problem that’s familiar to many Kubernetes controller developers: we need to support multiple instances of our controller on one cluster."
|
||||||
|
tags: ['Matt Moyer', 'api', 'release']
|
||||||
|
---
|
||||||
|
|
||||||
|
![toy robots](https://images.unsplash.com/photo-1558060370-d644479cb6f7?ixid=MXwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHw%3D&ixlib=rb-1.2.1&auto=format&fit=crop&w=2000&q=80)
|
||||||
|
*Photo by [TRINH HUY HUNG](https://unsplash.com/@hungthdsn) on [Unsplash](https://unsplash.com/)*

## Motivation

Pinniped is a "batteries included" authentication system for Kubernetes clusters that tightly integrates with Kubernetes using native API patterns.
Pinniped is built using [custom resource definitions (CRDs)][crd] and [API aggregation][api-aggregation], both of which are core to the configuration and runtime operation of the app.

We encountered a problem that’s familiar to many Kubernetes controller developers: *we need to support multiple instances of our controller on one cluster*.

You may have a similar need for several reasons, such as:

1. **Soft Multi-Tenancy:** several teams share a cluster and they each want to manage their own instance of a controller.
2. **Scaling:** you have outgrown the vertical scaling limit for your controller and would like to shard it along some dimension that’s easy to operate and reason about.
3. **Backwards Compatibility:** you want to deploy two versions of your controller and provide a window of time for consumers to smoothly upgrade to the new version.
4. **Controller Development:** you want to run, for example, the *stable* and *alpha* versions of your controller on the same cluster. Most cluster users will only rely on the stable version, but some test workloads will use the alpha version.

With [Pinniped v0.5.0](https://github.com/vmware-tanzu/pinniped/releases/v0.5.0), we wanted to be able to bundle an opinionated configuration of Pinniped into our downstream commercial products while also allowing our customers to install their own Pinniped instance and configure it however they like.

This post describes how we approached the need for multiple Pinnipeds in v0.5.0.

## Existing Approaches

For many Kubernetes controllers, there are existing best practices that will work well:

1. **Add a "controller class" field:** the most well-known example of this pattern is the `spec.ingressClassName` field in the [`networking.k8s.io/v1` Ingress resource][ingress-spec] (formerly the `kubernetes.io/ingress.class` annotation).
   This field tags a particular object so that only the designated controller instance will watch it.
   This means that you must configure all the participating controllers to do the proper filtering and ignore any resources that they're not intended to manage.

2. **Use API versioning machinery:** the other key technique is to strictly adhere to Kubernetes API contracts and take advantage of the Kubernetes versioning machinery.
   Your CRD can have multiple versions, and you can write a webhook to gracefully convert between versions so that several versions of your controller can co-exist peacefully.
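The filtering obligation in the first pattern can be sketched in a few lines of Go. This is a hypothetical, trimmed-down illustration — the `ingress` struct and `shouldReconcile` helper are invented for this post, not real Kubernetes or Pinniped types:

```go
package main

import "fmt"

// ingress is a stand-in for any object with a "controller class" field,
// like spec.ingressClassName on the real Ingress resource.
type ingress struct {
	Name      string
	ClassName string
}

// shouldReconcile reports whether this controller instance, configured with
// its own class name, owns the given object. Every participating controller
// must apply the same filter, or two instances will fight over one resource.
func shouldReconcile(myClass string, obj ingress) bool {
	return obj.ClassName == myClass
}

func main() {
	objs := []ingress{
		{Name: "web", ClassName: "nginx"},
		{Name: "api", ClassName: "contour"},
	}
	for _, o := range objs {
		fmt.Printf("%s owned=%v\n", o.Name, shouldReconcile("nginx", o))
	}
	// prints:
	// web owned=true
	// api owned=false
}
```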

These two techniques are sufficient for many situations but have some limitations.
If your app uses [Kubernetes API aggregation][api-aggregation], then a controller class annotation may not be sufficient, since each version of your API group must be registered with a single [APIService][apiservice] resource.

Even in a purely CRD-based app, the CRD definition and associated [webhook conversion service][webhook-conversion] can only be defined once for each API type.
At a minimum, this requires that you carefully manage the deployment of these singleton resources.
For example, in the soft multi-tenancy use case, several teams must coordinate to deploy them.

Building and maintaining webhook conversion functionality also carries a cost, especially if you need to handle many versions' worth of version skew.

## Our Solution

Our solution is to have a single controller codebase where the names of all the API groups can be adjusted via configuration.
This is controlled via a new `--api-group-suffix` flag on the Pinniped server commands.
When unset, Pinniped defaults to the `pinniped.dev` API group, which is the "true" name we use in our API definitions and generated code.

When a user deploys Pinniped with a custom API group suffix such as `--api-group-suffix=pinniped1.example.com`, several new behaviors are triggered:

- **Templated Resources:** at install time, the Pinniped [ytt] templates will render renamed versions of the CRD and APIService resources (via [`z0_crd_overlay.yaml`][ytt-crd-overlay] and [`deployment.yaml`][ytt-deployment]).

- **Outgoing Controller Requests:** throughout our controller code, we use a consistent set of Kubernetes clients via the [`go.pinniped.dev/internal/kubeclient`][kubeclient-client] package. These clients use [`k8s.io/client-go/rest#Config.Wrap`][rest-config-wrap] to inject a custom [`http.RoundTripper`][roundtripper] that can act as a client middleware layer.

  For each outbound request from our controller API clients, the RoundTripper applies a set of transformations:

  1. It decodes the request from JSON/Protobuf.
  2. It rewrites the request's `apiVersion` to match the configured API group suffix.
  3. It renames other API group references in well-known object fields such as [`metadata.ownerReferences`][ownerreferences].
  4. It re-encodes the request for wire transport and passes it along to the server.
  5. It decodes the response from JSON/Protobuf.
  6. It applies the inverse renaming operation to reverse step three and restore the default API group suffix (`pinniped.dev`).
  7. Finally, it re-encodes the response and passes it back to the client.

  Steps 5-7 must also handle the case of a streaming response to a `watch` request.
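Steps 2 and 6 are inverse rewrites of the serialized body. The sketch below shows the core idea at the JSON-string level only; `rewriteAPIGroup` is invented for illustration, while the real middleware decodes typed objects, also renames fields like `metadata.ownerReferences`, and handles Protobuf and `watch` streams:

```go
package main

import (
	"fmt"
	"strings"
)

// rewriteAPIGroup swaps one API group suffix for another inside a serialized
// body. Calling it with the suffix arguments reversed applies the inverse
// transformation, which is what happens on the response path.
func rewriteAPIGroup(body, fromSuffix, toSuffix string) string {
	return strings.ReplaceAll(body,
		`"apiVersion":"login.concierge.`+fromSuffix+`/`,
		`"apiVersion":"login.concierge.`+toSuffix+`/`)
}

func main() {
	req := `{"apiVersion":"login.concierge.pinniped.dev/v1alpha1","kind":"TokenCredentialRequest"}`

	// Outgoing request: rename the default group to the configured suffix.
	wire := rewriteAPIGroup(req, "pinniped.dev", "pinniped1.example.com")
	fmt.Println(wire)

	// Incoming response: the inverse rename restores the default group.
	fmt.Println(rewriteAPIGroup(wire, "pinniped1.example.com", "pinniped.dev") == req) // prints true
}
```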

  The business logic of these renaming operations is performed by the [`go.pinniped.dev/internal/groupsuffix`][groupsuffix] package, which returns a [`kubeclient.Middleware`][kubeclient-middleware] implementation.

- **Incoming Aggregated API Requests**: our aggregated API server is built using the [`k8s.io/apiserver/pkg/server`][apiserver-pkg] package. We have only a single aggregated API, called TokenCredentialRequest, and we were able to get the functionality we needed by creating a custom [`k8s.io/apimachinery/pkg/runtime#Scheme`][runtime-scheme] that registers our API kinds under the custom group (in [`.../server.go`][custom-scheme]).

  With this configuration, all the built-in functionality of the generic API server works correctly.
  Requests and responses are marshalled and unmarshalled correctly, and the OpenAPI discovery API even serves the custom API group names.
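The renaming itself amounts to swapping the trailing default suffix of each "true" group name for the configured one before the kinds are registered into the server's scheme. A simplified, illustrative sketch — the constant and helper below are invented for this post, not Pinniped's actual code:

```go
package main

import "fmt"

// defaultSuffix is the "true" API group suffix used in the generated code.
const defaultSuffix = "pinniped.dev"

// groupForSuffix maps a true group name like "login.concierge.pinniped.dev"
// onto the configured suffix by swapping the trailing default suffix.
func groupForSuffix(trueGroup, suffix string) string {
	if suffix == defaultSuffix {
		return trueGroup // default deployment: no renaming needed
	}
	return trueGroup[:len(trueGroup)-len(defaultSuffix)] + suffix
}

func main() {
	fmt.Println(groupForSuffix("login.concierge.pinniped.dev", "pinniped.dev"))
	fmt.Println(groupForSuffix("login.concierge.pinniped.dev", "pinniped1.example.com"))
	// prints:
	// login.concierge.pinniped.dev
	// login.concierge.pinniped1.example.com
}
```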

- **App-Specific Code:** the Pinniped concierge server dynamically updates the TokenCredentialRequest APIService to rotate its TLS certificate authority bundle. This code had to become aware of the dynamic API group, but it was as easy as wiring through a new parameter from the CLI flag (see [`.../prepare_controllers.go`][prepare-controllers]).

With this system in place, we've achieved our goal. A user can deploy several instances of Pinniped, each interacting only with its own distinct set of API objects.

The default behavior of Pinniped remains unchanged, and we made sure to implement the changes such that they cause little to no overhead when no custom API group has been configured.

### Advantages and Disadvantages

With v0.5.0, each instance of Pinniped can be upgraded and operated 100% independently, with no coordination or shared state needed.
One remaining constraint is that each instance should be deployed into its own namespace.
This ensures that any other standard Kubernetes objects referenced by the configuration, such as Secrets and ConfigMaps, do not overlap.

Our middleware solution carries some ongoing costs:

- It took a non-trivial amount of code to implement all the required transformations.
  We now have the maintenance burden of ensuring this code continues to work in future versions of the Kubernetes API machinery.
- Other API consumers (including `kubectl` users) need to know which API group to use.
  This might be as simple as knowing to run `kubectl get jwtauthenticators.authentication.concierge.team1.example.com` instead of simply `kubectl get jwtauthenticators`.
  There is no built-in upgrade path between these versions, as there would be with a versioned CRD.
- The extra encoding/decoding steps cause some performance impact when this feature is in use.
  None of the Pinniped APIs are used in high-throughput use cases, so this was not much of a problem for us.

## Future Work

We're happy to have shipped this in Pinniped v0.5.0, but we have more ideas about how to extend the concept.
One idea is to extract the renaming middleware we've written for Pinniped into a standalone Go library that other Kubernetes apps can adopt.

We could also take this a step further and extract the behavior of our middleware into an out-of-process API proxy that could apply these transformations to an unmodified Kubernetes app.
This would require major changes, and it would be challenging to support some features seamlessly, such as Protobuf encoding.

As a team, we have no immediate plans for either of these ideas, but if you are interested, please [reach out on GitHub][discussion].

## Join the Pinniped Community!

Pinniped is better because of our contributors and maintainers.
It is because of you that we can bring great software to the community.

Please join us during our online community meetings, occurring every first and third Thursday of the month at 9 AM PT / 12 PM ET.
Use [this Zoom link][zoom] to attend and add any agenda items you wish to discuss to [the notes document][meeting-notes].

Join our [Google Group][google-group] to receive invites to this meeting.

[api-aggregation]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/
[apiserver-pkg]: https://pkg.go.dev/k8s.io/apiserver/pkg/server
[apiservice]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#apiservice-v1-apiregistration-k8s-io
[crd]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
[custom-scheme]: https://github.com/vmware-tanzu/pinniped/blob/main/internal/concierge/server/server.go#L182
[discussion]: https://github.com/vmware-tanzu/pinniped/discussions/386
[google-group]: https://go.pinniped.dev/community/group
[groupsuffix]: https://github.com/vmware-tanzu/pinniped/blob/main/internal/groupsuffix/groupsuffix.go
[ingress-spec]: https://kubernetes.io/docs/reference/kubernetes-api/services-resources/ingress-v1/#IngressSpec
[kubeclient-client]: https://github.com/vmware-tanzu/pinniped/blob/v0.5.0/internal/kubeclient/kubeclient.go#L22
[kubeclient-middleware]: https://github.com/vmware-tanzu/pinniped/blob/v0.5.0/internal/kubeclient/middleware.go#L17-L19
[meeting-notes]: https://go.pinniped.dev/community/agenda
[ownerreferences]: https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents
[prepare-controllers]: https://github.com/vmware-tanzu/pinniped/blob/v0.5.0/internal/controllermanager/prepare_controllers.go#L116-L120
[rest-config-wrap]: https://pkg.go.dev/k8s.io/client-go/rest#Config.Wrap
[roundtripper]: https://golang.org/pkg/net/http/#RoundTripper
[runtime-scheme]: https://pkg.go.dev/k8s.io/apimachinery/pkg/runtime#Scheme
[webhook-conversion]: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#webhook-conversion
[ytt-crd-overlay]: https://github.com/vmware-tanzu/pinniped/blob/v0.5.0/deploy/concierge/z0_crd_overlay.yaml
[ytt-deployment]: https://github.com/vmware-tanzu/pinniped/blob/v0.5.0/deploy/concierge/deployment.yaml#L195
[ytt]: https://carvel.dev/ytt/
[zoom]: https://go.pinniped.dev/community/zoom
@@ -1,7 +1,7 @@
 [[redirects]]
   from = "/latest/*"
-  to = "https://github.com/vmware-tanzu/pinniped/releases/download/v0.4.1/:splat"
+  to = "https://github.com/vmware-tanzu/pinniped/releases/latest/download/:splat"
   status = 302
   force = true
 
@@ -6,13 +6,13 @@
 [[redirects]]
   from = "/community/zoom"
-  to = "https://vmware.zoom.us/j/94638309756?pwd=V3NvRXJIdDg5QVc0TUdFM2dYRzgrUT09"
+  to = "https://vmware.zoom.us/j/93798188973?pwd=T3pIMWxReEQvcWljNm1admRoZTFSZz09"
   status = 302
   force = true
 
 [[redirects]]
   from = "/community/agenda"
-  to = "https://docs.google.com/document/d/1qYA35wZV-6bxcH5375vOnIGkNBo7e4OROgsV4Sj8WjQ/edit?usp=sharing"
+  to = "https://hackmd.io/rd_kVJhjQfOvfAWzK8A3tQ?view"
   status = 302
   force = true
 
@@ -28,6 +28,12 @@
   status = 302
   force = true
 
+[[redirects]]
+  from = "/community/youtube"
+  to = "https://www.youtube.com/playlist?list=PL7bmigfV0EqQ8qYn8ornuJnuGvCt0belt"
+  status = 302
+  force = true
+
 [[redirects]]
   from = "/*"
   to = "/index.html"
96  test/integration/category_test.go  Normal file
@@ -0,0 +1,96 @@
// Copyright 2021 the Pinniped contributors. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

package integration

import (
	"bytes"
	"os/exec"
	"testing"

	"github.com/stretchr/testify/require"

	"go.pinniped.dev/test/library"
)

func TestGetPinnipedCategory(t *testing.T) {
	env := library.IntegrationEnv(t)
	dotSuffix := "." + env.APIGroupSuffix

	t.Run("category, no special params", func(t *testing.T) {
		var stdOut, stdErr bytes.Buffer

		cmd := exec.Command("kubectl", "get", "pinniped", "-A")
		cmd.Stdout = &stdOut
		cmd.Stderr = &stdErr
		err := cmd.Run()
		require.NoError(t, err, stdErr.String(), stdOut.String())
		require.Empty(t, stdErr.String())

		require.NotContains(t, stdOut.String(), "MethodNotAllowed")
		require.Contains(t, stdOut.String(), dotSuffix)
	})

	t.Run("category, table params", func(t *testing.T) {
		var stdOut, stdErr bytes.Buffer

		cmd := exec.Command("kubectl", "get", "pinniped", "-A", "-o", "wide", "-v", "10")
		cmd.Stdout = &stdOut
		cmd.Stderr = &stdErr
		err := cmd.Run()
		require.NoError(t, err, stdErr.String(), stdOut.String())

		require.NotContains(t, stdOut.String(), "MethodNotAllowed")
		require.Contains(t, stdOut.String(), dotSuffix)

		require.Contains(t, stdErr.String(), `"kind":"Table"`)
		require.Contains(t, stdErr.String(), `"resourceVersion":"0"`)
	})

	t.Run("list, no special params", func(t *testing.T) {
		var stdOut, stdErr bytes.Buffer

		//nolint: gosec // input is part of test env
		cmd := exec.Command("kubectl", "get", "tokencredentialrequests.login.concierge"+dotSuffix, "-A")
		cmd.Stdout = &stdOut
		cmd.Stderr = &stdErr
		err := cmd.Run()
		require.NoError(t, err, stdErr.String(), stdOut.String())
		require.Empty(t, stdOut.String())

		require.NotContains(t, stdErr.String(), "MethodNotAllowed")
		require.Contains(t, stdErr.String(), `No resources found`)
	})

	t.Run("list, table params", func(t *testing.T) {
		var stdOut, stdErr bytes.Buffer

		//nolint: gosec // input is part of test env
		cmd := exec.Command("kubectl", "get", "tokencredentialrequests.login.concierge"+dotSuffix, "-A", "-o", "wide", "-v", "10")
		cmd.Stdout = &stdOut
		cmd.Stderr = &stdErr
		err := cmd.Run()
		require.NoError(t, err, stdErr.String(), stdOut.String())
		require.Empty(t, stdOut.String())

		require.NotContains(t, stdErr.String(), "MethodNotAllowed")
		require.Contains(t, stdErr.String(), `"kind":"Table"`)
		require.Contains(t, stdErr.String(), `"resourceVersion":"0"`)
	})

	t.Run("raw request to see body", func(t *testing.T) {
		var stdOut, stdErr bytes.Buffer

		//nolint: gosec // input is part of test env
		cmd := exec.Command("kubectl", "get", "--raw", "/apis/login.concierge"+dotSuffix+"/v1alpha1/tokencredentialrequests")
		cmd.Stdout = &stdOut
		cmd.Stderr = &stdErr
		err := cmd.Run()
		require.NoError(t, err, stdErr.String(), stdOut.String())
		require.Empty(t, stdErr.String())

		require.NotContains(t, stdOut.String(), "MethodNotAllowed")
		require.Contains(t, stdOut.String(), `{"kind":"TokenCredentialRequestList","apiVersion":"login.concierge`+
			dotSuffix+`/v1alpha1","metadata":{"resourceVersion":"0"},"items":[]}`)
	})
}
@@ -35,6 +35,8 @@ import (
 func TestCLIGetKubeconfigStaticToken(t *testing.T) {
 	env := library.IntegrationEnv(t).WithCapability(library.ClusterSigningKeyIsAvailable)
+
+	library.AssertNoRestartsDuringTest(t, env.ConciergeNamespace, "")
 
 	// Create a test webhook configuration to use with the CLI.
 	ctx, cancelFunc := context.WithTimeout(context.Background(), 4*time.Minute)
 	defer cancelFunc()
@@ -22,6 +22,8 @@ func TestAPIServingCertificateAutoCreationAndRotation(t *testing.T) {
 	env := library.IntegrationEnv(t)
 	defaultServingCertResourceName := env.ConciergeAppName + "-api-tls-serving-certificate"
+
+	library.AssertNoRestartsDuringTest(t, env.ConciergeNamespace, "")
 
 	tests := []struct {
 		name          string
 		forceRotation func(context.Context, kubernetes.Interface, string) error
@@ -57,6 +57,8 @@ var maskKey = func(s string) string { return strings.ReplaceAll(s, "TESTING KEY"
 func TestClient(t *testing.T) {
 	env := library.IntegrationEnv(t).WithCapability(library.ClusterSigningKeyIsAvailable)
+
+	library.AssertNoRestartsDuringTest(t, env.ConciergeNamespace, "")
 
 	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
 	defer cancel()
@@ -21,6 +21,8 @@ func TestCredentialIssuer(t *testing.T) {
 	config := library.NewClientConfig(t)
 	client := library.NewConciergeClientset(t)
+
+	library.AssertNoRestartsDuringTest(t, env.ConciergeNamespace, "")
 
 	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
 	defer cancel()
@@ -23,7 +23,9 @@ import (
 )
 
 func TestUnsuccessfulCredentialRequest(t *testing.T) {
-	library.SkipUnlessIntegration(t)
+	env := library.IntegrationEnv(t)
+
+	library.AssertNoRestartsDuringTest(t, env.ConciergeNamespace, "")
 
 	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
 	defer cancel()
@@ -42,6 +44,8 @@ func TestUnsuccessfulCredentialRequest(t *testing.T) {
 func TestSuccessfulCredentialRequest(t *testing.T) {
 	env := library.IntegrationEnv(t).WithCapability(library.ClusterSigningKeyIsAvailable)
+
+	library.AssertNoRestartsDuringTest(t, env.ConciergeNamespace, "")
 
 	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
 	defer cancel()
@@ -127,7 +131,9 @@ func TestSuccessfulCredentialRequest(t *testing.T) {
 }
 
 func TestFailedCredentialRequestWhenTheRequestIsValidButTheTokenDoesNotAuthenticateTheUser(t *testing.T) {
-	library.IntegrationEnv(t).WithCapability(library.ClusterSigningKeyIsAvailable)
+	env := library.IntegrationEnv(t).WithCapability(library.ClusterSigningKeyIsAvailable)
+
+	library.AssertNoRestartsDuringTest(t, env.ConciergeNamespace, "")
 
 	response, err := makeRequest(context.Background(), t, loginv1alpha1.TokenCredentialRequestSpec{Token: "not a good token"})
@@ -139,7 +145,9 @@ func TestFailedCredentialRequestWhenTheRequestIsValidButTheTokenDoesNotAuthentic
 }
 
 func TestCredentialRequest_ShouldFailWhenRequestDoesNotIncludeToken(t *testing.T) {
-	library.IntegrationEnv(t).WithCapability(library.ClusterSigningKeyIsAvailable)
+	env := library.IntegrationEnv(t).WithCapability(library.ClusterSigningKeyIsAvailable)
+
+	library.AssertNoRestartsDuringTest(t, env.ConciergeNamespace, "")
 
 	response, err := makeRequest(context.Background(), t, loginv1alpha1.TokenCredentialRequestSpec{Token: ""})
@@ -158,7 +166,9 @@ func TestCredentialRequest_ShouldFailWhenRequestDoesNotIncludeToken(t *testing.T
 }
 
 func TestCredentialRequest_OtherwiseValidRequestWithRealTokenShouldFailWhenTheClusterIsNotCapable(t *testing.T) {
-	library.IntegrationEnv(t).WithoutCapability(library.ClusterSigningKeyIsAvailable)
+	env := library.IntegrationEnv(t).WithoutCapability(library.ClusterSigningKeyIsAvailable)
+
+	library.AssertNoRestartsDuringTest(t, env.ConciergeNamespace, "")
 
 	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
 	defer cancel()
@@ -27,6 +27,8 @@ const (
 func TestKubeCertAgent(t *testing.T) {
 	env := library.IntegrationEnv(t).WithCapability(library.ClusterSigningKeyIsAvailable)
+
+	library.AssertNoRestartsDuringTest(t, env.ConciergeNamespace, "")
 
 	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Minute)
 	defer cancel()
@@ -46,6 +46,9 @@ func TestE2EFullIntegration(t *testing.T) {
 	defer library.DumpLogs(t, env.SupervisorNamespace, "")
 	defer library.DumpLogs(t, "dex", "app=proxy")
+
+	library.AssertNoRestartsDuringTest(t, env.ConciergeNamespace, "")
+	library.AssertNoRestartsDuringTest(t, env.SupervisorNamespace, "")
 
 	ctx, cancelFunc := context.WithTimeout(context.Background(), 5*time.Minute)
 	defer cancelFunc()
@@ -72,7 +72,7 @@ func TestGetAPIResourceList(t *testing.T) {
 			{
 				Name:       "tokencredentialrequests",
 				Kind:       "TokenCredentialRequest",
-				Verbs:      []string{"create"},
+				Verbs:      []string{"create", "list"},
 				Namespaced: true,
 				Categories: []string{"pinniped"},
 			},
@@ -44,7 +44,10 @@ func TestSupervisorOIDCDiscovery(t *testing.T) {
 	env := library.IntegrationEnv(t)
 	client := library.NewSupervisorClientset(t)
+
+	library.AssertNoRestartsDuringTest(t, env.SupervisorNamespace, "")
+
 	ns := env.SupervisorNamespace
 
 	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
 	defer cancel()
@@ -148,6 +151,8 @@ func TestSupervisorTLSTerminationWithSNI(t *testing.T) {
 	pinnipedClient := library.NewSupervisorClientset(t)
 	kubeClient := library.NewKubernetesClientset(t)
+
+	library.AssertNoRestartsDuringTest(t, env.SupervisorNamespace, "")
 
 	ns := env.SupervisorNamespace
 	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
 	defer cancel()
@@ -215,6 +220,8 @@ func TestSupervisorTLSTerminationWithDefaultCerts(t *testing.T) {
 	pinnipedClient := library.NewSupervisorClientset(t)
 	kubeClient := library.NewKubernetesClientset(t)
+
+	library.AssertNoRestartsDuringTest(t, env.SupervisorNamespace, "")
 
 	ns := env.SupervisorNamespace
 	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
 	defer cancel()
@@ -1,4 +1,4 @@
-// Copyright 2020 the Pinniped contributors. All Rights Reserved.
+// Copyright 2020-2021 the Pinniped contributors. All Rights Reserved.
 // SPDX-License-Identifier: Apache-2.0
 
 package integration
@@ -29,6 +29,8 @@ func TestSupervisorHealthz(t *testing.T) {
 		t.Skip("PINNIPED_TEST_SUPERVISOR_HTTP_ADDRESS not defined")
 	}
+
+	library.AssertNoRestartsDuringTest(t, env.SupervisorNamespace, "")
 
 	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
 	defer cancel()
@@ -44,6 +44,8 @@ func TestSupervisorLogin(t *testing.T) {
 	defer library.DumpLogs(t, env.SupervisorNamespace, "")
 	defer library.DumpLogs(t, "dex", "app=proxy")
+
+	library.AssertNoRestartsDuringTest(t, env.SupervisorNamespace, "")
 
 	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
 	defer cancel()
@@ -24,6 +24,8 @@ func TestSupervisorSecrets(t *testing.T) {
 	kubeClient := library.NewKubernetesClientset(t)
 	supervisorClient := library.NewSupervisorClientset(t)
+
+	library.AssertNoRestartsDuringTest(t, env.SupervisorNamespace, "")
 
 	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
 	defer cancel()
@@ -17,6 +17,8 @@ import (
 func TestSupervisorUpstreamOIDCDiscovery(t *testing.T) {
 	env := library.IntegrationEnv(t)
+
+	library.AssertNoRestartsDuringTest(t, env.SupervisorNamespace, "")
 
 	t.Run("invalid missing secret and bad issuer", func(t *testing.T) {
 		t.Parallel()
 		spec := v1alpha1.OIDCIdentityProviderSpec{
@@ -4,10 +4,14 @@
 package library
 
 import (
+	"context"
+	"fmt"
 	"testing"
 	"time"
 
+	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/util/wait"
 )
@@ -24,3 +28,56 @@ func RequireEventuallyWithoutError(
 	t.Helper()
 	require.NoError(t, wait.PollImmediate(tick, waitFor, f), msgAndArgs...)
 }
+
+// AssertNoRestartsDuringTest allows a caller to assert that there were no restarts for a Pod in the
+// provided namespace with the provided labelSelector during the lifetime of a test.
+func AssertNoRestartsDuringTest(t *testing.T, namespace, labelSelector string) {
+	t.Helper()
+
+	previousRestartCounts := getRestartCounts(t, namespace, labelSelector)
+
+	t.Cleanup(func() {
+		currentRestartCounts := getRestartCounts(t, namespace, labelSelector)
+
+		for key, previousRestartCount := range previousRestartCounts {
+			currentRestartCount, ok := currentRestartCounts[key]
+			if assert.Truef(
+				t,
+				ok,
+				"pod namespace/name/container %s existed at beginning of the test, but not the end",
+				key,
+			) {
+				assert.Equal(
+					t,
+					previousRestartCount,
+					currentRestartCount,
+					"pod namespace/name/container %s has restarted %d times (original count was %d)",
+					key,
+					currentRestartCount,
+					previousRestartCount,
+				)
+			}
+		}
+	})
+}
+
+func getRestartCounts(t *testing.T, namespace, labelSelector string) map[string]int32 {
+	t.Helper()
+
+	kubeClient := NewKubernetesClientset(t)
+	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
+	defer cancel()
+
+	pods, err := kubeClient.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: labelSelector})
+	require.NoError(t, err)
+
+	restartCounts := make(map[string]int32)
+	for _, pod := range pods.Items {
+		for _, container := range pod.Status.ContainerStatuses {
+			key := fmt.Sprintf("%s/%s/%s", pod.Namespace, pod.Name, container.Name)
+			restartCounts[key] = container.RestartCount
+		}
+	}
+
+	return restartCounts
+}
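The before/after comparison performed inside the `t.Cleanup` callback above can be exercised in isolation. The sketch below reimplements the same restart-count check over plain maps so it runs without a cluster; the `diffRestartCounts` helper is hypothetical and not part of this diff.

```go
package main

import "fmt"

// diffRestartCounts reports, for each container key observed at the start of a
// test, whether it disappeared or restarted by the end. It mirrors the loop in
// AssertNoRestartsDuringTest, using keys shaped like "namespace/pod/container".
func diffRestartCounts(previous, current map[string]int32) []string {
	var problems []string
	for key, prevCount := range previous {
		curCount, ok := current[key]
		if !ok {
			problems = append(problems, fmt.Sprintf("%s disappeared during the test", key))
			continue
		}
		if curCount != prevCount {
			problems = append(problems, fmt.Sprintf("%s restarted (count %d -> %d)", key, prevCount, curCount))
		}
	}
	return problems
}

func main() {
	previous := map[string]int32{
		"ns/pod-a/main": 0,
		"ns/pod-b/main": 1,
	}
	current := map[string]int32{
		"ns/pod-a/main": 0, // unchanged: healthy
		"ns/pod-b/main": 2, // restarted once during the test
	}
	fmt.Println(diffRestartCounts(previous, current))
	// prints: [ns/pod-b/main restarted (count 1 -> 2)]
}
```

Registering the check via `t.Cleanup` is what lets a single call at the top of a test capture the "before" counts immediately and defer the "after" comparison until the test (and all of its parallel subtests) have finished.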
@@ -180,8 +180,11 @@ func CreateTestWebhookAuthenticator(ctx context.Context, t *testing.T) corev1.Ty
 		require.NoErrorf(t, err, "could not cleanup test WebhookAuthenticator %s/%s", webhook.Namespace, webhook.Name)
 	})
 
+	apiGroup, replacedSuffix := groupsuffix.Replace(auth1alpha1.SchemeGroupVersion.Group, testEnv.APIGroupSuffix)
+	require.True(t, replacedSuffix)
+
 	return corev1.TypedLocalObjectReference{
-		APIGroup: &auth1alpha1.SchemeGroupVersion.Group,
+		APIGroup: &apiGroup,
 		Kind:     "WebhookAuthenticator",
 		Name:     webhook.Name,
 	}
@@ -250,8 +253,11 @@ func CreateTestJWTAuthenticator(ctx context.Context, t *testing.T, spec auth1alp
 		require.NoErrorf(t, err, "could not cleanup test JWTAuthenticator %s/%s", jwtAuthenticator.Namespace, jwtAuthenticator.Name)
 	})
 
+	apiGroup, replacedSuffix := groupsuffix.Replace(auth1alpha1.SchemeGroupVersion.Group, testEnv.APIGroupSuffix)
+	require.True(t, replacedSuffix)
+
 	return corev1.TypedLocalObjectReference{
-		APIGroup: &auth1alpha1.SchemeGroupVersion.Group,
+		APIGroup: &apiGroup,
 		Kind:     "JWTAuthenticator",
 		Name:     jwtAuthenticator.Name,
 	}
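Both authenticator hunks above swap the hard-coded API group for one produced by `groupsuffix.Replace`, so the returned object reference points at the right API group on clusters installed with a custom API group suffix. As a rough illustration of what such a replacement does (this `replaceSuffix` helper and the default-suffix behavior are assumptions for the sketch, not Pinniped's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// replaceSuffix swaps the default "pinniped.dev" suffix of an API group for a
// custom one. It returns the (possibly rewritten) group and a bool indicating
// whether a replacement happened, mirroring the two-value shape of
// groupsuffix.Replace as used in the diff above.
func replaceSuffix(apiGroup, newSuffix string) (string, bool) {
	const defaultSuffix = "pinniped.dev"
	if !strings.HasSuffix(apiGroup, defaultSuffix) {
		// Not a group we manage; leave it untouched.
		return apiGroup, false
	}
	return strings.TrimSuffix(apiGroup, defaultSuffix) + newSuffix, true
}

func main() {
	group, ok := replaceSuffix("authentication.concierge.pinniped.dev", "example.com")
	fmt.Println(group, ok)
	// prints: authentication.concierge.example.com true
}
```

The `require.True(t, replacedSuffix)` in the diff then acts as a sanity check that the group actually carried the default suffix and was rewritten, rather than silently passing an unmodified group through.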