This change updates the default NO_PROXY for the Supervisor so that requests
to the Kubernetes API and other in-cluster endpoints, such as Kubernetes
services, are not proxied.
It also adds https_proxy and no_proxy settings for the Concierge with the
same defaults.
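As a rough illustration (the proxy URL, host names, and IP below are
illustrative, not the actual defaults), Go's standard proxy handling honors
these variables like this:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// With in-cluster names in NO_PROXY, Kubernetes API traffic connects
	// directly while other HTTPS traffic still goes through the proxy.
	os.Setenv("HTTPS_PROXY", "http://proxy.example.com:3128")
	os.Setenv("NO_PROXY", "10.96.0.1,kubernetes.default.svc,.svc,.cluster.local")

	for _, target := range []string{
		"https://kubernetes.default.svc/api",
		"https://upstream-oidc.example.com/.well-known/openid-configuration",
	} {
		req, _ := http.NewRequest(http.MethodGet, target, nil)
		proxyURL, _ := http.ProxyFromEnvironment(req)
		fmt.Println(target, "->", proxyURL) // nil means a direct connection
	}
}
```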
Signed-off-by: Monis Khan <mok@vmware.com>
- Add `AllowPasswordGrant` boolean field to OIDCIdentityProvider's spec
- The oidc upstream watcher controller copies the value of
`AllowPasswordGrant` into the configuration of the cached provider
- Add password grant to the UpstreamOIDCIdentityProviderI interface
which is implemented by the cached provider instance for use in the
authorization endpoint
- Enhance the IDP discovery endpoint to return the supported "flows"
for each IDP ("cli_password" and/or "browser_authcode")
- Enhance `pinniped get kubeconfig` to help the user choose the desired
flow for the selected IDP, and to write the flow into the resulting
kubeconfig
- Enhance `pinniped login oidc` to have a flow flag to tell it which
client-side flow it should use for auth (CLI-based or browser-based)
- In the Dex config, allow the resource owner password grant, which Dex
implements such that it also returns ID tokens, for use in integration tests
- Enhance the authorize endpoint to perform the password grant when
requested by the incoming headers (see the sketch after this list). This
commit does not include unit tests for the authorize endpoint enhancements;
those will come in the next commit
- Extract some shared helpers from the callback endpoint to share the
code with the authorize endpoint
- Add new integration tests
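A simplified sketch of the header-based dispatch in the authorize endpoint.
The header names and handler shape are assumptions for illustration; the real
endpoint performs far more validation and error handling:

```go
package main

import (
	"fmt"
	"net/http"
)

// authorizeHandler is a stand-in for the real authorize endpoint.
type authorizeHandler struct{}

func (h *authorizeHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	// Hypothetical header names, for illustration only.
	username := r.Header.Get("Pinniped-Username")
	password := r.Header.Get("Pinniped-Password")
	if username != "" && password != "" {
		// CLI-based flow: exchange the credentials with the upstream IDP
		// via the resource owner password grant (not shown).
		fmt.Fprintln(w, "performing password grant for", username)
		return
	}
	// Browser-based flow: redirect to the upstream provider's
	// authorization endpoint, as before.
	http.Redirect(w, r, "https://upstream.example.com/authorize", http.StatusSeeOther)
}

func main() {
	http.ListenAndServe(":8080", &authorizeHandler{})
}
```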
At a high level, it switches us to a distroless base container image, but that also includes several related bits:
- Add a writable /tmp but make the rest of our filesystems read-only at runtime.
- Condense our main server binaries into a single pinniped-server binary. This saves a bunch of space in
the image due to duplicated library code. The correct behavior is dispatched based on `os.Args[0]` (see the
first sketch after this list), and the `pinniped-server` binary is symlinked to `pinniped-concierge` and `pinniped-supervisor`.
- Strip debug symbols from our binaries. These aren't really useful in a distroless image anyway and all the
normal stuff you'd expect to work, such as stack traces, still does.
- Add a separate `pinniped-concierge-kube-cert-agent` binary with "sleep" and "print" functionality instead of
using the built-in /bin/sleep and /bin/cat for the kube-cert-agent. This is split from the main server binary
because the loading/init time of the main server binary was too large for the tiny resource footprint we
established in our kube-cert-agent PodSpec. Using a separate binary eliminates this issue and the extra
binary adds only around 1.5MiB of image size.
- Switch the kube-cert-agent code to use a JSON `{"tls.crt": "<b64 cert>", "tls.key": "<b64 key>"}` format
(see the second sketch after this list). This is more robust to unexpected input formatting than the old
code, which simply concatenated the files with some extra newlines and split on whitespace.
- Update integration tests that made now-invalid assumptions about the `pinniped-server` image.
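First, a minimal sketch of the single-binary dispatch idea; the per-command
behavior shown is a placeholder, not the real main functions:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Each symlink name selects the behavior, so one binary serves as several
// commands and shared library code exists only once in the image.
func main() {
	switch filepath.Base(os.Args[0]) {
	case "pinniped-concierge":
		fmt.Println("running concierge")
	case "pinniped-supervisor":
		fmt.Println("running supervisor")
	default:
		fmt.Fprintln(os.Stderr, "unknown binary name:", os.Args[0])
		os.Exit(1)
	}
}
```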
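Second, a sketch of the JSON key-pair format; the struct and sample PEM
contents are illustrative, but the point is that a JSON round-trip avoids the
whitespace-splitting fragility described above:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// keyPair mirrors the JSON format described above; the keys match the
// "tls.crt"/"tls.key" names used by Kubernetes TLS secrets.
type keyPair struct {
	CertPEM string `json:"tls.crt"` // base64-encoded PEM certificate
	KeyPEM  string `json:"tls.key"` // base64-encoded PEM private key
}

func main() {
	pair := keyPair{
		CertPEM: base64.StdEncoding.EncodeToString([]byte("-----BEGIN CERTIFICATE-----\n...")),
		KeyPEM:  base64.StdEncoding.EncodeToString([]byte("-----BEGIN PRIVATE KEY-----\n...")),
	}
	out, _ := json.Marshal(pair)
	fmt.Println(string(out))

	var in keyPair // parsing is a structured round-trip, not string splitting
	_ = json.Unmarshal(out, &in)
	fmt.Println(in == pair) // true
}
```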
Signed-off-by: Matt Moyer <moyerm@vmware.com>
- Add new optional ytt params for the Supervisor deployment.
- When the Supervisor is making calls to an upstream OIDC provider,
use these variables if they were provided.
- These settings are integration tested in the main CI pipeline by
setting them on certain deployments and then letting the existing
integration tests (e.g. TestE2EFullIntegration) provide the coverage,
so this commit makes no explicit changes to the integration tests
themselves.
Reflect the upstream group membership into the Supervisor's
downstream tokens, so the groups can be added to the user's
identity on the workload clusters.
LDAP group search is configurable on the
LDAPIdentityProvider resource.
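A rough sketch of what a configurable group search boils down to, using the
go-ldap library; the base DN, filter, and attribute names are illustrative
stand-ins for whatever the LDAPIdentityProvider configuration supplies:

```go
package main

import (
	"fmt"
	"log"

	ldap "github.com/go-ldap/ldap/v3"
)

func main() {
	conn, err := ldap.DialURL("ldaps://ldap.example.com")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Find all groups of which the already-authenticated user is a member.
	userDN := "cn=jane,ou=users,dc=example,dc=com"
	search := ldap.NewSearchRequest(
		"ou=groups,dc=example,dc=com", // configured group search base
		ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 0, 0, false,
		fmt.Sprintf("(&(objectClass=groupOfNames)(member=%s))", ldap.EscapeFilter(userDN)),
		[]string{"cn"}, // configured group name attribute
		nil,
	)
	results, err := conn.Search(search)
	if err != nil {
		log.Fatal(err)
	}
	for _, entry := range results.Entries {
		fmt.Println("group:", entry.GetAttributeValue("cn"))
	}
}
```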
Also force the LDAP server pod to restart whenever the LDIF file
changes, so that whenever you redeploy the tools deployment with a new
test user password, the server picks up the change.
- The unit tests for upstreamldap.Provider would need to mock the LDAP
server, so instead add an integration test which gives us fast feedback
for this code against a real LDAP server.
- Automatically wrap the user search filter in parentheses if it is not
already wrapped (see the sketch after this list).
- More special handling for using "dn" as the username or UID attribute
name.
- Also added some more comments to types_ldapidentityprovider.go.tmpl
- The ldap_upstream_watcher.go controller validates the bind secret and
uses the Conditions to report errors. It shares some condition reporting
logic with its sibling controller oidc_upstream_watcher.go, to the
extent that is convenient without generics in Go.
- Add some fields to LDAPIdentityProvider that we will need to be able
to search for users during login
- Enhance TestSupervisorLogin to test logging in using an upstream LDAP
identity provider. Part of this new test is skipped for now because
we haven't written the corresponding production code to make it
pass yet.
- Some refactoring and enhancement to env.go and the corresponding env
vars to support the new upstream LDAP provider integration tests.
- Use docker.io/bitnami/openldap for our test LDAP server instead of our
own fork now that they have fixed the bug that we reported.
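A minimal sketch of the filter-wrapping behavior mentioned above (the
function name and placement are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// wrapFilter parenthesizes a user-supplied LDAP filter unless it already
// appears to be wrapped, so "uid={}" and "(uid={})" behave the same.
func wrapFilter(filter string) string {
	if strings.HasPrefix(filter, "(") && strings.HasSuffix(filter, ")") {
		return filter
	}
	return "(" + filter + ")"
}

func main() {
	fmt.Println(wrapFilter("uid={}"))   // (uid={})
	fmt.Println(wrapFilter("(uid={})")) // unchanged
}
```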
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
The goal here was to start on an integration test to get us closer to the red
test that we want so we can start working on LDAP.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
We have these redirects set up to make the `kubectl apply -f [...]` commands cleaner, but we never went back and fixed up the documentation to use them until now.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
I didn't advertise this feature in the deploy READMEs since (hopefully) not
many people will want to use it.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Previously, when triggering a Tilt reload via a *.go file change, a reload would
take ~13 seconds and we would see this error message in the Tilt logs for each
component:

    Live Update failed with unexpected error:
    command terminated with exit code 2
    Falling back to a full image build + deploy

Now, Tilt should reload images a lot faster (~3 seconds) since we are running
the images as root.
Note! Reloading the Concierge component still takes ~13 seconds because there
are 2 containers running in the Concierge namespace that use the Concierge
image: the main Concierge app and the kube cert agent pod. Tilt can't live
reload both of these at once, so the reload takes longer and we see this error
message:

    Will not perform Live Update because:
    Error retrieving container info: can only get container info for a single pod; image target image:image/concierge has 2 pods
    Falling back to a full image build + deploy

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
The value is correctly validated as `secrets.pinniped.dev/oidc-client` elsewhere; only this comment was wrong.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
I saw this message in our CI logs, which led me to this fix:

    could not update status: OIDCProvider.config.supervisor.pinniped.dev "acceptance-provider" is invalid: status.status: Unsupported value: "SameIssuerHostMustUseSameSecret": supported values: "Success", "Duplicate", "Invalid"

Also, correct an integration test error message that was misleading.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
We believe this API is more forward compatible with future secrets management
use cases. The implementation is a cry for help, but I was trying to follow the
previously established pattern of encapsulating the secret generation
functionality in a single group of packages.
This commit makes a breaking change to the current OIDCProvider API, but that
OIDCProvider API was added after the latest release, so it is technically still
in development until we release, and therefore we can continue to thrash on it.
I also took this opportunity to make some things private that didn't need to be
public.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
This also sets the CSRF cookie Secret's OwnerReference to the Pod's grandparent
Deployment so that when the Deployment is cleaned up, the Secret is as
well (see the sketch below).
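A rough sketch of the ownership wiring, assuming hypothetical names for the
Secret and its contents; the point is that the garbage collector deletes the
Secret when its owning Deployment is deleted:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newCSRFCookieSecret builds a Secret owned by the given Deployment.
func newCSRFCookieSecret(deployment *appsv1.Deployment, signingKey []byte) *corev1.Secret {
	return &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "csrf-cookie-signing-key", // hypothetical name
			Namespace: deployment.Namespace,
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion: "apps/v1",
				Kind:       "Deployment",
				Name:       deployment.Name,
				UID:        deployment.UID,
			}},
		},
		Data: map[string][]byte{"key": signingKey},
	}
}

func main() {
	dep := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{
		Name: "pinniped-supervisor", Namespace: "pinniped-supervisor", UID: "example-uid",
	}}
	secret := newCSRFCookieSecret(dep, []byte("example-signing-key"))
	fmt.Println(secret.OwnerReferences[0].Kind, "owns", secret.Name)
}
```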
Obviously this controller implementation has a lot of issues, but it will at
least get us started.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
We want to have our APIs respond to `kubectl get pinniped`. We shouldn't use `all` because we don't think most average users should have permission to see our API types; if we put our types there, they would get an error from `kubectl get all`.
I also added some tests to assert these properties on all `*.pinniped.dev` API resources.
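For a CRD-based API type, category membership can be declared with a
controller-gen marker along these lines; the type here is hypothetical, and
Pinniped's actual code generation may wire this up differently:

```go
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ExampleProvider is an illustrative type; the marker below uses standard
// kubebuilder syntax so `kubectl get pinniped` would list this resource.
//
// +kubebuilder:resource:categories=pinniped
type ExampleProvider struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
}
```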
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This is helpful for us, among other users, because we want to enable "debug"
logging whenever we deploy components for testing.
See a5643e3 for the addition of the log level.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
This is the beginning of a change to add cpu/memory limits to our pods.
We are doing this because some consumers require it, and it is generally
good practice.
Setting limits equal to requests yields the "Guaranteed" QoS class (see the
sketch below).
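A sketch with illustrative numbers only; the point is that equal requests and
limits for every resource give the pod the "Guaranteed" QoS class:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Limits == Requests for both cpu and memory => "Guaranteed" QoS.
	rr := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("100m"),
			corev1.ResourceMemory: resource.MustParse("128Mi"),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("100m"),
			corev1.ResourceMemory: resource.MustParse("128Mi"),
		},
	}
	fmt.Println(rr.Limits.Cpu().Cmp(*rr.Requests.Cpu()) == 0) // true
}
```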
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
I tried to follow a principle of encapsulation here: we can still default to
people making connections to 80/443 on a Service object, but internally we will
use 8080/8443 (see the sketch below).
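A minimal sketch of that port mapping, with the values described above:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Callers keep using 443 on the Service; the container listens on 8443.
	port := corev1.ServicePort{
		Name:       "https",
		Port:       443,
		TargetPort: intstr.FromInt(8443),
	}
	fmt.Printf("%s: %d -> %s\n", port.Name, port.Port, port.TargetPort.String())
}
```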
Signed-off-by: Andrew Keesler <akeesler@vmware.com>