Commit Graph

71 Commits

Author SHA1 Message Date
Margo Crawford
1c5da35527 Merge remote-tracking branch 'origin' into active-directory-identity-provider 2021-08-18 12:44:12 -07:00
Margo Crawford
90e6298e29 Update text on CRD templates to reflect new defaults 2021-08-18 10:39:01 -07:00
Monis Khan
66ddcf98d3
Provide good defaults for NO_PROXY
This change updates the default NO_PROXY for the supervisor to not
proxy requests to the Kubernetes API and other Kubernetes endpoints
such as Kubernetes services.

It also adds https_proxy and no_proxy settings for the concierge
with the same default.

Signed-off-by: Monis Khan <mok@vmware.com>
2021-08-17 10:03:19 -04:00
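
A rough sketch of what such a default is meant to achieve, using Go's `golang.org/x/net/http/httpproxy` package to check which hosts bypass the proxy. The proxy address and NO_PROXY entries below are hypothetical examples, not the actual template defaults:

```go
package main

import (
	"fmt"
	"net/url"

	"golang.org/x/net/http/httpproxy"
)

func main() {
	// Hypothetical values for illustration; the real defaults live in the
	// supervisor/concierge ytt templates.
	cfg := httpproxy.Config{
		HTTPSProxy: "http://proxy.example.com:3128",
		NoProxy:    "kubernetes.default.svc,kubernetes.default.svc.cluster.local,10.96.0.1",
	}
	proxy := cfg.ProxyFunc()

	for _, raw := range []string{
		"https://kubernetes.default.svc/api",                        // in-cluster API call: bypasses the proxy
		"https://idp.example.com/.well-known/openid-configuration", // upstream IDP: goes through the proxy
	} {
		u, _ := url.Parse(raw)
		p, _ := proxy(u)
		fmt.Printf("%s -> proxy=%v\n", raw, p)
	}
}
```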
Matt Moyer
58bbffded4
Switch to a slimmer distroless base image.
At a high level, it switches us to a distroless base container image, but that also includes several related bits:

- Add a writable /tmp but make the rest of our filesystems read-only at runtime.

- Condense our main server binaries into a single pinniped-server binary. This saves a bunch of space in
  the image that was previously spent on duplicated library code. The correct behavior is dispatched based
  on `os.Args[0]`, and the `pinniped-server` binary is symlinked to `pinniped-concierge` and `pinniped-supervisor`.

- Strip debug symbols from our binaries. These aren't really useful in a distroless image anyway and all the
  normal stuff you'd expect to work, such as stack traces, still does.

- Add a separate `pinniped-concierge-kube-cert-agent` binary with "sleep" and "print" functionality instead of
  using builtin /bin/sleep and /bin/cat for the kube-cert-agent. This is split from the main server binary
  because the loading/init time of the main server binary was too large for the tiny resource footprint we
  established in our kube-cert-agent PodSpec. Using a separate binary eliminates this issue and the extra
  binary adds only around 1.5MiB of image size.

- Switch the kube-cert-agent code to use a JSON `{"tls.crt": "<b64 cert>", "tls.key": "<b64 key>"}` format.
  This is more robust to unexpected input formatting than the old code, which simply concatenated the files
  with some extra newlines and split on whitespace.

- Update integration tests that made now-invalid assumptions about the `pinniped-server` image.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-08-09 15:05:13 -04:00
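
A minimal sketch of the `os.Args[0]` dispatch described above; the entrypoint function names are placeholders, and the real wiring in the pinniped-server binary may differ:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// The same binary is symlinked as pinniped-concierge and pinniped-supervisor,
	// so pick the behavior based on the name it was invoked as.
	switch filepath.Base(os.Args[0]) {
	case "pinniped-concierge":
		runConcierge()
	case "pinniped-supervisor":
		runSupervisor()
	default:
		fmt.Fprintf(os.Stderr, "unknown binary name %q\n", os.Args[0])
		os.Exit(1)
	}
}

// Placeholder entrypoints, for illustration only.
func runConcierge()  {}
func runSupervisor() {}
```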
Margo Crawford
00978c15f7 Update wording for ActiveDirectoryIdentityProvider crd 2021-07-23 13:01:41 -07:00
Ryan Richard
aaa4861373 Custom API Group overlay for AD
Signed-off-by: Margo Crawford <margaretc@vmware.com>
2021-07-23 13:01:40 -07:00
Margo Crawford
be6f9f83ce RBAC rules for activedirectoryidentityprovider 2021-07-23 13:01:40 -07:00
Margo Crawford
8fb35c6569 Active Directory cli options 2021-07-23 13:01:40 -07:00
Margo Crawford
b06de69f6a ActiveDirectoryIdentityProvider
- Create CRD
- Create implementation of AD-specific user search defaults
2021-07-23 13:01:40 -07:00
Ryan Richard
f1e63c55d4 Add https_proxy and no_proxy settings for the Supervisor
- Add new optional ytt params for the Supervisor deployment.
- When the Supervisor is making calls to an upstream OIDC provider,
  use these variables if they were provided.
- These settings are integration tested in the main CI pipeline by
  setting them on deployments in certain cases and then letting the
  existing integration tests (e.g. TestE2EFullIntegration) provide the
  coverage, so there are no explicit changes to the integration tests
  themselves in this commit.
2021-07-07 12:50:13 -07:00
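
One plausible way these variables get honored when the Supervisor calls an upstream OIDC provider is Go's standard proxy-from-environment support; a hedged sketch (the issuer URL is made up):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// http.ProxyFromEnvironment reads HTTPS_PROXY / NO_PROXY (and the lowercase
	// variants), so setting them on the deployment is enough for this client.
	client := &http.Client{
		Transport: &http.Transport{Proxy: http.ProxyFromEnvironment},
	}

	resp, err := client.Get("https://issuer.example.com/.well-known/openid-configuration")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```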
Ryan Richard
cedbe82bbb Default groupSearch.attributes.groupName to "dn" instead of "cn"
- DNs are guaranteed to be unique, while CNs are not, so "dn" feels like a safer default
2021-05-28 13:27:11 -07:00
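
A small sketch of what treating "dn" as the group name attribute can look like with the go-ldap client. The helper name and the special-casing are illustrative, not necessarily Pinniped's code:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/go-ldap/ldap/v3"
)

// groupName is a hypothetical helper: "dn" is not a real LDAP attribute, so it
// is special-cased to return the entry's full distinguished name, which is
// unique; any other attribute (e.g. "cn") is read off the entry directly.
func groupName(entry *ldap.Entry, attr string) string {
	if strings.EqualFold(attr, "dn") {
		return entry.DN
	}
	return entry.GetAttributeValue(attr)
}

func main() {
	entry := &ldap.Entry{
		DN: "cn=admins,ou=groups,dc=example,dc=com",
		Attributes: []*ldap.EntryAttribute{
			{Name: "cn", Values: []string{"admins"}},
		},
	}
	fmt.Println(groupName(entry, "dn")) // cn=admins,ou=groups,dc=example,dc=com
	fmt.Println(groupName(entry, "cn")) // admins
}
```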
Ryan Richard
3e1e8880f7 Initial support for upstream LDAP group membership
Reflect the upstream group membership into the Supervisor's
downstream tokens, so they can be added to the user's
identity on the workload clusters.

LDAP group search is configurable on the
LDAPIdentityProvider resource.
2021-05-17 11:10:26 -07:00
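
For flavor, a configurable group search roughly amounts to an LDAP search like the one below, built with the go-ldap client. The server URL, base DN, filter, and attribute are invented examples, not Pinniped's actual defaults:

```go
package main

import (
	"fmt"
	"log"

	"github.com/go-ldap/ldap/v3"
)

func main() {
	conn, err := ldap.DialURL("ldaps://ldap.example.com")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	userDN := "cn=jane,ou=users,dc=example,dc=com"
	req := ldap.NewSearchRequest(
		"ou=groups,dc=example,dc=com", // configurable group search base
		ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 0, 0, false,
		fmt.Sprintf("(&(objectClass=groupOfNames)(member=%s))", ldap.EscapeFilter(userDN)),
		[]string{"cn"}, // configurable group name attribute
		nil,
	)

	results, err := conn.Search(req)
	if err != nil {
		log.Fatal(err)
	}
	for _, entry := range results.Entries {
		fmt.Println("group:", entry.GetAttributeValue("cn"))
	}
}
```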
Ryan Richard
36819989a3 Remove DryRunAuthenticationUsername from LDAPIdentityProviderSpec
Signed-off-by: Margo Crawford <margaretc@vmware.com>
2021-04-28 14:26:57 -07:00
Ryan Richard
263a33cc85 Some updates based on PR review 2021-04-27 12:43:09 -07:00
Ryan Richard
e9d5743845 Add authentication dry run validation to LDAPIdentityProvider
Also force the LDAP server pod to restart whenever the LDIF file
changes, so that whenever you redeploy the tools deployment with a new
test user password, the server picks up the change.
2021-04-16 14:04:05 -07:00
Ryan Richard
6bba529b10 RBAC rules for ldapidentityproviders to grant permissions to controller 2021-04-13 17:26:53 -07:00
Ryan Richard
fec3d92f26 Add integration test for upstreamldap.Provider
- The unit tests for upstreamldap.Provider need to mock the LDAP server,
  so add an integration test which allows us to get fast feedback for
  this code against a real LDAP server.
- Automatically wrap the user search filter in parentheses if it is not
  already wrapped in parens.
- More special handling for using "dn" as the username or UID attribute
  name.
- Also added some more comments to types_ldapidentityprovider.go.tmpl
2021-04-13 15:23:14 -07:00
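
The parenthesis-wrapping behavior mentioned above could look roughly like this sketch (not the actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// wrapFilter sketches the behavior described above: leave a filter that
// already looks wrapped alone, otherwise add the outer parentheses.
func wrapFilter(filter string) string {
	if strings.HasPrefix(filter, "(") && strings.HasSuffix(filter, ")") {
		return filter
	}
	return "(" + filter + ")"
}

func main() {
	fmt.Println(wrapFilter("cn={}"))   // (cn={})
	fmt.Println(wrapFilter("(cn={})")) // (cn={})
}
```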
Ryan Richard
25c1f0d523 Add Conditions to LDAPIdentityProvider's Status and start to fill them
- The ldap_upstream_watcher.go controller validates the bind secret and
  uses the Conditions to report errors. It shares some condition reporting
  logic with its sibling controller oidc_upstream_watcher.go, to the
  extent that is convenient without generics in golang.
2021-04-12 13:53:21 -07:00
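
As an illustration, recording a failed bind-secret validation as a status condition might look like the sketch below, using the upstream apimachinery helpers. The condition type, reason, and message are made up, and Pinniped's own Condition type and helpers may differ:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	var conditions []metav1.Condition

	// SetStatusCondition adds or updates the condition and stamps
	// LastTransitionTime when the status actually changes.
	meta.SetStatusCondition(&conditions, metav1.Condition{
		Type:    "BindSecretValid",
		Status:  metav1.ConditionFalse,
		Reason:  "SecretNotFound",
		Message: `secret "my-ldap-bind-secret" not found`,
	})

	fmt.Printf("%+v\n", conditions)
}
```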
Ryan Richard
1c55c857f4 Start to fill out LDAPIdentityProvider's fields and TestSupervisorLogin
- Add some fields to LDAPIdentityProvider that we will need in order
  to search for users during login
- Enhance TestSupervisorLogin to test logging in using an upstream LDAP
  identity provider. Part of this new test is skipped for now because
  we haven't written the corresponding production code to make it
  pass yet.
- Some refactoring and enhancement to env.go and the corresponding env
  vars to support the new upstream LDAP provider integration tests.
- Use docker.io/bitnami/openldap for our test LDAP server instead of our
  own fork now that they have fixed the bug that we reported.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-04-07 12:56:09 -07:00
Ryan Richard
2b6859b161
Add stub LDAP API type and integration test
The goal here was to start on an integration test to get us closer to the red
test that we want so we can start working on LDAP.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-04-06 13:10:01 -04:00
Ryan Richard
904086cbec fix a typo in some comments 2021-03-22 09:34:58 -07:00
Matt Moyer
7a1d92a8d4
Restructure docs into new layout.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-02-23 11:11:07 -06:00
Monis Khan
de88ae2f61
Fix status related RBAC
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-10 21:52:09 -05:00
Monis Khan
dd3d1c8b1b
Generated
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-10 21:52:09 -05:00
Matt Moyer
1ceef5874e
Clean up docs using https://get.pinniped.dev redirects.
We have these redirects set up to make the `kubectl apply -f [...]` commands cleaner, but we never went back and fixed up the documentation to use them until now.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-01-28 10:15:39 -06:00
Ryan Richard
616211c1bc
deploy: wire API group suffix through YTT templates
I didn't advertise this feature in the deploy READMEs since (hopefully) not
many people will want to use it?

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-01-19 17:23:06 -05:00
Andrew Keesler
af11d8cd58
Run Tilt images as root for faster reload
Previously, when triggering a Tilt reload via a *.go file change, a reload would
take ~13 seconds and we would see this error message in the Tilt logs for each
component.

  Live Update failed with unexpected error:
    command terminated with exit code 2
  Falling back to a full image build + deploy

Now, Tilt should reload images a lot faster (~3 seconds) since we are running
the images as root.

Note! Reloading the Concierge component still takes ~13 seconds because there
are 2 containers running in the Concierge namespace that use the Concierge
image: the main Concierge app and the kube cert agent pod. Tilt can't live
reload both of these at once, so the reload takes longer and we see this error
message.

  Will not perform Live Update because:
    Error retrieving container info: can only get container info for a single pod; image target image:image/concierge has 2 pods
  Falling back to a full image build + deploy

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-01-15 11:34:53 -05:00
Matt Moyer
e0b94f4780
Move our main image references to the VMware Harbor registry.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-12-17 17:51:09 -06:00
Margo Crawford
196e43aa48 Rename off of main
Signed-off-by: Ryan Richard <richardry@vmware.com>
2020-12-16 14:27:09 -08:00
Andrew Keesler
095ba14cc8
Merge remote-tracking branch 'upstream/main' into secret-generation 2020-12-16 15:40:34 -05:00
Matt Moyer
404ff93102
Fix documentation comment for the UpstreamOIDCProvider's spec.client.secretName type.
The value is correctly validated as `secrets.pinniped.dev/oidc-client` elsewhere; only this comment was wrong.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-12-15 21:52:07 -06:00
Andrew Keesler
2e784e006c
Merge remote-tracking branch 'upstream/main' into secret-generation 2020-12-15 13:24:33 -05:00
Andrew Keesler
50f9b434e7
SameIssuerHostMustUseSameSecret is a valid OIDCProvider status
I saw this message in our CI logs, which led me to this fix.
  could not update status: OIDCProvider.config.supervisor.pinniped.dev "acceptance-provider" is invalid: status.status: Unsupported value: "SameIssuerHostMustUseSameSecret": supported values: "Success", "Duplicate", "Invalid"

Also - correct an integration test error message that was misleading.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-12-15 11:53:53 -05:00
Andrew Keesler
82ae98d9d0
Set secret names on OIDCProvider status field
We believe this API is more forwards compatible with future secrets management
use cases. The implementation is a cry for help, but I was trying to follow the
previously established pattern of encapsulating the secret generation
functionality within a single group of packages.

This commit makes a breaking change to the current OIDCProvider API, but that
OIDCProvider API was added after the latest release, so it is technically still
in development until we release, and therefore we can continue to thrash on it.

I also took this opportunity to make some things private that didn't need to be
public.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-12-15 09:13:01 -05:00
Andrew Keesler
e17bc31b29
Pass CSRF cookie signing key from controller to cache
This also sets the CSRF cookie Secret's OwnerReference to the Pod's grandparent
Deployment so that when the Deployment is cleaned up, the Secret is as well.

Obviously this controller implementation has a lot of issues, but it will at
least get us started.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-12-11 11:49:27 -05:00
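
A hedged sketch of giving a Secret an owner reference pointing at a Deployment so that it is garbage-collected along with it; the object names and UID below are placeholders:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Placeholder objects; in a controller these would come from the API server.
	deployment := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Name: "pinniped-supervisor", UID: "1234"}}
	secret := &corev1.Secret{ObjectMeta: metav1.ObjectMeta{Name: "csrf-cookie-signing-key"}}

	// Point the Secret at the Deployment so that deleting the Deployment also
	// garbage-collects the Secret.
	secret.OwnerReferences = []metav1.OwnerReference{{
		APIVersion: "apps/v1",
		Kind:       "Deployment",
		Name:       deployment.Name,
		UID:        deployment.UID,
	}}

	fmt.Printf("%+v\n", secret.OwnerReferences)
}
```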
Matt Moyer
e867fb82b9
Add spec.tls field to UpstreamOIDCProvider API.
This allows for a custom CA bundle to be used when connecting to the upstream issuer.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-16 20:23:20 -06:00
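
A minimal sketch of what using a custom CA bundle when dialing an upstream issuer amounts to in Go; the PEM path and issuer URL are invented:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// Load a custom CA bundle (path is illustrative) and trust only it when
	// talking to the upstream issuer.
	pem, err := os.ReadFile("/etc/pinniped/upstream-ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pem) {
		log.Fatal("no certificates found in CA bundle")
	}

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}
	resp, err := client.Get("https://issuer.example.com/.well-known/openid-configuration")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```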
Matt Moyer
d3d8ef44a0
Make more fields in UpstreamOIDCProvider optional.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-13 15:28:37 -06:00
Matt Moyer
2e7d869ccc
Add generated API/client code for new UpstreamOIDCProvider CRD.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-13 11:38:50 -06:00
Matt Moyer
bac3c19bec
Add UpstreamOIDCProvider API type definition.
This is essentially just a copy of Andrew's work from https://github.com/vmware-tanzu/pinniped/pull/135.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-13 11:38:49 -06:00
Matt Moyer
7f2c43cd62
Put all of our APIs into a "pinniped" category, and never use "all".
We want to have our APIs respond to `kubectl get pinniped`, and we shouldn't use `all` because we don't think most users should have permission to see our API types, which means that if we put our types there, they would get an error from `kubectl get all`.

I also added some tests to assert these properties on all `*.pinniped.dev` API resources.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-12 16:26:34 -06:00
Andrew Keesler
724c0d3eb0
Add YTT template value for setting log level
This is helpful for us, amongst other users, because we want to enable "debug"
logging whenever we deploy components for testing.

See a5643e3 for addition of log level.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-11-11 09:01:38 -05:00
Ryan Richard
1223cf7877
Merge pull request #154 from vmware-tanzu/change_release_static_yaml_names
Rename static yaml files in release process
2020-11-02 17:09:11 -08:00
Matt Moyer
c451604816
Merge pull request #182 from mattmoyer/more-renames
Rename more APIs before we cut a release with longer-term API compatibility
2020-11-02 18:34:26 -06:00
Ryan Richard
05cf56a0fa
Merge pull request #180 from vmware-tanzu/limits
Add CPU/memory limits to our deployments
2020-11-02 16:22:37 -08:00
Matt Moyer
2bf5c8b48b
Replace the OIDCProvider field SNICertificateSecretName with a TLS.SecretName field.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-02 18:15:03 -06:00
Ryan Richard
05233963fb Add CPU requests and limits to the Concierge and Supervisor deployments 2020-11-02 15:47:20 -08:00
Matt Moyer
2b8773aa54
Rename OIDCProviderConfig to OIDCProvider.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-02 17:40:39 -06:00
Ryan Richard
781f86d18c
deploy: add memory limits
This is the beginning of a change to add cpu/memory limits to our pods.
We are doing this because some consumers require this, and it is generally
a good practice.

The limits are set equal to the requests to get "Guaranteed" QoS.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-11-02 14:57:39 -05:00
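
On the QoS point: when every container's limits equal its requests, Kubernetes classifies the pod as Guaranteed. A sketch of such a resources block expressed with the Go client types; the values are illustrative, not the ones chosen in the deployment templates:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// limits == requests for every resource => "Guaranteed" QoS class.
	resources := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("100m"),
			corev1.ResourceMemory: resource.MustParse("128Mi"),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("100m"),
			corev1.ResourceMemory: resource.MustParse("128Mi"),
		},
	}
	fmt.Printf("%+v\n", resources)
}
```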
Andrew Keesler
fcea48c8f9
Run as non-root
I tried to follow a principle of encapsulation here - we still default to
people connecting to 80/443 on a Service object, but internally we will
use 8080/8443.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-11-02 12:51:15 -05:00
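
A sketch of that pattern in Go client types: the containers refuse to run as root and listen on unprivileged ports, while the Service still exposes the conventional port. The UID and exact field values are illustrative; only the 80/443 and 8080/8443 numbers come from the commit message:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	runAsNonRoot := true
	uid := int64(1001) // illustrative non-root UID

	// Containers run as a non-root user...
	securityContext := &corev1.SecurityContext{
		RunAsNonRoot: &runAsNonRoot,
		RunAsUser:    &uid,
	}

	// ...and the Service keeps the conventional port while targeting the
	// unprivileged container port.
	servicePort := corev1.ServicePort{
		Port:       443,
		TargetPort: intstr.FromInt(8443),
	}

	fmt.Printf("%+v\n%+v\n", securityContext, servicePort)
}
```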
Matt Moyer
9e1922f1ed
Split the config CRDs into two API groups.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-10-30 19:22:46 -05:00