Commit Graph

97 Commits

Matt Moyer 1ceef5874e
Clean up docs using https://get.pinniped.dev redirects.
We have these redirects set up to make the `kubectl apply -f [...]` commands cleaner, but we never went back and fixed up the documentation to use them until now.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-01-28 10:15:39 -06:00
Ryan Richard 616211c1bc
deploy: wire API group suffix through YTT templates
I didn't advertise this feature in the deploy READMEs since (hopefully) not
many people will want to use it?

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-01-19 17:23:06 -05:00
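
For illustration, a minimal sketch of how such a ytt data value might be set; the value name `api_group_suffix` and the default shown are assumptions, not taken from this log:

  #! values.yaml (sketch)
  #@data/values
  ---
  #! Install the Pinniped APIs under a custom group suffix instead of the default.
  api_group_suffix: pinniped.dev
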
Andrew Keesler af11d8cd58
Run Tilt images as root for faster reload
Previously, when triggering a Tilt reload via a *.go file change, a reload would
take ~13 seconds and we would see this error message in the Tilt logs for each
component.

  Live Update failed with unexpected error:
    command terminated with exit code 2
  Falling back to a full image build + deploy

Now, Tilt should reload images a lot faster (~3 seconds) since we are running
the images as root.

Note! Reloading the Concierge component still takes ~13 seconds because there
are 2 containers running in the Concierge namespace that use the Concierge
image: the main Concierge app and the kube cert agent pod. Tilt can't live
reload both of these at once, so the reload takes longer and we see this error
message.

  Will not perform Live Update because:
    Error retrieving container info: can only get container info for a single pod; image target image:image/concierge has 2 pods
  Falling back to a full image build + deploy

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-01-15 11:34:53 -05:00
Matt Moyer e0b94f4780
Move our main image references to the VMware Harbor registry.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-12-17 17:51:09 -06:00
Margo Crawford 196e43aa48 Rename off of main
Signed-off-by: Ryan Richard <richardry@vmware.com>
2020-12-16 14:27:09 -08:00
Andrew Keesler 095ba14cc8
Merge remote-tracking branch 'upstream/main' into secret-generation 2020-12-16 15:40:34 -05:00
Matt Moyer 404ff93102
Fix documentation comment for the UpstreamOIDCProvider's spec.client.secretName type.
The value is correctly validated as `secrets.pinniped.dev/oidc-client` elsewhere; only this comment was wrong.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-12-15 21:52:07 -06:00
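
A rough sketch of the kind of Secret that `spec.client.secretName` points at; the secret type comes from the commit message above, while the name and data keys are illustrative assumptions:

  apiVersion: v1
  kind: Secret
  metadata:
    name: my-oidc-client                     # name is illustrative
  type: secrets.pinniped.dev/oidc-client     # the type validated elsewhere, per this commit
  stringData:
    clientID: "my-client-id"                 # key names assumed for illustration
    clientSecret: "my-client-secret"
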
Andrew Keesler 2e784e006c
Merge remote-tracking branch 'upstream/main' into secret-generation 2020-12-15 13:24:33 -05:00
Andrew Keesler 50f9b434e7
SameIssuerHostMustUseSameSecret is a valid OIDCProvider status
I saw this message in our CI logs, which led me to this fix.
  could not update status: OIDCProvider.config.supervisor.pinniped.dev "acceptance-provider" is invalid: status.status: Unsupported value: "SameIssuerHostMustUseSameSecret": supported values: "Success", "Duplicate", "Invalid"

Also - correct an integration test error message that was misleading.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-12-15 11:53:53 -05:00
Andrew Keesler 82ae98d9d0
Set secret names on OIDCProvider status field
We believe this API is more forwards compatible with future secrets management
use cases. The implementation is a cry for help, but I was trying to follow the
previously established pattern of encapsulating the secret generation
functionality to a single group of packages.

This commit makes a breaking change to the current OIDCProvider API, but that
OIDCProvider API was added after the latest release, so it is technically still
in development until we release, and therefore we can continue to thrash on it.

I also took this opportunity to make some things private that didn't need to be
public.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-12-15 09:13:01 -05:00
Andrew Keesler e17bc31b29
Pass CSRF cookie signing key from controller to cache
This also sets the CSRF cookie Secret's OwnerReference to the Pod's grandparent
Deployment so that when the Deployment is cleaned up, the Secret is as well.

Obviously this controller implementation has a lot of issues, but it will at
least get us started.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-12-11 11:49:27 -05:00
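
Roughly what an OwnerReference from the generated Secret to the grandparent Deployment looks like; the resource names below are assumptions for illustration:

  apiVersion: v1
  kind: Secret
  metadata:
    name: pinniped-supervisor-csrf-signing-key   # name assumed
    ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: pinniped-supervisor                  # the Pod's grandparent Deployment
      uid: <deployment-uid>                      # filled in by the controller at runtime
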
Matt Moyer e867fb82b9
Add `spec.tls` field to UpstreamOIDCProvider API.
This allows for a custom CA bundle to be used when connecting to the upstream issuer.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-16 20:23:20 -06:00
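
A hedged sketch of the new field in use; the API group/version and the exact field name under `tls` are assumptions, not confirmed by this log:

  apiVersion: idp.supervisor.pinniped.dev/v1alpha1   # group/version assumed
  kind: UpstreamOIDCProvider
  metadata:
    name: my-upstream
  spec:
    issuer: https://issuer.example.com
    tls:
      certificateAuthorityData: <base64-encoded CA bundle>   # field name assumed
    client:
      secretName: my-oidc-client
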
Matt Moyer d3d8ef44a0
Make more fields in UpstreamOIDCProvider optional.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-13 15:28:37 -06:00
Matt Moyer 2e7d869ccc
Add generated API/client code for new UpstreamOIDCProvider CRD.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-13 11:38:50 -06:00
Matt Moyer bac3c19bec
Add UpstreamOIDCProvider API type definition.
This is essentially just a copy of Andrew's work from https://github.com/vmware-tanzu/pinniped/pull/135.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-13 11:38:49 -06:00
Matt Moyer 7f2c43cd62
Put all of our APIs into a "pinniped" category, and never use "all".
We want our APIs to respond to `kubectl get pinniped`. We shouldn't use `all` because we don't think most average users should have permission to see our API types, so if we put our types there, they would get an error from `kubectl get all`.

I also added some tests to assert these properties on all `*.pinniped.dev` API resources.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-12 16:26:34 -06:00
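
For reference, the kind of CRD naming stanza that implements this; the snippet is a sketch, not copied from the repo:

  # excerpt from a CustomResourceDefinition
  spec:
    names:
      kind: OIDCProvider
      plural: oidcproviders
      categories:
      - pinniped        # responds to `kubectl get pinniped`; deliberately not in `all`
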
Andrew Keesler 724c0d3eb0
Add YTT template value for setting log level
This is helpful for us, amongst other users, because we want to enable "debug"
logging whenever we deploy components for testing.

See a5643e3 for addition of log level.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-11-11 09:01:38 -05:00
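
A minimal sketch of the data value described above; the value name `log_level` is an assumption:

  #! values.yaml (sketch)
  #@data/values
  ---
  #! e.g. enable "debug" logging when deploying components for testing
  log_level: debug
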
Ryan Richard 1223cf7877
Merge pull request #154 from vmware-tanzu/change_release_static_yaml_names
Rename static yaml files in release process
2020-11-02 17:09:11 -08:00
Matt Moyer c451604816
Merge pull request #182 from mattmoyer/more-renames
Rename more APIs before we cut a release with longer-term API compatibility
2020-11-02 18:34:26 -06:00
Ryan Richard 05cf56a0fa
Merge pull request #180 from vmware-tanzu/limits
Add CPU/memory limits to our deployments
2020-11-02 16:22:37 -08:00
Matt Moyer 2bf5c8b48b
Replace the OIDCProvider field SNICertificateSecretName with a TLS.SecretName field.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-02 18:15:03 -06:00
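
A sketch of the renamed field as it might appear in YAML; the API group/version and the serialized casing `tls.secretName` are assumptions based on the Go field names:

  apiVersion: config.supervisor.pinniped.dev/v1alpha1   # group/version assumed
  kind: OIDCProvider
  metadata:
    name: my-provider
  spec:
    issuer: https://issuer.example.com
    tls:
      secretName: my-tls-cert   # previously sniCertificateSecretName
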
Ryan Richard 05233963fb Add CPU requests and limits to the Concierge and Supervisor deployments 2020-11-02 15:47:20 -08:00
Matt Moyer 2b8773aa54
Rename OIDCProviderConfig to OIDCProvider.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-02 17:40:39 -06:00
Ryan Richard 781f86d18c
deploy: add memory limits
This is the beginning of a change to add cpu/memory limits to our pods.
We are doing this because some consumers require this, and it is generally
a good practice.

Setting limits == requests gives the pods the "Guaranteed" QoS class.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-11-02 14:57:39 -05:00
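
For example, a container resources stanza of the shape described above; the numbers are illustrative, not taken from the repo:

  resources:
    requests:
      cpu: "100m"
      memory: "128Mi"
    limits:
      cpu: "100m"        # limits == requests => "Guaranteed" QoS class
      memory: "128Mi"
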
Andrew Keesler fcea48c8f9
Run as non-root
I tried to follow a principle of encapsulation here - we can still default to
peeps making connections to 80/443 on a Service object, but internally we will
use 8080/8443.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-11-02 12:51:15 -05:00
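
A sketch of the port mapping and non-root settings implied above; the UID and structure are assumptions:

  # Service side: clients still connect to 443...
  ports:
  - name: https
    port: 443
    targetPort: 8443     # ...while the container listens on a non-privileged port
  ---
  # Pod side:
  securityContext:
    runAsUser: 1001      # any non-root UID; the actual value is an assumption
    runAsNonRoot: true
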
Matt Moyer 9e1922f1ed
Split the config CRDs into two API groups.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-10-30 19:22:46 -05:00
Ryan Richard 059b6e885f Allow ytt templating of the `loadBalancerIP` for the supervisor 2020-10-28 16:45:23 -07:00
Ryan Richard 01dddd3cae Add some docs for configuring supervisor TLS
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-10-28 13:42:02 -07:00
Ryan Richard 29e0ce5662 Configure name of the supervisor default TLS cert secret via ConfigMap
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-10-28 11:56:50 -07:00
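
A hypothetical sketch of what such a ConfigMap entry might look like; the ConfigMap name and key are assumptions, not taken from the docs:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: pinniped-supervisor-static-config       # name assumed
  data:
    pinniped.yaml: |
      names:
        defaultTLSCertificateSecret: my-default-tls-cert   # key name assumed
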
Ryan Richard eeb110761e Rename `secretName` to `SNICertificateSecretName` in OIDCProviderConfig 2020-10-26 17:25:45 -07:00
Ryan Richard 8b7c30cfbd Supervisor listens for HTTPS on port 443 with configurable TLS certs
- TLS certificates can be configured on the OIDCProviderConfig using
  the `secretName` field.
- When listening for incoming TLS connections, choose the TLS cert
  based on the SNI hostname of the incoming request.
- Because SNI hostname information on incoming requests does not include
  the port number of the request, we add a validation that
  OIDCProviderConfigs where the issuer hostnames (not including port
  number) are the same must use the same `secretName`.
- Note that this approach does not yet support requests made to an
  IP address instead of a hostname. Also note that `localhost` is
  considered a hostname by SNI.
- Add port 443 as a container port to the pod spec.
- A new controller watches for TLS secrets and caches them in memory.
  That same in-memory cache is used while servicing incoming connections
  on the TLS port.
- Make it easy to configure both port 443 and/or port 80 for various
  Service types using our ytt templates for the supervisor.
- When deploying to kind, add another nodeport and forward it to the
  host on another port to expose our new HTTPS supervisor port to the
  host.
2020-10-26 17:03:26 -07:00
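
For illustration, an OIDCProviderConfig using the `secretName` field described above; the API group/version is an assumption:

  apiVersion: config.pinniped.dev/v1alpha1   # group/version assumed
  kind: OIDCProviderConfig
  metadata:
    name: my-provider
  spec:
    issuer: https://issuer.example.com
    secretName: my-tls-cert   # cert is selected at serve time by the request's SNI hostname
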
Ryan Richard 25a91019c2 Add `spec.secretName` to OPC and handle case-insensitive hostnames
- When two different Issuers have the same host (i.e. they differ
  only by path) then they must have the same secretName. This is because
  it wouldn't make sense for there to be two different TLS certificates
  for one host. Any that do not have the same secret name get an error
  status, and we avoid serving OIDC endpoints for them. The host
  comparison is case-insensitive.
- Issuer hostnames should be treated as case-insensitive, because
  DNS hostnames are case-insensitive. So https://me.com and
  https://mE.cOm are duplicate issuers. However, paths are
  case-sensitive, so https://me.com/A and https://me.com/a are
  different issuers. Fixed this in the issuer validations and in the
  OIDC Manager's request router logic.
2020-10-23 16:25:44 -07:00
Andrew Keesler f928ef4752 Also mention using a service mesh is an option for supervisor ingress
Signed-off-by: Ryan Richard <richardry@vmware.com>
2020-10-23 10:23:17 -07:00
Ryan Richard eafdef7b11 Add docs for creating an Ingress for the Supervisor
Note that some of these new docs mention things that will not be
implemented until we finish the next story.
2020-10-22 16:57:50 -07:00
Ryan Richard 397ec61e57 Specify the supervisor NodePort Service's `port` and `nodePort` separately
When using kind we forward the node's port to the host, so we only
really care about the `nodePort` value. For acceptance clusters,
we put an Ingress in front of a NodePort Service, so we only really
care about the `port` value.
2020-10-22 15:37:35 -07:00
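
A sketch of the resulting Service spec; `nodePort: 31234` matches the value used by the local test scripts elsewhere in this log, while the other numbers are assumptions:

  kind: Service
  spec:
    type: NodePort
    ports:
    - port: 80           # what an Ingress in front of the Service targets (acceptance clusters)
      targetPort: 8080   # container port; assumed
      nodePort: 31234    # what kind forwards to the host for local development
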
Ryan Richard 122f7cffdb Make the supervisor healthz endpoint public
Based on our experiences today with GKE, it will be easier for our users
to configure Ingress health checks if the healthz endpoint is available
on the same public port as the OIDC endpoints.

Also add an integration test for the healthz endpoint now that it is
public.

Also add the optional `containers[].ports.containerPort` to the
supervisor Deployment because the GKE docs say that GKE will look
at that field while inferring how to invoke the health endpoint. See
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc
2020-10-21 15:24:58 -07:00
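
The relevant Deployment fragment, sketched; the container name and port number are assumptions:

  containers:
  - name: pinniped-supervisor        # name assumed
    ports:
    - containerPort: 8080            # optional, but GKE uses it to infer Ingress health checks
      protocol: TCP
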
Andrew Keesler fa5f653de6 Implement readinessProbe and livenessProbe for supervisor
Signed-off-by: Ryan Richard <richardry@vmware.com>
2020-10-21 11:51:31 -07:00
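
A sketch of probes of this shape, pointed at the healthz endpoint made public in the commit above; the port and timing values are assumptions:

  readinessProbe:
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 10
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 10
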
Andrew Keesler 617c5608ca Supervisor controllers apply custom labels to JWKS secrets
Signed-off-by: Ryan Richard <richardry@vmware.com>
2020-10-15 12:40:56 -07:00
Ryan Richard f8e461dfc3 Merge branch 'main' into label_every_resource 2020-10-15 10:19:03 -07:00
Ryan Richard 94f20e57b1 Concierge controllers add labels to all created resources 2020-10-15 10:14:23 -07:00
Ryan Richard 1301018655 Support installing concierge and supervisor into existing namespace
- A new optional ytt value called `into_namespace` installs into that
  preexisting namespace rather than creating a new namespace for each app
- Also ensure that every resource created statically by either app's yaml
  at install time is labeled consistently
- Also support adding custom labels to all of those resources from a
  new ytt value called `custom_labels`
2020-10-14 15:05:42 -07:00
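
The two new data values, sketched in a values file; the value names come from the commit message and the values themselves are illustrative:

  #! values.yaml (sketch)
  #@data/values
  ---
  into_namespace: my-preexisting-namespace
  custom_labels:
    team: auth
    env: test
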
Andrew Keesler 6aed025c79
supervisor-generate-key: initial spike
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-10-14 09:47:34 -04:00
Andrew Keesler 3d5937a8e8
deploy/supervisor: type: eaxmple -> example
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-10-14 09:22:15 -04:00
Ryan Richard 478b0a0fd8 Add supervisor yaml and rename concierge yaml in release process
Add install-pinniped-supervisor.yaml and rename install-pinniped.yaml
to install-pinniped-concierge.yaml in the release process and
installation/demo documentation.
2020-10-12 09:43:52 -07:00
Ryan Richard 171f3ed906 Add some docs for how to configure the Supervisor app after installing 2020-10-09 16:28:34 -07:00
Ryan Richard 354b922e48 Allow creation of different Service types in Supervisor ytt templates
- Tiltfile and prepare-for-integration-tests.sh both specify the
  NodePort Service using `--data-value-yaml 'service_nodeport_port=31234'`
- Also rename the namespaces used by the Concierge and Supervisor apps
  during integration tests running locally
2020-10-09 16:00:11 -07:00
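
The same data value, sketched as it might appear in a values file rather than on the command line:

  #! values.yaml sketch; equivalent to --data-value-yaml 'service_nodeport_port=31234'
  #@data/values
  ---
  service_nodeport_port: 31234
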
Ryan Richard f5a6a0bb1e Move all three deployment dirs under a new top-level `deploy/` dir 2020-10-09 10:00:22 -07:00