Commit Graph

227 Commits

Author SHA1 Message Date
Ryan Richard
904086cbec fix a typo in some comments 2021-03-22 09:34:58 -07:00
Matt Moyer
077aa8a42e
Fix a copy-paste typo in the ImpersonationProxyInfo JSON field name.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-03-12 13:24:05 -06:00
Ryan Richard
0b300cbe42 Use TokenCredentialRequest instead of base64 token with impersonator
To make an impersonation request, first make a TokenCredentialRequest
to get a certificate. That cert will either be issued by the Kube
API server's CA or by a new CA specific to the impersonator. Either
way, you can then make a request to the impersonator, presenting
that client cert for auth; the impersonator will accept it and
make the impersonation call on your behalf.

The impersonator http handler now borrows some Kube library code
to handle request processing. This will allow us to more closely
mimic the behavior of a real API server, e.g. the client cert
auth will work exactly like the real API server.

Signed-off-by: Monis Khan <mok@vmware.com>
2021-03-10 10:30:06 -08:00
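
For illustration, the first step of that flow is roughly the following TokenCredentialRequest (a sketch; the authenticator reference and token are placeholders, not values from this repo):

  apiVersion: login.concierge.pinniped.dev/v1alpha1
  kind: TokenCredentialRequest
  spec:
    token: "<token-from-your-external-identity-provider>"  # placeholder
    authenticator:
      apiGroup: authentication.concierge.pinniped.dev
      kind: WebhookAuthenticator
      name: my-webhook-authenticator                       # hypothetical name

The response's status.credential then carries the short-lived client certificate and key that can be presented to the impersonator.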
Ryan Richard
f0fc84c922 Add new allowed values to field validations on CredentialIssuer
The new values are used by the impersonation proxy's status.
2021-03-03 12:53:41 -08:00
Ryan Richard
454f35ccd6 Edit a comment on a type and run codegen 2021-03-02 16:52:23 -08:00
Matt Moyer
60f92d5fe2
Merge branch 'main' of github.com:vmware-tanzu/pinniped into impersonation-proxy
This is more than an automatic merge. It also includes a rewrite of the CredentialIssuer API impersonation proxy fields using the new structure, and updates to the CLI to account for that new API.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-03-02 16:06:19 -06:00
Matt Moyer
7174f857d8
Add generated code.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-03-02 13:09:25 -06:00
Ryan Richard
a75c2194bc Read the names of the impersonation-related resources from the config
They were previously hardcoded as a temporary measure. Now they are set
at deploy time via the static ConfigMap in deployment.yaml.
2021-03-02 09:31:24 -08:00
Ryan Richard
f8111db5ff Merge branch 'main' into impersonation-proxy 2021-02-25 14:50:40 -08:00
Matt Moyer
7be8927d5e
Add generated code for new CredentialIssuer API fields.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-02-24 10:47:06 -06:00
Matt Moyer
7a1d92a8d4
Restructure docs into new layout.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-02-23 11:11:07 -06:00
Andrew Keesler
069b3fba37
Merge remote-tracking branch 'upstream/main' into impersonation-proxy
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-02-23 12:10:52 -05:00
Monis Khan
abc941097c
Add WhoAmIRequest Aggregated Virtual REST API
This change adds a new virtual aggregated API that can be used by
any user to echo back who they are currently authenticated as.  This
has general utility to end users and can be used in tests to
validate whether authentication was successful.

Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-22 20:02:41 -05:00
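
As a sketch, exercising this API can be as simple as creating an empty request object and reading the response (assuming the identity.concierge.pinniped.dev group name):

  apiVersion: identity.concierge.pinniped.dev/v1alpha1
  kind: WhoAmIRequest
  # e.g. `kubectl create -f whoami.yaml -o yaml`; the returned status
  # reports the username and groups you are authenticated as.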
Mo Khan
a54e1145a5
concierge API service: update groupPriorityMinimum and versionPriority
Copy over values that I have seen used in the past.
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-19 07:47:38 -05:00
Ryan Richard
f5fedbb6b2 Add Service resource "delete" permission to Concierge RBAC
- Because the impersonation proxy config controller needs to be able
  to delete the load balancer which it created

Signed-off-by: Margo Crawford <margaretc@vmware.com>
2021-02-18 11:00:22 -08:00
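
A minimal sketch of the granted rule (the object name and kind are illustrative; logs elsewhere in this history suggest the `concierge` namespace):

  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: pinniped-concierge-impersonation   # hypothetical name
    namespace: concierge
  rules:
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["delete"]   # so the controller can delete the load balancer it created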
Ryan Richard
5cd60fa5f9 Move starting/stopping impersonation proxy server to a new controller
- Watch a configmap to read the configuration of the impersonation
  proxy and reconcile it.
- Implements "auto" mode by querying the API for control plane nodes.
- WIP: does not create a load balancer or proper TLS certificates yet.
  Those will come in future commits.

Signed-off-by: Margo Crawford <margaretc@vmware.com>
2021-02-11 17:25:52 -08:00
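
A hypothetical sketch of the watched ConfigMap described above (the object name and data key are assumptions; only the "auto" mode is named in the commit):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: pinniped-concierge-impersonation-proxy-config   # hypothetical name
    namespace: concierge
  data:
    config.yaml: |   # hypothetical key
      mode: auto     # "auto" queries the API for control plane nodes to decide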
Andrew Keesler
9b87906a30
Merge remote-tracking branch 'upstream/main' into impersonation-proxy
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-02-11 11:03:33 -05:00
Monis Khan
96cec59236
Generated
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-10 21:52:09 -05:00
Monis Khan
de88ae2f61
Fix status related RBAC
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-10 21:52:09 -05:00
Monis Khan
dd3d1c8b1b
Generated
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-10 21:52:09 -05:00
Monis Khan
741b8fe88d
Generated
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-10 21:52:08 -05:00
Monis Khan
89b00e3702
Declare war on namespaces
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-10 21:52:07 -05:00
Monis Khan
d2480e6300
Generated
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-10 21:52:07 -05:00
Monis Khan
4205e3dedc
Make concierge APIs cluster scoped
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-10 21:52:07 -05:00
Ryan Richard
e4c49c37b9 Merge branch 'main' into impersonation-proxy 2021-02-09 13:45:37 -08:00
Andrew Keesler
2ae631b603
deploy/concierge: add RBAC for flowschemas and prioritylevelconfigurations
As of upgrading to Kubernetes 1.20, our aggregated API server now runs some
controllers for the two flowcontrol.apiserver.k8s.io resources in the title of
this commit, so it needs RBAC to read them.

This should get rid of the following error messages in our Concierge logs:
  Failed to watch *v1beta1.FlowSchema: failed to list *v1beta1.FlowSchema: flowschemas.flowcontrol.apiserver.k8s.io is forbidden: User "system:serviceaccount:concierge:concierge" cannot list resource "flowschemas" in API group "flowcontrol.apiserver.k8s.io" at the cluster scope
  Failed to watch *v1beta1.PriorityLevelConfiguration: failed to list *v1beta1.PriorityLevelConfiguration: prioritylevelconfigurations.flowcontrol.apiserver.k8s.io is forbidden: User "system:serviceaccount:concierge:concierge" cannot list resource "prioritylevelconfigurations" in API group "flowcontrol.apiserver.k8s.io" at the cluster scope

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-02-05 08:19:12 -05:00
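
The fix implied by those log lines is a ClusterRole rule along these lines (a sketch; the rule's name is illustrative):

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: pinniped-concierge-flowcontrol   # hypothetical name
  rules:
  - apiGroups: ["flowcontrol.apiserver.k8s.io"]
    resources: ["flowschemas", "prioritylevelconfigurations"]
    verbs: ["list", "watch"]   # the verbs the logs show being denied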
Margo Crawford
23e8c35918 Revert "CredentialIssuer contains Impersonation Proxy spec"
This reverts commit 83bbd1fa9314508030ea9fcf26c6720212d65dc0.
2021-02-03 09:37:39 -08:00
Margo Crawford
ab60396ac4 CredentialIssuer contains Impersonation Proxy spec 2021-02-03 09:37:39 -08:00
Matt Moyer
1299231a48 Add integration test for impersonation proxy.
Signed-off-by: Margo Crawford <margaretc@vmware.com>
2021-02-03 09:31:30 -08:00
Margo Crawford
b6abb022f6 Add initial implementation of impersonation proxy.
Signed-off-by: Margo Crawford <margaretc@vmware.com>
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-02-03 09:31:13 -08:00
Matt Moyer
1ceef5874e
Clean up docs using https://get.pinniped.dev redirects.
We have these redirects set up to make the `kubectl apply -f [...]` commands cleaner, but we never went back and fixed up the documentation to use them until now.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-01-28 10:15:39 -06:00
Ryan Richard
616211c1bc
deploy: wire API group suffix through YTT templates
I didn't advertise this feature in the deploy READMEs since (hopefully) not
many people will want to use it?

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-01-19 17:23:06 -05:00
Andrew Keesler
af11d8cd58
Run Tilt images as root for faster reload
Previously, when triggering a Tilt reload via a *.go file change, a reload would
take ~13 seconds and we would see this error message in the Tilt logs for each
component.

  Live Update failed with unexpected error:
    command terminated with exit code 2
  Falling back to a full image build + deploy

Now, Tilt should reload images a lot faster (~3 seconds) since we are running
the images as root.

Note! Reloading the Concierge component still takes ~13 seconds because there
are 2 containers running in the Concierge namespace that use the Concierge
image: the main Concierge app and the kube cert agent pod. Tilt can't live
reload both of these at once, so the reload takes longer and we see this error
message.

  Will not perform Live Update because:
    Error retrieving container info: can only get container info for a single pod; image target image:image/concierge has 2 pods
  Falling back to a full image build + deploy

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-01-15 11:34:53 -05:00
Margo Crawford
6f04613aed Merge branch 'main' of github.com:vmware-tanzu/pinniped into kubernetes-1.20 2021-01-08 13:22:31 -08:00
Monis Khan
bba0f3a230
Always set an owner ref back to our deployment
This change updates our clients to always set an owner ref when:

1. The operation is a create
2. The object does not already have an owner ref set

Signed-off-by: Monis Khan <mok@vmware.com>
2021-01-07 15:25:40 -05:00
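
The injected reference would look roughly like this on a created object (a fragment sketch; the Deployment name is a placeholder):

  metadata:
    ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: pinniped-concierge   # placeholder: the app's own Deployment
      uid: <deployment-uid>      # copied from the live Deployment at create time

So deleting the Deployment garbage-collects the objects it created.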
Matt Moyer
e0b94f4780
Move our main image references to the VMware Harbor registry.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-12-17 17:51:09 -06:00
Margo Crawford
196e43aa48 Rename off of main
Signed-off-by: Ryan Richard <richardry@vmware.com>
2020-12-16 14:27:09 -08:00
Matt Moyer
7dae166a69
Merge branch 'main' into username-and-subject-claims 2020-12-16 15:23:19 -06:00
Andrew Keesler
095ba14cc8
Merge remote-tracking branch 'upstream/main' into secret-generation 2020-12-16 15:40:34 -05:00
Ryan Richard
dcb19150fc Nest claim configs one level deeper in JWTAuthenticatorSpec
Signed-off-by: Margo Crawford <margaretc@vmware.com>
2020-12-16 09:42:19 -08:00
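
After this change, a JWTAuthenticator configures its claims one level deeper, roughly like so (a sketch; issuer, audience, and claim names are placeholders):

  apiVersion: authentication.concierge.pinniped.dev/v1alpha1
  kind: JWTAuthenticator
  metadata:
    name: my-jwt-authenticator           # placeholder
  spec:
    issuer: https://issuer.example.com   # placeholder
    audience: my-audience                # placeholder
    claims:
      username: email   # claim to treat as the username
      groups: groups    # claim to treat as the group list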
Matt Moyer
404ff93102
Fix documentation comment for the UpstreamOIDCProvider's spec.client.secretName type.
The value is correctly validated as `secrets.pinniped.dev/oidc-client` elsewhere, only this comment was wrong.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-12-15 21:52:07 -06:00
Ryan Richard
40c6a67631 Merge branch 'main' into username-and-subject-claims 2020-12-15 18:09:44 -08:00
Ryan Richard
0e60c93cef Add UsernameClaim and GroupsClaim to JWTAuthenticator CRD spec
Signed-off-by: Margo Crawford <margaretc@vmware.com>
2020-12-15 10:36:19 -08:00
Andrew Keesler
2e784e006c
Merge remote-tracking branch 'upstream/main' into secret-generation 2020-12-15 13:24:33 -05:00
Andrew Keesler
50f9b434e7
SameIssuerHostMustUseSameSecret is a valid OIDCProvider status
I saw this message in our CI logs, which led me to this fix.
  could not update status: OIDCProvider.config.supervisor.pinniped.dev "acceptance-provider" is invalid: status.status: Unsupported value: "SameIssuerHostMustUseSameSecret": supported values: "Success", "Duplicate", "Invalid"

Also - correct an integration test error message that was misleading.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-12-15 11:53:53 -05:00
Andrew Keesler
82ae98d9d0
Set secret names on OIDCProvider status field
We believe this API is more forwards compatible with future secrets management
use cases. The implementation is a cry for help, but I was trying to follow the
previously established pattern of encapsulating the secret generation
functionality within a single group of packages.

This commit makes a breaking change to the current OIDCProvider API, but that
OIDCProvider API was added after the latest release, so it is technically still
in development until we release, and therefore we can continue to thrash on it.

I also took this opportunity to make some things private that didn't need to be
public.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-12-15 09:13:01 -05:00
Andrew Keesler
e17bc31b29
Pass CSRF cookie signing key from controller to cache
This also sets the CSRF cookie Secret's OwnerReference to the Pod's grandparent
Deployment, so that when the Deployment is cleaned up, the Secret is cleaned up
as well.

Obviously this controller implementation has a lot of issues, but it will at
least get us started.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-12-11 11:49:27 -05:00
Andrew Keesler
946b0539d2
Add JWTAuthenticator API type
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-12-08 15:41:48 -05:00
Rajat Goyal
31810a97e1 Remove duplicate docs from the repo and change all links to point to the Hugo site 2020-12-02 23:58:19 +05:30
Mo Khan
c32e452db8
Add nonroot SCC to work on OpenShift clusters 2020-11-18 17:08:45 -05:00
Matt Moyer
e867fb82b9
Add spec.tls field to UpstreamOIDCProvider API.
This allows for a custom CA bundle to be used when connecting to the upstream issuer.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-16 20:23:20 -06:00
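
A sketch of the new field (the API group shown is an assumption for this point in the history; other values are placeholders):

  apiVersion: idp.supervisor.pinniped.dev/v1alpha1
  kind: UpstreamOIDCProvider
  metadata:
    name: my-upstream                      # placeholder
  spec:
    issuer: https://upstream.example.com   # placeholder
    client:
      secretName: my-oidc-client           # placeholder
    tls:
      certificateAuthorityData: "<base64-encoded CA bundle>"   # the new custom CA field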
Matt Moyer
d3d8ef44a0
Make more fields in UpstreamOIDCProvider optional.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-13 15:28:37 -06:00
Matt Moyer
2e7d869ccc
Add generated API/client code for new UpstreamOIDCProvider CRD.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-13 11:38:50 -06:00
Matt Moyer
bac3c19bec
Add UpstreamOIDCProvider API type definition.
This is essentially just a copy of Andrew's work from https://github.com/vmware-tanzu/pinniped/pull/135.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-13 11:38:49 -06:00
Matt Moyer
7f2c43cd62
Put all of our APIs into a "pinniped" category, and never use "all".
We want our APIs to respond to `kubectl get pinniped`. We shouldn't use `all`, because we don't think most average users should have permission to see our API types; if we put our types there, those users would get an error from `kubectl get all`.

I also added some tests to assert these properties on all `*.pinniped.dev` API resources.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-12 16:26:34 -06:00
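
Mechanically, the category is part of each CRD's names stanza, roughly (a fragment sketch; the CRD name is illustrative):

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: webhookauthenticators.authentication.concierge.pinniped.dev
  spec:
    names:
      categories: ["pinniped"]   # answers `kubectl get pinniped`; "all" deliberately omitted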
Andrew Keesler
724c0d3eb0
Add YTT template value for setting log level
This is helpful for us, amongst other users, because we want to enable "debug"
logging whenever we deploy components for testing.

See a5643e3 for addition of log level.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-11-11 09:01:38 -05:00
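
As a sketch, the corresponding ytt data value might be supplied like so (the value name is an assumption):

  #@data/values
  ---
  log_level: debug   # hypothetical value name; "debug" is the level the commit mentions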
Ryan Richard
1223cf7877
Merge pull request #154 from vmware-tanzu/change_release_static_yaml_names
Rename static yaml files in release process
2020-11-02 17:09:11 -08:00
Matt Moyer
c451604816
Merge pull request #182 from mattmoyer/more-renames
Rename more APIs before we cut a release with longer-term API compatibility
2020-11-02 18:34:26 -06:00
Ryan Richard
05cf56a0fa
Merge pull request #180 from vmware-tanzu/limits
Add CPU/memory limits to our deployments
2020-11-02 16:22:37 -08:00
Matt Moyer
2bf5c8b48b
Replace the OIDCProvider field SNICertificateSecretName with a TLS.SecretName field.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-02 18:15:03 -06:00
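
A sketch of the rename on an OIDCProvider spec (placeholder values; the old flat field is shown for contrast):

  spec:
    issuer: https://issuer.example.com   # placeholder
    tls:
      secretName: my-tls-cert            # replaces the flat SNICertificateSecretName field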
Ryan Richard
05233963fb Add CPU requests and limits to the Concierge and Supervisor deployments 2020-11-02 15:47:20 -08:00
Matt Moyer
2b8773aa54
Rename OIDCProviderConfig to OIDCProvider.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-02 17:40:39 -06:00
Matt Moyer
59263ea733
Rename CredentialIssuerConfig to CredentialIssuer.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-02 17:39:42 -06:00
Matt Moyer
b13a8075e4
Merge pull request #183 from vmware-tanzu/non-root
Run as non-root
2020-11-02 17:39:14 -06:00
Matt Moyer
935577f8e7
Give the concierge access to use any PodSecurityPolicy.
This is needed on clusters with PodSecurityPolicy enabled by default, but should be harmless in other cases.

This is generally needed because a restrictive PodSecurityPolicy will usually otherwise prevent the `hostPath` volume mount needed by the dynamically-created cert agent pod.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-02 15:10:00 -06:00
Ryan Richard
781f86d18c
deploy: add memory limits
This is the beginning of a change to add cpu/memory limits to our pods.
We are doing this because some consumers require this, and it is generally
a good practice.

We set limits == requests to get the "Guaranteed" QoS class.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-11-02 14:57:39 -05:00
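
A sketch of the pattern described (the numbers are illustrative, not the project's actual values):

  resources:
    requests:
      cpu: "100m"
      memory: "128Mi"
    limits:
      cpu: "100m"      # limits == requests puts the pod in the Guaranteed QoS class
      memory: "128Mi"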
Andrew Keesler
fcea48c8f9
Run as non-root
I tried to follow a principle of encapsulation here - we can still default to
peeps making connections to 80/443 on a Service object, but internally we will
use 8080/8443.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-11-02 12:51:15 -05:00
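
The encapsulation described might look roughly like this (a sketch; the Service name and UID are illustrative):

  apiVersion: v1
  kind: Service
  metadata:
    name: pinniped-concierge-api   # hypothetical name
  spec:
    ports:
    - port: 443          # external default stays the same
      targetPort: 8443   # unprivileged in-container port
  # ...while the pod spec drops root, e.g.:
  #   securityContext:
  #     runAsUser: 1001      # illustrative non-root UID
  #     runAsNonRoot: true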
Matt Moyer
9e1922f1ed
Split the config CRDs into two API groups.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-10-30 19:22:46 -05:00
Ryan Richard
01f4fdb5c3 Remove namespace from a ClusterRoleBinding, since ClusterRoleBindings are not namespaced 2020-10-30 16:10:04 -07:00
Matt Moyer
0f25657a35
Rename WebhookIdentityProvider to WebhookAuthenticator.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-10-30 15:11:53 -05:00
Matt Moyer
e69183aa8a
Rename idp.concierge.pinniped.dev to authentication.concierge.pinniped.dev.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-10-30 14:07:40 -05:00
Matt Moyer
81390bba89
Rename idp.pinniped.dev to idp.concierge.pinniped.dev.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-10-30 14:07:39 -05:00
Matt Moyer
f0320dfbd8
Rename login API to login.concierge.pinniped.dev.
This is the first of a few related changes that re-organize our API after the big recent changes that introduced the supervisor component.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-10-30 09:58:28 -05:00
Ryan Richard
059b6e885f Allow ytt templating of the loadBalancerIP for the supervisor 2020-10-28 16:45:23 -07:00
Ryan Richard
01dddd3cae Add some docs for configuring supervisor TLS
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-10-28 13:42:02 -07:00
Ryan Richard
29e0ce5662 Configure name of the supervisor default TLS cert secret via ConfigMap
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-10-28 11:56:50 -07:00
Ryan Richard
eeb110761e Rename secretName to SNICertificateSecretName in OIDCProviderConfig 2020-10-26 17:25:45 -07:00
Ryan Richard
8b7c30cfbd Supervisor listens for HTTPS on port 443 with configurable TLS certs
- TLS certificates can be configured on the OIDCProviderConfig using
  the `secretName` field.
- When listening for incoming TLS connections, choose the TLS cert
  based on the SNI hostname of the incoming request.
- Because SNI hostname information on incoming requests does not include
  the port number of the request, we add a validation that
  OIDCProviderConfigs where the issuer hostnames (not including port
  number) are the same must use the same `secretName`.
- Note that this approach does not yet support requests made to an
  IP address instead of a hostname. Also note that `localhost` is
  considered a hostname by SNI.
- Add port 443 as a container port to the pod spec.
- A new controller watches for TLS secrets and caches them in memory.
  That same in-memory cache is used while servicing incoming connections
  on the TLS port.
- Make it easy to configure both port 443 and/or port 80 for various
  Service types using our ytt templates for the supervisor.
- When deploying to kind, add another nodeport and forward it to the
  host on another port to expose our new HTTPS supervisor port to the
  host.
2020-10-26 17:03:26 -07:00
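
Per the validation described above, a sketch of two providers that differ only by path and therefore must share a `secretName` (the API group shown is an assumption for this point in the history):

  apiVersion: config.pinniped.dev/v1alpha1
  kind: OIDCProviderConfig
  metadata:
    name: provider-a
  spec:
    issuer: https://issuers.example.com/a   # placeholder
    secretName: example-tls                 # same host as provider-b, so must match
  ---
  apiVersion: config.pinniped.dev/v1alpha1
  kind: OIDCProviderConfig
  metadata:
    name: provider-b
  spec:
    issuer: https://issuers.example.com/b
    secretName: example-tls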
Ryan Richard
25a91019c2 Add spec.secretName to OPC and handle case-insensitive hostnames
- When two different Issuers have the same host (i.e. they differ
  only by path) then they must have the same secretName. This is because
  it wouldn't make sense for there to be two different TLS certificates
  for one host. Find any that do not have the same secret name to
  put an error status on them and to avoid serving OIDC endpoints for
  them. The host comparison is case-insensitive.
- Issuer hostnames should be treated as case-insensitive, because
  DNS hostnames are case-insensitive. So https://me.com and
  https://mE.cOm are duplicate issuers. However, paths are
  case-sensitive, so https://me.com/A and https://me.com/a are
  different issuers. Fixed this in the issuer validations and in the
  OIDC Manager's request router logic.
2020-10-23 16:25:44 -07:00
Andrew Keesler
f928ef4752 Also mention using a service mesh is an option for supervisor ingress
Signed-off-by: Ryan Richard <richardry@vmware.com>
2020-10-23 10:23:17 -07:00
Ryan Richard
eafdef7b11 Add docs for creating an Ingress for the Supervisor
Note that some of these new docs mention things that will not be
implemented until we finish the next story.
2020-10-22 16:57:50 -07:00
Ryan Richard
397ec61e57 Specify the supervisor NodePort Service's port and nodePort separately
When using kind we forward the node's port to the host, so we only
really care about the `nodePort` value. For acceptance clusters,
we put an Ingress in front of a NodePort Service, so we only really
care about the `port` value.
2020-10-22 15:37:35 -07:00
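
A sketch of the distinction (the nodePort value appears elsewhere in this history; the rest is illustrative):

  apiVersion: v1
  kind: Service
  metadata:
    name: pinniped-supervisor-nodeport   # hypothetical name
  spec:
    type: NodePort
    ports:
    - port: 443        # what an Ingress in front of this Service targets
      nodePort: 31234  # what kind forwards to the host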
Ryan Richard
122f7cffdb Make the supervisor healthz endpoint public
Based on our experiences today with GKE, it will be easier for our users
to configure Ingress health checks if the healthz endpoint is available
on the same public port as the OIDC endpoints.

Also add an integration test for the healthz endpoint now that it is
public.

Also add the optional `containers[].ports.containerPort` to the
supervisor Deployment because the GKE docs say that GKE will look
at that field while inferring how to invoke the health endpoint. See
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc
2020-10-21 15:24:58 -07:00
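
A fragment sketch of what that looks like in the supervisor Deployment (the healthz path is per the commit; the port number is illustrative):

  containers:
  - name: pinniped-supervisor   # placeholder
    ports:
    - containerPort: 8080       # declared so GKE can infer the health check
    livenessProbe:
      httpGet:
        path: /healthz          # now served on the same public port as the OIDC endpoints
        port: 8080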
Andrew Keesler
fa5f653de6 Implement readinessProbe and livenessProbe for supervisor
Signed-off-by: Ryan Richard <richardry@vmware.com>
2020-10-21 11:51:31 -07:00
Andrew Keesler
617c5608ca Supervisor controllers apply custom labels to JWKS secrets
Signed-off-by: Ryan Richard <richardry@vmware.com>
2020-10-15 12:40:56 -07:00
Ryan Richard
f8e461dfc3 Merge branch 'main' into label_every_resource 2020-10-15 10:19:03 -07:00
Ryan Richard
94f20e57b1 Concierge controllers add labels to all created resources 2020-10-15 10:14:23 -07:00
Ryan Richard
1301018655 Support installing concierge and supervisor into existing namespace
- A new optional ytt value called `into_namespace` installs into a
  preexisting namespace rather than creating a new namespace for each
  app (see the sketch below)
- Also ensure that every resource created statically by either app's
  yaml at install time is labeled consistently
- Also support adding custom labels to all of those resources from a
  new ytt value called `custom_labels`
2020-10-14 15:05:42 -07:00
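
As a sketch, the corresponding ytt data values might be supplied like so (the namespace and label keys are placeholders):

  #@data/values
  ---
  into_namespace: my-existing-namespace   # preexisting namespace to install into
  custom_labels:                          # applied to every created resource
    team: auth                            # illustrative
    env: production                       # illustrative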
Andrew Keesler
6aed025c79
supervisor-generate-key: initial spike
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-10-14 09:47:34 -04:00
Andrew Keesler
3d5937a8e8
deploy/supervisor: type: eaxmple -> example
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-10-14 09:22:15 -04:00
Ryan Richard
478b0a0fd8 Add supervisor yaml and rename concierge yaml in release process
Add install-pinniped-supervisor.yaml and rename install-pinniped.yaml
to install-pinniped-concierge.yaml in the release process and
installation/demo documentation.
2020-10-12 09:43:52 -07:00
Ryan Richard
171f3ed906 Add some docs for how to configure the Supervisor app after installing 2020-10-09 16:28:34 -07:00
Ryan Richard
354b922e48 Allow creation of different Service types in Supervisor ytt templates
- Tiltfile and prepare-for-integration-tests.sh both specify the
  NodePort Service using `--data-value-yaml 'service_nodeport_port=31234'`
- Also rename the namespaces used by the Concierge and Supervisor apps
  during integration tests running locally
2020-10-09 16:00:11 -07:00
Ryan Richard
34549b779b Make tilt work with the supervisor app and add more uninstall testing
- Also continue renaming things related to the concierge app
- Enhance the uninstall test to also test uninstalling the supervisor
  and local-user-authenticator apps
2020-10-09 14:25:34 -07:00
Ryan Richard
b71959961d Merge branch 'main' into supervisor-with-discovery 2020-10-09 10:00:50 -07:00
Ryan Richard
f5a6a0bb1e Move all three deployment dirs under a new top-level deploy/ dir 2020-10-09 10:00:22 -07:00
Andrew Keesler
c555c14ccb
supervisor-oidc: add OIDCProviderConfig.Status.LastUpdateTime
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-10-09 11:54:50 -04:00
Ryan Richard
a4389562e3 Fix mistake in deployment.yaml where service selector was hardcoded 2020-10-08 16:20:21 -07:00
Andrew Keesler
da00fc708f
supervisor-oidc: checkpoint: add status to provider CRD
Signed-off-by: Ryan Richard <richardry@vmware.com>
2020-10-08 13:27:45 -04:00
Andrew Keesler
20ce142f90
Merge remote-tracking branch 'upstream/main' into supervisor-with-discovery 2020-10-07 11:37:33 -04:00