Commit Graph

56 Commits

Ryan Richard
a75c2194bc Read the names of the impersonation-related resources from the config
They were previously hardcoded as a temporary measure. Now they are set at deploy
time via the static ConfigMap in deployment.yaml.
2021-03-02 09:31:24 -08:00
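For illustration, a deploy-time ConfigMap carrying such resource names might look like this (the names and keys here are assumptions for this sketch, not taken from the commit):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: pinniped-concierge-config   # hypothetical name
  data:
    pinniped.yaml: |
      names:
        impersonationLoadBalancerService: impersonation-proxy-load-balancer  # assumed key
        impersonationConfigMap: impersonation-proxy-config                   # assumed key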
Ryan Richard
f8111db5ff Merge branch 'main' into impersonation-proxy
2021-02-25 14:50:40 -08:00
Matt Moyer
7be8927d5e
Add generated code for new CredentialIssuer API fields.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-02-24 10:47:06 -06:00
Matt Moyer
7a1d92a8d4
Restructure docs into new layout.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-02-23 11:11:07 -06:00
Andrew Keesler
069b3fba37
Merge remote-tracking branch 'upstream/main' into impersonation-proxy
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-02-23 12:10:52 -05:00
Monis Khan
abc941097c
Add WhoAmIRequest Aggregated Virtual REST API
This change adds a new virtual aggregated API that can be used by
any user to echo back who they are currently authenticated as. This
has general utility for end users and can be used in tests to
validate whether authentication was successful.

Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-22 20:02:41 -05:00
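As a usage sketch, a create-only virtual resource like this is typically exercised by creating an empty request and reading back the echoed status (API group and version here are assumptions):

  apiVersion: identity.concierge.pinniped.dev/v1alpha1  # group/version assumed
  kind: WhoAmIRequest
  # e.g. `kubectl create -o yaml -f whoami.yaml`; the returned status
  # would echo the authenticated username and group memberships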
Mo Khan
a54e1145a5
concierge API service: update groupPriorityMinimum and versionPriority
Copy over values that I have seen used in the past.
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-19 07:47:38 -05:00
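groupPriorityMinimum and versionPriority are standard fields on a Kubernetes APIService object; a sketch with illustrative numbers (the commit does not state the exact values copied):

  apiVersion: apiregistration.k8s.io/v1
  kind: APIService
  metadata:
    name: v1alpha1.login.concierge.pinniped.dev
  spec:
    groupPriorityMinimum: 9900  # illustrative value
    versionPriority: 15         # illustrative value
    # service, group, and version omitted from this sketch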
Ryan Richard
f5fedbb6b2 Add Service resource "delete" permission to Concierge RBAC
- Because the impersonation proxy config controller needs to be able
  to delete the load balancer which it created

Signed-off-by: Margo Crawford <margaretc@vmware.com>
2021-02-18 11:00:22 -08:00
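A minimal sketch of the widened RBAC rule (the role name and the pre-existing verbs are assumptions; only "delete" is the addition named by this commit):

  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: pinniped-concierge  # hypothetical name
  rules:
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["create", "get", "list", "watch", "delete"]  # "delete" is the new verb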
Ryan Richard
5cd60fa5f9 Move starting/stopping impersonation proxy server to a new controller
- Watch a configmap to read the configuration of the impersonation
  proxy and reconcile it.
- Implements "auto" mode by querying the API for control plane nodes.
- WIP: does not create a load balancer or proper TLS certificates yet.
  Those will come in future commits.

Signed-off-by: Margo Crawford <margaretc@vmware.com>
2021-02-11 17:25:52 -08:00
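A sketch of the kind of ConfigMap such a controller could watch (the name, key, and mode values are assumptions for illustration):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: pinniped-concierge-impersonation-proxy-config  # hypothetical name
  data:
    config.yaml: |
      mode: auto    # assumed modes: auto | enabled | disabled
      endpoint: ""  # assumed optional field for an externally provided endpoint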
Andrew Keesler
9b87906a30
Merge remote-tracking branch 'upstream/main' into impersonation-proxy
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-02-11 11:03:33 -05:00
Monis Khan
96cec59236
Generated
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-10 21:52:09 -05:00
Monis Khan
de88ae2f61
Fix status related RBAC
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-10 21:52:09 -05:00
Monis Khan
dd3d1c8b1b
Generated
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-10 21:52:09 -05:00
Monis Khan
741b8fe88d
Generated
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-10 21:52:08 -05:00
Monis Khan
89b00e3702
Declare war on namespaces
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-10 21:52:07 -05:00
Monis Khan
d2480e6300
Generated
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-10 21:52:07 -05:00
Monis Khan
4205e3dedc
Make concierge APIs cluster scoped
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-10 21:52:07 -05:00
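For the CRD-backed concierge APIs, cluster scoping is expressed via the standard scope field; a sketch:

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: credentialissuers.config.concierge.pinniped.dev
  spec:
    scope: Cluster  # previously Namespaced
    # group, names, and versions omitted from this sketch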
Ryan Richard
e4c49c37b9 Merge branch 'main' into impersonation-proxy
2021-02-09 13:45:37 -08:00
Andrew Keesler
2ae631b603
deploy/concierge: add RBAC for flowschemas and prioritylevelconfigurations
As of upgrading to Kubernetes 1.20, our aggregated API server now runs some
controllers for the two flowcontrol.apiserver.k8s.io resources named in the title of
this commit, so it needs RBAC to read them.

This should get rid of the following error messages in our Concierge logs:
  Failed to watch *v1beta1.FlowSchema: failed to list *v1beta1.FlowSchema: flowschemas.flowcontrol.apiserver.k8s.io is forbidden: User "system:serviceaccount:concierge:concierge" cannot list resource "flowschemas" in API group "flowcontrol.apiserver.k8s.io" at the cluster scope
  Failed to watch *v1beta1.PriorityLevelConfiguration: failed to list *v1beta1.PriorityLevelConfiguration: prioritylevelconfigurations.flowcontrol.apiserver.k8s.io is forbidden: User "system:serviceaccount:concierge:concierge" cannot list resource "prioritylevelconfigurations" in API group "flowcontrol.apiserver.k8s.io" at the cluster scope

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-02-05 08:19:12 -05:00
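A sketch of the ClusterRole rule that grants this, with the verbs inferred from the error messages above:

  - apiGroups: ["flowcontrol.apiserver.k8s.io"]
    resources: ["flowschemas", "prioritylevelconfigurations"]
    verbs: ["list", "watch"]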
Margo Crawford
23e8c35918 Revert "CredentialIssuer contains Impersonation Proxy spec"
This reverts commit 83bbd1fa9314508030ea9fcf26c6720212d65dc0.
2021-02-03 09:37:39 -08:00
Margo Crawford
ab60396ac4 CredentialIssuer contains Impersonation Proxy spec
2021-02-03 09:37:39 -08:00
Matt Moyer
1299231a48 Add integration test for impersonation proxy.
Signed-off-by: Margo Crawford <margaretc@vmware.com>
2021-02-03 09:31:30 -08:00
Margo Crawford
b6abb022f6 Add initial implementation of impersonation proxy.
Signed-off-by: Margo Crawford <margaretc@vmware.com>
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-02-03 09:31:13 -08:00
Matt Moyer
1ceef5874e
Clean up docs using https://get.pinniped.dev redirects.
We have these redirects set up to make the `kubectl apply -f [...]` commands cleaner, but we never went back and fixed up the documentation to use them until now.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-01-28 10:15:39 -06:00
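For example, a doc command can then read `kubectl apply -f https://get.pinniped.dev/v0.4.0/install-pinniped-concierge.yaml` instead of pointing at a long GitHub release asset URL; the version tag and file name here are illustrative of the redirect pattern, not taken from this commit.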
Ryan Richard
616211c1bc
deploy: wire API group suffix through YTT templates
I didn't advertise this feature in the deploy READMEs since (hopefully) not
many people will want to use it.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-01-19 17:23:06 -05:00
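A sketch of the deploy-time data value being wired through (the key name is an assumption):

  #! values.yaml (sketch)
  api_group_suffix: pinniped.dev  #! assumed key; templates would substitute this into API group names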
Andrew Keesler
af11d8cd58
Run Tilt images as root for faster reload
Previously, when triggering a Tilt reload via a *.go file change, a reload would
take ~13 seconds and we would see this error message in the Tilt logs for each
component.

  Live Update failed with unexpected error:
    command terminated with exit code 2
  Falling back to a full image build + deploy

Now, Tilt should reload images a lot faster (~3 seconds) since we are running
the images as root.

Note! Reloading the Concierge component still takes ~13 seconds because there
are 2 containers running in the Concierge namespace that use the Concierge
image: the main Concierge app and the kube cert agent pod. Tilt can't live
reload both of these at once, so the reload takes longer and we see this error
message.

  Will not perform Live Update because:
    Error retrieving container info: can only get container info for a single pod; image target image:image/concierge has 2 pods
  Falling back to a full image build + deploy

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-01-15 11:34:53 -05:00
Margo Crawford
6f04613aed Merge branch 'main' of github.com:vmware-tanzu/pinniped into kubernetes-1.20 2021-01-08 13:22:31 -08:00
Monis Khan
bba0f3a230
Always set an owner ref back to our deployment
This change updates our clients to always set an owner ref when:

1. The operation is a create
2. The object does not already have an owner ref set

Signed-off-by: Monis Khan <mok@vmware.com>
2021-01-07 15:25:40 -05:00
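Created objects would then carry a standard ownerReferences entry pointing back at the deployment; a sketch with placeholder values:

  metadata:
    ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: pinniped-concierge                   # hypothetical owner name
      uid: 00000000-0000-0000-0000-000000000000  # placeholder; taken from the live Deployment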
Matt Moyer
e0b94f4780
Move our main image references to the VMware Harbor registry.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-12-17 17:51:09 -06:00
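In the manifests this would show up as an image line such as `image: projects.registry.vmware.com/pinniped/pinniped-server`; the Harbor project path here is an assumption for illustration, not confirmed by this commit.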
Ryan Richard
dcb19150fc Nest claim configs one level deeper in JWTAuthenticatorSpec
Signed-off-by: Margo Crawford <margaretc@vmware.com>
2020-12-16 09:42:19 -08:00
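After this change, the claim settings sit one level down in the spec; a sketch (field casing assumed):

  apiVersion: authentication.concierge.pinniped.dev/v1alpha1
  kind: JWTAuthenticator
  metadata:
    name: my-jwt-authenticator  # hypothetical name
  spec:
    issuer: https://example.com/issuer
    audience: my-audience
    claims:           # claim configs now nested here
      username: email
      groups: groups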
Ryan Richard
0e60c93cef Add UsernameClaim and GroupsClaim to JWTAuthenticator CRD spec
Signed-off-by: Margo Crawford <margaretc@vmware.com>
2020-12-15 10:36:19 -08:00
Andrew Keesler
946b0539d2
Add JWTAuthenticator API type
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-12-08 15:41:48 -05:00
Mo Khan
c32e452db8
Add nonroot SCC to work on OpenShift clusters
2020-11-18 17:08:45 -05:00
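On OpenShift, granting use of an SCC is typically an RBAC rule of this shape (surrounding role omitted from the sketch):

  - apiGroups: ["security.openshift.io"]
    resources: ["securitycontextconstraints"]
    resourceNames: ["nonroot"]
    verbs: ["use"]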
Matt Moyer
7f2c43cd62
Put all of our APIs into a "pinniped" category, and never use "all".
We want to have our APIs respond to `kubectl get pinniped`, and we shouldn't use `all` because we don't think most average users should have permission to see our API types. If we put our types there, those users would get an error from `kubectl get all`.

I also added some tests to assert these properties on all `*.pinniped.dev` API resources.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-12 16:26:34 -06:00
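For the CRD-backed types, the category is declared in the names stanza; a sketch:

  spec:
    names:
      kind: WebhookAuthenticator
      categories:
      - pinniped  # responds to `kubectl get pinniped`; deliberately not "all"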
Andrew Keesler
724c0d3eb0
Add YTT template value for setting log level
This is helpful for us, amongst other users, because we want to enable "debug"
logging whenever we deploy components for testing.

See a5643e3 for addition of log level.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-11-11 09:01:38 -05:00
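A sketch of the new data value (key name assumed):

  #! values.yaml (sketch)
  log_level: debug  #! assumed key; e.g. info for production, debug for test deployments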
Ryan Richard
1223cf7877
Merge pull request #154 from vmware-tanzu/change_release_static_yaml_names
Rename static yaml files in release process
2020-11-02 17:09:11 -08:00
Matt Moyer
c451604816
Merge pull request #182 from mattmoyer/more-renames
Rename more APIs before we cut a release with longer-term API compatibility
2020-11-02 18:34:26 -06:00
Ryan Richard
05cf56a0fa
Merge pull request #180 from vmware-tanzu/limits
Add CPU/memory limits to our deployments
2020-11-02 16:22:37 -08:00
Ryan Richard
05233963fb Add CPU requests and limits to the Concierge and Supervisor deployments
2020-11-02 15:47:20 -08:00
Matt Moyer
59263ea733
Rename CredentialIssuerConfig to CredentialIssuer.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-02 17:39:42 -06:00
Matt Moyer
b13a8075e4
Merge pull request #183 from vmware-tanzu/non-root
Run as non-root
2020-11-02 17:39:14 -06:00
Matt Moyer
935577f8e7
Give the concierge access to use any PodSecurityPolicy.
This is needed on clusters with PodSecurityPolicy enabled by default, but should be harmless in other cases.

This is generally needed because a restrictive PodSecurityPolicy would otherwise prevent the `hostPath` volume mount needed by the dynamically-created cert agent pod.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-02 15:10:00 -06:00
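The corresponding rule follows the standard PodSecurityPolicy RBAC pattern; with no resourceNames it matches any policy, as the commit subject says:

  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    verbs: ["use"]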
Ryan Richard
781f86d18c
deploy: add memory limits
This is the beginning of a change to add cpu/memory limits to our pods.
We are doing this because some consumers require this, and it is generally
a good practice.

Setting limits == requests gives the pods the "Guaranteed" QoS class.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-11-02 14:57:39 -05:00
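A sketch of the pattern on a container (values illustrative):

  resources:
    requests:
      cpu: "100m"      # illustrative
      memory: "128Mi"  # illustrative
    limits:
      cpu: "100m"      # equal to requests => "Guaranteed" QoS class
      memory: "128Mi"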
Andrew Keesler
fcea48c8f9
Run as non-root
I tried to follow a principle of encapsulation here - we can still default to
clients making connections to 80/443 on a Service object, but internally we will
use 8080/8443.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-11-02 12:51:15 -05:00
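A sketch of that encapsulation: the Service keeps the well-known port while the container listens on a high, non-privileged port (names hypothetical):

  apiVersion: v1
  kind: Service
  metadata:
    name: pinniped-concierge-api  # hypothetical name
  spec:
    ports:
    - port: 443        # external default stays 443
      targetPort: 8443 # container port; above 1024, so no root needed to bind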
Matt Moyer
9e1922f1ed
Split the config CRDs into two API groups.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-10-30 19:22:46 -05:00
Ryan Richard
01f4fdb5c3 Remove namespace from a ClusterRoleBinding, since ClusterRoleBindings are not namespaced
2020-10-30 16:10:04 -07:00
Matt Moyer
0f25657a35
Rename WebhookIdentityProvider to WebhookAuthenticator.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-10-30 15:11:53 -05:00
Matt Moyer
e69183aa8a
Rename idp.concierge.pinniped.dev to authentication.concierge.pinniped.dev.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-10-30 14:07:40 -05:00
Matt Moyer
81390bba89
Rename idp.pinniped.dev to idp.concierge.pinniped.dev.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-10-30 14:07:39 -05:00
Matt Moyer
f0320dfbd8
Rename login API to login.concierge.pinniped.dev.
This is the first of a few related changes that re-organize our API after the big recent changes that introduced the supervisor component.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-10-30 09:58:28 -05:00