Commit Graph

45 Commits

Author SHA1 Message Date
Joshua Casey
e8490c0244 Do not use long-lived service account tokens in secrets 2023-10-30 23:18:21 -05:00
Ryan Richard
776e436e35 Support building and deploying multi-arch linux amd64 and arm64 images 2023-10-04 08:55:26 -07:00
Ryan Richard
192553aed9 Stop using deprecated critical-pod annotation 2023-09-26 13:16:13 -07:00
Ryan Richard
b564454bab Make Pinniped compatible with Kube clusters which have enabled PSAs
Where possible, use securityContext settings which will work with the
most restrictive Pod Security Admission policy level (as of Kube 1.25).
Where privileged containers are needed, use the namespace-level
annotation to allow them.

Also adjust some integration tests to make similar changes to allow the
integration tests to pass on test clusters which use restricted PSAs.
2022-09-15 14:58:15 -07:00
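Both ideas from this commit are easy to picture concretely. Below is a minimal sketch, assuming the standard Pod Security Admission namespace labels and the "restricted" policy checks as of Kube 1.25; names and values are illustrative, not the literal manifests from this commit.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: concierge   # hypothetical namespace name
  labels:
    # Upstream PSA is configured via namespace-level labels; a namespace
    # that must run privileged containers can opt in like this:
    pod-security.kubernetes.io/enforce: privileged
---
apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: concierge
spec:
  containers:
  - name: app
    image: example/image   # placeholder image
    securityContext:
      # Settings that satisfy the most restrictive PSA level ("restricted"):
      runAsNonRoot: true
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: RuntimeDefault
```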
Monis Khan
0674215ef3
Switch to go.uber.org/zap for JSON formatted logging
Signed-off-by: Monis Khan <mok@vmware.com>
2022-05-24 11:17:42 -04:00
Ryan Richard
0651b9a912 Add toleration for new "control-plane" node label for Concierge deploy 2022-02-22 11:24:26 -08:00
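For reference, tolerating the newer taint key alongside the legacy one typically looks like this sketch (these are the standard upstream taint keys; the surrounding Deployment is omitted):

```yaml
tolerations:
- key: node-role.kubernetes.io/master         # legacy taint key
  effect: NoSchedule
- key: node-role.kubernetes.io/control-plane  # newer taint key
  effect: NoSchedule
```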
Ryan Richard
ca2cc40769 Add impersonationProxyServerPort to the Concierge's static ConfigMap
- Used to determine on which port the impersonation proxy will bind
- Defaults to 8444, which is the old hard-coded port value
- Allow the port number to be configured to any value within the
  range 1024 to 65535
- This commit does not include adding new config knobs to the ytt
  values file, so while it is possible to change this port without
  needing to recompile, it is not convenient
2021-11-17 13:27:59 -08:00
Ryan Richard
2383a88612 Add aggregatedAPIServerPort to the Concierge's static ConfigMap
- Allow the port number to be configured to any value within the
  range 1024 to 65535
- This commit does not include adding new config knobs to the ytt
  values file, so while it is possible to change this port without
  needing to recompile, it is not convenient
2021-11-16 16:43:51 -08:00
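Taken together, these two commits imply a static ConfigMap shaped roughly like the sketch below. The two key names come from the commit subjects; the surrounding names and the aggregated API port value are assumptions.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pinniped-concierge-config   # hypothetical name
  namespace: pinniped-concierge     # hypothetical namespace
data:
  pinniped.yaml: |
    # Each port may be any value in the range 1024-65535.
    aggregatedAPIServerPort: 10250      # example value only
    impersonationProxyServerPort: 8444  # defaults to the old hard-coded port
```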
Monis Khan
efaca05999
prevent kapp from altering the selector of our services
This makes it so that our service selector will match exactly the
YAML we specify instead of including an extra "kapp.k14s.io/app" key.
This will take us closer to the standard kubectl behavior which is
desirable since we want to avoid future bugs that only manifest when
kapp is not used.

Signed-off-by: Monis Khan <mok@vmware.com>
2021-09-15 16:08:49 -04:00
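Assuming kapp's documented annotation name, the opt-out looks roughly like this sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: pinniped-concierge-api   # hypothetical name
  annotations:
    # Tells kapp not to inject its "kapp.k14s.io/app" key into
    # spec.selector, so the selector matches our YAML exactly.
    kapp.k14s.io/disable-default-label-scoping-rules: ""
spec:
  selector:
    app: pinniped-concierge   # stays exactly as written
  ports:
  - port: 443
    targetPort: 8443
```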
Ryan Richard
cec9f3c4d7 Improve the selectors of Deployments and Services
Fixes #801. The solution is complicated by the fact that the Selector
field of Deployments is immutable. It would have been easy to just
make the Selectors of the main Concierge Deployment, the Kube cert agent
Deployment, and the various Services use more specific labels, but
that would break upgrades. Instead, we make the Pod template labels and
the Service selectors more specific, because those are not immutable, and
then handle the Deployment selectors in a special way.

For the main Concierge and Supervisor Deployments, we cannot change
their selectors, so they remain "app: app_name", and we make other
changes to ensure that only the intended pods are selected. We keep the
original "app" label on those pods and remove the "app" label from the
pods of the Kube cert agent Deployment. By removing it from the Kube
cert agent pods, there is no longer any chance that they will
accidentally get selected by the main Concierge Deployment.

For the Kube cert agent Deployment, we can change the immutable selector
by deleting and recreating the Deployment. The new selector uses only
the unique label that has always been applied to the pods of that
deployment. Upon recreation, these pods no longer have the "app" label,
so they will not be selected by the main Concierge Deployment's
selector.

The selectors of all Services have been updated to use new labels to
more specifically target the intended pods. For the Concierge Services,
this will prevent them from accidentally including the Kube cert agent
pods. For the Supervisor Services, we follow the same convention just
to be consistent and to help future-proof the Supervisor app in case it
ever has a second Deployment added to it.

The selector of the auto-created impersonation proxy Service was
also previously using the "app" label. There is no change to this
Service because that label will now select the correct pods, since
the Kube cert agent pods no longer have that label. It would be possible
to update that selector to use the new more specific label, but then we
would need to invent a way to pass that label into the controller, so
it seemed like more work than was justified.
2021-09-14 13:35:10 -07:00
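A condensed sketch of the resulting label scheme (the specific label key is illustrative, not necessarily the one the commit introduced):

```yaml
# Main Concierge Deployment: its selector is immutable, so it keeps "app".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pinniped-concierge
spec:
  selector:
    matchLabels:
      app: pinniped-concierge   # unchanged; changing it would break upgrades
  template:
    metadata:
      labels:
        app: pinniped-concierge              # kept so the old selector still matches
        deployment.pinniped.dev: concierge   # hypothetical more-specific label
    spec:
      containers:
      - name: concierge
        image: example/concierge   # placeholder
---
# Service: selects only the more specific label, so it can never accidentally
# match the Kube cert agent pods (which no longer carry the "app" label).
apiVersion: v1
kind: Service
metadata:
  name: pinniped-concierge-api
spec:
  selector:
    deployment.pinniped.dev: concierge
  ports:
  - port: 443
    targetPort: 8443
```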
Matt Moyer
f0a1555aca
Fix broken "read only" fields added in v0.11.0.
These fields were changed as a minor hardening attempt when we switched to Distroless, but I bungled the field names and we never noticed because Kapp doesn't apply API validations.

This change fixes the field names so they act as originally intended. We should also follow up with a change that validates all of our installation manifests in CI.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-09-02 16:12:39 -05:00
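The commit does not quote the bungled spellings, but the intended hardening fields, spelled correctly, are container-level securityContext settings along these lines:

```yaml
securityContext:
  # An unknown/misspelled field here is silently dropped when applied with
  # kapp, which skips API validations; spelled correctly it takes effect.
  readOnlyRootFilesystem: true
  runAsNonRoot: true
```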
Monis Khan
66ddcf98d3
Provide good defaults for NO_PROXY
This change updates the default NO_PROXY for the supervisor to not
proxy requests to the Kubernetes API and other Kubernetes endpoints
such as Kubernetes services.

It also adds https_proxy and no_proxy settings for the concierge
with the same default.

Signed-off-by: Monis Khan <mok@vmware.com>
2021-08-17 10:03:19 -04:00
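As a sketch, such proxy settings on a container might look like the following; the exact default string chosen by this commit is not recorded in this log, so the value here is illustrative.

```yaml
env:
- name: https_proxy
  value: http://proxy.example.com:3128   # example egress proxy, user-supplied
- name: no_proxy
  # Illustrative default: bypass the proxy for the Kubernetes API and for
  # in-cluster service endpoints.
  value: "$(KUBERNETES_SERVICE_HOST),169.254.169.254,127.0.0.1,localhost,.svc,.cluster.local"
```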
Matt Moyer
58bbffded4
Switch to a slimmer distroless base image.
At a high level, it switches us to a distroless base container image, but that also includes several related bits:

- Add a writable /tmp but make the rest of our filesystems read-only at runtime.

- Condense our main server binaries into a single pinniped-server binary. This saves a bunch of space in
  the image due to duplicated library code. The correct behavior is dispatched based on `os.Args[0]`, and
  the `pinniped-server` binary is symlinked to `pinniped-concierge` and `pinniped-supervisor`.

- Strip debug symbols from our binaries. These aren't really useful in a distroless image anyway and all the
  normal stuff you'd expect to work, such as stack traces, still does.

- Add a separate `pinniped-concierge-kube-cert-agent` binary with "sleep" and "print" functionality instead of
  using builtin /bin/sleep and /bin/cat for the kube-cert-agent. This is split from the main server binary
  because the loading/init time of the main server binary was too large for the tiny resource footprint we
  established in our kube-cert-agent PodSpec. Using a separate binary eliminates this issue and the extra
  binary adds only around 1.5MiB of image size.

- Switch the kube-cert-agent code to use a JSON `{"tls.crt": "<b64 cert>", "tls.key": "<b64 key>"}` format.
  This is more robust to unexpected input formatting than the old code, which simply concatenated the files
  with some extra newlines and split on whitespace.

- Update integration tests that made now-invalid assumptions about the `pinniped-server` image.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-08-09 15:05:13 -04:00
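Two of the bullets above can be sketched concretely: the writable /tmp on an otherwise read-only filesystem, and the symlink-based dispatch. Paths and names here are assumptions, not the literal manifest.

```yaml
spec:
  containers:
  - name: concierge
    # The entrypoint is a symlink (pinniped-concierge -> pinniped-server);
    # the single binary dispatches its behavior based on os.Args[0].
    command: ["/usr/local/bin/pinniped-concierge"]   # hypothetical path
    securityContext:
      readOnlyRootFilesystem: true   # everything read-only at runtime...
    volumeMounts:
    - name: tmp
      mountPath: /tmp                # ...except a writable /tmp
  volumes:
  - name: tmp
    emptyDir: {}
```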
Monis Khan
898f2bf942
impersonator: run as a distinct SA with minimal permissions
This change updates the impersonation proxy code to run as a
distinct service account that only has permission to impersonate
identities.  Thus any future vulnerability that causes the
impersonation headers to be dropped will fail closed instead of
escalating to the concierge's default service account which has
significantly more permissions.

Signed-off-by: Monis Khan <mok@vmware.com>
2021-06-11 12:13:53 -04:00
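The RBAC shape implied here is roughly the following sketch; the names are hypothetical and impersonating user extras may need additional rules, but the key point is that the only verb granted is "impersonate".

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pinniped-concierge-impersonation-proxy   # hypothetical name
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: impersonation-proxy   # hypothetical name
rules:
- apiGroups: [""]
  resources: ["users", "groups", "serviceaccounts"]
  verbs: ["impersonate"]   # nothing else, so dropped headers fail closed
```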
Matt Moyer
5aa08756e0
Fix typo in CredentialIssuer ytt template.
This typo wasn't caught in testing because 1) the Kubernetes API ignores the unknown field and 2) the `type` field defaults to `LoadBalancer` anyway, so things behave as expected.

Even though this doesn't cause any large problems, it's quite confusing.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-06-02 14:48:18 -05:00
Margo Crawford
f330b52076 Update values.yaml to include CredentialIssuer ImpersonationProxy spec. 2021-05-27 13:36:18 -07:00
Matt Moyer
d780bf64bc
Remove references to impersonationConfigMap.
Signed-off-by: Margo Crawford <margaretc@vmware.com>
2021-05-26 15:24:59 -05:00
Margo Crawford
599d70d6dc Wire generatedClusterIPServiceName through from NamesConfig 2021-05-20 14:11:35 -07:00
Matt Moyer
e4dd83887a
Merge remote-tracking branch 'origin/main' into credentialissuer-spec-api 2021-05-20 10:53:53 -05:00
Matt Moyer
657488fe90
Create CredentialIssuer at install, not runtime.
Previously, our controllers would automatically create a CredentialIssuer with a singleton name. The helpers we had for this also used "raw" client access and did not take advantage of the informer cache pattern.

With this change, the CredentialIssuer is always created at install time in the ytt YAML. The controllers now only update the existing CredentialIssuer status, and they do so using the informer cache as much as possible.

This change is targeted at only the kubecertagent controller to start. The impersonatorconfig controller will be updated in a following PR along with other changes.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-05-19 17:15:25 -05:00
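For illustration, the install-time object becomes a plain resource in the rendered ytt YAML, along these lines (the name and spec fields are assumptions based on the surrounding commits):

```yaml
apiVersion: config.concierge.pinniped.dev/v1alpha1
kind: CredentialIssuer
metadata:
  name: pinniped-concierge-config   # hypothetical singleton name
spec:
  impersonationProxy:
    mode: auto              # assumed field; see f330b52076 for the values.yaml wiring
    service:
      type: LoadBalancer    # per 5aa08756e0, this is the default type
```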
Matt Moyer
1a131e64fe
Start deploying an initial CredentialIssuer in our install YAML.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-05-18 11:12:18 -05:00
Matt Moyer
165bef7809
Split out kube-cert-agent service account and bindings.
Followup on the previous commit to split apart the ServiceAccount of the kube-cert-agent and the main concierge pod. This is a bit cleaner and ensures that in testing our main Concierge pod never requires any privileged permissions.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-05-04 10:09:33 -05:00
Matt Moyer
b80cbb8cc5
Run kube-cert-agent pod as Concierge ServiceAccount.
Since 0dfb3e95c5, we no longer directly create the kube-cert-agent Pod, so our "use"
permission on PodSecurityPolicies no longer has the intended effect. Since the deployments controller is now the
one creating pods for us, we need to get the permission on the PodSpec of the target pod instead, which we do somewhat
simply by using the same service account as the main Concierge pods.

We still set `automountServiceAccountToken: false`, so this should not actually give any useful permissions to the
agent pod when running.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2021-05-03 16:20:13 -05:00
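The relevant PodSpec fields, sketched:

```yaml
spec:
  serviceAccountName: pinniped-concierge   # hypothetical SA name; shared with
                                           # the main Concierge pods
  automountServiceAccountToken: false      # no token is mounted, so the agent
                                           # pod gains no usable permissions
  containers:
  - name: sleeper
    image: example/concierge   # placeholder
```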
Ryan Richard
0b300cbe42 Use TokenCredentialRequest instead of base64 token with impersonator
To make an impersonation request, first make a TokenCredentialRequest
to get a certificate. That cert will either be issued by the Kube
API server's CA or by a new CA specific to the impersonator. Either
way, you can then make a request to the impersonator and present
that client cert for auth and the impersonator will accept it and
make the impersonation call on your behalf.

The impersonator http handler now borrows some Kube library code
to handle request processing. This will allow us to more closely
mimic the behavior of a real API server, e.g. the client cert
auth will work exactly like the real API server.

Signed-off-by: Monis Khan <mok@vmware.com>
2021-03-10 10:30:06 -08:00
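The first step of that flow is a TokenCredentialRequest, sketched below; the API group matches the rename in f0320dfbd8, while the spec fields are assumptions.

```yaml
apiVersion: login.concierge.pinniped.dev/v1alpha1
kind: TokenCredentialRequest
spec:
  token: "<opaque external credential>"   # e.g. a webhook-verifiable token
  authenticator:
    apiGroup: authentication.concierge.pinniped.dev   # assumed group name
    kind: WebhookAuthenticator
    name: my-webhook                                  # hypothetical name
# The response status carries a short-lived client certificate and key,
# which the client then presents to the impersonator for authentication.
```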
Ryan Richard
a75c2194bc Read the names of the impersonation-related resources from the config
They were previously temporarily hardcoded. Now they are set at deploy
time via the static ConfigMap in deployment.yaml
2021-03-02 09:31:24 -08:00
Andrew Keesler
069b3fba37
Merge remote-tracking branch 'upstream/main' into impersonation-proxy
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-02-23 12:10:52 -05:00
Monis Khan
abc941097c
Add WhoAmIRequest Aggregated Virtual REST API
This change adds a new virtual aggregated API that can be used by
any user to echo back who they are currently authenticated as.  This
has general utility to end users and can be used in tests to
validate if authentication was successful.

Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-22 20:02:41 -05:00
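Usage is a single ephemeral object; assuming the group follows the concierge naming convention, a sketch looks like:

```yaml
apiVersion: identity.concierge.pinniped.dev/v1alpha1
kind: WhoAmIRequest
# No spec is needed: submitting this (e.g. `kubectl create -f whoami.yaml
# -o yaml`) echoes the authenticated user back in the returned status.
```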
Mo Khan
a54e1145a5
concierge API service: update groupPriorityMinimum and versionPriority
Copy over values that I have seen used in the past.
Signed-off-by: Monis Khan <mok@vmware.com>
2021-02-19 07:47:38 -05:00
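Those two fields live on the aggregated APIService object. The commit does not record which values were copied over, so the numbers below are purely illustrative:

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.login.concierge.pinniped.dev
spec:
  group: login.concierge.pinniped.dev
  version: v1alpha1
  groupPriorityMinimum: 9900   # illustrative value
  versionPriority: 15          # illustrative value
  service:
    name: pinniped-concierge-api    # hypothetical name
    namespace: pinniped-concierge   # hypothetical namespace
```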
Matt Moyer
1299231a48 Add integration test for impersonation proxy.
Signed-off-by: Margo Crawford <margaretc@vmware.com>
2021-02-03 09:31:30 -08:00
Ryan Richard
616211c1bc
deploy: wire API group suffix through YTT templates
I didn't advertise this feature in the deploy READMEs since (hopefully) not
many people will want to use it?

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-01-19 17:23:06 -05:00
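A minimal ytt sketch of wiring such a suffix through a template; the value name `api_group_suffix` is an assumption:

```yaml
#@ load("@ytt:data", "data")
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: #@ "v1alpha1.login.concierge." + data.values.api_group_suffix
spec:
  group: #@ "login.concierge." + data.values.api_group_suffix
  version: v1alpha1
```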
Andrew Keesler
af11d8cd58
Run Tilt images as root for faster reload
Previously, when triggering a Tilt reload via a *.go file change, a reload would
take ~13 seconds and we would see this error message in the Tilt logs for each
component.

  Live Update failed with unexpected error:
    command terminated with exit code 2
  Falling back to a full image build + deploy

Now, Tilt should reload images a lot faster (~3 seconds) since we are running
the images as root.

Note! Reloading the Concierge component still takes ~13 seconds because there
are 2 containers running in the Concierge namespace that use the Concierge
image: the main Concierge app and the kube cert agent pod. Tilt can't live
reload both of these at once, so the reload takes longer and we see this error
message.

  Will not perform Live Update because:
    Error retrieving container info: can only get container info for a single pod; image target image:image/concierge has 2 pods
  Falling back to a full image build + deploy

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2021-01-15 11:34:53 -05:00
Margo Crawford
6f04613aed Merge branch 'main' of github.com:vmware-tanzu/pinniped into kubernetes-1.20 2021-01-08 13:22:31 -08:00
Monis Khan
bba0f3a230
Always set an owner ref back to our deployment
This change updates our clients to always set an owner ref when:

1. The operation is a create
2. The object does not already have an owner ref set

Signed-off-by: Monis Khan <mok@vmware.com>
2021-01-07 15:25:40 -05:00
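The net effect is that objects we create carry an ownerReference back to our Deployment, along the lines of this sketch (names and uid are placeholders):

```yaml
metadata:
  name: some-created-secret   # hypothetical child object
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: pinniped-concierge
    uid: 00000000-0000-0000-0000-000000000000   # placeholder uid
  # Set only on create, and only when no owner ref is already present.
```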
Andrew Keesler
724c0d3eb0
Add YTT template value for setting log level
This is helpful for us, amongst other users, because we want to enable "debug"
logging whenever we deploy components for testing.

See a5643e3 for addition of log level.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-11-11 09:01:38 -05:00
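A sketch of the knob in a ytt values file; the value name is an assumption, and `#!` is ytt's comment syntax:

```yaml
#@data/values
---
log_level: debug   #! e.g. info, debug, trace; we enable debug for test deploys
```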
Matt Moyer
c451604816
Merge pull request #182 from mattmoyer/more-renames
Rename more APIs before we cut a release with longer-term API compatibility
2020-11-02 18:34:26 -06:00
Ryan Richard
05cf56a0fa
Merge pull request #180 from vmware-tanzu/limits
Add CPU/memory limits to our deployments
2020-11-02 16:22:37 -08:00
Ryan Richard
05233963fb Add CPU requests and limits to the Concierge and Supervisor deployments 2020-11-02 15:47:20 -08:00
Matt Moyer
59263ea733
Rename CredentialIssuerConfig to CredentialIssuer.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-11-02 17:39:42 -06:00
Ryan Richard
781f86d18c
deploy: add memory limits
This is the beginning of a change to add cpu/memory limits to our pods.
We are doing this because some consumers require this, and it is generally
a good practice.

Setting limits == requests gives the pods "Guaranteed" QoS.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-11-02 14:57:39 -05:00
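Sketched with illustrative numbers, equal requests and limits are what yield the "Guaranteed" QoS class:

```yaml
resources:
  requests:
    cpu: 100m       # illustrative values, not the ones from this commit
    memory: 128Mi
  limits:
    cpu: 100m       # limits == requests => "Guaranteed" QoS
    memory: 128Mi
```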
Andrew Keesler
fcea48c8f9
Run as non-root
I tried to follow a principle of encapsulation here - we can still default to
people making connections to 80/443 on a Service object, but internally we will
use 8080/8443.

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-11-02 12:51:15 -05:00
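The encapsulation described here looks roughly like the sketch below: the Service keeps the well-known port while the non-root process binds a high port internally.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: pinniped-supervisor   # hypothetical name
spec:
  selector:
    app: pinniped-supervisor
  ports:
  - port: 443        # clients still connect to 443 on the Service...
    targetPort: 8443 # ...but the container listens on 8443 as a non-root user
```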
Matt Moyer
f0320dfbd8
Rename login API to login.concierge.pinniped.dev.
This is the first of a few related changes that re-organize our API after the big recent changes that introduced the supervisor component.

Signed-off-by: Matt Moyer <moyerm@vmware.com>
2020-10-30 09:58:28 -05:00
Ryan Richard
94f20e57b1 Concierge controllers add labels to all created resources 2020-10-15 10:14:23 -07:00
Ryan Richard
1301018655 Support installing concierge and supervisor into existing namespace
- New optional ytt value called `into_namespace` means install into that
  preexisting namespace rather than creating a new namespace for each app
- Also ensure that every resource that either app creates statically from
  our YAML at install time is labeled consistently
- Also support adding custom labels to all of those resources from a
  new ytt value called `custom_labels`
2020-10-14 15:05:42 -07:00
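A usage sketch of those two ytt values:

```yaml
#@data/values
---
into_namespace: my-existing-namespace  #! install into this preexisting namespace
custom_labels:                         #! added to every statically created resource
  team: platform
  env: staging
```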
Ryan Richard
b71959961d Merge branch 'main' into supervisor-with-discovery 2020-10-09 10:00:50 -07:00
Ryan Richard
f5a6a0bb1e Move all three deployment dirs under a new top-level deploy/ dir 2020-10-09 10:00:22 -07:00