- For now, build the test-webhook binary in the same container image as
the pinniped-server binary, to make it easier to distribute
- Also fix lots of bugs in the first draft of the test-webhook's
  `/authenticate` implementation from the previous commit
- Add a detailed README for the new deploy-test-webhook directory
- We are not setting an upper limit because Kubernetes might randomly
  decide to unschedule our pod in ways that we can't anticipate,
  causing very hard-to-reproduce production bugs.
- We noticed that our app currently uses ~30 MB of memory when idle,
  and ~35 MB of memory under some load, so a memory request of 128 MB
  should be reasonable (see the sketch below).
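The real manifest lives in the ytt templates, but as a minimal sketch of the intent, here is the equivalent expressed with the Kubernetes Go API types; the 128Mi request mirrors the observation above, and the package and variable names are only illustrative:

```go
package deploy

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// serverResources requests enough memory for normal operation but sets no
// limit, so the kubelet never kills the container for exceeding a
// self-imposed cap.
var serverResources = corev1.ResourceRequirements{
	Requests: corev1.ResourceList{
		corev1.ResourceMemory: resource.MustParse("128Mi"),
	},
	// Limits intentionally omitted; see the reasoning above.
}
```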
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
- Indicate the success or failure of the cluster signing key strategy
- Also introduce the concept of "capabilities" of an integration test
cluster to allow the integration tests to be run against clusters
that do or don't allow the borrowing of the cluster signing key
- Tests that are not expected to pass on clusters lacking the
  signing key borrowing capability are now skipped by calling the new
  library.SkipUnlessClusterHasCapability test helper (a usage sketch
  follows this list)
- Rename library.Getenv to library.GetEnv
- Add copyrights where they were missing
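As a rough illustration of how a test might opt in, assuming a helper signature of (t, capabilityName); the import path and the capability string are assumptions and the real names may differ:

```go
package integration

import (
	"testing"

	"github.com/suzerain-io/placeholder-name/test/library" // import path assumed
)

// TestClusterSigningKey only makes sense on clusters that allow borrowing
// the cluster signing key, so it skips itself everywhere else. The
// capability string below is illustrative.
func TestClusterSigningKey(t *testing.T) {
	library.SkipUnlessClusterHasCapability(t, "clusterSigningKeyIsAvailable")

	// ... the rest of the test exercises the signing key strategy ...
}
```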
The rotation is forced by a new controller that deletes the serving cert
secret, as other controllers will see this deletion and ensure that a new
serving cert is created.
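The core of that controller could look roughly like the following client-go call; the namespace and Secret name are placeholders, not necessarily the real ones:

```go
package certrotation

import (
	"context"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteServingCertSecret forces rotation: deleting the Secret is enough,
// because the existing controllers watch it and regenerate a fresh serving
// cert when it disappears.
func deleteServingCertSecret(ctx context.Context, client kubernetes.Interface) error {
	err := client.CoreV1().Secrets("placeholder-name").
		Delete(ctx, "api-serving-cert", metav1.DeleteOptions{})
	if errors.IsNotFound(err) {
		return nil // already gone; nothing left to do
	}
	return err
}
```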
Note that the integration tests now have an additional worst-case runtime of
60 seconds. This is because of the way that the aggregated API server code
reloads certificates. We will fix this in a future story. Then, the
integration tests should hopefully get much faster.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
This switches us back to an approach where we use the Pod "exec" API to grab the keys we need, rather than forcing our code to run on the control plane node. It will help us fail gracefully (or dynamically switch to alternate implementations) when the cluster is not self-hosted.
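A rough sketch of what the exec-based approach can look like with client-go; the pod name, namespace, and file path would come from discovery elsewhere and are plain parameters here:

```go
package kubecertagent

import (
	"bytes"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// catFileFromPod runs `cat <path>` inside the given pod via the pod "exec"
// subresource and returns the file's contents, e.g. to read key material
// from a control plane pod without running on its node.
func catFileFromPod(config *restclient.Config, client kubernetes.Interface, namespace, podName, path string) ([]byte, error) {
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace(namespace).
		Name(podName).
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"cat", path},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	executor, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return nil, err
	}

	var stdout, stderr bytes.Buffer
	if err := executor.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		return nil, err
	}
	return stdout.Bytes(), nil
}
```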
Signed-off-by: Matt Moyer <moyerm@vmware.com>
Co-authored-by: Ryan Richard <richardry@vmware.com>
- Call the auto-generated /healthz endpoint of our aggregated API server
- Use HTTP for liveness even though TCP might seem more appropriate,
  because TCP probes cause TLS handshake errors to appear in our logs
  every few seconds
- Use conservative timeouts and retries on the liveness probe to avoid
having our container get restarted when it is temporarily slow due
to running in an environment under resource pressure
- Use less conservative timeouts and retries for the readiness probe,
  so that an unhealthy pod is removed from the Service more readily
  than its container is restarted
- Tuning retries and timeouts seems to be a mysterious art, so these
  values are just a first draft (sketched below)
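For illustration, a sketch of what such probes might look like using the Kubernetes Go API types; the numbers are placeholders rather than the values that shipped, and in newer k8s.io/api releases the embedded field is named ProbeHandler rather than Handler:

```go
package deploy

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// Both probes hit the aggregated API server's /healthz over HTTPS. The
// liveness probe is deliberately slower to trip than the readiness probe,
// so an overloaded pod is pulled out of the Service well before it is
// restarted. All numbers below are illustrative.
var (
	livenessProbe = corev1.Probe{
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/healthz",
				Port:   intstr.FromInt(443),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		InitialDelaySeconds: 10,
		TimeoutSeconds:      15,
		PeriodSeconds:       10,
		FailureThreshold:    5,
	}

	readinessProbe = corev1.Probe{
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/healthz",
				Port:   intstr.FromInt(443),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		InitialDelaySeconds: 2,
		TimeoutSeconds:      3,
		PeriodSeconds:       10,
		FailureThreshold:    2,
	}
)
```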
- We want to follow the <noun>Request convention.
- The actual operation does not log in a user, but it does retrieve a
  credential with which they can log in.
- This commit renames all LoginRequest-related symbols and constants
  to follow the new CredentialRequest type.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
- For high availability reasons, we would like our app to scale linearly
  with the size of the control plane. Using a DaemonSet allows us to run
  one pod on each node-role.kubernetes.io/master node (see the sketch
  after this list).
- The hope is that the Service we create will load balance between
  these pods appropriately.
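The relevant pod-template fields, sketched with the Kubernetes Go API types; the label and taint key are the standard master ones, while everything else here is illustrative:

```go
package deploy

import corev1 "k8s.io/api/core/v1"

// controlPlanePodSpec pins the DaemonSet's pods to control-plane nodes via a
// node selector, and tolerates the usual master taint so they can actually
// be scheduled there. Only the relevant fields are shown.
var controlPlanePodSpec = corev1.PodSpec{
	NodeSelector: map[string]string{
		"node-role.kubernetes.io/master": "",
	},
	Tolerations: []corev1.Toleration{
		{
			Key:    "node-role.kubernetes.io/master",
			Effect: corev1.TaintEffectNoSchedule,
		},
	},
}
```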
- Refactors the existing cert generation code into controllers
  which read and write a Secret containing the certs (a sketch of the
  Secret-writing side follows this list)
- Does not add any new functionality yet, e.g. no new handling
for cert expiration, and no leader election to allow for
multiple servers running simultaneously
- This commit also doesn't add new tests for the cert generation
code, but it should be more unit testable now as controllers
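As a sketch of the Secret-writing half of that refactor; the Secret name, namespace, and data keys are assumptions for illustration:

```go
package certmanager

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// writeServingCertSecret persists freshly generated PEM material so the
// other controllers (and any future replicas) can read it back instead of
// regenerating certs on every start.
func writeServingCertSecret(ctx context.Context, client kubernetes.Interface, certPEM, keyPEM []byte) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "api-serving-cert",
			Namespace: "placeholder-name",
		},
		Data: map[string][]byte{
			"tlsCertificateChain": certPEM,
			"tlsPrivateKey":       keyPEM,
		},
	}
	_, err := client.CoreV1().Secrets(secret.Namespace).Create(ctx, secret, metav1.CreateOptions{})
	return err
}
```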
- Previously the golang code would create a Service and an APIService.
The APIService would be given an owner reference which pointed to
the namespace in which the app was installed.
- This prevented the app from being uninstalled. The namespace would
  get stuck during deletion, so `kapp delete` or `kubectl delete` would fail.
- The new approach is to statically define the Service and the APIService
  in the deployment.yaml, except for the caBundle of the APIService.
  Then the golang code will perform an update to add the caBundle at
  runtime (see the sketch after this list).
- When the user runs `kapp deploy` or `kubectl apply`, either tool will
  notice that the caBundle is not declared in the yaml and will
  therefore avoid editing that field.
- When the user runs `kapp delete` or `kubectl delete`, either tool will
  destroy the objects, because they are statically declared with names
  in the yaml just like all of the other objects. No ownerReferences
  are used, so nothing should prevent the namespace from being deleted.
- This approach also allows us to have less golang code to maintain.
- In the future, if our golang controllers want to dynamically add
an Ingress or other objects, they can still do that. An Ingress
would point to our statically defined Service as its backend.
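A sketch of the runtime update, using the kube-aggregator client; the APIService name is a placeholder, and the real code may wrap this in a retry-on-conflict loop:

```go
package apicerts

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	aggregatorclient "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

// updateAPIServiceCABundle fills in only the caBundle of the APIService that
// is otherwise declared statically in deployment.yaml. Because the yaml never
// mentions caBundle, kapp and kubectl leave this field alone on later deploys.
func updateAPIServiceCABundle(ctx context.Context, client aggregatorclient.Interface, caPEM []byte) error {
	apiServices := client.ApiregistrationV1().APIServices()

	apiService, err := apiServices.Get(ctx, "v1alpha1.placeholder.suzerain-io.github.io", metav1.GetOptions{})
	if err != nil {
		return err
	}

	apiService.Spec.CABundle = caPEM
	_, err = apiServices.Update(ctx, apiService, metav1.UpdateOptions{})
	return err
}
```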
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
- Why? Because the discovery URL is already there in the kubeconfig; let's
not make our lives more complicated by passing it in via an env var.
- Also allow ytt callers to omit data.values.discovery_url - a
  non-trivial number of placeholder-name installers will want to use the
  server URL found in the cluster-info ConfigMap (a sketch of that
  fallback follows).
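The fallback could look roughly like this: read the kubeadm-maintained cluster-info ConfigMap from the kube-public namespace, parse the embedded kubeconfig, and take the cluster's server URL. The function and package names are illustrative:

```go
package discovery

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// lookupServerURLFromClusterInfo is used only when data.values.discovery_url
// is not provided. The cluster-info ConfigMap embeds a kubeconfig whose
// cluster entry carries the externally reachable server URL.
func lookupServerURLFromClusterInfo(ctx context.Context, client kubernetes.Interface) (string, error) {
	cm, err := client.CoreV1().ConfigMaps("kube-public").Get(ctx, "cluster-info", metav1.GetOptions{})
	if err != nil {
		return "", err
	}

	kubeconfig, err := clientcmd.Load([]byte(cm.Data["kubeconfig"]))
	if err != nil {
		return "", err
	}

	for _, cluster := range kubeconfig.Clusters {
		return cluster.Server, nil // take the first (and typically only) cluster entry
	}
	return "", fmt.Errorf("no clusters found in cluster-info kubeconfig")
}
```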
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
- The next step seems to be allowing an override of the CA bundle; I
  didn't do that here to keep the commit simple, but it seems like the
  right thing to do in the future.
- Also includes bumping the api and client-go dependencies to the newer
  version, which moved LoginDiscoveryConfig to the
  crds.placeholder.suzerain-io.github.io group in the generated code