This fixes a regression introduced by 24c4bc0dd4, which could occasionally cause the tests to fail when run on a machine with an IPv6 localhost interface. As a fix, I added a wrapper for the new Go 1.15 `LookupIP()` method and created a partially-functional backport for Go 1.14. The backport should be easy to delete in the future.
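A minimal sketch of what such a wrapper can look like on Go 1.15 (the name here is hypothetical, not necessarily what this commit adds):

    import (
        "context"
        "net"
    )

    // lookupIPv4 resolves host to IPv4 addresses only, so tests don't
    // accidentally pick up an IPv6 localhost address.
    func lookupIPv4(ctx context.Context, host string) ([]net.IP, error) {
        // (*net.Resolver).LookupIP with a network argument is new in Go 1.15.
        return net.DefaultResolver.LookupIP(ctx, "ip4", host)
    }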
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This adds a few new "create test object" helpers and extends `CreateTestOIDCProvider()` to optionally wait for the created OIDCProvider to enter some expected status condition.
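The "wait for status" half could look roughly like this sketch (the real helper's name and signature may differ):

    import (
        "testing"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForStatus polls until getStatus returns the expected value, where
    // getStatus is a hypothetical closure that fetches the OIDCProvider's
    // current status condition.
    func waitForStatus(t *testing.T, expected string, getStatus func() (string, error)) {
        t.Helper()
        err := wait.PollImmediate(250*time.Millisecond, 60*time.Second, func() (bool, error) {
            current, err := getStatus()
            if err != nil {
                return false, err
            }
            return current == expected, nil
        })
        if err != nil {
            t.Fatalf("never reached expected status %q: %v", expected, err)
        }
    }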
Signed-off-by: Matt Moyer <moyerm@vmware.com>
- Note that this WIP commit includes a failing unit test, which will
be addressed in the next commit
Signed-off-by: Ryan Richard <richardry@vmware.com>
Mainly, avoid using some `testing` helpers that were added in 1.14, as well as a couple of other niceties we can live without.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
We were assuming that env.SupervisorHTTPAddress was set, but it might not be
depending on the environment in which the integration tests are being run. For
example, in our acceptance environments, we don't currently set
env.SupervisorHTTPAddress.
I tried to follow the pattern from TestSupervisorOIDCDiscovery here.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Prior to this we re-used the CLI testing client to test the authorize flow of the supervisor, but they really need to be separate upstream clients. For example, the supervisor client should be a non-public client with a client secret and a different callback endpoint.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This will allow it to be imported by Go code outside of our repository, which is something we have planned for since this code was written.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
Before, we did this in an init container, which meant that if the Dex pod restarted, we would have fresh certs that our Tilt/bash setup didn't account for.
Now, the certs are generated by a Job which runs once and saves the generated files into a Secret. This should be a bit more stable.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This change deploys a small Squid-based proxy into the `dex` namespace in our integration test environment. This lets us use the cluster-local DNS name (`http://dex.dex.svc.cluster.local/dex`) as the OIDC issuer. It will make generating certificates easier, and most importantly it will mean that our CLI can see Dex at the same name/URL as the supervisor running inside the cluster.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
We want our APIs to respond to `kubectl get pinniped`. We shouldn't use the `all` category because most users shouldn't have permission to see our API types, which means that if we put our types there, `kubectl get all` would return an error for them.
I also added some tests to assert these properties on all `*.pinniped.dev` API resources.
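The category assertion could look roughly like this (sketch only; the real test's structure may differ):

    import (
        "strings"
        "testing"

        "github.com/stretchr/testify/require"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // requireProperCategories asserts the convention described above, where
    // group is the API group of the discovered resource list.
    func requireProperCategories(t *testing.T, group string, r metav1.APIResource) {
        if !strings.HasSuffix(group, ".pinniped.dev") {
            return
        }
        require.Contains(t, r.Categories, "pinniped") // responds to `kubectl get pinniped`
        require.NotContains(t, r.Categories, "all")   // stays out of `kubectl get all`
    }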
Signed-off-by: Matt Moyer <moyerm@vmware.com>
I tried to follow a principle of encapsulation here - we can still default to
peeps making connections to 80/443 on a Service object, but internally we will
use 8080/8443.
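In manifest terms, sketched here with Go types for illustration (the real manifests are YAML):

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // The Service keeps the familiar external ports while targeting the
    // container's internal ports.
    var servicePorts = []corev1.ServicePort{
        {Name: "http", Port: 80, TargetPort: intstr.FromInt(8080)},
        {Name: "https", Port: 443, TargetPort: intstr.FromInt(8443)},
    }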
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
This is the first of a few related changes that re-organize our API after the big recent changes that introduced the supervisor component.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
- Setting a Secret in the supervisor's namespace with a special name
will cause it to get picked up and served as the supervisor's TLS
cert for any request which does not have a matching SNI cert.
- This is especially useful for when there is no DNS record for an
issuer and the user will be accessing it via IP address. This
is not how we would expect it to be used in production, but it
might be useful for other cases.
- Includes a new integration test
- Also suppress all of the warnings about ignoring the error returned by
Close() in lines like `defer x.Close()` to make GoLand happier
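For reference, the usual way to make the discarded error explicit (a general Go idiom, not necessarily the exact diff here):

    import "os"

    func readConfig(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        // Explicitly discarding Close's error satisfies the linter/IDE.
        defer func() { _ = f.Close() }()
        // ... read from f ...
        return nil
    }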
Based on our experiences today with GKE, it will be easier for our users
to configure Ingress health checks if the healthz endpoint is available
on the same public port as the OIDC endpoints.
Also add an integration test for the healthz endpoint now that it is
public.
Also add the optional `containers[].ports[].containerPort` to the
supervisor Deployment because the GKE docs say that GKE will look
at that field while inferring how to invoke the health endpoint. See
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc
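Roughly, the serving side now looks like this sketch (illustrative only; the supervisor's real wiring differs in detail):

    import "net/http"

    // newMux serves the health endpoint on the same public port as the
    // OIDC endpoints, so an Ingress health check can reach it.
    func newMux(oidcDiscovery http.Handler) *http.ServeMux {
        mux := http.NewServeMux()
        mux.Handle("/.well-known/openid-configuration", oidcDiscovery)
        mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK)
            _, _ = w.Write([]byte("ok"))
        })
        return mux
    }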
- Not used by any of our integration test clusters yet
- Planning to use it later for the kind clusters and maybe for
the acceptance clusters too (although the acceptance clusters might
not need to use self-signed certs so maybe not)
- It didn't matter before because it would be cleaned up by a
t.Cleanup() function, but now that we might loop twice, it matters
during the second pass through the loop
EC keys are smaller and take less time to generate. Our integration
tests were super flaky because generating an RSA key would take up to
10 seconds *gasp*. The main token verifier that we care about is
Kubernetes, which supports P256, so hopefully it won't be that much of
an issue that our default signing key type is EC. The OIDC spec seems
kinda squirmy when it comes to using non-RSA signing algorithms...
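For comparison, generating the EC key is a fast one-liner:

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
    )

    // Generating a P256 key takes milliseconds at worst, versus the
    // multi-second worst case we saw for RSA key generation.
    func newSigningKey() (*ecdsa.PrivateKey, error) {
        return ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    }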
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
I brought this over because I copied code from work in flight on
another branch. But now the other branch doesn't use this package.
No use bringing on another dependency if we can avoid it.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
This will be used for other types of "capabilities" of the test environment besides just those of the test cluster, such as those of an upstream OIDC provider.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
- Also continue renaming things related to the concierge app
- Enhance the uninstall test to also test uninstalling the supervisor
and local-user-authenticator apps
- Variables specific to the concierge now include "concierge" in their names
- All variables now start with `PINNIPED_TEST_` which makes it clear
that they are for tests and also helps them not conflict with the
env vars that are used in the Pinniped CLI code
- The OIDCProviderConfigWatcherController synchronizes the
OIDCProviderConfig settings to dynamically mount and unmount the
OIDC discovery endpoints for each provider (see the sketch after
this list)
- Integration test passes but unit tests need to be added still
- Intended to be a red test in this commit; will make it go
green in a future commit
- Enhance env.go and prepare-for-integration-tests.sh to make it
possible to write integration tests for the supervisor app
by setting more env vars and by exposing the service to the kind
host on a localhost port
- Add `--clean` option to prepare-for-integration-tests.sh
to make it easier to start fresh
- Make prepare-for-integration-tests.sh advise you to run
`go test -v -count 1 ./test/integration` because this does
not buffer the test output
- Make concierge_api_discovery_test.go pass by adding expectations
for the new OIDCProviderConfig type
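The dynamic mount/unmount idea, sketched (illustrative only; the real
controller and handler types differ in detail):

    import (
        "net/http"
        "sync"
    )

    // providerMux is a thread-safe map from issuer path to handler; the
    // watcher controller mounts and unmounts discovery endpoints here as
    // OIDCProviderConfigs come and go.
    type providerMux struct {
        mu       sync.RWMutex
        handlers map[string]http.Handler // keyed by issuer path
    }

    func (m *providerMux) mount(path string, h http.Handler) {
        m.mu.Lock()
        defer m.mu.Unlock()
        m.handlers[path] = h
    }

    func (m *providerMux) unmount(path string) {
        m.mu.Lock()
        defer m.mu.Unlock()
        delete(m.handlers, path)
    }

    func (m *providerMux) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        m.mu.RLock()
        h, ok := m.handlers[r.URL.Path]
        m.mu.RUnlock()
        if !ok {
            http.NotFound(w, r)
            return
        }
        h.ServeHTTP(w, r)
    }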
This will hopefully come in handy later if we ever decide to add
support for multiple OIDC providers as a part of one supervisor.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
I saw this helper function the other day and wondered if we could use it.
It does indeed look like it does what we want, because when I run this code,
I get `...User "system:anonymous" cannot get resource...`.
    c := library.NewAnonymousPinnipedClientset(t)
    _, err := c.
        ConfigV1alpha1().
        CredentialIssuerConfigs("integration").
        Get(context.Background(), "pinniped-config", metav1.GetOptions{})
    t.Log(err)
I also ran a similar test using this new helper in the context of
library.NewClientsetWithCertAndKey(). Seemed to get us what we want.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
This change replaces our previous test helpers for checking cluster capabilities and passing external test parameters. Prior to this change, we always used `$PINNIPED_*` environment variables and these variables were accessed throughout the test code.
The new code introduces a more strongly-typed `TestEnv` structure and helpers which load and expose the parameters. Tests can now call `env := library.IntegrationEnv(t)`, then access parameters such as `env.Namespace` or `env.TestUser.Token`. This should make this data dependency easier to manage and refactor in the future. In many ways this is just an extended version of the previous cluster capabilities YAML.
Tests can also check for cluster capabilities easily by using `env := library.IntegrationEnv(t).WithCapability(xyz)`.
The actual parameters are still loaded from OS environment variables by default (for compatibility), but the code now also tries to load the data from a Kubernetes Secret (`integration/pinniped-test-env` by default). I'm hoping this will be a more convenient way to pass data between various scripts than the local `/tmp` directory. I hope to remove the OS environment code in a future commit.
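Sketched, the new shape is roughly this (field names beyond the ones mentioned above are hypothetical):

    import "testing"

    type TestEnv struct {
        t            *testing.T
        Namespace    string
        TestUser     struct{ Token string }
        capabilities map[string]bool
    }

    // WithCapability skips the calling test unless the cluster declares
    // the given capability.
    func (e *TestEnv) WithCapability(capability string) *TestEnv {
        e.t.Helper()
        if !e.capabilities[capability] {
            e.t.Skipf("skipping: cluster does not have capability %q", capability)
        }
        return e
    }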
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This seems to fail on CI when the Concourse workers get slow and
kind stops working reliably. It would be interesting to see the
error message in that case to figure out if there's anything we
could do to make the test more resilient.
Simplifies the implementation, makes it more consistent with other
updaters of the cic (CredentialIssuerConfig), and also retries on
update conflicts
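The conflict-retry part follows the standard client-go pattern, roughly
(sketch only; applyUpdate is a hypothetical closure that re-fetches the
cic, mutates it, and calls Update):

    import "k8s.io/client-go/util/retry"

    func updateCIC(applyUpdate func() error) error {
        // Re-runs the whole get-mutate-update cycle whenever the update
        // fails with a conflict.
        return retry.RetryOnConflict(retry.DefaultRetry, applyUpdate)
    }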
Signed-off-by: Ryan Richard <richardry@vmware.com>
- Only inject things through the constructor that the controller
will need
- Use pkg private constants when possible for things that are not
actually configurable by the user
- Make the agent pod template private to the pkg
- Introduce a test helper to reduce some duplicated test code
- Remove some `it.Focus` lines that were accidentally committed, and
repair the broken tests that they were hiding
- Lots of TODOs added that need to be resolved to finish this WIP
- execer_test.go seems like it should be passing, but it fails (sigh)
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Annotations do not have this restriction, so we can put it there instead. This only currently occurs on clusters without the cluster signing capability (GKE).
Signed-off-by: Matt Moyer <moyerm@vmware.com>
New resource naming conventions:
- Do not repeat the Kind in the name,
e.g. do not call it foo-cluster-role-binding, just call it foo
- Names will generally start with a prefix to identify our component,
so when a user lists all objects of that kind, they can tell to which
component it is related,
e.g. `kubectl get configmaps` would list one named "pinniped-config"
- It should be possible for an operator to make the word "pinniped"
mostly disappear if they choose, by specifying the app_name in
values.yaml, to the extent that is practical (but not from APIService
names because those are hardcoded in golang)
- Each role/clusterrole and its corresponding binding have the same name
- Pinniped resource names that must be known by the server golang code
are passed to the code at run time via ConfigMap, rather than
hardcoded in the golang code. This also allows them to be prepended
with the app_name from values.yaml while creating the ConfigMap.
- Since the CLI `get-kubeconfig` command cannot guess the name of the
CredentialIssuerConfig resource in advance anymore, it lists all
CredentialIssuerConfig in the app's namespace and returns an error
if there is not exactly one found, and then uses that one regardless
of its name
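On the server side, loading the names at run time looks roughly like
this sketch (the ConfigMap key shown is hypothetical):

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // loadNames reads run-time resource names from the ConfigMap created
    // at deploy time, instead of hardcoding them in the golang code.
    func loadNames(ctx context.Context, client kubernetes.Interface, namespace string) (map[string]string, error) {
        cm, err := client.CoreV1().ConfigMaps(namespace).Get(ctx, "pinniped-config", metav1.GetOptions{})
        if err != nil {
            return nil, fmt.Errorf("could not get config: %w", err)
        }
        return cm.Data, nil // e.g. cm.Data["credentialIssuerConfigName"]
    }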
I also started updating the script to deploy the test-webhook instead of
doing TMC stuff. I think the script should live in this repo so that
Pinniped contributors only need to worry about one repo for running
integration tests.
There are a bunch of TODOs in the script, but I figured this was a good
checkpoint. The script successfully runs on my machine and sets up the
test-webhook and pinniped on a local kind cluster. The integration tests
are failing because of some issue with pinniped talking to the test-webhook,
but this is a step in the right direction.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
It looks like requests to our aggregated API service on GKE vacillate
between success and failure until they reach a converged successful
state. I think this has to do with our pods updating the API serving
cert at different times. If only one pod updates its serving cert to
the correct value, then it should respond with success. However, the
other pod would respond with failure. Depending on the load balancing
algorithm that GKE uses to send traffic to pods in a service, we could
end up with a success that we interpret as "all pods have rotated
their certs" when it really just means "at least one pod has rotated
its certs."
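If we want the test to tolerate that, one option (a sketch, not
necessarily what we'll do) is to require several consecutive successes
before declaring convergence:

    import (
        "fmt"
        "time"
    )

    // waitForConvergence treats a single success as inconclusive, since
    // it may only mean that one pod has rotated its certs.
    func waitForConvergence(check func() error, needed, attempts int, interval time.Duration) error {
        consecutive := 0
        for i := 0; i < attempts; i++ {
            if err := check(); err != nil {
                consecutive = 0
            } else {
                consecutive++
                if consecutive >= needed {
                    return nil
                }
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("did not see %d consecutive successes", needed)
    }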
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
This should simplify our build/test setup quite a bit, since it means we have only a single module (at the top level) with all hand-written code. I'll leave `module.sh` alone for now but we may be able to simplify that a bit more.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
We were using this at one point to control which tests ran with `go test ./...`, but now we're also using the `-short` flag to differentiate unit vs. integration tests.
Hopefully this will simplify things a bit.
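For the record, the `-short` convention in an integration test looks
like this (the real tests presumably share a helper for it):

    import "testing"

    func TestSomethingIntegration(t *testing.T) {
        // `go test -short ./...` runs only unit tests; without -short,
        // the integration tests run too.
        if testing.Short() {
            t.Skip("skipping integration test in -short mode")
        }
        // ... exercise the real cluster ...
    }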
Signed-off-by: Matt Moyer <moyerm@vmware.com>
kubectl pulls these in in its main package... I wonder if we should do
the same for our main packages?
Signed-off-by: Andrew Keesler <akeesler@vmware.com>