- Bad usernames and passwords aren't really errors, since they result
from end-user input.
- Other kinds of authentication failures are caused by bad configuration,
so still treat those as errors.
- Empty usernames and passwords are already prevented by our endpoint
handler, but, just to be safe, make sure they also cause errors inside
the authenticator.
- The unit tests for upstreamldap.Provider need to mock the LDAP server,
so add an integration test which allows us to get fast feedback for
this code against a real LDAP server.
- Automatically wrap the user search filter in parentheses if it is not
already wrapped in parens (see the sketch after this list).
- Add more special handling for using "dn" as the username or UID
attribute name.
- Also add some more comments to types_ldapidentityprovider.go.tmpl
- Add some fields to LDAPIdentityProvider that we will need to be able
to search for users during login
- Enhance TestSupervisorLogin to test logging in using an upstream LDAP
identity provider. Part of this new test is skipped for now because
we haven't written the corresponding production code to make it
pass yet.
- Some refactoring and enhancement to env.go and the corresponding env
vars to support the new upstream LDAP provider integration tests.
- Use docker.io/bitnami/openldap for our test LDAP server instead of our
own fork now that they have fixed the bug that we reported.
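To illustrate the filter wrapping mentioned above, here is a minimal
sketch of the idea (the helper name and its simple prefix/suffix
heuristic are assumptions, not the actual implementation):

    package upstreamldap

    import "strings"

    // wrapFilter returns the user search filter wrapped in parentheses,
    // leaving it alone when it already appears to be wrapped.
    func wrapFilter(filter string) string {
        if strings.HasPrefix(filter, "(") && strings.HasSuffix(filter, ")") {
            return filter
        }
        return "(" + filter + ")"
    }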
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
The goal here was to start on an integration test that gets us closer to
the red test that we want, so we can start working on LDAP.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
- Rename the test/deploy/dex directory to test/deploy/tools
- Rename the dex namespace to tools
- Add a new ytt value called `pinny_ldap_password` for the tools
ytt templates
- This new value is not used on main at this time. We intend to use
it in the forthcoming ldap branch. We're defining it on main so
that the CI scripts can use it across all branches and PRs.
Signed-off-by: Ryan Richard <richardry@vmware.com>
This test could flake if the load balancer hostname had been provisioned but was not yet resolving in DNS from the test process.
The fix is to retry this step for up to 5 minutes.
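The retry amounts to something like this (an illustrative sketch with made-up names, not the actual test code):

    package integration

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    // waitForDNS polls until the hostname resolves or the five-minute
    // deadline elapses.
    func waitForDNS(ctx context.Context, hostname string) error {
        ctx, cancel := context.WithTimeout(ctx, 5*time.Minute)
        defer cancel()
        for {
            if _, err := net.LookupHost(hostname); err == nil {
                return nil // DNS is resolving; safe to continue the test
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("%s never resolved: %w", hostname, ctx.Err())
            case <-time.After(5 * time.Second):
                // wait a bit before the next lookup attempt
            }
        }
    }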
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This test could fail when the cluster was under heavy load, which could cause kubectl to emit "Throttling request took [...]" logs that triggered a failure in the test.
The fix is to ignore these innocuous warnings.
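The filtering is roughly this (a hypothetical sketch, not the exact test helper):

    package integration

    import "strings"

    // stripThrottlingWarnings drops kubectl's client-side throttling
    // messages from captured output before the test asserts on it.
    func stripThrottlingWarnings(output string) string {
        var kept []string
        for _, line := range strings.Split(output, "\n") {
            if strings.Contains(line, "Throttling request took") {
                continue // innocuous client-side rate-limiting log line
            }
            kept = append(kept, line)
        }
        return strings.Join(kept, "\n")
    }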
Signed-off-by: Matt Moyer <moyerm@vmware.com>
We had this code that printed out pod logs when certain tests failed, but it was a bit cumbersome. We're removing it because we added a CI task that exports all pod logs after every CI run, which accomplishes the same thing and provides us with a bunch more data.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
We've seen some flakes caused by this test. Some small changes:
- Use a 30s timeout for each iteration of the test loop (so each iteration needs to check or fail more quickly).
- Log a bit more during the checks so we can diagnose what's going on.
- Increase the overall timeout from one minute to five minutes.
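Structurally the loop now looks something like this (a sketch with assumed names, not the actual test code):

    package integration

    import (
        "context"
        "testing"
        "time"
    )

    // checkCondition stands in for whatever the real iteration verifies;
    // logging here is what makes failures diagnosable.
    func checkCondition(ctx context.Context, t *testing.T) bool {
        t.Log("checking condition...") // the real check logs details here
        _ = ctx                        // the real check would honor this timeout
        return false
    }

    func TestEventuallySketch(t *testing.T) {
        // Overall five-minute budget for the whole loop.
        overall, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
        defer cancel()
        for overall.Err() == nil {
            // Each iteration gets its own 30-second timeout so one slow
            // check cannot consume the entire budget.
            attempt, cancelAttempt := context.WithTimeout(overall, 30*time.Second)
            ok := checkCondition(attempt, t)
            cancelAttempt()
            if ok {
                return
            }
        }
        t.Fatal("condition was never met within five minutes")
    }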
Signed-off-by: Matt Moyer <moyerm@vmware.com>
In the case where we are using middleware (e.g., when the API group is
different) in our kubeclient, these error messages have a "...middleware request
for..." bit in the middle.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
This test could flake in some rare scenarios. This change adds a bunch of retries, improves the debugging output if the tests fail, and puts all of the subtests in parallel, which saves ~10s on my local machine.
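The parallelization part is the standard Go subtest pattern (a generic sketch, not the actual test):

    package integration

    import "testing"

    func TestParallelSubtestsSketch(t *testing.T) {
        tests := []struct{ name string }{
            {name: "subtest-a"},
            {name: "subtest-b"},
        }
        for _, tt := range tests {
            tt := tt // capture the range variable for the parallel closure
            t.Run(tt.name, func(t *testing.T) {
                t.Parallel() // subtests run concurrently
                _ = tt       // ...the real subtest body goes here...
            })
        }
    }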
Signed-off-by: Matt Moyer <moyerm@vmware.com>