For password-based login on the CLI (i.e. no browser), this change
relaxes the response code check to allow any redirect code
handled by the Go standard library. In the future, we can drop the
rewriteStatusSeeOtherToStatusFoundForBrowserless logic from the
server side code.
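A sketch of what the relaxed check can look like (the helper name is
illustrative; these are exactly the redirect codes that Go's net/http
client will follow):

```go
package main

import "net/http"

// isRedirectCode reports whether the Go standard library's HTTP
// client would treat this status code as a followable redirect.
func isRedirectCode(code int) bool {
	switch code {
	case http.StatusMovedPermanently, // 301
		http.StatusFound, // 302
		http.StatusSeeOther, // 303
		http.StatusTemporaryRedirect, // 307
		http.StatusPermanentRedirect: // 308
		return true
	}
	return false
}
```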
Signed-off-by: Monis Khan <mok@vmware.com>
Highlights from this dep bump:
1. Made a copy of the v0.4.0 github.com/go-logr/stdr implementation
for use in tests. We must bump this dep because Kube code now
uses a newer version. Without this copy, we would have to rewrite
hundreds of test log assertions.
2. Use github.com/felixge/httpsnoop to undo the changes made by
ory/fosite#636 for CLI-based login flows (see the sketch after
this list). This is required for backwards compatibility with
older versions of our CLI. A separate change after this one will
update the CLI to be more flexible (it is purposefully not part
of this change, to confirm that we did not break anything). For
all browser login flows, we now redirect using
http.StatusSeeOther instead of http.StatusFound.
3. Drop plog.RemoveKlogGlobalFlags as klog no longer mutates global
process flags.
4. Only bump github.com/ory/x to v0.0.297 instead of the latest
v0.0.321 because v0.0.298+ pulls in a newer version of
go.opentelemetry.io/otel/semconv which breaks k8s.io/apiserver.
We should update k8s.io/apiserver to use the newer code.
5. Migrate all code from k8s.io/apimachinery/pkg/util/clock to
k8s.io/utils/clock and k8s.io/utils/clock/testing.
6. Delete testutil.NewDeleteOptionsRecorder and migrate to the new
kubetesting.NewDeleteActionWithOptions.
7. Updated ExpectedAuthorizeCodeSessionJSONFromFuzzing to account
for fosite's new rotated_secrets OAuth client field. This new
field is currently not relevant to us as we have no private
clients.
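A sketch of the httpsnoop approach from item 2 (the handler wiring
here is illustrative):

```go
package main

import (
	"net/http"

	"github.com/felixge/httpsnoop"
)

// rewriteStatusSeeOtherToStatusFoundForBrowserless wraps a response
// writer so that fosite's new 303 See Other redirects are rewritten
// back to 302 Found, keeping older CLIs working.
func rewriteStatusSeeOtherToStatusFoundForBrowserless(w http.ResponseWriter) http.ResponseWriter {
	return httpsnoop.Wrap(w, httpsnoop.Hooks{
		WriteHeader: func(next httpsnoop.WriteHeaderFunc) httpsnoop.WriteHeaderFunc {
			return func(code int) {
				if code == http.StatusSeeOther {
					code = http.StatusFound
				}
				next(code)
			}
		},
	})
}
```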
Signed-off-by: Monis Khan <mok@vmware.com>
This change updates the TLS config used by all pinniped components.
There are no configuration knobs associated with this change; it
simply tightens our static defaults.
There are four TLS config levels (the first two are sketched in Go
below):
1. Secure (TLS 1.3 only)
2. Default (TLS 1.2+ with the best widely supported ciphers)
3. Default LDAP (TLS 1.2+ including some weaker ciphers)
4. Legacy (currently unused, TLS 1.2+ with all non-broken ciphers)
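A sketch of the first two levels (the cipher list shown is
illustrative, not necessarily the exact list in the code):

```go
package main

import "crypto/tls"

// Level 1, "secure": TLS 1.3 only. Go does not allow configuring
// TLS 1.3 cipher suites, so the minimum version is all we need.
func secureConfig() *tls.Config {
	return &tls.Config{MinVersion: tls.VersionTLS13}
}

// Level 2, "default": TLS 1.2+ restricted to well-supported AEAD
// ciphers with forward secrecy.
func defaultConfig() *tls.Config {
	return &tls.Config{
		MinVersion: tls.VersionTLS12,
		CipherSuites: []uint16{
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
			tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
		},
	}
}
```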
Highlights per component:
1. pinniped CLI
- uses "secure" config against KAS
- uses "default" for all other connections
2. concierge
- uses "secure" config as an aggregated API server
- uses "default" config as a impersonation proxy API server
- uses "secure" config against KAS
- uses "default" config for JWT authenticater (mostly, see code)
- no changes to webhook authenticater (see code)
3. supervisor
- uses "default" config as a server
- uses "secure" config against KAS
- uses "default" config against OIDC IDPs
- uses "default LDAP" config against LDAP IDPs
Signed-off-by: Monis Khan <mok@vmware.com>
- If the upstream refresh fails, then fail the downstream refresh
(the overall flow is sketched after this list)
- If the upstream refresh returns an ID token, then validate it (we
will use its claims in the future, but not in this commit)
- If the upstream refresh returns a new refresh token, then save it
into the user's session in storage
- Pass the provider cache into the token handler so it can use the
cached providers to perform upstream refreshes
- Handle unexpected errors in the token handler where the user's session
does not contain the expected data. These should not be possible
in practice unless someone is manually editing the storage, but
handle them anyway just to be safe.
- Refactor to share the refresh code between the CLI and the token
endpoint by moving it into the UpstreamOIDCIdentityProviderI
interface, since the token endpoint needed it to be part of that
interface anyway
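Roughly, the refresh bullets above combine into the following flow
(a sketch using golang.org/x/oauth2 directly; the real code goes
through the UpstreamOIDCIdentityProviderI interface and the session
storage APIs):

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/oauth2"
)

func refreshUpstream(ctx context.Context, cfg *oauth2.Config, stored string) error {
	// If the upstream refresh fails, fail the downstream refresh too.
	tok, err := cfg.TokenSource(ctx, &oauth2.Token{RefreshToken: stored}).Token()
	if err != nil {
		return fmt.Errorf("upstream refresh failed: %w", err)
	}

	// If the upstream returned an ID token, validate it (its claims
	// are not otherwise used yet).
	if idToken, ok := tok.Extra("id_token").(string); ok {
		_ = idToken // validation via the provider's verifier goes here
	}

	// If the upstream returned a new (rotated) refresh token, save it
	// back into the user's session storage.
	if tok.RefreshToken != "" && tok.RefreshToken != stored {
		// session storage update goes here (storage API omitted)
	}
	return nil
}
```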
- Add `AllowPasswordGrant` boolean field to OIDCIdentityProvider's spec
- The oidc upstream watcher controller copies the value of
`AllowPasswordGrant` into the configuration of the cached provider
- Add password grant to the UpstreamOIDCIdentityProviderI interface
which is implemented by the cached provider instance for use in the
authorization endpoint
- Enhance the IDP discovery endpoint to return the supported "flows"
for each IDP ("cli_password" and/or "browser_authcode")
- Enhance `pinniped get kubeconfig` to help the user choose the desired
flow for the selected IDP, and to write the flow into the resulting
kubeconfig
- Enhance `pinniped login oidc` to have a flow flag to tell it which
client-side flow it should use for auth (CLI-based or browser-based)
- In the Dex config, allow the resource owner password grant, which Dex
implements to also return ID tokens, for use in integration tests
- Enhance the authorize endpoint to perform the password grant when
requested by the incoming headers (sketched after this list).
This commit does not include unit tests for the enhancements to
the authorize endpoint; those will come in the next commit
- Extract some shared helpers from the callback endpoint to share the
code with the authorize endpoint
- Add new integration tests
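A sketch of the header-based trigger for the password grant (the
header names below are assumptions for illustration, not necessarily
the shipped names):

```go
package main

import "net/http"

// wantsPasswordGrant reports whether the incoming authorize request
// is asking for the CLI password flow by supplying credential headers.
func wantsPasswordGrant(r *http.Request) (username, password string, ok bool) {
	username = r.Header.Get("Pinniped-Username") // assumed header name
	password = r.Header.Get("Pinniped-Password") // assumed header name
	return username, password, username != "" && password != ""
}
```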
This may be a temporary fix. It switches the manual auth code prompt to use `promptForValue()` instead of `promptForSecret()`. The `promptForSecret()` function no longer supports cancellation (the v0.9.2 behavior), while `promptForValue()` now cancels by running the blocking read in a background goroutine, which is allowed to block forever or leak (acceptable for our CLI use case).
This means that the authorization code is now visible in the user's terminal, but this is really not a big deal because of PKCE and the limited lifetime of an auth code.
The main goroutine now correctly waits for the "manual prompt" goroutine to clean up, which now includes printing the extra newline that would normally have been entered by the user in the manual flow.
The text of the manual login prompt is updated to be more concise and less scary (don't use the word "fail").
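The cancellation pattern in `promptForValue()` looks roughly like this
(a sketch; names and signatures are illustrative):

```go
package main

import (
	"bufio"
	"context"
	"fmt"
	"io"
	"strings"
)

func promptForValue(ctx context.Context, in *bufio.Reader, out io.Writer, label string) (string, error) {
	fmt.Fprint(out, label)
	type result struct {
		text string
		err  error
	}
	ch := make(chan result, 1) // buffered: the reader goroutine may outlive us
	go func() {
		// This read may block forever if the user never types anything;
		// leaking this goroutine is acceptable for a short-lived CLI.
		text, err := in.ReadString('\n')
		ch <- result{strings.TrimSuffix(text, "\n"), err}
	}()
	select {
	case r := <-ch:
		return r.text, r.err
	case <-ctx.Done():
		return "", ctx.Err()
	}
}
```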
Signed-off-by: Matt Moyer <moyerm@vmware.com>
It turns out that `syscall.Stdin` is of type `int` on Linux and macOS, but not on Windows (it's `syscall.Handle`). This should now be portable and do all the required type casting on every platform.
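The portable pattern is an explicit conversion at the call site, e.g.
with golang.org/x/term:

```go
package main

import (
	"syscall"

	"golang.org/x/term"
)

func readPassword() ([]byte, error) {
	// On Linux and macOS syscall.Stdin is already an int; on Windows
	// it is a syscall.Handle. The explicit conversion compiles on all
	// three platforms.
	return term.ReadPassword(int(syscall.Stdin))
}
```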
Signed-off-by: Matt Moyer <moyerm@vmware.com>
For CLI-based auth, such as with LDAP upstream identity providers, the
user may use these environment variables to avoid getting interactively
prompted for username and password.
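For example (the variable name below is an assumption for
illustration, not necessarily the shipped name):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// usernameForLogin prefers the environment variable and falls back to
// an interactive prompt when it is unset.
func usernameForLogin() (string, error) {
	if v, ok := os.LookupEnv("PINNIPED_USERNAME"); ok { // assumed variable name
		return v, nil
	}
	fmt.Fprint(os.Stderr, "Username: ")
	line, err := bufio.NewReader(os.Stdin).ReadString('\n')
	return strings.TrimSpace(line), err
}
```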
I found that there are some situations with `response_mode=form_post` where Chrome will open additional speculative TCP connections. These connections sit idle, so they block server shutdown until the (previously 5s) timeout expires. Lowering this to 500ms should be safe and makes any added latency at login much less noticeable.
More information about Chrome's TCP-level behavior here: https://bugs.chromium.org/p/chromium/issues/detail?id=116982#c5
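The shutdown pattern looks roughly like this (a sketch):

```go
package main

import (
	"context"
	"net/http"
	"time"
)

func shutdownCallbackServer(srv *http.Server) {
	// Give idle connections (including Chrome's speculative ones) at
	// most 500ms to drain before forcing the listener closed.
	ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		_ = srv.Close() // force-close anything still lingering
	}
}
```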
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This flag is (for now) meant only to facilitate end-to-end testing, allowing us to force the "manual" login flow. If it ends up being useful we can un-hide it, but this seemed like the safest option to start with.
There is also a corresponding `--oidc-skip-listen` on the `pinniped get kubeconfig` command.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This adds a new login flow that allows manually pasting the authorization code instead of receiving a browser-based callback.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This is a more restrictive library interface that more closely matches the use cases of our new form_post login flow.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This allows the prompts to be cancelled, which we need to be able to do in the case where we prompt for a manually-pasted auth code but the automatic callback succeeds.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
See RFC 6648, which asks that people stop using the `X-` prefix on
header names. Also, Matt preferred not mentioning "IDP" in the header
name.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
- Also some light prefactoring in login.go to make room for LDAP-style
login, which is not implemented yet in this commit. TODOs are added.
- And fix a test pollution problem in login_oidc_test.go where it was
using a real on-disk CLI cache file, so the tests were affected by
the contents of that file and would sometimes cause each other to
fail (the isolation fix is sketched below).
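The isolation fix amounts to pointing each test at its own temporary
cache file, something like:

```go
package login_test // illustrative package name

import (
	"path/filepath"
	"testing"
)

func TestLoginOIDC(t *testing.T) {
	// t.TempDir is removed automatically when the test finishes, so no
	// state leaks between tests or onto the developer's machine.
	cachePath := filepath.Join(t.TempDir(), "credentials.yaml") // file name illustrative
	_ = cachePath // handed to the command under test via a flag or env var
}
```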
We have some nice normalization code in this package to remove expired or otherwise malformed cache entries, but we weren't calling it in the appropriate place.
Added calls to normalize the cache data structure before and after each transaction (sketched below), and added test cases to ensure that normalization is being called.
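Conceptually (a sketch with illustrative names, not the package's
exported API):

```go
package cache

import "time"

// Cache is an illustrative stand-in for the credential cache.
type Cache struct {
	entries map[string]time.Time // key -> expiry
}

// normalize drops expired entries; malformed-entry checks would live
// here too.
func (c *Cache) normalize() {
	now := time.Now()
	for k, expiry := range c.entries {
		if !expiry.After(now) {
			delete(c.entries, k)
		}
	}
}

// withTransaction normalizes before and after each transaction so bad
// entries never survive a read or write.
func withTransaction(c *Cache, fn func(*Cache)) {
	c.normalize()
	fn(c)
	c.normalize()
}
```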
Signed-off-by: Matt Moyer <moyerm@vmware.com>
All controller unit tests were accidentally using a timeout context
for the informers, instead of a cancel context which stays alive until
each test is completely finished. There is no reason to risk
unpredictable behavior of a timeout being reached during an individual
test, even though with the previous 3 second timeout it could only be
reached on a machine which is running orders of magnitude slower than
usual, since each test usually runs in about 100-300 ms. Unfortunately,
sometimes our CI workers might get that slow.
This sparked a review of other usages of timeout contexts in other
tests, and all of them were increased to a minimum value of 1 minute,
under the rule of thumb that our tests will be more reliable on slow
machines if they "pass fast and fail slow".
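In each test, the change looks like this (sketch):

```go
package controllers_test // illustrative package name

import (
	"context"
	"testing"
)

func TestController(t *testing.T) {
	// Before: a fixed timeout that a slow CI worker could hit mid-test.
	//   ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	// After: a cancel context that lives until the test is fully done.
	ctx, cancel := context.WithCancel(context.Background())
	t.Cleanup(cancel)
	_ = ctx // handed to the informers under test
}
```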
This is a bit more clear. We're changing this now because it is a non-backwards-compatible change that we can make now since none of this RFC8693 token exchange stuff has been released yet.
There is also a small typo fix in some flag usages (s/RF8693/RFC8693/).
Signed-off-by: Matt Moyer <moyerm@vmware.com>
- The overall timeout for logins is increased to 90 minutes.
- The timeout for token refresh is increased from 30 seconds to 60 seconds to be a bit more tolerant of extremely slow networks.
- A new, matching timeout of 60 seconds has been added for the OIDC discovery, auth code exchange, and RFC8693 token exchange operations.
The new code uses the `http.Client.Timeout` field rather than managing contexts on individual requests. This is easier because the OIDC package stores a context at creation time and tries to use it later when performing key refresh operations.
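A sketch of the pattern, assuming the coreos/go-oidc package (its `ClientContext` helper lets the provider reuse the timeout-bearing client later):

```go
package main

import (
	"context"
	"net/http"
	"time"

	"github.com/coreos/go-oidc/v3/oidc"
)

func newProvider(issuer string) (*oidc.Provider, error) {
	// The timeout lives on the client, not on a per-request context,
	// so it also covers the provider's later key refresh operations.
	client := &http.Client{Timeout: 60 * time.Second}
	ctx := oidc.ClientContext(context.Background(), client)
	return oidc.NewProvider(ctx, issuer)
}
```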
Signed-off-by: Matt Moyer <moyerm@vmware.com>