For password-based login on the CLI (i.e. no browser), this change
relaxes the response code check to allow any redirect code that is
handled by the Go standard library. In the future, we can drop the
rewriteStatusSeeOtherToStatusFoundForBrowserless logic from the
server-side code.
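Roughly, the relaxed check accepts any redirect status that net/http
knows how to follow. A minimal sketch (the helper name is illustrative,
not the actual CLI code):

import "net/http"

// isRedirectStatus reports whether the code is one of the redirect codes
// that the Go standard library's HTTP client will follow.
func isRedirectStatus(code int) bool {
	switch code {
	case http.StatusMovedPermanently, // 301
		http.StatusFound,             // 302
		http.StatusSeeOther,          // 303
		http.StatusTemporaryRedirect, // 307
		http.StatusPermanentRedirect: // 308
		return true
	default:
		return false
	}
}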
Signed-off-by: Monis Khan <mok@vmware.com>
Highlights from this dep bump:
1. Made a copy of the v0.4.0 github.com/go-logr/stdr implementation
for use in tests. We must bump this dep as Kube code uses a
newer version now. We would have to rewrite hundreds of test log
assertions without this copy.
2. Use github.com/felixge/httpsnoop to undo the changes made by
ory/fosite#636 for CLI-based login flows (see the sketch after this
list). This is required for backwards compatibility with older
versions of our CLI. A separate change after this will update the
CLI to be more flexible (it is purposefully not part of this change
to confirm that we did not break anything). For all browser login
flows, we now redirect using http.StatusSeeOther instead of
http.StatusFound.
3. Drop plog.RemoveKlogGlobalFlags as klog no longer mutates global
process flags
4. Only bump github.com/ory/x to v0.0.297 instead of the latest
v0.0.321 because v0.0.298+ pulls in a newer version of
go.opentelemetry.io/otel/semconv which breaks k8s.io/apiserver.
We should update k8s.io/apiserver to use the newer code.
5. Migrate all code from k8s.io/apimachinery/pkg/util/clock to
k8s.io/utils/clock and k8s.io/utils/clock/testing
6. Delete testutil.NewDeleteOptionsRecorder and migrate to the new
kubetesting.NewDeleteActionWithOptions
7. Updated ExpectedAuthorizeCodeSessionJSONFromFuzzing to account for
fosite's new rotated_secrets OAuth client field. This new field
is currently not relevant to us because we have no private clients.
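For item 2, the httpsnoop approach is roughly the following (a sketch;
the handler name and the decision about when a request is browserless
are illustrative):

import (
	"net/http"

	"github.com/felixge/httpsnoop"
)

// rewriteSeeOtherToFound wraps a handler and rewrites StatusSeeOther (303)
// back to StatusFound (302) for browserless (CLI) login flows, preserving
// the pre-fosite#636 behavior that older CLI versions expect.
func rewriteSeeOtherToFound(delegate http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		wrapped := httpsnoop.Wrap(w, httpsnoop.Hooks{
			WriteHeader: func(next httpsnoop.WriteHeaderFunc) httpsnoop.WriteHeaderFunc {
				return func(code int) {
					if code == http.StatusSeeOther {
						code = http.StatusFound
					}
					next(code)
				}
			},
		})
		delegate.ServeHTTP(wrapped, r)
	})
}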
Signed-off-by: Monis Khan <mok@vmware.com>
Also refactor the code that decides which types of revocation failures
are worth retrying. Be more selective by retrying only those kinds of
errors that are likely to succeed on a later attempt.
- Rename the RevokeRefreshToken() function to RevokeToken() and make it
take the token type (refresh or access) as a new parameter.
- This is a prefactor getting ready to support revocation of upstream
access tokens in the garbage collection handler.
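A sketch of the reshaped API (the type and constant names are
illustrative):

import "context"

// RevocableTokenType distinguishes which kind of upstream token to revoke;
// RFC 7009 revocation requests pass this as the token_type_hint parameter.
type RevocableTokenType string

const (
	RefreshTokenType RevocableTokenType = "refresh_token"
	AccessTokenType  RevocableTokenType = "access_token"
)

// TokenRevoker shows the new shape: the old RevokeRefreshToken(ctx, token)
// becomes RevokeToken(ctx, token, tokenType).
type TokenRevoker interface {
	RevokeToken(ctx context.Context, token string, tokenType RevocableTokenType) error
}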
This change updates the TLS config used by all pinniped components.
There are no configuration knobs associated with this change; it
simply tightens our static defaults.
There are four TLS config levels (the first two are sketched after the
component highlights below):
1. Secure (TLS 1.3 only)
2. Default (TLS 1.2+ with the best ciphers that are well supported)
3. Default LDAP (TLS 1.2+ with a weaker cipher list)
4. Legacy (currently unused, TLS 1.2+ with all non-broken ciphers)
Highlights per component:
1. pinniped CLI
- uses "secure" config against KAS
- uses "default" for all other connections
2. concierge
- uses "secure" config as an aggregated API server
- uses "default" config as a impersonation proxy API server
- uses "secure" config against KAS
- uses "default" config for JWT authenticater (mostly, see code)
- no changes to webhook authenticater (see code)
3. supervisor
- uses "default" config as a server
- uses "secure" config against KAS
- uses "default" config against OIDC IDPs
- uses "default LDAP" config against LDAP IDPs
Signed-off-by: Monis Khan <mok@vmware.com>
- Used to determine on which port the impersonation proxy will bind
- Defaults to 8444, which is the old hard-coded port value
- Allow the port number to be configured to any value within the
range 1024 to 65535
- This commit does not include adding new config knobs to the ytt
values file, so while it is possible to change this port without
needing to recompile, it is not convenient
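A minimal sketch of the validation this implies (the names are
illustrative):

import "fmt"

const defaultImpersonationProxyPort = 8444 // the old hard-coded value

func validateImpersonationProxyPort(port int) error {
	if port < 1024 || port > 65535 {
		return fmt.Errorf("impersonation proxy port %d is outside the allowed range 1024-65535", port)
	}
	return nil
}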
- Pull construction of authenticators.Response into searchAndBindUser
- Remove information about the identity provider from the error that
gets returned to users. Log it at the debug level instead, where it
may still show up in logs.
Signed-off-by: Margo Crawford <margaretc@vmware.com>
- Changed to use a custom authenticators.Response rather than the k8s one,
which doesn't include a field for a DN
- Added more checking for the correct IDP type in the token handler
- Small style changes
Signed-off-by: Margo Crawford <margaretc@vmware.com>
This stores the user DN in the session data upon login and checks that
the entry still exists upon refresh. It doesn't check anything
else about the entry yet.
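A sketch of the custom response type this implies (field names are
illustrative; the real type lives in our authenticators package):

import "k8s.io/apiserver/pkg/authentication/user"

// Response carries the usual Kubernetes user info plus the DN of the LDAP
// entry that authenticated, so the DN can be stored in the session at login
// and rechecked during refresh.
type Response struct {
	User user.Info
	DN   string
}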
- Discover the revocation endpoint of the upstream provider in
oidc_upstream_watcher.go and save it into the cache for future use
by the garbage collector controller
- Adds RevokeRefreshToken to UpstreamOIDCIdentityProviderI
- Implements the production version of RevokeRefreshToken
- Implements test doubles for RevokeRefreshToken for future use in
garbage collector's unit tests
- Prefactors the crud and session storage types for future use in the
garbage collector controller
- See remaining TODOs in garbage_collector.go
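A sketch of the revocation endpoint discovery (assumes
github.com/coreos/go-oidc/v3/oidc; the helper name is illustrative):

import (
	"context"

	"github.com/coreos/go-oidc/v3/oidc"
)

// discoverRevocationEndpoint reads the optional revocation_endpoint claim
// from the upstream provider's OIDC discovery document so it can be cached
// for later use by the garbage collector controller.
func discoverRevocationEndpoint(ctx context.Context, issuer string) (string, error) {
	provider, err := oidc.NewProvider(ctx, issuer)
	if err != nil {
		return "", err
	}
	var metadata struct {
		RevocationEndpoint string `json:"revocation_endpoint"`
	}
	if err := provider.Claims(&metadata); err != nil {
		return "", err
	}
	return metadata.RevocationEndpoint, nil // empty if the provider does not advertise one
}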
Otherwise, the CA and proxy settings will not be used for the call
to the upstream token endpoint while performing the refresh. This
mistake was exposed by the TestSupervisorLogin integration test, so
the fix has test coverage.
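For reference, golang.org/x/oauth2 only honors a custom client (and
therefore its CA and proxy settings) when that client is placed on the
context, roughly like this sketch:

import (
	"context"
	"net/http"

	"golang.org/x/oauth2"
)

// refreshWithClient performs an upstream refresh using the provided
// *http.Client, whose transport carries the CA bundle and proxy settings.
func refreshWithClient(ctx context.Context, cfg oauth2.Config, client *http.Client, refreshToken string) (*oauth2.Token, error) {
	ctx = context.WithValue(ctx, oauth2.HTTPClient, client)
	return cfg.TokenSource(ctx, &oauth2.Token{RefreshToken: refreshToken}).Token()
}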
- If the upstream refresh fails, then fail the downstream refresh
- If the upstream refresh returns an ID token, then validate it (we
will use its claims in the future, but not in this commit)
- If the upstream refresh returns a new refresh token, then save it
into the user's session in storage
- Pass the provider cache into the token handler so it can use the
cached providers to perform upstream refreshes
- Handle unexpected errors in the token handler where the user's session
does not contain the expected data. These should not be possible
in practice unless someone is manually editing the storage, but
handle them anyway just to be safe.
- Refactor to share the refresh code between the CLI and the token
endpoint by moving it into the UpstreamOIDCIdentityProviderI
interface, since the token endpoint needed it to be part of that
interface anyway
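A sketch of the refresh flow described above (the session type and
helper names are illustrative; the real logic sits behind
UpstreamOIDCIdentityProviderI):

import (
	"context"

	"github.com/coreos/go-oidc/v3/oidc"
	"golang.org/x/oauth2"
)

type storedSession struct {
	UpstreamRefreshToken string
}

func performUpstreamRefresh(ctx context.Context, cfg oauth2.Config, verifier *oidc.IDTokenVerifier, session *storedSession) error {
	token, err := cfg.TokenSource(ctx, &oauth2.Token{RefreshToken: session.UpstreamRefreshToken}).Token()
	if err != nil {
		return err // the upstream refresh failed, so fail the downstream refresh
	}
	// If the upstream returned an ID token, validate it.
	if idToken, ok := token.Extra("id_token").(string); ok && idToken != "" {
		if _, err := verifier.Verify(ctx, idToken); err != nil {
			return err
		}
	}
	// If the upstream rotated the refresh token, save the new one into the session.
	if token.RefreshToken != "" && token.RefreshToken != session.UpstreamRefreshToken {
		session.UpstreamRefreshToken = token.RefreshToken
	}
	return nil
}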
- Require refresh tokens to be returned from upstream OIDC IDPs
- Store refresh tokens (for OIDC) and IDP information (for all IDPs) in
custom session data during authentication
- Don't pass access=offline all the time
- Throw an error when prompt=none because the spec says we can't ignore
it
- Ignore the other prompt params
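The prompt handling amounts to something like this (a minimal sketch;
the real code works with the parsed authorize request):

import (
	"errors"
	"strings"
)

// checkPromptParam rejects prompt=none, which the spec does not allow us to
// silently ignore, and ignores every other prompt value.
func checkPromptParam(prompt string) error {
	for _, value := range strings.Fields(prompt) {
		if value == "none" {
			return errors.New("prompt=none is not supported by this server")
		}
	}
	return nil
}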
Signed-off-by: Ryan Richard <richardry@vmware.com>
This will allow us to store custom data inside the fosite session
storage for all downstream OIDC sessions.
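A sketch of the resulting session shape (names are illustrative; assumes
fosite's openid.DefaultSession):

import "github.com/ory/fosite/handler/openid"

// Session embeds fosite's default OpenID Connect session (so its fosite
// Session methods are promoted) and adds a spot for Pinniped-specific data,
// which gets serialized into the same storage.
type Session struct {
	*openid.DefaultSession `json:"fosite,omitempty"`
	Custom                 *CustomSessionData `json:"custom,omitempty"`
}

// CustomSessionData holds the extra per-session data, e.g. which upstream
// IDP was used and its refresh token.
type CustomSessionData struct {
	ProviderName         string `json:"providerName"`
	ProviderType         string `json:"providerType"`
	UpstreamRefreshToken string `json:"upstreamRefreshToken,omitempty"`
}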
Signed-off-by: Margo Crawford <margaretc@vmware.com>
This change fixes a copy-paste error that led to the impersonation
proxy signer CA being rotated based on the rotation configuration of
the aggregated API serving certificate. This would lead to occasional
"Unauthorized" flakes in our CI environments that rotate the serving
certificate at a frequent interval.
Updated the certs_expirer controller logs to be more detailed.
Updated CA common names to be more specific (this does not update
any previously generated CAs).
Signed-off-by: Monis Khan <mok@vmware.com>
At debug level:
upstreamoidc.go:213] "claims from ID token and userinfo"
providerName="oidc"
keys=[at_hash aud email email_verified exp iat iss sub]
At all level:
upstreamoidc.go:207] "claims from ID token and userinfo"
providerName="oidc"
claims="{\"at_hash\":\"C55S-BgnHTmr2_TNf...hYmVhYWESBWxvY2Fs\"}"
Signed-off-by: Monis Khan <mok@vmware.com>
This change updates the kube cert agent to a middle ground behavior
that balances leader election gating with how quickly we load the
signer.
If the agent labels have not changed, we will attempt to load the
signer even if we cannot roll out the latest version of the kube
cert agent deployment.
This gives us the best behavior - we do not have controllers
fighting over the state of the deployment and we still get the
signer loaded quickly.
We will have a minute of downtime when the kube cert agent deployment
changes because the new pods will have to wait to become the leader
and for the new deployment to roll out the new pods. We would need
to have a per-pod deployment if we want to avoid that downtime (but
this would come at the cost of startup time and would require
coordination with the kubelet in regard to pod readiness).
Signed-off-by: Monis Khan <mok@vmware.com>
This change updates our certificate code to use the same 5 minute
backdate that is used by the Kubernetes controller manager. This
helps to account for clock skew between the API servers and the
kubelets that are running the pinniped pods. While this backdating
represents a large percentage of the lifetime of our short-lived
certificates (100% for the 5 minute client certificates), even a 10
minute irrevocable client certificate is within our limits. When
we move to the CSR based short-lived certificates, they will always
have at least a 15 minute lifetime (5 minute backdating plus 10 minute
minimum valid duration).
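The backdating amounts to something like this when computing a
certificate's validity window (sketch; the names are illustrative):

import "time"

const certBackdate = 5 * time.Minute // match the Kube controller manager's backdate

// certValidityBounds returns the NotBefore/NotAfter values for a new cert,
// backdating NotBefore to tolerate clock skew between API servers and kubelets.
func certValidityBounds(now time.Time, ttl time.Duration) (notBefore, notAfter time.Time) {
	return now.Add(-certBackdate), now.Add(ttl)
}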
Signed-off-by: Monis Khan <mok@vmware.com>
CertificatesV1beta1 was removed in Kube 1.22, so the tests cannot
blindly rely on it anymore. Use CertificatesV1 whenever the server
reports that it is available, and otherwise fall back to the old
CertificatesV1beta1.
Note that CertificatesV1 was introduced in Kube 1.19.
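A sketch of how the tests can decide which API group version to use
(assumes a discovery check; the helper name is illustrative):

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/kubernetes"
)

// serverHasCertificatesV1 asks the discovery endpoint whether the
// certificates.k8s.io/v1 group version is served (it is on Kube 1.19+).
func serverHasCertificatesV1(client kubernetes.Interface) (bool, error) {
	_, err := client.Discovery().ServerResourcesForGroupVersion("certificates.k8s.io/v1")
	if err == nil {
		return true, nil
	}
	if apierrors.IsNotFound(err) {
		return false, nil // fall back to certificates.k8s.io/v1beta1
	}
	return false, err
}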
This commit makes the following changes to the kube cert agent tests:
1. Informers are synced on start using the controllerinit code
2. Deployment client and informer are synced per controller sync loop
3. Controller sync loop exits after two consistent errors
4. Use assert instead of require to avoid ending the test early
Signed-off-by: Monis Khan <mok@vmware.com>
Not required, but within the spirit of using the version number.
Since the existing kube cert agent deployment will get deleted anyway
during an upgrade, it shouldn't hurt to change the version number.
New installations will get the new version number on the new kube cert
agent deployment.
Fixes #801. The solution is complicated by the fact that the Selector
field of Deployments is immutable. It would have been easy to just
make the Selectors of the main Concierge Deployment, the Kube cert agent
Deployment, and the various Services use more specific labels, but
that would break upgrades. Instead, we make the Pod template labels and
the Service selectors more specific, because those are not immutable,
and then handle the Deployment selectors in a special way.
For the main Concierge and Supervisor Deployments, we cannot change
their selectors, so they remain "app: app_name", and we make other
changes to ensure that only the intended pods are selected. We keep the
original "app" label on those pods and remove the "app" label from the
pods of the Kube cert agent Deployment. By removing it from the Kube
cert agent pods, there is no longer any chance that they will
accidentally get selected by the main Concierge Deployment.
For the Kube cert agent Deployment, we can change the immutable selector
by deleting and recreating the Deployment. The new selector uses only
the unique label that has always been applied to the pods of that
deployment. Upon recreation, these pods no longer have the "app" label,
so they will not be selected by the main Concierge Deployment's
selector.
The selectors of all Services have been updated to use new labels to
more specifically target the intended pods. For the Concierge Services,
this will prevent them from accidentally including the Kube cert agent
pods. For the Supervisor Services, we follow the same convention just
to be consistent and to help future-proof the Supervisor app in case it
ever has a second Deployment added to it.
The selector of the auto-created impersonation proxy Service was
also previously using the "app" label. There is no change to this
Service because that label will now select the correct pods, since
the Kube cert agent pods no longer have that label. It would be possible
to update that selector to use the new more specific label, but then we
would need to invent a way to pass that label into the controller, so
it seemed like more work than was justified.
Changes made to both components:
1. Logs are always flushed on process exit
2. Informer cache sync can no longer hang process startup forever
Changes made to concierge:
1. Add pre-shutdown hook that waits for controllers to exit cleanly
2. Informer caches are synced in post-start hook
Changes made to supervisor:
1. Add shutdown code that waits for controllers to exit cleanly
2. Add shutdown code that waits for active connections to become idle
Waiting for controllers to exit cleanly is critical as this allows
the leader election logic to release the lock on exit. This reduces
the time needed for the next leader to be elected.
Signed-off-by: Monis Khan <mok@vmware.com>
The kubelet will send the SIGTERM signal when it wants a process to
exit. After a grace period, it will send the SIGKILL signal to
force the process to terminate. The concierge has always handled
both SIGINT and SIGTERM as indicators for it to gracefully exit
(i.e. stop watches, controllers, etc). This change updates the
supervisor to do the same (previously it only handled SIGINT). This
is required to allow the leader election lock release logic to run.
Otherwise it can take a few minutes for new pods to acquire the
lease since they believe it is already held.
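The signal handling amounts to something like this sketch:

import (
	"context"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	// Treat both SIGINT and SIGTERM as the cue to begin a graceful shutdown
	// so that the leader election lease release logic always gets to run.
	ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer cancel()

	// ... start controllers and servers with ctx ...

	<-ctx.Done() // the kubelet sent SIGTERM (or a user sent SIGINT)
	// graceful cleanup (stop watches, controllers, release the lease) runs here
}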
Signed-off-by: Monis Khan <mok@vmware.com>
For clusters where the control plane nodes aren't running a CNI, the
kube-cert-agent pods deployed by the Concierge cannot be scheduled
because they are not configured with `hostNetwork: true`. This change
allows embedding the host network setting in the Concierge
configuration (by copying it from the kube-controller-manager pod spec
when generating the kube-cert-agent Deployment).
Also fixed a stray double comma in one of the nearby tests.
OpenShift has good defaults for these duration fields that we can
use instead of coming up with them ourselves:
e14e06ba8d/pkg/config/leaderelection/leaderelection.go (L87-L109)
Copied here for easy future reference:
// We want to be able to tolerate 60s of kube-apiserver disruption without causing pod restarts.
// We want the graceful lease re-acquisition fairly quick to avoid waits on new deployments and other rollouts.
// We want a single set of guidance for nearly every lease in openshift. If you're special, we'll let you know.
// 1. clock skew tolerance is leaseDuration-renewDeadline == 30s
// 2. kube-apiserver downtime tolerance is == 78s
// lastRetry=floor(renewDeadline/retryPeriod)*retryPeriod == 104
// downtimeTolerance = lastRetry-retryPeriod == 78s
// 3. worst non-graceful lease acquisition is leaseDuration+retryPeriod == 163s
// 4. worst graceful lease acquisition is retryPeriod == 26s
if ret.LeaseDuration.Duration == 0 {
ret.LeaseDuration.Duration = 137 * time.Second
}
if ret.RenewDeadline.Duration == 0 {
// this gives 107/26=4 retries and allows for 137-107=30 seconds of clock skew
// if the kube-apiserver is unavailable for 60s starting just before t=26 (the first renew),
// then we will retry on 26s intervals until t=104 (kube-apiserver came back up at 86), and there will
// be 33 seconds of extra time before the lease is lost.
ret.RenewDeadline.Duration = 107 * time.Second
}
if ret.RetryPeriod.Duration == 0 {
ret.RetryPeriod.Duration = 26 * time.Second
}
Signed-off-by: Monis Khan <mok@vmware.com>
This change fixes a small race condition that occurred when the
current leader failed to renew its lease. Before this change, the
leader would first release the lease via the Kube API and then would
update its in-memory status to reflect that change. Now those
events occur in the reverse (i.e. correct) order.
Signed-off-by: Monis Khan <mok@vmware.com>
- Change list of attributeParsingOverrides to a map
- Add unit test for sAMAccountName as group name without the override
- Change some comments in the type definition.
The Kube API server code that we use will cast inputs in an attempt
to see if they implement optional interfaces. This change adds a
simple wrapper struct to prevent such casts from causing us any
issues.
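The pattern looks roughly like this (illustrated here with io.Reader
rather than the actual Kube API server types):

import "io"

// onlyReader embeds just the interface we intend to expose. Because its
// method set contains only io.Reader's methods, type assertions for
// optional interfaces (io.ReaderFrom, io.Seeker, etc.) on the wrapper fail
// instead of reaching through to the underlying implementation.
type onlyReader struct {
	io.Reader
}

func exposeOnlyReader(r io.Reader) io.Reader {
	return onlyReader{Reader: r}
}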
Signed-off-by: Monis Khan <mok@vmware.com>
In the upstream dynamiccertificates package, we rely on two pieces
of code:
1. DynamicServingCertificateController.newTLSContent which calls
- clientCA.CurrentCABundleContent
- servingCert.CurrentCertKeyContent
2. unionCAContent.VerifyOptions which calls
- unionCAContent.CurrentCABundleContent
This results in calls to our tlsServingCertDynamicCertProvider and
impersonationSigningCertProvider. If we Unset these providers, we
subtly break these consumers. At best this results in test slowness
and flakes while we wait for reconcile loops to converge. At worst,
it results in actual errors during runtime. For example, we
previously would Unset the impersonationSigningCertProvider on any
sync loop error (even a transient one caused by a network blip or
a conflict between writes from different replicas of the concierge).
This would cause us to transiently fail to issue new certificates
from the token credential request API. It would also cause us to
transiently fail to authenticate previously issued client certs
(which results in occasional Unauthorized errors in CI).
Signed-off-by: Monis Khan <mok@vmware.com>