The Kube cert agent code is in [cmd/pinniped-concierge-kube-cert-agent/main.go](https://github.com/vmware-tanzu/pinniped/blob/main/cmd/pinniped-concierge-kube-cert-agent/main.go).
3. The Pinniped server components
There are three server components.
They are all compiled into a single binary in a single container image by the project's [Dockerfile](https://github.com/vmware-tanzu/pinniped/blob/main/Dockerfile).
The `main()` function chooses which component to start based on the path used to invoke the binary,
as seen in [cmd/pinniped-server/main.go](https://github.com/vmware-tanzu/pinniped/blob/main/cmd/pinniped-server/main.go). A sketch of this dispatch pattern appears after the list below.
- The Concierge can be installed on a cluster to authenticate users externally via dynamically registered authenticators,
and then allow those users to access the cluster's API server using that identity. The Concierge's entry point is
in [internal/concierge/server/server.go](https://github.com/vmware-tanzu/pinniped/blob/main/internal/concierge/server/server.go).
- The Supervisor can be installed on a central cluster to provide single sign-on capabilities to other clusters,
using various types of external identity providers as the source of user identity. The Supervisor's entry point is
in [internal/supervisor/server/server.go](https://github.com/vmware-tanzu/pinniped/blob/main/internal/supervisor/server/server.go).
- The Local User Authenticator is a component used only for integration testing and demos of the Concierge.
At this time, it is not intended for production use. It can be registered as a WebhookAuthenticator with the Concierge.
It is implemented in [internal/localuserauthenticator/localuserauthenticator.go](https://github.com/vmware-tanzu/pinniped/blob/main/internal/localuserauthenticator/localuserauthenticator.go).
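To illustrate, here is a minimal sketch of that dispatch pattern. The `run*` functions and the exact binary names are illustrative stand-ins, not the real entry points; see the real `main.go` for the actual wiring.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Hypothetical entry points standing in for the real component servers.
func runConcierge() error              { return nil }
func runSupervisor() error             { return nil }
func runLocalUserAuthenticator() error { return nil }

func main() {
	// The single binary is installed under several names (e.g. as symlinks),
	// and the basename of the invocation path selects which component to start.
	components := map[string]func() error{
		"pinniped-concierge":       runConcierge,
		"pinniped-supervisor":      runSupervisor,
		"local-user-authenticator": runLocalUserAuthenticator,
	}
	name := filepath.Base(os.Args[0])
	run, ok := components[name]
	if !ok {
		fmt.Fprintf(os.Stderr, "cannot start unknown component %q\n", name)
		os.Exit(1)
	}
	if err := run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```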
## Deployment
The YAML manifests required to deploy the server-side components to Kubernetes clusters
are in the [deploy](https://github.com/vmware-tanzu/pinniped/tree/main/deploy) directory.
For each release, these ytt templates are rendered by the CI/CD system. The reference to the container image for that
release is templated in, but otherwise the default values from the respective `values.yaml` files are used.
The resulting manifests are attached to each [GitHub release of Pinniped](https://github.com/vmware-tanzu/pinniped/releases).
Users may use these pre-rendered manifests to install the Supervisor or Concierge.
Alternatively, a user may use ytt to render the templates of any release themselves, customizing the values from `values.yaml`,
by following the [installation instructions for the Concierge]({{< ref "install-concierge" >}})
or [for the Supervisor]({{< ref "install-supervisor" >}}).
## Custom Resource Definitions (CRDs)
CRDs are used to configure both the Supervisor and the Concierge. The source code for these can be found in the `.tmpl`
files under the various subdirectories of the [apis](https://github.com/vmware-tanzu/pinniped/tree/main/apis) directory.
Any struct with the special `+kubebuilder:resource:` comment marker will become a CRD. After adding or changing one of these
files, the code generator may be executed by running [hack/update.sh](https://github.com/vmware-tanzu/pinniped/blob/main/hack/update.sh);
the results are written to the [generated](https://github.com/vmware-tanzu/pinniped/tree/main/generated) directory, where they should be committed.
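For illustration, here is a hypothetical API type showing the marker-comment pattern. This type and its fields are invented, and in the real repository such types live in `.tmpl` files that are processed once per API version.

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ExampleAuthenticatorSpec is an invented spec, for illustration only.
type ExampleAuthenticatorSpec struct {
	// Endpoint is the URL of some hypothetical external service.
	Endpoint string `json:"endpoint"`
}

// The marker comment below is what the code generator looks for; this struct
// would therefore become a CRD (along with deepcopy functions, clients, etc.).
// +kubebuilder:resource:categories=pinniped

// ExampleAuthenticator is an invented API type, for illustration only.
type ExampleAuthenticator struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// Spec holds the user-provided configuration.
	Spec ExampleAuthenticatorSpec `json:"spec"`
}
```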
Other `.tmpl` files that do not use the `+kubebuilder:resource:` comments will also be picked up by the code generator.
These will not become CRDs, but are also considered part of Pinniped's public API for golang client code to use.
## Controllers
Both the Supervisor and Concierge components use Kubernetes-style controllers to watch resources and to take action
to converge towards a desired state. For example, all the Pinniped CRDs are watched by controllers.
All controllers are written using a custom controller library which is in the
[internal/controllerlib](https://github.com/vmware-tanzu/pinniped/tree/main/internal/controllerlib) directory.
Each server component calls `controllerlib.NewManager()` and then adds all of its controller instances to the manager
on the subsequent lines, so that code can be read as a catalog of all of the component's controllers
(a sketch of this wiring appears after the list below). This happens:
- For the Concierge, in [internal/controllermanager/prepare_controllers.go](https://github.com/vmware-tanzu/pinniped/blob/main/internal/controllermanager/prepare_controllers.go)
- For the Supervisor, in [internal/supervisor/server/server.go](https://github.com/vmware-tanzu/pinniped/blob/main/internal/supervisor/server/server.go)
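The wiring looks roughly like the following sketch. It is written as if inside the Pinniped module (the `internal` packages are not importable from other modules), and it assumes `controllerlib` exposes a `Controller` interface plus `WithController` and `Start` methods with these shapes; those names are inferred from the pattern described above, not a verified API.

```go
package example

import (
	"context"

	"go.pinniped.dev/internal/controllerlib"
)

// prepareControllers sketches the catalog-style wiring: create one manager,
// then chain on every controller that the component needs.
// Assumption: WithController takes a controller plus a worker count, and
// Start runs the informers and sync loops until the context is cancelled.
func prepareControllers(ctx context.Context, controllers ...controllerlib.Controller) {
	manager := controllerlib.NewManager()
	for _, c := range controllers {
		manager = manager.WithController(c, 1) // one sync worker per controller
	}
	manager.Start(ctx)
}
```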
### Controller patterns
Each controller mostly follows a general pattern:
1. Each has a constructor-like function which internally registers which resources the controller would like to watch.
2. Whenever one of the watched resources changes, or whenever about 3 minutes has elapsed, the `Sync()` method is called.
3. The `Sync()` method reads state from informer caches to understand the actual current state of the world.
4. The `Sync()` method then performs business logic to determine the desired state of the world, and makes updates to the
world to converge towards the desired state. It may create/update/delete Kubernetes resources by calling the Kubernetes API,
or it may update an in-memory cache of objects that are shared by other parts of the code (often an API endpoint's implementation),
or it may perform other updates.
The `Sync()` method is generally written to be idempotent and reasonably performant because it can be called fairly often.
Some controllers are written to collaborate with other controllers. For example, one controller might create a Secret
and annotate it with an expiration timestamp, while another controller watches those Secrets to delete any that are beyond
their expiration time.
A simple example of a controller which employs these patterns can be found among the controllers in the
[internal/controller](https://github.com/vmware-tanzu/pinniped/tree/main/internal/controller) directory; the sketch below shows the general shape.
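This hedged sketch uses plain client-go types to illustrate steps 1–4 and the Secret-expiration collaboration described above. The annotation key, namespace, and controller struct are invented, and real Pinniped controllers are wired through `controllerlib` rather than free-standing like this.

```go
package example

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/kubernetes"
	corelisters "k8s.io/client-go/listers/core/v1"
)

// expiresAnnotation is an invented key; a collaborating controller is assumed
// to have stamped each Secret with an RFC 3339 expiration timestamp.
const expiresAnnotation = "example.pinniped.dev/expires"

type expirerController struct {
	client  kubernetes.Interface     // used for writes
	secrets corelisters.SecretLister // reads come from the informer cache
}

// Sync converges toward the desired state: expired Secrets should not exist.
// It is idempotent, so it is safe to call on every watch event or resync.
func (c *expirerController) Sync(ctx context.Context) error {
	secrets, err := c.secrets.Secrets("example-namespace").List(labels.Everything())
	if err != nil {
		return err
	}
	for _, secret := range secrets {
		raw, ok := secret.Annotations[expiresAnnotation]
		if !ok {
			continue // not one of ours; leave it alone
		}
		expires, err := time.Parse(time.RFC3339, raw)
		if err != nil || time.Now().Before(expires) {
			continue
		}
		// Delete through the API server to converge toward the desired state.
		err = c.client.CoreV1().Secrets(secret.Namespace).Delete(ctx, secret.Name, metav1.DeleteOptions{})
		if err != nil {
			return err
		}
	}
	return nil
}
```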
### Leader election
When multiple replicas of a Pinniped server component are running, only the Pod which is currently elected leader may
perform writes, so the controllers in the other Pods are unable to compete to make changes to Kubernetes resources.
These failed write operations will appear in the logs
as write errors due to not being the leader. While this might look like an error, this is normal for the controllers.
The non-leader Pods' controllers may still read state, update in-memory caches, etc.
## Concierge API endpoints
The Concierge hosts the following endpoints, which are automatically registered with the Kubernetes API server
as aggregated API endpoints, which makes them appear to a client almost as if they were built into Kubernetes itself.
- `TokenCredentialRequest` can receive a token, pass the token to an external authenticator to authenticate the user,
and then return a short-lived mTLS client certificate keypair which can be used to gain access to the Kubernetes API
as that user.
It is in [internal/registry/credentialrequest/rest.go](https://github.com/vmware-tanzu/pinniped/blob/main/internal/registry/credentialrequest/rest.go).
- `WhoAmI` will return basic details about the currently authenticated user.
It is in [internal/registry/whoamirequest/rest.go](https://github.com/vmware-tanzu/pinniped/blob/main/internal/registry/whoamirequest/rest.go).
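Both endpoints are implemented as REST storage objects plugged into an aggregated API server. Here is a hedged sketch of that shape: the `rest.Creater` interface from k8s.io/apiserver is real, but the request type, its fields, and the two function hooks are invented stand-ins for the authenticator lookup and certificate issuance in the real code.

```go
package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apiserver/pkg/registry/rest"
)

// ExampleCredentialRequest is a hypothetical stand-in for the real
// TokenCredentialRequest type: it carries a token in and a credential out.
type ExampleCredentialRequest struct {
	metav1.TypeMeta
	Spec   struct{ Token string }
	Status struct{ ClientCertificatePEM, ClientKeyPEM string }
}

// DeepCopyObject satisfies runtime.Object; a value copy is enough for this
// pointer-free illustration type.
func (e *ExampleCredentialRequest) DeepCopyObject() runtime.Object {
	out := *e
	return &out
}

// REST sketches the storage: an aggregated endpoint that supports only the
// "create" verb, because a credential request is exchanged, never stored.
type REST struct {
	authenticateToken func(ctx context.Context, token string) (username string, err error)
	issueClientCert   func(username string) (certPEM, keyPEM string, err error)
}

var _ rest.Creater = (*REST)(nil)

func (*REST) New() runtime.Object { return &ExampleCredentialRequest{} }

func (r *REST) Create(ctx context.Context, obj runtime.Object, _ rest.ValidateObjectFunc, _ *metav1.CreateOptions) (runtime.Object, error) {
	req, ok := obj.(*ExampleCredentialRequest)
	if !ok {
		return nil, fmt.Errorf("unexpected object type %T", obj)
	}
	// Step 1: pass the submitted token to the external authenticator.
	username, err := r.authenticateToken(ctx, req.Spec.Token)
	if err != nil {
		return nil, err // the real endpoint's error behavior may differ
	}
	// Step 2: mint a short-lived mTLS client certificate for that user.
	cert, key, err := r.issueClientCert(username)
	if err != nil {
		return nil, err
	}
	out := &ExampleCredentialRequest{}
	out.Status.ClientCertificatePEM, out.Status.ClientKeyPEM = cert, key
	return out, nil
}
```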
The Concierge may also run an impersonation proxy service. This is not an aggregated API endpoint, so it needs to be
exposed outside the cluster as a Service. When operating in this mode, a client's kubeconfig causes the client to
make all of its Kubernetes API requests to the impersonation proxy endpoint instead of the real API server. The impersonation
proxy then authenticates the user and calls the real Kubernetes API on their behalf.
Those calls are made as a service account, using impersonation to assume the identity of the end user.
The code reuses as much code from Kubernetes itself as possible, so that it behaves as closely as possible
to the real API server from the client's point of view. It can be found in
[internal/concierge/impersonator](https://github.com/vmware-tanzu/pinniped/tree/main/internal/concierge/impersonator).
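Here is a hedged sketch of the core idea using the standard Kubernetes impersonation headers. The `authenticate` hook and the token plumbing are invented stand-ins, and the real proxy is considerably more careful (for example, it must configure TLS to the API server and sanitize every impersonation header a client might send).

```go
package example

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

// newImpersonationProxy forwards requests to the real API server, swapping the
// caller's credentials for the proxy's own service account credentials plus
// impersonation headers naming the authenticated end user.
func newImpersonationProxy(
	apiServer *url.URL, // the real Kubernetes API server
	serviceAccountToken string,
	authenticate func(*http.Request) (user string, groups []string, err error), // invented hook
) http.Handler {
	proxy := httputil.NewSingleHostReverseProxy(apiServer)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		user, groups, err := authenticate(r)
		if err != nil {
			http.Error(w, "invalid credentials", http.StatusUnauthorized)
			return
		}
		// Call the real API server as the service account...
		r.Header.Set("Authorization", "Bearer "+serviceAccountToken)
		// ...while impersonating the end user. Set overwrites anything the
		// client sent, and the group header is cleared before re-adding.
		r.Header.Set("Impersonate-User", user)
		r.Header.Del("Impersonate-Group")
		for _, g := range groups {
			r.Header.Add("Impersonate-Group", g)
		}
		proxy.ServeHTTP(w, r)
	})
}
```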
## Supervisor API endpoints
The Supervisor hosts the following web endpoints under the path of each configured issuer.
A sketch of using the discovery endpoint to find the others appears after this list.
- `<issuer_path>/.well-known/openid-configuration` is the standard OIDC discovery endpoint, which can be used to discover all the other endpoints listed here.
See [internal/oidc/discovery/discovery_handler.go](https://github.com/vmware-tanzu/pinniped/blob/main/internal/oidc/discovery/discovery_handler.go).
- `<issuer_path>/jwks.json` is the standard OIDC JWKS discovery endpoint.
See [internal/oidc/jwks/jwks_handler.go](https://github.com/vmware-tanzu/pinniped/blob/main/internal/oidc/jwks/jwks_handler.go).
- `<issuer_path>/oauth2/authorize` is the standard OIDC authorize endpoint.
See [internal/oidc/auth/auth_handler.go](https://github.com/vmware-tanzu/pinniped/blob/main/internal/oidc/auth/auth_handler.go).
- `<issuer_path>/oauth2/token` is the standard OIDC token endpoint.
See [internal/oidc/token/token_handler.go](https://github.com/vmware-tanzu/pinniped/blob/main/internal/oidc/token/token_handler.go).
- `<issuer_path>/callback` is a special endpoint that is used as the redirect URL when performing an OIDC authcode flow against an upstream OIDC identity provider as configured by an OIDCIdentityProvider custom resource.
See [internal/oidc/callback/callback_handler.go](https://github.com/vmware-tanzu/pinniped/blob/main/internal/oidc/callback/callback_handler.go).
- `<issuer_path>/v1alpha1/pinniped_identity_providers` is a custom discovery endpoint for clients to learn about available upstream identity providers.
See [internal/oidc/idpdiscovery/idp_discovery_handler.go](https://github.com/vmware-tanzu/pinniped/blob/main/internal/oidc/idpdiscovery/idp_discovery_handler.go).
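Because all of these endpoints hang off the issuer, a client needs only the issuer URL; everything else is discoverable. The following small sketch shows a client using the discovery endpoint, with field names taken from the OIDC Discovery 1.0 specification.

```go
package example

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// discoveryDocument holds a few standard fields from the OIDC discovery
// response; the JSON names come from the OIDC Discovery 1.0 spec.
type discoveryDocument struct {
	Issuer                string `json:"issuer"`
	AuthorizationEndpoint string `json:"authorization_endpoint"`
	TokenEndpoint         string `json:"token_endpoint"`
	JWKSURI               string `json:"jwks_uri"`
}

// fetchDiscovery locates the Supervisor's other endpoints starting from only
// the configured issuer URL.
func fetchDiscovery(issuer string) (*discoveryDocument, error) {
	resp, err := http.Get(issuer + "/.well-known/openid-configuration")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status: %s", resp.Status)
	}
	var doc discoveryDocument
	if err := json.NewDecoder(resp.Body).Decode(&doc); err != nil {
		return nil, err
	}
	return &doc, nil
}
```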