Deploying the Pinniped Supervisor

What is the Pinniped Supervisor?

The Pinniped Supervisor app is a component of the Pinniped OIDC and Cluster Federation solutions. It can be deployed when those features are needed.

Installing the Latest Version with Default Options

kubectl apply -f https://github.com/vmware-tanzu/pinniped/releases/latest/download/install-supervisor.yaml
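
To confirm that the app started, you can list its pods. This is a minimal check, assuming the install YAML created the default pinniped-supervisor namespace:

kubectl get pods --namespace pinniped-supervisor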

Installing an Older Version with Default Options

Choose your preferred release version number and use it to replace the version number in the URL below.

# Replace v0.3.0 with your preferred version in the URL below
kubectl apply -f https://github.com/vmware-tanzu/pinniped/releases/download/v0.3.0/install-supervisor.yaml

Installing with Custom Options

Creating your own deployment YAML file requires ytt from Carvel to template the YAML files in the deploy/supervisor directory. Either install ytt or use the container image from Docker Hub.

  1. git clone this repo and git checkout the version tag of the release that you would like to deploy.
  2. The configuration options are in deploy/supervisor/values.yaml. Fill in the values in that file, or override them using additional ytt command-line options in the command below (see the example after these steps). Use the release version tag as the image_tag value.
  3. In a terminal, cd to this deploy/supervisor directory.
  4. To generate the final YAML files, run ytt --file .
  5. Deploy the generated YAML using your preferred deployment tool, such as kubectl or kapp. For example: ytt --file . | kapp deploy --yes --app pinniped-supervisor --diff-changes --file -
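
As a sketch of the steps above combined, you can override image_tag on the ytt command line instead of editing values.yaml (any other value name defined in values.yaml can be overridden the same way with --data-value):

# Replace v0.3.0 with the release version tag that you checked out
ytt --file . --data-value image_tag=v0.3.0 | kapp deploy --yes --app pinniped-supervisor --diff-changes --file -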

Configuring After Installing

Exposing the Supervisor App as a Service

Create a Service to make the app available outside of the cluster. If you installed using ytt then you can use the related service_*_port options from deploy/supervisor/values.yaml to create a Service, instead of creating one manually as shown below.

Using a LoadBalancer Service

Using a LoadBalancer Service is probably the easiest way to expose the Supervisor app, if your cluster supports LoadBalancer Services. For example:

apiVersion: v1
kind: Service
metadata:
  name: pinniped-supervisor-loadbalancer
  namespace: pinniped-supervisor
  labels:
    app: pinniped-supervisor
spec:
  type: LoadBalancer
  selector:
    app: pinniped-supervisor
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
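
After the Service above is created, your cloud provider assigns an external IP or hostname, which appears in the Service's status. A quick way to look it up, assuming the Service name and namespace from the example above:

kubectl get service pinniped-supervisor-loadbalancer --namespace pinniped-supervisor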

Using a NodePort Service

A NodePort Service exposes the app as a port on the nodes of the cluster. This is convenient for use with kind clusters, because kind can be configured to expose node ports as ports on the host machine (see the sketch after the example below).

For example:

apiVersion: v1
kind: Service
metadata:
  name: pinniped-supervisor-nodeport
  namespace: pinniped-supervisor
  labels:
    app: pinniped-supervisor
spec:
  type: NodePort
  selector:
    app: pinniped-supervisor
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 31234
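
Note that with kind, node ports are only reachable from the host machine if the cluster was created with port mappings. A minimal kind cluster config sketch, assuming kind's v1alpha4 config format, which maps node port 31234 from the Service above to port 80 on the host:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 31234
    hostPort: 80
    protocol: TCP

Create the cluster with kind create cluster --config followed by the path to a file containing this config, before installing the Supervisor.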

Configuring the Supervisor to Act as an OIDC Provider

The Supervisor can be configured as an OIDC provider by creating OIDCProviderConfig resources in the same namespace where the Supervisor app was installed. For example:

apiVersion: config.pinniped.dev/v1alpha1
kind: OIDCProviderConfig
metadata:
  name: my-provider
  namespace: pinniped-supervisor
spec:
  issuer: https://my-issuer.example.com
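
Once the issuer URL is reachable (for example, via one of the Services described above), the Supervisor should serve a standard OIDC discovery document under that issuer. As a quick sanity check, assuming https://my-issuer.example.com routes to the Supervisor:

curl https://my-issuer.example.com/.well-known/openid-configuration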