---
title: "Pinniped v0.25.0: With External Certificate Management for the Impersonation Proxy and more"
slug: v0-25-0-external-cert-mgmt-impersonation-proxy
date: 2023-08-09
author: Joshua T. Casey, Ryan Richard, Benjamin Petersen
image: https://images.unsplash.com/photo-1618075254460-429d47b887c7?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=2148&q=80
excerpt: With v0.25.0 you get external certificate management for the impersonation proxy, easier scheduling of the kube-cert-agent, and more
tags:
  - release
  - kubernetes
  - pki
  - pinniped
  - tls
  - mtls
  - kind
  - contour
  - cert-manager
---

*Friendly seal. Photo by Karlheinz Eckhardt on Unsplash.*

With Pinniped v0.25.0 you get the ability to configure an externally generated certificate for the Pinniped Concierge's impersonation proxy to serve TLS.

To read more on this feature, and the design decisions behind it, see the proposal. To read more about the impersonation proxy, see the docs.

To see the feature in practice on a local kind cluster, follow the instructions below. They walk through setting up mTLS between your local client (kubectl and the pinniped CLI) and the impersonation proxy.

The setup: a kind cluster, Contour as an ingress in front of the impersonation proxy, and cert-manager to generate a TLS serving cert. The versions used are listed below (the commands after this list can be used to sanity-check your local tools).

- Docker Desktop v1.20.1
- kind v0.20.0
- Contour v1.25.2
- Pinniped v0.25.0
- pinniped CLI v0.25.0 (https://pinniped.dev/docs/howto/install-cli/)
- cert-manager v1.12.3
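
A quick way to check the locally installed tools before starting (a sketch; Contour and cert-manager versions are only visible on the cluster once they are installed):

$ docker version
$ kind version
$ pinniped version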

Set up kind to run with Contour, using the example kind cluster configuration file provided by Contour.

$ wget https://raw.githubusercontent.com/projectcontour/contour/main/examples/kind/kind-expose-port.yaml
$ kind create cluster \
    --config kind-expose-port.yaml \
    --name kind-with-contour \
    --kubeconfig kind-with-contour.kubeconfig.yaml

Install Contour (see https://projectcontour.io/getting-started/ for more details).

# From https://projectcontour.io/getting-started/
$ kubectl apply \
    --filename https://projectcontour.io/quickstart/contour.yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml
# Verify that the Contour pods are ready
$ kubectl get pods \
    --namespace projectcontour \
    --output wide \
    --kubeconfig kind-with-contour.kubeconfig.yaml

Install Pinniped's local-user-authenticator and add a sample user (see https://pinniped.dev/docs/tutorials/concierge-only-demo/ for more details).

# Install Pinniped's local-user-authenticator
$ kubectl apply \
    --filename https://get.pinniped.dev/v0.25.0/install-local-user-authenticator.yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml
# Create a local user "pinny" with password "password123" and group "group-for-mtls".
# Each secret in this namespace acts like a user definition.
$ kubectl create secret generic pinny \
    --namespace local-user-authenticator \
    --from-literal=groups=group-for-mtls \
    --from-literal=passwordHash=$(htpasswd -nbBC 10 x password123 | sed -e "s/^x://") \
    --kubeconfig kind-with-contour.kubeconfig.yaml
# We'll need the CA bundle of the local-user-authenticator service to configure the Concierge's WebhookAuthenticator.
# Make sure that this next command actually prints the CA certificate; the serving certificate secret can take a few seconds to be generated.
$ kubectl get secret local-user-authenticator-tls-serving-certificate \
    --namespace local-user-authenticator \
    --output jsonpath={.data.caCertificate} \
    --kubeconfig kind-with-contour.kubeconfig.yaml \
    | tee local-user-authenticator-ca.pem.b64

Install Pinniped's Concierge:

$ kubectl apply \
    --filename https://get.pinniped.dev/v0.25.0/install-pinniped-concierge-crds.yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml

$ kubectl apply \
    --filename https://get.pinniped.dev/v0.25.0/install-pinniped-concierge-resources.yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml
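
Before continuing, it is worth confirming that the Concierge pods are running (the pinniped-concierge namespace comes from the default install manifests):

$ kubectl get pods \
    --namespace pinniped-concierge \
    --kubeconfig kind-with-contour.kubeconfig.yaml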

Install cert-manager:

$ kubectl apply \
    --filename https://github.com/cert-manager/cert-manager/releases/download/v1.12.3/cert-manager.yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml
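
As with Contour, wait until the cert-manager pods are ready before moving on:

$ kubectl get pods \
    --namespace cert-manager \
    --output wide \
    --kubeconfig kind-with-contour.kubeconfig.yaml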

Configure a cert-manager certificate for the impersonation proxy to serve TLS.

Note that this section bootstraps a CA Issuer that issues leaf certificates for serving TLS. For more information, see the cert-manager docs. The Certificate named impersonation-serving-cert will generate the leaf certificate used by the impersonation proxy to serve TLS.

$ cat << EOF > self-signed-cert.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-cluster-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-selfsigned-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: my-selfsigned-ca
  secretName: self-signed-ca-for-kind-testing
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-cluster-issuer
    kind: ClusterIssuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: my-ca-issuer
spec:
  ca:
    secretName: self-signed-ca-for-kind-testing
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: impersonation-serving-cert
  namespace: pinniped-concierge
spec:
  secretName: impersonation-proxy-tls-serving-cert

  duration: 2160h # 90d
  renewBefore: 360h # 15d
  subject:
    organizations:
    - Pinniped
  isCA: false
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
  - server auth
  dnsNames:
  - impersonation-proxy-mtls.local
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
    group: cert-manager.io

EOF

$ kubectl apply \
    --filename self-signed-cert.yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml
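
The Certificate resources can take a few seconds to be issued. If you want to block until they are ready before fetching the secrets, kubectl wait works well here (a sketch; the resource names match the YAML above):

$ kubectl wait certificate my-selfsigned-ca \
    --namespace cert-manager \
    --for=condition=Ready \
    --kubeconfig kind-with-contour.kubeconfig.yaml
$ kubectl wait certificate impersonation-serving-cert \
    --namespace pinniped-concierge \
    --for=condition=Ready \
    --kubeconfig kind-with-contour.kubeconfig.yaml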

Download the root (self-signed) CA's certificate so that it can be advertised as the CA bundle for the Concierge impersonation proxy:

$ kubectl get secret self-signed-ca-for-kind-testing \
    --namespace cert-manager \
    --output jsonpath="{.data.ca\.crt}" \
    --kubeconfig kind-with-contour.kubeconfig.yaml \
    | tee self-signed-ca-for-kind-testing.pem.b64
    
# Tip: Put the contents of self-signed-ca-for-kind-testing.pem.b64 into your copy buffer for a later step!

Now update the CredentialIssuer to use the impersonation proxy (which is disabled on kind by default):

$ kubectl edit credentialissuer pinniped-concierge-config \
    --kubeconfig kind-with-contour.kubeconfig.yaml
# Make sure that the spec has the following values:
...
  spec:
    impersonationProxy:
      externalEndpoint: impersonation-proxy-mtls.local
      mode: enabled
      service:
        type: ClusterIP
      tls:
        certificateAuthorityData: # paste the contents of the file self-signed-ca-for-kind-testing.pem.b64
        secretName: impersonation-proxy-tls-serving-cert
...
# Now save and close the text editor
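
Alternatively, if you prefer a non-interactive command, the same spec fields can be applied with a merge patch (a sketch, assuming your CA bundle is still in self-signed-ca-for-kind-testing.pem.b64):

$ kubectl patch credentialissuer pinniped-concierge-config \
    --type merge \
    --patch "{\"spec\":{\"impersonationProxy\":{\"mode\":\"enabled\",\"externalEndpoint\":\"impersonation-proxy-mtls.local\",\"service\":{\"type\":\"ClusterIP\"},\"tls\":{\"certificateAuthorityData\":\"$(cat self-signed-ca-for-kind-testing.pem.b64)\",\"secretName\":\"impersonation-proxy-tls-serving-cert\"}}}}" \
    --kubeconfig kind-with-contour.kubeconfig.yaml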

# Confirm that the CredentialIssuer looks as expected
$ kubectl get credentialissuers pinniped-concierge-config \
    --output yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml
...
  spec:
    impersonationProxy:
      externalEndpoint: impersonation-proxy-mtls.local
      mode: enabled
      service:
        annotations:
          # Ignore any annotations
        type: ClusterIP
      tls:
        certificateAuthorityData: LS0tLUJFR0l..........
        secretName: impersonation-proxy-tls-serving-cert
...

# Confirm that the ClusterIP service for the impersonation proxy was automatically created (may take a minute)
$ kubectl get service pinniped-concierge-impersonation-proxy-cluster-ip \
    --namespace pinniped-concierge \
    --output yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml

# Configure a webhook authenticator to tell Concierge to validate static tokens using the installed local-user-authenticator
$ cat << EOF > concierge.webhookauthenticator.yaml
apiVersion: authentication.concierge.pinniped.dev/v1alpha1
kind: WebhookAuthenticator
metadata:
  name: local-user-authenticator
spec:
  endpoint: https://local-user-authenticator.local-user-authenticator.svc/authenticate
  tls:
    certificateAuthorityData: $(cat local-user-authenticator-ca.pem.b64)
EOF

# Create the webhook authenticator
$ kubectl apply \
    --filename concierge.webhookauthenticator.yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml

Now deploy a Contour HTTPProxy ingress that fronts the ClusterIP service for the impersonation proxy.

We need to use TLS passthrough in this case, so that the client (kubectl and the pinniped CLI) can establish TLS directly with the impersonation proxy, and so that client certs used for mTLS will be sent to the impersonation proxy.

Note in particular the spec.tcpproxy block, which is different from the typical spec.routes block. spec.tcpproxy is required when using spec.virtualhost.tls.passthrough: true.

See https://projectcontour.io/docs/1.25/config/tls-termination/#tls-session-passthrough for more details.

$ cat << EOF > contour-ingress-impersonation-proxy.yaml
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: impersonation-proxy
  namespace: pinniped-concierge
spec:
  virtualhost:
    fqdn: impersonation-proxy-mtls.local
    tls:
      passthrough: true
  tcpproxy:
    services:
    - name: pinniped-concierge-impersonation-proxy-cluster-ip
      port: 443
EOF

$ kubectl apply \
    --filename contour-ingress-impersonation-proxy.yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml
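
You can check that Contour accepted the configuration; the HTTPProxy should report a valid status (a quick sanity check, not a required step):

$ kubectl get httpproxy impersonation-proxy \
    --namespace pinniped-concierge \
    --kubeconfig kind-with-contour.kubeconfig.yaml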

Now generate the Pinniped kubeconfig so that you can perform mTLS with the impersonation proxy.

Note that using --static-token embeds those credentials into your kubeconfig. Never use local-user-authenticator in production.

# add 127.0.0.1 impersonation-proxy-mtls.local to your /etc/hosts!
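# For example (an assumption about your workstation; requires sudo to append to /etc/hosts):
$ echo "127.0.0.1 impersonation-proxy-mtls.local" | sudo tee -a /etc/hosts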
$ pinniped get kubeconfig \
    --static-token "pinny:password123" \
    --concierge-authenticator-type webhook \
    --concierge-authenticator-name local-user-authenticator \
    --concierge-mode ImpersonationProxy \
    --kubeconfig kind-with-contour.kubeconfig.yaml \
    > pinniped-kubeconfig.yaml

Now perform an action as user pinny!

$ kubectl get pods -A \
    --kubeconfig pinniped-kubeconfig.yaml
Error from server (Forbidden): pods is forbidden: User "pinny" cannot list resource "pods" in API group "" at the cluster scope: decision made by impersonation-proxy.concierge.pinniped.dev

This results in an error because the cluster does not have any RoleBindings or ClusterRoleBindings that allow your user pinny or the group group-for-mtls to perform any actions on the cluster. Let's make a ClusterRoleBinding that grants this group cluster-admin privileges.

# Perform this as the cluster admin using the kind kubeconfig
$ kubectl create clusterrolebinding mtls-admins \
    --clusterrole=cluster-admin \
    --group=group-for-mtls \
    --kubeconfig kind-with-contour.kubeconfig.yaml
# Now try again with the Pinniped kubeconfig
$ kubectl get pods -A \
    --kubeconfig pinniped-kubeconfig.yaml
NAMESPACE                  NAME                                                      READY   STATUS      RESTARTS       AGE
pinniped-concierge         pinniped-concierge-f4c78b674-bt6zl                        1/1     Running     0              3h36m

Congratulations, you have successfully performed mTLS authentication between your local client (kubectl, using the pinniped CLI) and the impersonation proxy inside the cluster.

To verify that your username and groups are visible to Kubernetes, run the pinniped whoami command.

$ pinniped whoami \
    --kubeconfig pinniped-kubeconfig.yaml

Finally, to verify that the generated kubeconfig pinniped-kubeconfig.yaml contains the contents of self-signed-ca-for-kind-testing.pem.b64 as the cluster's CA bundle, print both files and compare them.
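
For example (a sketch; the kubeconfig stores the bundle in the standard certificate-authority-data field of the cluster entry):

$ cat self-signed-ca-for-kind-testing.pem.b64
$ grep certificate-authority-data pinniped-kubeconfig.yaml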