title: Pinniped v0.25.0: With External Certificate Management for the Impersonation Proxy and more!
slug: v0-25-0-external-cert-mgmt-impersonation-proxy
date: 2023-08-09
author: Joshua T. Casey
image: https://images.unsplash.com/photo-1618075254460-429d47b887c7?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=2148&q=80
excerpt: With v0.25.0 you get external certificate management for the impersonation proxy, easier scheduling of the kube-cert-agent, and more

Photo by Karlheinz Eckhardt on Unsplash
With Pinniped v0.25.0 you get the ability to configure an externally-generated certificate for Pinniped Concierge's impersonation proxy to serve TLS. The impersonation proxy is a component within Pinniped that allows the project to support many types of clusters, such as Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), and Azure Kubernetes Service (AKS).
To read more on this feature, and the design decisions behind it, see the proposal. To read more about the impersonation proxy, see the docs.
To see the feature in practice on a local kind cluster, follow these instructions. This will perform mTLS between your local client (kubectl and the pinniped CLI) and the impersonation proxy.
The setup: We will be using a kind cluster, Contour as an ingress to the impersonation proxy, and cert-manager to generate a TLS serving cert. The versions used in this walkthrough:
- Docker Desktop v1.20.1
- kind v0.20.0
- Contour v1.25.2
- Pinniped v0.25.0
- pinniped CLI v0.25.0 (https://pinniped.dev/docs/howto/install-cli/)
- cert-manager v1.12.3
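Before starting, you may want to confirm the tool versions you have installed. A quick check, assuming each binary is on your PATH:
$ docker version --format '{{.Server.Version}}'
$ kind version
$ pinniped version
$ kubectl version --client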
Set up kind to run with Contour, using the example kind cluster configuration file provided by Contour.
$ wget https://raw.githubusercontent.com/projectcontour/contour/main/examples/kind/kind-expose-port.yaml
# the --kubeconfig flag on the "create cluster" command will automatically export the kubeconfig file for us
$ kind create cluster \
--config kind-expose-port.yaml \
--name kind-with-contour \
--kubeconfig kind-with-contour.kubeconfig.yaml
Now we will install Contour (see https://projectcontour.io/getting-started/ for more details). Contour provides our kind cluster with an ingress controller. We will later deploy a Contour HTTPProxy to route requests for our chosen hostname to the impersonation proxy.
# From https://projectcontour.io/getting-started/
$ kubectl apply \
--filename https://projectcontour.io/quickstart/contour.yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml
# Verify that the Contour pods are ready
$ kubectl get pods \
--namespace projectcontour \
--output wide \
--kubeconfig kind-with-contour.kubeconfig.yaml
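Instead of polling the pods manually, you can also wait for Contour to report readiness. A sketch, assuming the quickstart manifest's deployment is named contour:
$ kubectl wait deployment contour \
--namespace projectcontour \
--for=condition=Available \
--timeout=120s \
--kubeconfig kind-with-contour.kubeconfig.yaml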
Pinniped's local-user-authenticator will act as a dummy identity provider for our example. This component is not for production use, but it is sufficient to exercise the new impersonation proxy feature. Install Pinniped's local-user-authenticator and add a sample user (see https://pinniped.dev/docs/tutorials/concierge-only-demo/ for more details).
# Install Pinniped's local-user-authenticator
$ kubectl apply \
--filename https://get.pinniped.dev/v0.25.0/install-local-user-authenticator.yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml
# Create a local user "pinny" with password "password123" and group "group-for-mtls".
# Each secret in this namespace acts like a user definition.
$ kubectl create secret generic pinny \
--namespace local-user-authenticator \
--from-literal=groups=group-for-mtls \
--from-literal=passwordHash=$(htpasswd -nbBC 10 x password123 | sed -e "s/^x://") \
--kubeconfig kind-with-contour.kubeconfig.yaml
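To double-check what was stored, you can read the bcrypt hash back out of the secret (on older macOS, base64 may need -D instead of -d):
$ kubectl get secret pinny \
--namespace local-user-authenticator \
--output jsonpath={.data.passwordHash} \
--kubeconfig kind-with-contour.kubeconfig.yaml \
| base64 -d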
# We'll need the CA bundle of the local-user-authenticator service to configure the Concierge's WebhookAuthenticator.
# Make sure this next command prints the CA certificate; the TLS serving secret can take a few seconds to generate.
$ kubectl get secret local-user-authenticator-tls-serving-certificate \
--namespace local-user-authenticator \
--output jsonpath={.data.caCertificate} \
--kubeconfig kind-with-contour.kubeconfig.yaml \
| tee local-user-authenticator-ca.pem.b64
In this example, we are only interacting with the Pinniped Concierge. The Supervisor is not in use, since we are not integrating with a real external OIDC identity provider. Install Pinniped's Concierge:
$ kubectl apply \
--filename https://get.pinniped.dev/v0.25.0/install-pinniped-concierge-crds.yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml
$ kubectl apply \
--filename https://get.pinniped.dev/v0.25.0/install-pinniped-concierge-resources.yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml
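Before moving on, confirm that the Concierge pods are running in the pinniped-concierge namespace created by the install manifest:
$ kubectl get pods \
--namespace pinniped-concierge \
--kubeconfig kind-with-contour.kubeconfig.yaml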
To handle X.509 certificate management for us, we will install cert-manager. For the purposes of this exercise, we will use cert-manager to generate our CA certificates as well as our TLS serving certificates. Install cert-manager:
$ kubectl apply \
--filename https://github.com/cert-manager/cert-manager/releases/download/v1.12.3/cert-manager.yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml
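cert-manager's webhook must be ready before it will process Certificate resources. One way to wait for that is to use kubectl wait on all three cert-manager deployments (cert-manager, cert-manager-cainjector, and cert-manager-webhook):
$ kubectl wait deployment \
--all \
--namespace cert-manager \
--for=condition=Available \
--timeout=120s \
--kubeconfig kind-with-contour.kubeconfig.yaml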
For this demonstration, we will be using cert-manager to simulate our own Public Key Infrastructure (PKI). We will create the appropriate CA certificates and TLS serving certificates for the impersonation proxy to serve TLS. For more information about using cert-manager to achieve this, see the cert-manager docs.
In summary, we will do the following:
- Create two ClusterIssuer resources, one named selfsigned-cluster-issuer and another named my-ca-issuer.
- Use the ClusterIssuer named selfsigned-cluster-issuer to create a Certificate called my-selfsigned-ca, which will reference a Secret named self-signed-ca-for-kind-testing where the actual CA certificate data will be stored.
- Later, retrieve the Secret called self-signed-ca-for-kind-testing so that we can add the CA to the Pinniped Concierge's CredentialIssuer resource, where it can be advertised and used to verify TLS serving certificates.
- Create the ClusterIssuer called my-ca-issuer, which references the Certificate called my-selfsigned-ca via its Secret named self-signed-ca-for-kind-testing. This will allow us to use the CA to sign TLS serving certificates.
- Use the ClusterIssuer called my-ca-issuer to generate a Certificate that will be a TLS serving certificate called impersonation-serving-cert. As before, the actual certificate data will be stored in a Kubernetes Secret, which we will name impersonation-proxy-tls-serving-cert.
- Finally, update the Pinniped Concierge's CredentialIssuer resource to use the TLS serving certificate stored in the Secret called impersonation-proxy-tls-serving-cert.
If all goes well, the impersonation proxy endpoints will be served with a TLS serving certificate that can be validated using the CA certificate that signed it. That's a lot! Fortunately, the majority of the work is done painlessly via the following commands:
$ cat << EOF > self-signed-cert.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-cluster-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-selfsigned-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: my-selfsigned-ca
  secretName: self-signed-ca-for-kind-testing
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-cluster-issuer
    kind: ClusterIssuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: my-ca-issuer
spec:
  ca:
    secretName: self-signed-ca-for-kind-testing
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: impersonation-serving-cert
  namespace: pinniped-concierge
spec:
  secretName: impersonation-proxy-tls-serving-cert
  duration: 2160h # 90d
  renewBefore: 360h # 15d
  subject:
    organizations:
      - Pinniped
  isCA: false
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
    - server auth
  dnsNames:
    - impersonation-proxy-mtls.local
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
    group: cert-manager.io
EOF
$ kubectl apply \
--filename self-signed-cert.yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml
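cert-manager reports a Ready condition on each Certificate resource once issuance succeeds, which kubectl wait can check:
$ kubectl wait certificate my-selfsigned-ca \
--namespace cert-manager \
--for=condition=Ready \
--timeout=60s \
--kubeconfig kind-with-contour.kubeconfig.yaml
$ kubectl wait certificate impersonation-serving-cert \
--namespace pinniped-concierge \
--for=condition=Ready \
--timeout=60s \
--kubeconfig kind-with-contour.kubeconfig.yaml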
Download the root (self-signed) CA's certificate. We will add it to the Pinniped Concierge's CredentialIssuer resource in order to configure the impersonation proxy to advertise this certificate as its CA.
$ kubectl get secret self-signed-ca-for-kind-testing \
--namespace cert-manager \
--output jsonpath="{.data.ca\.crt}" \
--kubeconfig kind-with-contour.kubeconfig.yaml \
| tee self-signed-ca-for-kind-testing.pem.b64
# Tip: Put the contents of self-signed-ca-for-kind-testing.pem.b64 into your copy buffer for a later step!
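If you would like to inspect the CA before using it, decode it and print the subject (on older macOS, base64 may need -D instead of -d):
$ base64 -d < self-signed-ca-for-kind-testing.pem.b64 \
| openssl x509 -noout -subject -enddate
# the subject should show CN = my-selfsigned-ca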
The CredentialIssuer resource called pinniped-concierge-config already exists, so we need to edit it. Kind clusters do not need to use the impersonation proxy by default (it is designed for public cloud providers), so we will make several changes to this resource:
- Set spec.impersonationProxy.mode to enabled.
- Set spec.impersonationProxy.externalEndpoint to impersonation-proxy-mtls.local, the hostname we will route to the impersonation proxy via Contour.
- Set spec.impersonationProxy.service.type to ClusterIP, since kind clusters do not provide LoadBalancer services out of the box.
- Set spec.impersonationProxy.tls.certificateAuthorityData to the CA certificate stored in the Secret called self-signed-ca-for-kind-testing (which we previously recorded in the file self-signed-ca-for-kind-testing.pem.b64).
$ kubectl edit credentialissuer pinniped-concierge-config \
--kubeconfig kind-with-contour.kubeconfig.yaml
Make sure that the spec has the following values:
spec:
  impersonationProxy:
    externalEndpoint: impersonation-proxy-mtls.local
    mode: enabled
    service:
      type: ClusterIP
    tls:
      certificateAuthorityData: # paste the contents of the file self-signed-ca-for-kind-testing.pem.b64
      secretName: impersonation-proxy-tls-serving-cert
Then save and close the text editor. Once saved, get the resource again and verify that the contents are correct:
# Confirm that the CredentialIssuer looks as expected
$ kubectl get credentialissuers pinniped-concierge-config \
--output yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml
Ensure that it contains the following:
spec:
  impersonationProxy:
    externalEndpoint: impersonation-proxy-mtls.local
    mode: enabled
    service:
      annotations:
        # Ignore any annotations
      type: ClusterIP
    tls:
      certificateAuthorityData: LS0tLUJFR0l..........
      secretName: impersonation-proxy-tls-serving-cert
status:
  strategies:
    # this strategy should be automatically updated with the configured
    # spec.tls.certificateAuthorityData from the previous step
    - frontend:
        impersonationProxyInfo:
          certificateAuthorityData: LS0tLUJFR0l..........
In the CredentialIssuer status.strategies list, there should be a frontend strategy with an impersonationProxyInfo.certificateAuthorityData value that matches the configured spec.tls.certificateAuthorityData. This is how the CredentialIssuer advertises its CA bundle.
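If you would rather script this change than use kubectl edit, a single merge patch along these lines should produce the same spec (a sketch, assuming the CA file recorded earlier):
$ kubectl patch credentialissuer pinniped-concierge-config \
--type merge \
--patch "{\"spec\":{\"impersonationProxy\":{\"externalEndpoint\":\"impersonation-proxy-mtls.local\",\"mode\":\"enabled\",\"service\":{\"type\":\"ClusterIP\"},\"tls\":{\"certificateAuthorityData\":\"$(cat self-signed-ca-for-kind-testing.pem.b64)\",\"secretName\":\"impersonation-proxy-tls-serving-cert\"}}}}" \
--kubeconfig kind-with-contour.kubeconfig.yaml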
Next, we review our Service configuration.
# Confirm that the ClusterIP service for the impersonation proxy was automatically created (may take a minute)
$ kubectl get service pinniped-concierge-impersonation-proxy-cluster-ip \
--namespace pinniped-concierge \
--output yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml
Configure a webhook authenticator to tell the Concierge to validate static tokens using the installed local-user-authenticator. When we installed local-user-authenticator, it created a service called local-user-authenticator in the local-user-authenticator namespace. We previously retrieved the Secret named local-user-authenticator-tls-serving-certificate so that we could configure this WebhookAuthenticator to trust that certificate. Note that we did not generate this certificate via cert-manager; it is still a self-signed certificate created by Pinniped.
The endpoint here is referenced via Kubernetes DNS in the format <service-name>.<namespace>.svc, targeting the /authenticate endpoint of the local-user-authenticator. We will be using https, of course.
# Configure a webhook authenticator to tell Concierge to validate static tokens using the installed local-user-authenticator
$ cat << EOF > concierge.webhookauthenticator.yaml
apiVersion: authentication.concierge.pinniped.dev/v1alpha1
kind: WebhookAuthenticator
metadata:
  name: local-user-authenticator
spec:
  endpoint: https://local-user-authenticator.local-user-authenticator.svc/authenticate
  tls:
    certificateAuthorityData: $(cat local-user-authenticator-ca.pem.b64)
EOF
# Create the webhook authenticator
$ kubectl apply \
--filename concierge.webhookauthenticator.yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml
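Confirm that the WebhookAuthenticator was created (it is a cluster-scoped resource, so no namespace is needed):
$ kubectl get webhookauthenticator local-user-authenticator \
--output yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml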
Now deploy a Contour HTTPProxy ingress that fronts the ClusterIP service for the impersonation proxy.
We need to use TLS passthrough in this case, so that the client (kubectl and the pinniped CLI) can establish TLS directly with the impersonation proxy, and so that client certs used for mTLS will be sent to the impersonation proxy.
Note in particular the spec.tcpproxy block, which is different from the typical spec.routes block. spec.tcpproxy is required when using spec.virtualhost.tls.passthrough: true.
See the Contour docs on TLS session passthrough for more details.
$ cat << EOF > contour-ingress-impersonation-proxy.yaml
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: impersonation-proxy
  namespace: pinniped-concierge
spec:
  virtualhost:
    fqdn: impersonation-proxy-mtls.local
    tls:
      passthrough: true
  tcpproxy:
    services:
      - name: pinniped-concierge-impersonation-proxy-cluster-ip
        port: 443
EOF
$ kubectl apply \
--filename contour-ingress-impersonation-proxy.yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml
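Contour records whether it accepted the configuration in the HTTPProxy status; the STATUS column should show valid:
$ kubectl get httpproxy impersonation-proxy \
--namespace pinniped-concierge \
--kubeconfig kind-with-contour.kubeconfig.yaml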
Now generate the Pinniped kubeconfig so that you can perform mTLS with the impersonation proxy.
Since we are interacting with a kind cluster, we need to ensure that HTTP requests are routed to the cluster.
In this example, we will edit the /etc/hosts file to resolve impersonation-proxy-mtls.local to localhost via 127.0.0.1.
##
# Host Database
127.0.0.1 impersonation-proxy-mtls.local
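With the hosts entry in place, you can check that the impersonation proxy presents the cert-manager-issued serving certificate. This assumes the kind config from earlier maps host port 443 through to Envoy; the -servername flag matters because Contour routes passthrough traffic by SNI:
$ openssl s_client \
-connect impersonation-proxy-mtls.local:443 \
-servername impersonation-proxy-mtls.local \
</dev/null 2>/dev/null \
| openssl x509 -noout -subject -issuer
# the issuer should show CN = my-selfsigned-ca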
Note that using a static token does embed those credentials into your kubeconfig, which is not suitable for production deployment. As we said before, we are using local-user-authenticator as a simple identity provider for illustrative purposes only. In a real production use case you would not use the --static-token flag, so that credentials are never embedded in your kubeconfig, an important security feature. Never use local-user-authenticator in production.
# be sure you added 127.0.0.1 impersonation-proxy-mtls.local to your /etc/hosts!
$ pinniped get kubeconfig \
--static-token "pinny:password123" \
--concierge-authenticator-type webhook \
--concierge-authenticator-name local-user-authenticator \
--concierge-mode ImpersonationProxy \
--kubeconfig kind-with-contour.kubeconfig.yaml \
> pinniped-kubeconfig.yaml
Now perform an action as user pinny!
$ kubectl get pods -A \
--kubeconfig pinniped-kubeconfig.yaml
Error from server (Forbidden): pods is forbidden: User "pinny" cannot list resource "pods" in API group "" at the cluster scope: decision made by impersonation-proxy.concierge.pinniped.dev
This results in an error because the cluster does not yet have any RoleBindings or ClusterRoleBindings that allow the user pinny or the group group-for-mtls to perform any actions on the cluster.
Let's create a ClusterRoleBinding that grants this group cluster admin privileges.
# Perform this as the cluster admin using the kind kubeconfig
$ kubectl create clusterrolebinding mtls-admins \
--clusterrole=cluster-admin \
--group=group-for-mtls \
--kubeconfig kind-with-contour.kubeconfig.yaml
# Now try again with the Pinniped kubeconfig
$ kubectl get pods -A \
--kubeconfig pinniped-kubeconfig.yaml
NAMESPACE NAME READY STATUS RESTARTS AGE
pinniped-concierge pinniped-concierge-f4c78b674-bt6zl 1/1 Running 0 3h36m
Congratulations, you have successfully performed mTLS authentication between your local client (kubectl, using the pinniped CLI) and the impersonation proxy inside the cluster.
To verify that your username and groups are visible to Kubernetes, run the pinniped whoami command.
$ pinniped whoami \
--kubeconfig pinniped-kubeconfig.yaml
Finally, verify the expected outcome. All three of the following values should be identical:
- View the CA embedded in your kubeconfig file:
cat pinniped-kubeconfig.yaml | yq ".clusters[0].cluster.certificate-authority-data"
- View the CA provided to the impersonation proxy:
kubectl get CredentialIssuer pinniped-concierge-config -o jsonpath="{.status.strategies[1].frontend.impersonationProxyInfo.certificateAuthorityData}"
- View the CA we stored in our local PEM file:
cat self-signed-ca-for-kind-testing.pem.b64
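As a quick scripted check that all three match, compare checksums (a sketch; tr strips newlines so that line wrapping does not affect the hash):
$ cat pinniped-kubeconfig.yaml | yq ".clusters[0].cluster.certificate-authority-data" | tr -d '\n' | shasum
$ kubectl get credentialissuer pinniped-concierge-config \
--output jsonpath="{.status.strategies[1].frontend.impersonationProxyInfo.certificateAuthorityData}" \
--kubeconfig kind-with-contour.kubeconfig.yaml | tr -d '\n' | shasum
$ cat self-signed-ca-for-kind-testing.pem.b64 | tr -d '\n' | shasum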