Merge pull request #1606 from vmware-tanzu/jtc/add-blog-post-for-v0.25.0
Add blog post for v0.25.0
This commit is contained in: commit c54933bf33
@ -118,7 +118,7 @@ We've provided examples of using [OpenLDAP]({{< ref "docs/howto/install-supervis
 and [JumpCloud]({{< ref "docs/howto/install-supervisor.md" >}}) as LDAP providers.
 Stay tuned for examples of using Active Directory.

-The `pinniped` CLI has also been enhanced to support LDAP authentication. Now when `pinnped get kubectl` sees
+The `pinniped` CLI has also been enhanced to support LDAP authentication. Now when `pinniped get kubectl` sees
 that your cluster's Concierge is configured to use a Supervisor which has an LDAPIdentityProvider, then it
 will emit the appropriate kubeconfig to enable LDAP logins. When that kubeconfig is used with `kubectl`,
 the Pinniped plugin will directly prompt the user on the CLI for their LDAP username and password and
@ -0,0 +1,425 @@
---
title: "Pinniped v0.25.0: With External Certificate Management for the Impersonation Proxy and more!"
slug: v0-25-0-external-cert-mgmt-impersonation-proxy
date: 2023-08-09
author: Joshua T. Casey
authors:
- Joshua T. Casey
- Benjamin A. Petersen
image: https://images.unsplash.com/photo-1618075254460-429d47b887c7?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=2148&q=80
excerpt: "With v0.25.0 you get external certificate management for the impersonation proxy, easier scheduling of the kube-cert-agent, and more"
tags: ['Joshua T. Casey', 'Ryan Richard', 'Benjamin A. Petersen', 'release', 'kubernetes', 'pki', 'pinniped', 'tls', 'mtls', 'kind', 'contour', 'cert-manager']
---

![Friendly seal](https://images.unsplash.com/photo-1618075254460-429d47b887c7?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=2148&q=80)

*Photo by [Karlheinz Eckhardt](https://unsplash.com/@karlheinz_eckhardt) on [Unsplash](https://unsplash.com/s/photos/seal)*

With Pinniped v0.25.0 you get the ability to configure an externally-generated certificate for the Pinniped Concierge's impersonation proxy to serve TLS. The
impersonation proxy is a component within Pinniped that allows the project to support many types of clusters, such as
[Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/), [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine),
and [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/overview/kubernetes-on-azure).

To read more about this feature, and the design decisions behind it, see the [proposal](https://github.com/vmware-tanzu/pinniped/tree/main/proposals/1547_impersonation-proxy-external-certs).
To read more about the impersonation proxy, see the [docs](https://pinniped.dev/docs/reference/supported-clusters/#background).

To see the feature in practice on a local kind cluster, follow these instructions.
This will perform mTLS between your local client (kubectl and the pinniped CLI) and the impersonation proxy.

The setup: we will use a kind cluster, Contour as an ingress to the impersonation proxy, and cert-manager to generate a TLS serving cert.
The following versions were used:

```shell
Docker Desktop v1.20.1
Kind v0.20.0
Contour v1.25.2
Pinniped v0.25.0
pinniped CLI v0.25.0 (https://pinniped.dev/docs/howto/install-cli/)
cert-manager v1.12.3
```

Set up kind to run with Contour, using the example kind cluster configuration file provided by Contour.

```shell
$ wget https://raw.githubusercontent.com/projectcontour/contour/main/examples/kind/kind-expose-port.yaml

# the --kubeconfig flag on the "create cluster" command will automatically export the kubeconfig file for us
$ kind create cluster \
    --config kind-expose-port.yaml \
    --name kind-with-contour \
    --kubeconfig kind-with-contour.kubeconfig.yaml
```

Now we will install Contour (see https://projectcontour.io/getting-started/ for more details). Contour provides our kind
cluster with an ingress controller. We will later deploy a Contour HTTPProxy to provide a hostname that we can
use to access the impersonation proxy.

```shell
# From https://projectcontour.io/getting-started/
$ kubectl apply \
    --filename https://projectcontour.io/quickstart/contour.yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml

# Verify that the Contour pods are ready
$ kubectl get pods \
    --namespace projectcontour \
    --output wide \
    --kubeconfig kind-with-contour.kubeconfig.yaml
```

Pinniped's local-user-authenticator will act as a dummy identity provider for our example. This resource is not for production
use but is sufficient for our needs to exercise the new feature of the impersonation proxy. Install Pinniped's local-user-authenticator
and add some sample users (see https://pinniped.dev/docs/tutorials/concierge-only-demo/ for more details).

```shell
# Install Pinniped's local-user-authenticator
$ kubectl apply \
    --filename https://get.pinniped.dev/v0.25.0/install-local-user-authenticator.yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml

# Create a local user "pinny" with password "password123" and group "group-for-mtls".
# Each secret in this namespace acts like a user definition.
$ kubectl create secret generic pinny \
    --namespace local-user-authenticator \
    --from-literal=groups=group-for-mtls \
    --from-literal=passwordHash=$(htpasswd -nbBC 10 x password123 | sed -e "s/^x://") \
    --kubeconfig kind-with-contour.kubeconfig.yaml

# We'll need the CA bundle of the local-user-authenticator service to configure the Concierge's WebhookAuthenticator.
# Make sure this next command prints the TLS secret; it can take a few seconds to be generated.
$ kubectl get secret local-user-authenticator-tls-serving-certificate \
    --namespace local-user-authenticator \
    --output jsonpath={.data.caCertificate} \
    --kubeconfig kind-with-contour.kubeconfig.yaml \
    | tee local-user-authenticator-ca.pem.b64
```

In this example, we are only interacting with the Pinniped Concierge. The Supervisor is not in use, since we are not interacting
with a real external OIDC identity provider. Install Pinniped's Concierge:

```shell
$ kubectl apply \
    --filename https://get.pinniped.dev/v0.25.0/install-pinniped-concierge-crds.yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml

$ kubectl apply \
    --filename https://get.pinniped.dev/v0.25.0/install-pinniped-concierge-resources.yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml
```

To handle X.509 certificate management for us, we will install cert-manager. For the purposes of this exercise, we will use cert-manager
to generate our CA certificates as well as our TLS serving certificates. Install cert-manager:

```shell
$ kubectl apply \
    --filename https://github.com/cert-manager/cert-manager/releases/download/v1.12.3/cert-manager.yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml
```

For this demonstration, we will be using cert-manager to simulate our own Public Key Infrastructure (PKI).
We will create the appropriate CA certificates and TLS serving certificates for the impersonation proxy to serve TLS.
For more information about using cert-manager to achieve this, see the [cert-manager docs](https://cert-manager.io/docs/configuration/selfsigned/#bootstrapping-ca-issuers).

In summary, we will do the following:

- Create two `ClusterIssuer` resources, one named `selfsigned-cluster-issuer` and another named `my-ca-issuer`.
- The `ClusterIssuer` named `selfsigned-cluster-issuer` will be used to create the `Certificate` called `my-selfsigned-ca`
  (which will reference a `Secret` named `self-signed-ca-for-kind-testing` where the actual certificate data will be stored).
- We will later retrieve the `Secret` called `self-signed-ca-for-kind-testing` so that we can add the CA to the Pinniped Concierge's
  `CredentialIssuer` resource so that it can be advertised and used to verify TLS serving certificates.
- Then, we will create the `ClusterIssuer` called `my-ca-issuer`. It will reference the `Certificate` called `my-selfsigned-ca` via
  its `Secret` named `self-signed-ca-for-kind-testing`. This will allow us to use the CA to sign TLS serving certificates.
- Then, we will use the `ClusterIssuer` called `my-ca-issuer` to generate a `Certificate` that will be a TLS serving certificate
  called `impersonation-serving-cert`. As before, the actual certificate data will be stored in a Kubernetes `Secret`, which we
  will name `impersonation-proxy-tls-serving-cert`.
- Finally, we will update the Pinniped Concierge's `CredentialIssuer` resource to use the TLS serving certificate stored in the
  `Secret` called `impersonation-proxy-tls-serving-cert`.

If all goes well, the impersonation proxy endpoints will be served with a TLS serving certificate that can be validated by the
CA certificate that generated it. That's a lot! Fortunately, the majority of the work is done painlessly via the following
simple commands:

```shell
$ cat << EOF > self-signed-cert.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-cluster-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-selfsigned-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: my-selfsigned-ca
  secretName: self-signed-ca-for-kind-testing
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-cluster-issuer
    kind: ClusterIssuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: my-ca-issuer
spec:
  ca:
    secretName: self-signed-ca-for-kind-testing
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: impersonation-serving-cert
  namespace: pinniped-concierge
spec:
  secretName: impersonation-proxy-tls-serving-cert
  duration: 2160h # 90d
  renewBefore: 360h # 15d
  subject:
    organizations:
      - Pinniped
  isCA: false
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
    - server auth
  dnsNames:
    - impersonation-proxy-mtls.local
  issuerRef:
    name: my-ca-issuer
    kind: ClusterIssuer
    group: cert-manager.io
EOF

$ kubectl apply \
    --filename self-signed-cert.yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml
```
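
Before moving on, you can optionally check that cert-manager has issued both certificates (an extra verification
step, not strictly required; the `READY` column for each `Certificate` should become `True`):

```shell
# Optional: confirm that both Certificate resources have been issued by cert-manager
$ kubectl get certificate --all-namespaces \
    --kubeconfig kind-with-contour.kubeconfig.yaml
```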

Download the root (self-signed) CA's certificate. We will be adding it to the Pinniped Concierge's `CredentialIssuer` resource
in order to configure the impersonation proxy to advertise the certificate as its CA.

```shell
$ kubectl get secret self-signed-ca-for-kind-testing \
    --namespace cert-manager \
    --output jsonpath="{.data.ca\.crt}" \
    --kubeconfig kind-with-contour.kubeconfig.yaml \
    | tee self-signed-ca-for-kind-testing.pem.b64

# Tip: Put the contents of self-signed-ca-for-kind-testing.pem.b64 into your copy buffer for a later step!
```

The `CredentialIssuer` resource called `pinniped-concierge-config` already exists. We need to edit it.
Kind clusters do not need to use the impersonation proxy by default (it is designed for public cloud providers),
so we will make several changes to this resource:

- Set `spec.impersonationProxy.mode: enabled`
- Set `spec.impersonationProxy.tls.certificateAuthorityData` to match the CA certificate `my-selfsigned-ca`, which
  stores its certificate data in the `Secret` called `self-signed-ca-for-kind-testing` (and which we previously recorded
  in the file `self-signed-ca-for-kind-testing.pem.b64`)

```shell
$ kubectl edit credentialissuer pinniped-concierge-config \
    --kubeconfig kind-with-contour.kubeconfig.yaml
```
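
If you would rather not use an interactive editor, something like the following `kubectl patch` should set the same
fields in one shot (a sketch, assuming the CA file from the previous step is in your current directory; the rest of
this walkthrough assumes the `kubectl edit` approach above):

```shell
# Non-interactive alternative to "kubectl edit": merge-patch the CredentialIssuer spec
$ kubectl patch credentialissuer pinniped-concierge-config \
    --type merge \
    --patch "{\"spec\":{\"impersonationProxy\":{\"mode\":\"enabled\",\"externalEndpoint\":\"impersonation-proxy-mtls.local\",\"service\":{\"type\":\"ClusterIP\"},\"tls\":{\"certificateAuthorityData\":\"$(cat self-signed-ca-for-kind-testing.pem.b64)\",\"secretName\":\"impersonation-proxy-tls-serving-cert\"}}}}" \
    --kubeconfig kind-with-contour.kubeconfig.yaml
```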

Make sure that the spec has the following values:

```yaml
spec:
  impersonationProxy:
    externalEndpoint: impersonation-proxy-mtls.local
    mode: enabled
    service:
      type: ClusterIP
    tls:
      certificateAuthorityData: # paste the contents of the file self-signed-ca-for-kind-testing.pem.b64
      secretName: impersonation-proxy-tls-serving-cert
```

Then save and close the text editor. Once saved, get the resource again and verify that the contents are correct:

```shell
# Confirm that the CredentialIssuer looks as expected
$ kubectl get credentialissuers pinniped-concierge-config \
    --output yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml
```

Ensure that the output contains the following:

```yaml
spec:
  impersonationProxy:
    externalEndpoint: impersonation-proxy-mtls.local
    mode: enabled
    service:
      annotations:
        # Ignore any annotations
      type: ClusterIP
    tls:
      certificateAuthorityData: LS0tLUJFR0l..........
      secretName: impersonation-proxy-tls-serving-cert
status:
  strategies:
  # this strategy should be automatically updated with the configured
  # spec.tls.certificateAuthorityData from the previous step
  - frontend:
      impersonationProxyInfo:
        certificateAuthorityData: LS0tLUJFR0l..........
```

In the `CredentialIssuer`'s `status.strategies`, there should be a `frontend` strategy with an `impersonationProxyInfo.certificateAuthorityData`
value that matches the configured `spec.tls.certificateAuthorityData`. This is how the `CredentialIssuer` advertises
its CA bundle.
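
If you want to pull out just the advertised CA bundle, a jsonpath query along these lines should work (an optional
check; the strategy is selected by its `type` here rather than by its position in the list):

```shell
# Optional: print only the CA bundle advertised by the ImpersonationProxy strategy
$ kubectl get credentialissuer pinniped-concierge-config \
    --output jsonpath='{.status.strategies[?(@.type=="ImpersonationProxy")].frontend.impersonationProxyInfo.certificateAuthorityData}' \
    --kubeconfig kind-with-contour.kubeconfig.yaml
```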

Next, we review our `Service` configuration.

```shell
# Confirm that the ClusterIP service for the impersonation proxy was automatically created (may take a minute)
$ kubectl get service pinniped-concierge-impersonation-proxy-cluster-ip \
    --namespace pinniped-concierge \
    --output yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml
```

Configure a webhook authenticator to tell the Concierge to validate static tokens using the installed local-user-authenticator.
When we installed the Pinniped local-user-authenticator, we created a service called local-user-authenticator in the
local-user-authenticator namespace. We previously retrieved the Secret named `local-user-authenticator-tls-serving-certificate`
so that we could use it to configure this `WebhookAuthenticator` to use that certificate. Note that we did not generate this
certificate via cert-manager; it is still a self-signed certificate created by Pinniped.

The `endpoint` here is referenced via Kubernetes DNS in the format `<service-name>.<namespace>.svc`, targeting the `/authenticate`
endpoint of the local-user-authenticator. We will be using HTTPS, of course.

```shell
# Configure a webhook authenticator to tell the Concierge to validate static tokens using the installed local-user-authenticator
$ cat << EOF > concierge.webhookauthenticator.yaml
apiVersion: authentication.concierge.pinniped.dev/v1alpha1
kind: WebhookAuthenticator
metadata:
  name: local-user-authenticator
spec:
  endpoint: https://local-user-authenticator.local-user-authenticator.svc/authenticate
  tls:
    certificateAuthorityData: $(cat local-user-authenticator-ca.pem.b64)
EOF

# Create the webhook authenticator
$ kubectl apply \
    --filename concierge.webhookauthenticator.yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml
```
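
Optionally, confirm that the `WebhookAuthenticator` was created (an extra check, not part of the original flow):

```shell
$ kubectl get webhookauthenticator local-user-authenticator \
    --kubeconfig kind-with-contour.kubeconfig.yaml
```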

Now deploy a Contour `HTTPProxy` ingress that fronts the `ClusterIP` service for the impersonation proxy.

We need to use TLS passthrough in this case, so that the client (kubectl and the pinniped CLI) can establish TLS directly
with the impersonation proxy, and so that client certs used for mTLS will be sent to the impersonation proxy.

Note in particular the `spec.tcpproxy` block, which is different from the typical `spec.routes` block.
`spec.tcpproxy` is required when using `spec.virtualhost.tls.passthrough: true`.

See the [Contour docs for TLS session passthrough](https://projectcontour.io/docs/1.25/config/tls-termination/#tls-session-passthrough) for more details.

```shell
$ cat << EOF > contour-ingress-impersonation-proxy.yaml
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: impersonation-proxy
  namespace: pinniped-concierge
spec:
  virtualhost:
    fqdn: impersonation-proxy-mtls.local
    tls:
      passthrough: true
  tcpproxy:
    services:
      - name: pinniped-concierge-impersonation-proxy-cluster-ip
        port: 443
EOF

$ kubectl apply \
    --filename contour-ingress-impersonation-proxy.yaml \
    --kubeconfig kind-with-contour.kubeconfig.yaml
```
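
Optionally, check that Contour accepted the `HTTPProxy` (another extra verification step; the `STATUS` column should
report `valid`):

```shell
$ kubectl get httpproxy impersonation-proxy \
    --namespace pinniped-concierge \
    --kubeconfig kind-with-contour.kubeconfig.yaml
```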

Now generate the Pinniped kubeconfig so that you can perform mTLS with the impersonation proxy.

Since we are interacting with a kind cluster, we will need to ensure HTTP requests are routed to the cluster.
In this example, we will edit the `/etc/hosts` file to resolve `impersonation-proxy-mtls.local` to `127.0.0.1`.

```shell
##
# Host Database
127.0.0.1 impersonation-proxy-mtls.local
```
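
One way to append that entry on macOS or Linux (a suggestion; editing the file by hand works just as well):

```shell
$ echo "127.0.0.1 impersonation-proxy-mtls.local" | sudo tee -a /etc/hosts
```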

Note that using `--static-token` embeds those credentials directly into your kubeconfig, which is not suitable for
production use. As we said before, we are using local-user-authenticator as a simple identity provider for illustrative
purposes only. In a real production use case you would not use the `--static-token` flag, so that credentials are never
embedded in your kubeconfig, an important security feature. Never use local-user-authenticator in production.

```shell
# be sure you added 127.0.0.1 impersonation-proxy-mtls.local to your /etc/hosts!
$ pinniped get kubeconfig \
    --static-token "pinny:password123" \
    --concierge-authenticator-type webhook \
    --concierge-authenticator-name local-user-authenticator \
    --concierge-mode ImpersonationProxy \
    --kubeconfig kind-with-contour.kubeconfig.yaml \
    > pinniped-kubeconfig.yaml
```
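
If you are curious, you can peek at the generated kubeconfig (an optional check; the cluster `server` should point at
the impersonation proxy endpoint `impersonation-proxy-mtls.local`, and the embedded certificate authority should be
the CA we configured earlier):

```shell
$ yq ".clusters[0].cluster.server" pinniped-kubeconfig.yaml
```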

Now perform an action as user pinny!

```shell
$ kubectl get pods -A \
    --kubeconfig pinniped-kubeconfig.yaml
Error from server (Forbidden): pods is forbidden: User "pinny" cannot list resource "pods" in API group "" at the cluster scope: decision made by impersonation-proxy.concierge.pinniped.dev
```

This results in an error because the cluster does not have any `RoleBindings` or `ClusterRoleBindings` that allow the user pinny or the group `group-for-mtls` to perform any actions on the cluster.
Let's make a `ClusterRoleBinding` that grants this group cluster admin privileges.

```shell
# Perform this as the cluster admin using the kind kubeconfig
$ kubectl create clusterrolebinding mtls-admins \
    --clusterrole=cluster-admin \
    --group=group-for-mtls \
    --kubeconfig kind-with-contour.kubeconfig.yaml

# Now try again with the Pinniped kubeconfig
$ kubectl get pods -A \
    --kubeconfig pinniped-kubeconfig.yaml
NAMESPACE            NAME                                 READY   STATUS    RESTARTS   AGE
pinniped-concierge   pinniped-concierge-f4c78b674-bt6zl   1/1     Running   0          3h36m
```

Congratulations, you have successfully performed mTLS authentication between your local client (kubectl, using the pinniped CLI)
and the impersonation proxy inside the cluster.

To verify that your username and groups are visible to Kubernetes, run the `pinniped whoami` command.

```shell
$ pinniped whoami \
    --kubeconfig pinniped-kubeconfig.yaml
```

Finally, verify the expected outcome: the following three CA values should all match.

- View the CA embedded in your kubeconfig file: `cat pinniped-kubeconfig.yaml | yq ".clusters[0].cluster.certificate-authority-data"`
- View the CA provided to the impersonation proxy: `kubectl get CredentialIssuer pinniped-concierge-config -o jsonpath="{.status.strategies[1].frontend.impersonationProxyInfo.certificateAuthorityData}"`
- View the CA we stored in our local PEM file: `cat self-signed-ca-for-kind-testing.pem.b64`
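
One way to script that comparison (a sketch, assuming `yq` is installed and that the files created earlier are in
your current directory; the intermediate file names are just for illustration):

```shell
# Collect each CA value, stripped of trailing newlines, and compare them
$ yq ".clusters[0].cluster.certificate-authority-data" pinniped-kubeconfig.yaml | tr -d '\n' > ca-from-kubeconfig.b64
$ kubectl get credentialissuer pinniped-concierge-config \
    --output jsonpath="{.status.strategies[1].frontend.impersonationProxyInfo.certificateAuthorityData}" \
    --kubeconfig kind-with-contour.kubeconfig.yaml | tr -d '\n' > ca-from-credentialissuer.b64
$ tr -d '\n' < self-signed-ca-for-kind-testing.pem.b64 > ca-from-cert-manager.b64
$ diff ca-from-kubeconfig.b64 ca-from-credentialissuer.b64 \
    && diff ca-from-credentialissuer.b64 ca-from-cert-manager.b64 \
    && echo "All three CA values match"
```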
File diff suppressed because one or more lines are too long
@ -326,6 +326,12 @@ pre code {
   font-family: $metropolis-medium;
  }
 }
+.blog-post-card {
+  p.no-margin {
+    margin-block-start: 0;
+    margin-block-end: 0;
+  }
+}
 }

 .getting-started {
@ -9,9 +9,7 @@
 <div class="wrapper blog">
   <div class="blog-post">
     <h2>{{ .Title }}</h2>
-    <p class="author">
-      <a href="/tags/{{ .Params.author | urlize }}">{{ .Params.author }}</a>
-    </p>
+    {{ partial "authors" .}}
     <p class="date">{{ dateFormat "Jan 2, 2006" .Date }}</p>
     {{ .Content }}
   </div>
@ -29,6 +27,3 @@
 </main>
 {{ partial "getting-started" . }}
 {{ end }}
-
-
-

site/themes/pinniped/layouts/partials/authors.html (new file, 11 lines)
@ -0,0 +1,11 @@
<p class="author">
  {{ if (isset .Params "authors") }}
    {{ $authsCount := .Params.authors | len }}
    {{ $commaCount := sub $authsCount 2 }}
    {{ range $i, $author := .Params.authors -}}
      <a href="/tags/{{ $author | urlize }}">{{ $author }}</a>{{ if le $i $commaCount }},{{ end }}
    {{ end }}
  {{ else }}
    <a href="/tags/{{ .Params.author | urlize }}">{{ .Params.author }}</a>
  {{ end }}
</p>
@ -1,4 +1,4 @@
-<div class="col">
+<div class="col blog-post-card">
   <div class="icon">
     <img src="{{ .Params.Image }}" alt="{{ .Title }}" />
   </div>
@ -7,4 +7,4 @@
   <time>{{ .Date.Format "January 2, 2006" }}</time>
   <p>{{ .Params.Excerpt }}</p>
  </div>
 </div>