Another draft of the new tutorial guide

Ryan Richard 2022-02-14 17:23:57 -08:00
parent 05ec8cba8c
commit 230e563ab7
3 changed files with 168 additions and 53 deletions


@@ -68,9 +68,9 @@ A single Pinniped Supervisor can provide authentication for any number of Kubern
- A single Supervisor is deployed on a special cluster where app developers and devops users have no access.
App developers and devops users should have no access at least to the resources in the Supervisor's namespace,
but usually have no access to the whole cluster. For this tutorial, let's call this cluster the *"supervisor cluster"*.
- App developers and devops users can then use their identities provided by the Supervisor to log in to many
clusters where they can manage their apps. For this tutorial, let's call these clusters the *"workload clusters"*.
The Pinniped Concierge component is installed into each workload cluster and is configured to trust the single Supervisor.
The Concierge acts as an in-cluster agent to provide authentication services.
@@ -181,23 +181,27 @@ KUBECONFIG="workload2-admin.yaml" gcloud container clusters get-credentials \
### Decide which hostname and domain or subdomain will be used for the Supervisor
The Pinniped maintainers own the pinniped.dev domain and have already set it up for use with Google Cloud DNS,
so for this tutorial we will call our Supervisor server `demo-supervisor.pinniped.dev`.
### Install the Pinniped Supervisor on the supervisor cluster
There are several installation options described in the
[howto guide for installing the Supervisor]({{< ref "../howto/install-supervisor" >}}).
For this tutorial, we will install the latest version using the `kubectl` CLI.
```sh
kubectl apply \
-f https://get.pinniped.dev/{{< latestversion >}}/install-pinniped-supervisor.yaml \
--kubeconfig supervisor-admin.yaml
```
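Optionally, before moving on, verify that the Supervisor came up. This quick check is our own addition (the `pinniped-supervisor` namespace is created by the install YAML above):
```sh
kubectl get pods --namespace pinniped-supervisor \
--kubeconfig supervisor-admin.yaml
```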
### Create a LoadBalancer Service for the Supervisor
There are several options for exposing the Supervisor's endpoints outside the cluster, which are described in the
[howto guide for configuring the Supervisor]({{< ref "../howto/configure-supervisor" >}}). For this tutorial,
we will use a public LoadBalancer.
Create a LoadBalancer to expose the Supervisor's endpoints to the public, being careful to only
expose the HTTPS endpoint (not the HTTP endpoint).
```sh
@@ -218,13 +222,14 @@ spec:
EOF
```
Check for an IP using the following command. The value returned
is the public IP of your LoadBalancer, which will be used
in the steps below. It may take a little time for the LoadBalancer to be assigned a public IP, and this
command will have empty output until then.
```sh
kubectl get service pinniped-supervisor-loadbalancer \
-o jsonpath='{.status.loadBalancer.ingress[*].ip}' \
--namespace pinniped-supervisor --kubeconfig supervisor-admin.yaml
```
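If you would rather wait in a loop than re-run that command by hand, here is a minimal polling sketch (our own convenience wrapper around the same command, not part of the original steps):
```sh
# Poll every 10 seconds until the LoadBalancer is assigned a public IP.
while true; do
  IP=$(kubectl get service pinniped-supervisor-loadbalancer \
    -o jsonpath='{.status.loadBalancer.ingress[*].ip}' \
    --namespace pinniped-supervisor --kubeconfig supervisor-admin.yaml)
  if [ -n "$IP" ]; then echo "Public IP: $IP"; break; fi
  sleep 10
done
```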
@@ -251,6 +256,7 @@ gcloud projects add-iam-policy-binding "$PROJECT" \
```
Create and download a key for the new service account, and then put it into a Secret on the cluster.
Be careful with this key as it allows full control over the DNS of your Cloud DNS zones.
```sh
gcloud iam service-accounts keys create demo-dns-solver-key.json \
@@ -346,7 +352,7 @@ spec:
EOF
```
Wait for the Secret to get created. This may take a few minutes. Use the following command to see if it exists.
```sh
kubectl get secret supervisor-tls-cert \
@@ -458,12 +464,26 @@ kubectl get OIDCIdentityProvider okta \
There are several installation options described in the
[howto guide for installing the Concierge]({{< ref "../howto/install-concierge" >}}).
For this tutorial, we will install the latest version using the `kubectl` CLI.
```sh
# Install onto the first workload cluster.
kubectl apply -f \
"https://get.pinniped.dev/{{< latestversion >}}/install-pinniped-concierge-crds.yaml" \
--kubeconfig workload1-admin.yaml
kubectl apply -f \
"https://get.pinniped.dev/{{< latestversion >}}/install-pinniped-concierge-resources.yaml" \
--kubeconfig workload1-admin.yaml
# Install onto the second workload cluster.
kubectl apply -f \
"https://get.pinniped.dev/{{< latestversion >}}/install-pinniped-concierge-crds.yaml" \
--kubeconfig workload2-admin.yaml
kubectl apply -f \
"https://get.pinniped.dev/{{< latestversion >}}/install-pinniped-concierge-resources.yaml" \
--kubeconfig workload2-admin.yaml
```
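As a quick sanity check (our own addition; the `pinniped-concierge` namespace is created by the install YAML), confirm that the Concierge pods are running on each workload cluster:
```sh
kubectl get pods --namespace pinniped-concierge \
--kubeconfig workload1-admin.yaml
kubectl get pods --namespace pinniped-concierge \
--kubeconfig workload2-admin.yaml
```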
Configure the Concierge on the first workload cluster to trust the Supervisor's
@@ -507,19 +527,39 @@ EOF
### Configure RBAC rules for the developer and devops users
For this tutorial, we will keep the Kubernetes RBAC configuration simple.
We'll use a contrived example of RBAC policies to avoid getting into RBAC policy design discussions.
If one of your Okta users has the email address `walrus@example.com`,
then you could allow that user to [edit](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles)
things in a new namespace in one workload cluster,
and [view](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles)
most things in the other workload cluster, with the following commands.
```sh
# Create a namespace in the first workload cluster.
kubectl create namespace "dev" \
--kubeconfig workload1-admin.yaml
# Allow the developer to edit everything in the new namespace.
cat <<EOF | kubectl create --kubeconfig workload1-admin.yaml -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-can-edit-dev-ns
  namespace: dev
subjects:
- kind: User
  name: walrus@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
EOF
# In the second workload cluster, allow the developer
# to view everything in all namespaces.
kubectl create clusterrolebinding developer-can-view \
--clusterrole view \
--user walrus@example.com \
@@ -535,16 +575,17 @@ used by the developer and devops users. These commands should be run using the a
kubeconfigs of the workload clusters, and they will output the new Pinniped-compatible
kubeconfigs for the workload clusters.
The `--kubeconfig` and `--kubeconfig-context` options, along with the `KUBECONFIG` environment variable,
can help you specify how the command should find the admin kubeconfig for the cluster.
The new Pinniped-compatible kubeconfig will be printed to stdout, so in these examples we will redirect
that to a file.
```sh
pinniped get kubeconfig \
--kubeconfig workload1-admin.yaml > workload1-developer.yaml
pinniped get kubeconfig \
--kubeconfig workload2-admin.yaml > workload2-developer.yaml
```
@@ -557,6 +598,22 @@ Save the admin kubeconfig files somewhere private and secure for your own future
See the [full documentation for the `pinniped get kubeconfig` command]({{< ref "../reference/cli" >}})
for other available optional parameters.
### Optional: Merge the developer kubeconfig files to distribute them as one file
The `kubectl` CLI [can merge kubeconfig files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/#merging-kubeconfig-files).
If you wanted to distribute one kubeconfig file instead of one per cluster,
you could choose to merge the Pinniped-compatible kubeconfig files.
```sh
# For this command, KUBECONFIG is treated as a list of input files.
KUBECONFIG="workload1-developer.yaml:workload2-developer.yaml" kubectl \
config view --flatten -o yaml > all-workload-clusters-developer.yaml
```
The developer who uses the combined kubeconfig file will need to use the standard `kubectl` methods to choose their current context.
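For example, they could list the available contexts and then pick one. A minimal sketch (the actual context names will depend on how the kubeconfig files were generated):
```sh
# Show the context names available in the merged file.
kubectl config get-contexts \
--kubeconfig all-workload-clusters-developer.yaml
# Switch to one of the context names printed by the previous command.
kubectl config use-context YOUR_CHOSEN_CONTEXT_NAME \
--kubeconfig all-workload-clusters-developer.yaml
```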
For clarity, the steps shown below will continue to use the separate kubeconfig files.
### As a developer or devops user, access the workload clusters by using regular kubectl commands
A developer or devops user who would like to use the workload clusters may do so using kubectl with
@@ -576,11 +633,24 @@ kubectl get namespaces --kubeconfig workload1-developer.yaml
The first time this command is run, it will open their default web browser and redirect them to Okta for login.
After successfully logging in to Okta, for example as the user `walrus@example.com`, the kubectl command will
continue and will try to list the namespaces.
The user's identity in Kubernetes (username and group memberships) came from Okta, through Pinniped.
Oops! This results in an RBAC error similar to
`Error from server (Forbidden): namespaces is forbidden: User "walrus@example.com" cannot list resource "namespaces" in API group "" at the cluster scope`.
Recall that in the first workload cluster, the user only has RBAC permissions in the `dev` namespace.
Let's try again, but this time we will list something in the `dev` namespace.
```sh
kubectl get serviceaccounts --namespace dev \
--kubeconfig workload1-developer.yaml
```
This will successfully list the default service account in the `dev` namespace.
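The output will look something like the following (the exact columns and ages will vary with your Kubernetes version and timing):
```
NAME      SECRETS   AGE
default   1         10m
```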
That same developer user can access all other workload clusters in a similar fashion. For example,
let's run a command against the second workload cluster. Recall that the developer is allowed
to read everything in the second workload cluster.
```sh
kubectl get namespaces --kubeconfig workload2-developer.yaml
@@ -594,16 +664,34 @@ Behind the scenes, Pinniped is performing token refreshes and token exchanges
on behalf of the user to create a short-lived, cluster-scoped token to access
this new workload cluster using the same identity from Okta.
Note that users can use any of kubectl's supported means of providing kubeconfig information.
They are not limited to only using the `--kubeconfig` flag. For example, they could set the `KUBECONFIG`
environment variable instead.
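For instance, here is the namespace listing from above again, written with the environment variable instead of the flag:
```sh
KUBECONFIG="workload2-developer.yaml" kubectl get namespaces
```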
For more information about logging in to workload clusters, see the [howto doc about login]({{< ref "../howto/login" >}}).
### Whoami
Not sure what identity you're using on the cluster? Pinniped has a convenient feature to help out with that.
```sh
pinniped whoami --kubeconfig workload2-developer.yaml
```
The output will include your username and group names, and will look similar to the following.
```
Current cluster info:
Name: gke_your_project_us-central1-c_demo-workload-cluster2-pinniped
URL: https://1.2.3.4
Current user info:
Username: walrus@example.com
Groups: Everyone, developers, system:authenticated
```
## What we've learned
This tutorial showed:
@@ -618,13 +706,33 @@ This tutorial showed:
If you would like to delete the resources created in this tutorial, you can use the following commands.
```sh
# To uninstall the Pinniped Supervisor app and all related configuration
# (including the GCP load balancer):
kubectl delete \
-f "https://get.pinniped.dev/{{< latestversion >}}/install-pinniped-supervisor.yaml" \
--kubeconfig supervisor-admin.yaml
# To uninstall cert-manager (assuming you already ran the above command):
kubectl delete -f \
"https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml" \
--kubeconfig supervisor-admin.yaml
# To uninstall the Pinniped Concierge apps and all related configuration:
kubectl delete -f \
"https://get.pinniped.dev/{{< latestversion >}}/install-pinniped-concierge-resources.yaml" \
--kubeconfig workload1-admin.yaml
kubectl delete -f \
"https://get.pinniped.dev/{{< latestversion >}}/install-pinniped-concierge-crds.yaml" \
--kubeconfig workload1-admin.yaml
kubectl delete -f \
"https://get.pinniped.dev/{{< latestversion >}}/install-pinniped-concierge-resources.yaml" \
--kubeconfig workload2-admin.yaml
kubectl delete -f \
"https://get.pinniped.dev/{{< latestversion >}}/install-pinniped-concierge-crds.yaml" \
--kubeconfig workload2-admin.yaml
# To delete the GKE clusters entirely:
gcloud container clusters delete "demo-supervisor-cluster" \
@@ -647,7 +755,11 @@ gcloud dns record-sets transaction remove "$PUBLIC_IP" \
gcloud dns record-sets transaction execute \
--zone="$DNS_ZONE" --project "$PROJECT"
# To delete the service account we created for cert-manager:
gcloud projects remove-iam-policy-binding "$PROJECT" \
--member "serviceAccount:demo-dns-solver@$PROJECT.iam.gserviceaccount.com" \
--role roles/dns.admin --condition=None
gcloud iam service-accounts delete \
"demo-dns-solver@$PROJECT.iam.gserviceaccount.com" \
--project "$PROJECT" --quiet
@@ -27,9 +27,12 @@ for a more specific example of installing onto a local kind cluster, including t
[JWT]({{< ref "../howto/configure-concierge-jwt" >}}) or
[webhook]({{< ref "../howto/configure-concierge-webhook" >}}) authenticator.
1. Generate a kubeconfig using the Pinniped command-line tool (run `pinniped get kubeconfig --help` for more information).
1. Run `kubectl` commands using the generated kubeconfig. The Pinniped Concierge will automatically be used for authentication during those commands.
Please be aware that using the Concierge without the Supervisor is an advanced use case, not the typical use case.
For example, the Supervisor issues cluster-scoped credentials that cannot be replayed against other clusters,
so using the Concierge without the Supervisor removes that protection. You might have designed another system to provide
that protection, but if not then please carefully consider the security implications.
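As a rough sketch of the last two numbered steps above (the file names here are placeholders, and the exact flags you need may vary by authenticator type):
```sh
# Generate a Pinniped-compatible kubeconfig for the cluster.
pinniped get kubeconfig \
--kubeconfig admin-kubeconfig.yaml > pinniped-kubeconfig.yaml
# Use it with kubectl; the Concierge handles authentication.
kubectl get pods --kubeconfig pinniped-kubeconfig.yaml
```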
## Prerequisites
@@ -15,6 +15,18 @@ layout: section
<div class="grid three">
<div class="col">
<a href="https://pinniped.dev/docs/tutorials/concierge-and-supervisor-demo/">
<div class="icon">
<img src="/img/logo.svg"/>
</div>
<div class="content">
<p class="strong">Pinniped Tutorial:</p>
<p>Learn to use Pinniped for federated authentication to Kubernetes clusters</p>
</div>
</a>
</div>
<div class="col"> <div class="col">
<a href="https://github.com/vmware-tanzu/pinniped"> <a href="https://github.com/vmware-tanzu/pinniped">
<div class="icon"> <div class="icon">
@@ -84,17 +96,5 @@ layout: section
</div>
</div>
</div>
</div>