# Trying Pinniped

## Prerequisites
- A Kubernetes cluster of a type supported by Pinniped. Currently, Pinniped supports self-hosted clusters where the Kube Controller Manager pod is accessible from Pinniped's pods. Support for other types of Kubernetes distributions is coming soon.

  Don't have a cluster handy? Consider using kind on your local machine. See below for an example of using kind.

- A kubeconfig where the current context points to that cluster and has admin-like privileges on that cluster.

- Don't have an identity provider of a type supported by Pinniped handy? Start by installing `local-user-authenticator` on the same cluster where you would like to try Pinniped by following the directions in deploy-local-user-authenticator/README.md. See below for an example of deploying this on kind.
## Steps

### General Steps
- Install Pinniped by following the directions in deploy/README.md.
- Download the Pinniped CLI from Pinniped's GitHub Releases page.
- Generate a kubeconfig using the Pinniped CLI. Run `pinniped get-kubeconfig --help` for more information.
- Run `kubectl` commands using the generated kubeconfig to authenticate using Pinniped during those commands.
### Specific Example of Deploying on kind Using local-user-authenticator as the Identity Provider
- Install the tools required for the following steps.

  - This example deployment uses `ytt` and `kapp` from Carvel to template the YAML files and to deploy the app. Either install `ytt` and `kapp` or use the container image from Dockerhub. E.g. `brew install k14s/tap/ytt k14s/tap/kapp` on a Mac.
  - Install kind, if not already installed. e.g. `brew install kind` on a Mac.
  - kind depends on Docker. If not already installed, install Docker, e.g. `brew cask install docker` on a Mac.
  - This demo requires `kubectl`, which comes with Docker, or can be installed separately.
  - This demo requires a tool capable of generating a bcrypt hash in order to interact with the webhook. The example below uses `htpasswd`, which is installed on most macOS systems, and can be installed on some Linux systems via the `apache2-utils` package (e.g., `apt-get install apache2-utils`).
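For example, generating a bcrypt hash of the password `password123` with a cost factor of 10 looks like this (the throwaway username `x` and the `sed` step are just a way to get `htpasswd` to print only the hash, since `htpasswd` always prefixes its output with a username):

```shell
# -n print to stdout, -b take the password from the command line,
# -B use bcrypt, -C 10 use bcrypt cost factor 10.
# htpasswd prints "x:<hash>", so strip the "x:" prefix with sed.
htpasswd -nbBC 10 x password123 | sed -e "s/^x://"
```

The same pipeline is used below when creating the test user's Secret.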
- Create a new Kubernetes cluster using `kind create cluster`. Optionally provide a cluster name using the `--name` flag. kind will automatically update your kubeconfig to point to the new cluster.
- Clone this repo.

  ```shell
  git clone https://github.com/vmware-tanzu/pinniped.git /tmp/pinniped --depth 1
  ```
- Deploy the `local-user-authenticator` app:

  ```shell
  cd /tmp/pinniped/deploy-local-user-authenticator
  ytt --file . | kapp deploy --yes --app local-user-authenticator --diff-changes --file -
  ```
- Create a test user.

  ```shell
  kubectl create secret generic pinny-the-seal \
    --namespace local-user-authenticator \
    --from-literal=groups=group1,group2 \
    --from-literal=passwordHash=$(htpasswd -nbBC 10 x password123 | sed -e "s/^x://")
  ```
- Fetch the auto-generated CA bundle for the `local-user-authenticator`'s HTTP TLS endpoint.

  ```shell
  kubectl get secret local-user-authenticator-tls-serving-certificate \
    --namespace local-user-authenticator \
    -o jsonpath={.data.caCertificate} \
    | tee /tmp/local-user-authenticator-ca-base64-encoded
  ```
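The value stored in the Secret (and saved to the file above) is base64-encoded. As an optional sanity check (a hypothetical extra step, not required by the install), you can decode it and confirm that it is a PEM certificate:

```shell
# Decode the saved CA bundle; if the fetch worked, the first line of the
# decoded output should read "-----BEGIN CERTIFICATE-----".
# (GNU coreutils syntax; older macOS versions use `base64 -D` instead of `-d`.)
base64 -d /tmp/local-user-authenticator-ca-base64-encoded | head -1
```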
- Deploy Pinniped.

  ```shell
  cd /tmp/pinniped/deploy
  ytt --file . \
    --data-value "webhook_url=https://local-user-authenticator.local-user-authenticator.svc/authenticate" \
    --data-value "webhook_ca_bundle=$(cat /tmp/local-user-authenticator-ca-base64-encoded)" \
    | kapp deploy --yes --app pinniped --diff-changes --file -
  ```
- Download the latest version of the Pinniped CLI binary for your platform from Pinniped's GitHub Releases page.
- Move the Pinniped CLI binary to your preferred directory and add the executable bit, e.g. `chmod +x /usr/local/bin/pinniped`.
- Generate a kubeconfig for the current cluster. Use `--token` to include a token which should allow you to authenticate as the user that you created above.

  ```shell
  pinniped get-kubeconfig --token "pinny-the-seal:password123" > /tmp/pinniped-kubeconfig
  ```

  Note that the above command will print a warning to the screen. You can ignore this warning. Pinniped tries to auto-discover the URL for the Kubernetes API server, but it is not able to do so on kind clusters. The warning is just letting you know that the Pinniped CLI decided to ignore the auto-discovery URL and instead use the URL from your existing kubeconfig.
- Try using the generated kubeconfig to issue arbitrary `kubectl` commands as the `pinny-the-seal` user.

  ```shell
  kubectl --kubeconfig /tmp/pinniped-kubeconfig get pods -n pinniped
  ```

  Because this user has no RBAC permissions on this cluster, the previous command results in the error `Error from server (Forbidden): pods is forbidden: User "pinny-the-seal" cannot list resource "pods" in API group "" in the namespace "pinniped"`. However, this does prove that you are authenticated and acting as the "pinny-the-seal" user.
- Create RBAC rules for the test user to give them permissions to perform actions on the cluster. For example, grant the test user permission to view all cluster resources.

  ```shell
  kubectl create clusterrolebinding pinny-can-read --clusterrole view --user pinny-the-seal
  ```
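The imperative `kubectl create clusterrolebinding` command above is equivalent to applying a manifest like the following (a sketch for reference; the binding name `pinny-can-read` is simply the name chosen above):

```yaml
# Declarative equivalent of the clusterrolebinding created above:
# binds the built-in "view" ClusterRole to the pinny-the-seal user.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pinny-can-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: pinny-the-seal
```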
- Use the generated kubeconfig to issue arbitrary `kubectl` commands as the `pinny-the-seal` user.

  ```shell
  kubectl --kubeconfig /tmp/pinniped-kubeconfig get pods -n pinniped
  ```

  The user has permission to list pods, so the command succeeds! 🎉