commit abc3df8df9
@@ -6,7 +6,7 @@ cascade:
menu:
  docs:
    name: Configure Concierge JWT Authentication
-    weight: 25
+    weight: 30
    parent: howtos
---
The Concierge can validate [JSON Web Tokens (JWTs)](https://tools.ietf.org/html/rfc7519), which are commonly issued by [OpenID Connect (OIDC)](https://openid.net/connect/) identity providers.
@@ -139,4 +139,4 @@ You should see:
If your provider only supports non-public clients, consider using the Pinniped Supervisor.

- In general, it is not safe to use the same OIDC client across multiple clusters.
-If you need to access multiple clusters, please [install the Pinniped Supervisor]({{< ref "install-supervisor" >}}).
+If you need to access multiple clusters, please [install the Pinniped Supervisor]({{< ref "install-supervisor" >}}).

site/content/docs/howto/configure-concierge-supervisor-jwt.md (new file, 125 lines)
@@ -0,0 +1,125 @@
---
title: Configure the Pinniped Concierge to validate JWT tokens issued by the Pinniped Supervisor
description: Set up JSON Web Token (JWT) based token authentication on an individual Kubernetes cluster using the Pinniped Supervisor as the OIDC Provider.
cascade:
  layout: docs
menu:
  docs:
    name: Configure Concierge JWT Authentication with the Supervisor
    weight: 50
    parent: howtos
---
The Concierge can validate [JSON Web Tokens (JWTs)](https://tools.ietf.org/html/rfc7519), which are commonly issued by [OpenID Connect (OIDC)](https://openid.net/connect/) identity providers.

This guide shows you how to use this capability in conjunction with the Pinniped Supervisor.
Each FederationDomain defined in a Pinniped Supervisor acts as an OIDC issuer.
By installing the Pinniped Concierge on multiple Kubernetes clusters,
and by configuring each cluster's Concierge as described below
to trust JWT tokens from a single Supervisor's FederationDomain,
your clusters' users may safely use their identity across all of those clusters.
Users of these clusters will enjoy a unified, once-a-day login experience for all the clusters with their `kubectl` CLI.

If you would rather not use the Supervisor, you may want to [configure the Concierge to validate JWT tokens from other OIDC providers]({{< ref "configure-concierge-jwt" >}}) instead.

## Prerequisites

This how-to guide assumes that you have already [installed the Pinniped Supervisor]({{< ref "install-supervisor" >}}) with working ingress,
and that you have [configured a FederationDomain to issue tokens for your downstream clusters]({{< ref "configure-supervisor" >}}).

It also assumes that you have already [installed the Pinniped Concierge]({{< ref "install-concierge" >}})
on all the clusters in which you would like to allow users to have a unified identity.

## Create a JWTAuthenticator

Create a JWTAuthenticator describing how to validate tokens from your Supervisor's FederationDomain:

```yaml
apiVersion: authentication.concierge.pinniped.dev/v1alpha1
kind: JWTAuthenticator
metadata:
  name: my-supervisor-authenticator
spec:

  # The value of the `issuer` field should exactly match the `issuer`
  # field of your Supervisor's FederationDomain.
  issuer: https://my-issuer.example.com/any/path

  # You can use any `audience` identifier for your cluster, but it is
  # important that it is unique for security reasons.
  audience: my-unique-cluster-identifier-da79fa849

  # If the TLS certificate of your FederationDomain is not signed by
  # a standard CA trusted by the Concierge pods by default, then
  # specify its CA here as a base64-encoded PEM.
  tls:
    certificateAuthorityData: LS0tLS1CRUdJTiBDRVJUSUZJQ0...0tLQo=
```

If you've saved this into a file `my-supervisor-authenticator.yaml`, then install it into your cluster using:

```sh
kubectl apply -f my-supervisor-authenticator.yaml
```

Do this on each cluster in which you would like to allow users from that FederationDomain to log in.
Don't forget to give each cluster a unique `audience` value for security reasons.
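
The `audience` value can be any string, as long as it is unique per cluster. As a rough sketch (the `openssl rand` helper here is just one convenient option, not something Pinniped requires), you could generate a value and then confirm the authenticator exists after applying it:

```sh
# Build a unique audience value for this cluster (any unique string works).
echo "my-cluster-$(openssl rand -hex 8)"

# After `kubectl apply`, confirm that the JWTAuthenticator was created.
kubectl get jwtauthenticator my-supervisor-authenticator
```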

## Generate a kubeconfig file

Generate a kubeconfig file for one of the clusters in which you installed and configured the Concierge as described above:

```sh
pinniped get kubeconfig > my-cluster.yaml
```

This assumes that your current kubeconfig is an admin-level kubeconfig for the cluster, such as the kubeconfig
that you used to install the Concierge.

This creates a kubeconfig YAML file `my-cluster.yaml`, unique to that cluster, which targets your JWTAuthenticator
using `pinniped login oidc` as an [ExecCredential plugin](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins).
This new kubeconfig can be shared with the other users of this cluster. It does not contain any specific
identity or credentials. When a user uses this new kubeconfig with `kubectl`, the Pinniped plugin will
prompt them to log in using their own identity.
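
For illustration only, the user entry in the generated kubeconfig looks roughly like the following sketch. The real file is produced for you by `pinniped get kubeconfig` and contains additional flags (for example, Concierge connection details and CA bundles), so treat these names and values as placeholders:

```yaml
users:
- name: pinniped
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: pinniped
      args:
      - login
      - oidc
      - --issuer=https://my-issuer.example.com/any/path
      - --client-id=pinniped-cli
      - --request-audience=my-unique-cluster-identifier-da79fa849
```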

## Use the kubeconfig file

Use the kubeconfig with `kubectl` to access your cluster:

```sh
kubectl --kubeconfig my-cluster.yaml get namespaces
```

You should see:

- The `pinniped login oidc` command is executed automatically by `kubectl`.

- Pinniped directs you to log in with whatever identity provider is configured in the Supervisor, either by opening
your browser (for upstream OIDC providers) or by prompting for your username and password (for upstream LDAP providers).

- In your shell, you see your cluster's namespaces.

If instead you get an access denied error, you may need to create a ClusterRoleBinding for the username of your account
in the Supervisor's upstream identity provider, for example:

```sh
kubectl create clusterrolebinding my-user-admin \
  --clusterrole admin \
  --user my-username@example.com
```

Alternatively, you could create role bindings based on the group membership of your users
in the upstream identity provider, for example:

```sh
kubectl create clusterrolebinding my-auditors \
  --clusterrole view \
  --group auditors
```

## Other notes

- Pinniped kubeconfig files do not contain secrets and are safe to share between users.

- Temporary session credentials such as ID, access, and refresh tokens are stored in:
  - `~/.config/pinniped/sessions.yaml` (macOS/Linux)
  - `%USERPROFILE%/.config/pinniped/sessions.yaml` (Windows)
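
If you ever want to force a fresh login while testing, one option (a sketch, assuming the default macOS/Linux location above) is to delete that session cache; the Pinniped plugin will prompt you to log in again on the next `kubectl` command:

```sh
rm ~/.config/pinniped/sessions.yaml
```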

@@ -6,7 +6,7 @@ cascade:
menu:
  docs:
    name: Configure Concierge Webhook Authentication
-    weight: 26
+    weight: 40
    parent: howtos
---

@@ -5,11 +5,12 @@ cascade:
  layout: docs
menu:
  docs:
-    name: Configure Supervisor With GitLab
-    weight: 35
+    name: Configure Supervisor With GitLab OIDC
+    weight: 90
    parent: howtos
---
-The Supervisor is an [OpenID Connect (OIDC)](https://openid.net/connect/) issuer that supports connecting a single "upstream" OIDC identity provider to many "downstream" cluster clients.
+The Supervisor is an [OpenID Connect (OIDC)](https://openid.net/connect/) issuer that supports connecting a single
+"upstream" identity provider to many "downstream" cluster clients.

This guide shows you how to configure the Supervisor so that users can authenticate to their Kubernetes
cluster using their GitLab credentials.
@@ -17,7 +18,7 @@ cluster using their GitLab credentials.
## Prerequisites

This how-to guide assumes that you have already [installed the Pinniped Supervisor]({{< ref "install-supervisor" >}}) with working ingress,
-and that you have [configured a `FederationDomain` to issue tokens for your downstream clusters]({{< ref "configure-supervisor" >}}).
+and that you have [configured a FederationDomain to issue tokens for your downstream clusters]({{< ref "configure-supervisor" >}}).

## Configure your GitLab Application

@@ -137,4 +138,4 @@ spec:

## Next Steps

-Now that you have configured the Supervisor to use GitLab, you may want to [configure the Concierge to validate JWTs issued by the Supervisor]({{< ref "configure-concierge-jwt" >}}).
+Now that you have configured the Supervisor to use GitLab, you will want to [configure the Concierge to validate JWTs issued by the Supervisor]({{< ref "configure-concierge-supervisor-jwt" >}}).

@@ -0,0 +1,158 @@
---
title: Configure the Pinniped Supervisor to use JumpCloud as an LDAP Provider
description: Set up the Pinniped Supervisor to use JumpCloud LDAP
cascade:
  layout: docs
menu:
  docs:
    name: Configure Supervisor With JumpCloud LDAP
    weight: 110
    parent: howtos
---
The Supervisor is an [OpenID Connect (OIDC)](https://openid.net/connect/) issuer that supports connecting a single
"upstream" identity provider to many "downstream" cluster clients.

[JumpCloud](https://jumpcloud.com) is a cloud-based service which bills itself as
"a comprehensive and flexible cloud directory platform". It includes the capability to act as an LDAP identity provider.

This guide shows you how to configure the Supervisor so that users can authenticate to their Kubernetes
cluster using their identity from JumpCloud's LDAP service.

## Prerequisites

This how-to guide assumes that you have already [installed the Pinniped Supervisor]({{< ref "install-supervisor" >}}) with working ingress,
and that you have [configured a FederationDomain to issue tokens for your downstream clusters]({{< ref "configure-supervisor" >}}).

## Configure Your JumpCloud Account

If you don't already have a JumpCloud account, you can create one for free with up to 10 users in the account.

You will need to create two types of users in your JumpCloud account using the JumpCloud console UI:

1. Users who can use `kubectl` to authenticate into the cluster

   You may want to specify passwords for these users at the time of creation, unless you prefer to use JumpCloud's email invitation feature.
   Make sure these users are part of the LDAP Directory in which the LDAP searches will occur by checking the option
   to add the directory for the user in the JumpCloud console under the User->Directory tab.

2. An LDAP service account to be used by the Pinniped Supervisor to perform LDAP searches and binds

   Specify a password for this user at the time of creation.
   Also click the "Enable as LDAP Bind DN" option for this user.

Here are some good resources to review while setting up and using JumpCloud's LDAP service:

1. [Using JumpCloud's LDAP-as-a-Service](https://support.jumpcloud.com/support/s/article/using-jumpclouds-ldap-as-a-service1)
2. [Filtering by User or Group in LDAP](https://support.jumpcloud.com/support/s/article/filtering-by-user-or-group-in-ldap-search-filters1?topicId=0TO1M000000EUx3WAG&topicName=LDAP-as-a-Service)

## Configure the Supervisor cluster

Create an [LDAPIdentityProvider](https://github.com/vmware-tanzu/pinniped/blob/main/generated/1.20/README.adoc#ldapidentityprovider) in the same namespace as the Supervisor.

For example, this LDAPIdentityProvider configures the LDAP entry's `uid` as the Kubernetes username,
and the `cn` (common name) of each group to which the user belongs as the Kubernetes group names.

```yaml
apiVersion: idp.supervisor.pinniped.dev/v1alpha1
kind: LDAPIdentityProvider
metadata:
  name: jumpcloudldap
  namespace: pinniped-supervisor
spec:

  # Specify the host of the LDAP server.
  host: "ldap.jumpcloud.com:636"

  # Specify how to search for the username when an end-user tries to log in
  # using their username and password.
  userSearch:

    # Specify the root of the user search.
    # You can get YOUR_ORG_ID from:
    # https://console.jumpcloud.com LDAP->Name->Details section.
    base: "ou=Users,o=YOUR_ORG_ID,dc=jumpcloud,dc=com"

    # Specify how to filter the search to find the specific user by username.
    # "{}" will be replaced by the username that the end-user had typed
    # when they tried to log in.
    filter: "&(objectClass=inetOrgPerson)(uid={})"

    # Specify which fields from the user entry should be used upon
    # successful login.
    attributes:

      # Specifies the name of the attribute in the LDAP entry whose
      # value shall become the username of the user after a successful
      # authentication.
      username: "uid"

      # Specifies the name of the attribute in the LDAP entry whose
      # value shall be used to uniquely identify the user within this
      # LDAP provider after a successful authentication.
      uid: "uidNumber"

  # Specify how to search for the group membership of an end-user during login.
  groupSearch:

    # Specify the root of the group search. This may be a different subtree of
    # the LDAP database compared to the user search, but in this case users
    # and groups are mixed together in the LDAP database.
    # You can get YOUR_ORG_ID from:
    # https://console.jumpcloud.com LDAP->Name->Details section.
    base: "ou=Users,o=YOUR_ORG_ID,dc=jumpcloud,dc=com"

    # Specify the search filter which should be applied when searching for
    # groups for a user. "{}" will be replaced by the dn (distinguished
    # name) of the user entry found as a result of the user search.
    filter: "&(objectClass=groupOfNames)(member={})"

    # Specify which fields from each group entry should be used upon
    # successful login.
    attributes:

      # Specify the name of the attribute in the LDAP entries whose value
      # shall become a group name in the user’s list of groups after a
      # successful authentication.
      groupName: "cn"

  # Specify the name of the Kubernetes Secret that contains your JumpCloud
  # bind account credentials. This service account will be used by the
  # Supervisor to perform user and group searches on the LDAP server.
  bind:
    secretName: "jumpcloudldap-bind-account"

---

apiVersion: v1
kind: Secret
metadata:
  name: jumpcloudldap-bind-account
  namespace: pinniped-supervisor
type: kubernetes.io/basic-auth
stringData:

  # The dn (distinguished name) of your JumpCloud bind account.
  # This dn can be found in the
  # https://console.jumpcloud.com Users->Details section.
  username: "uid=YOUR_USERNAME,ou=Users,o=YOUR_ORG_ID,dc=jumpcloud,dc=com"

  # The password of your JumpCloud bind account.
  password: "YOUR_PASSWORD"
```

If you've saved this into a file `jumpcloud.yaml`, then install it into your cluster using:

```sh
kubectl apply -f jumpcloud.yaml
```

Once your LDAPIdentityProvider has been created, you can validate your configuration by running:

```sh
kubectl describe LDAPIdentityProvider -n pinniped-supervisor jumpcloudldap
```

Look at the `status` field. If it was configured correctly, you should see `phase: Ready`.
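
As an alternative quick check (a sketch using the same resource), you can print just the phase from the status:

```sh
kubectl get ldapidentityprovider jumpcloudldap -n pinniped-supervisor \
  -o jsonpath='{.status.phase}'
```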

## Next Steps

Now that you have configured the Supervisor to use JumpCloud LDAP, you will want to [configure the Concierge to validate JWTs issued by the Supervisor]({{< ref "configure-concierge-supervisor-jwt" >}}).
Then you'll be able to log into those clusters as any of the users from the JumpCloud directory.

@@ -5,11 +5,12 @@ cascade:
  layout: docs
menu:
  docs:
-    name: Configure Supervisor With Okta
-    weight: 35
+    name: Configure Supervisor With Okta OIDC
+    weight: 80
    parent: howtos
---
-The Supervisor is an [OpenID Connect (OIDC)](https://openid.net/connect/) issuer that supports connecting a single "upstream" OIDC identity provider to many "downstream" cluster clients.
+The Supervisor is an [OpenID Connect (OIDC)](https://openid.net/connect/) issuer that supports connecting a single
+"upstream" identity provider to many "downstream" cluster clients.

This guide shows you how to configure the Supervisor so that users can authenticate to their Kubernetes
cluster using their Okta credentials.
@@ -17,7 +18,7 @@ cluster using their Okta credentials.
## Prerequisites

This how-to guide assumes that you have already [installed the Pinniped Supervisor]({{< ref "install-supervisor" >}}) with working ingress,
-and that you have [configured a `FederationDomain` to issue tokens for your downstream clusters]({{< ref "configure-supervisor" >}}).
+and that you have [configured a FederationDomain to issue tokens for your downstream clusters]({{< ref "configure-supervisor" >}}).

## Create an Okta Application

@@ -107,4 +108,4 @@ Look at the `status` field. If it was configured correctly, you should see `phas

## Next steps

-Now that you have configured the Supervisor to use Okta, you may want to [configure the Concierge to validate JWTs issued by the Supervisor]({{< ref "configure-concierge-jwt" >}}).
+Now that you have configured the Supervisor to use Okta, you will want to [configure the Concierge to validate JWTs issued by the Supervisor]({{< ref "configure-concierge-supervisor-jwt" >}}).

site/content/docs/howto/configure-supervisor-with-openldap.md (new file, 298 lines)
@@ -0,0 +1,298 @@
---
title: Configure the Pinniped Supervisor to use OpenLDAP as an LDAP Provider
description: Set up the Pinniped Supervisor to use OpenLDAP login.
cascade:
  layout: docs
menu:
  docs:
    name: Configure Supervisor With OpenLDAP
    weight: 100
    parent: howtos
---
The Supervisor is an [OpenID Connect (OIDC)](https://openid.net/connect/) issuer that supports connecting a single
"upstream" identity provider to many "downstream" cluster clients.

[OpenLDAP](https://www.openldap.org) is a popular open source LDAP server for Linux/UNIX.

This guide shows you how to configure the Supervisor so that users can authenticate to their Kubernetes
cluster using their identity from an OpenLDAP server.

## Prerequisites

This how-to guide assumes that you have already [installed the Pinniped Supervisor]({{< ref "install-supervisor" >}}) with working ingress,
and that you have [configured a FederationDomain to issue tokens for your downstream clusters]({{< ref "configure-supervisor" >}}).

## An Example of Deploying OpenLDAP on Kubernetes

*Note: If you already have an OpenLDAP server installed and configured, please skip to the next section to configure the Supervisor.*

There are many ways to configure and deploy OpenLDAP. In this section we document a simple way to stand up an OpenLDAP
server that is only appropriate for a demo or testing environment.
**Following the steps below to deploy and configure OpenLDAP is not appropriate for production use.**
If you are interested in using OpenLDAP in a production setting, there are many other configuration and deployment
guides available elsewhere online which would be more appropriate.

We will use [Bitnami's OpenLDAP container image](https://www.openldap.org) deployed on Kubernetes as a Deployment
in the same cluster as the Supervisor. We will enable TLS and create some test user accounts on the OpenLDAP server.

First we'll need to create TLS serving certs for our OpenLDAP server. In this example, we'll use the `cfssl` CLI tool,
but they could also be created with other tools (e.g. `openssl` or `step`).

```sh
cfssl print-defaults config > /tmp/cfssl-default.json

echo '{"CN": "Pinniped Test","hosts": [],"key": {"algo": "ecdsa","size": 256},"names": [{}]}' > /tmp/csr.json

cfssl genkey \
  -config /tmp/cfssl-default.json \
  -initca /tmp/csr.json \
  | cfssljson -bare ca

cfssl gencert \
  -ca ca.pem -ca-key ca-key.pem \
  -config /tmp/cfssl-default.json \
  -profile www \
  -cn "ldap.openldap.svc.cluster.local" \
  -hostname "ldap.openldap.svc.cluster.local" \
  /tmp/csr.json \
  | cfssljson -bare ldap
```

The above commands will create the following files in your current working directory:
`ca-key.pem`, `ca.csr`, `ca.pem`, `ldap-key.pem`, `ldap.csr`, and `ldap.pem`.
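
If you'd like to double-check the serving certificate before using it, one quick way (a sketch using `openssl`, one of the alternative tools mentioned above) is to print its subject alternative names:

```sh
# The output should include DNS:ldap.openldap.svc.cluster.local.
openssl x509 -in ldap.pem -noout -text | grep -A1 "Subject Alternative Name"
```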

Next, create a namespace for the OpenLDAP deployment.

```sh
kubectl create namespace openldap
```

Next, load some of those certificate files into a Kubernetes Secret in the new namespace,
so they can be available to the Deployment in the following step.

```sh
kubectl create secret generic -n openldap certs \
  --from-file=ldap.pem --from-file=ldap-key.pem --from-file=ca.pem
```

Finally, create this Deployment for the OpenLDAP server. Also create a Service to expose the OpenLDAP
server within the cluster on the service network so the Supervisor can connect to it.

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ldap
  namespace: openldap
  labels:
    app: ldap
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ldap
  template:
    metadata:
      labels:
        app: ldap
    spec:
      containers:
        - name: ldap
          image: docker.io/bitnami/openldap
          imagePullPolicy: Always
          ports:
            - name: ldap
              containerPort: 1389
            - name: ldaps
              containerPort: 1636
          resources:
            requests:
              cpu: "100m"
              memory: "64Mi"
          readinessProbe:
            tcpSocket:
              port: ldap
            initialDelaySeconds: 2
            timeoutSeconds: 90
            periodSeconds: 2
            failureThreshold: 9
          env:
            - name: BITNAMI_DEBUG
              value: "true"
            - name: LDAP_ADMIN_USERNAME
              value: "admin"
            - name: LDAP_ADMIN_PASSWORD
              # Rather than hard-coding passwords, please consider
              # using a Secret with a random password!
              # We are hard-coding the password to keep this example
              # as simple as possible.
              value: "admin123"
            - name: LDAP_ROOT
              value: "dc=pinniped,dc=dev"
            - name: LDAP_USER_DC
              value: "users"
            - name: LDAP_USERS
              value: "pinny,wally"
            - name: LDAP_PASSWORDS
              # Rather than hard-coding passwords, please consider
              # using a Secret with random passwords!
              # We are hard-coding the passwords to keep this example
              # as simple as possible.
              value: "pinny123,wally123"
            - name: LDAP_GROUP
              value: "users"
            - name: LDAP_ENABLE_TLS
              value: "yes"
            - name: LDAP_TLS_CERT_FILE
              value: "/var/certs/ldap.pem"
            - name: LDAP_TLS_KEY_FILE
              value: "/var/certs/ldap-key.pem"
            - name: LDAP_TLS_CA_FILE
              value: "/var/certs/ca.pem"
          volumeMounts:
            - name: certs
              mountPath: /var/certs
              readOnly: true
      volumes:
        - name: certs
          secret:
            secretName: certs
---
apiVersion: v1
kind: Service
metadata:
  name: ldap
  namespace: openldap
  labels:
    app: ldap
spec:
  type: ClusterIP
  selector:
    app: ldap
  ports:
    - protocol: TCP
      port: 636
      targetPort: 1636
      name: ldaps
```

If you've saved this into a file `openldap.yaml`, then install it into your cluster using:

```sh
kubectl apply -f openldap.yaml
```
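
Before moving on, you can optionally sanity-check the deployment by running an LDAP search from inside the cluster. This is only a sketch (it is not a required step, the pod name is arbitrary, and `LDAPTLS_REQCERT=never` skips TLS verification for this quick test only):

```sh
# Query the test users created by the Deployment above, using the admin
# credentials from the example manifest.
kubectl run ldap-test -n openldap --rm -it --restart=Never \
  --image=docker.io/bitnami/openldap \
  --env="LDAPTLS_REQCERT=never" \
  --command -- ldapsearch -H ldaps://ldap.openldap.svc.cluster.local \
  -D "cn=admin,dc=pinniped,dc=dev" -w admin123 \
  -b "ou=users,dc=pinniped,dc=dev" "(objectClass=inetOrgPerson)" uid
```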

## Configure the Supervisor cluster

Create an [LDAPIdentityProvider](https://github.com/vmware-tanzu/pinniped/blob/main/generated/1.20/README.adoc#ldapidentityprovider) in the same namespace as the Supervisor.

For example, this LDAPIdentityProvider configures the LDAP entry's `uid` as the Kubernetes username,
and the `cn` (common name) of each group to which the user belongs as the Kubernetes group names.

The specific values in this example are appropriate for the OpenLDAP server deployed by the previous section's steps,
but the values could be customized for your pre-existing LDAP server if you skipped the previous section.
We'll use the CA created in the steps above to trust the TLS certificates of the OpenLDAP server.

```sh
cat <<EOF | kubectl apply -n pinniped-supervisor -f -
apiVersion: idp.supervisor.pinniped.dev/v1alpha1
kind: LDAPIdentityProvider
metadata:
  name: openldap
spec:

  # Specify the host of the LDAP server.
  host: "ldap.openldap.svc.cluster.local"

  # Specify the CA certificate of the LDAP server as a
  # base64-encoded PEM bundle.
  tls:
    certificateAuthorityData: $(cat ca.pem | base64)

  # Specify how to search for the username when an end-user tries to log in
  # using their username and password.
  userSearch:

    # Specify the root of the user search.
    base: "ou=users,dc=pinniped,dc=dev"

    # Specify how to filter the search to find the specific user by username.
    # "{}" will be replaced by the username that the end-user had typed
    # when they tried to log in.
    filter: "&(objectClass=inetOrgPerson)(uid={})"

    # Specify which fields from the user entry should be used upon
    # successful login.
    attributes:

      # Specifies the name of the attribute in the LDAP entry whose
      # value shall become the username of the user after a successful
      # authentication.
      username: "uid"

      # Specifies the name of the attribute in the LDAP entry whose
      # value shall be used to uniquely identify the user within this
      # LDAP provider after a successful authentication.
      uid: "uidNumber"

  # Specify how to search for the group membership of an end-user during login.
  groupSearch:

    # Specify the root of the group search. This may be a different subtree of
    # the LDAP database compared to the user search, but in this case users
    # and groups are mixed together in the LDAP database.
    base: "ou=users,dc=pinniped,dc=dev"

    # Specify the search filter which should be applied when searching for
    # groups for a user. "{}" will be replaced by the dn (distinguished
    # name) of the user entry found as a result of the user search.
    filter: "&(objectClass=groupOfNames)(member={})"

    # Specify which fields from each group entry should be used upon
    # successful login.
    attributes:

      # Specify the name of the attribute in the LDAP entries whose value
      # shall become a group name in the user’s list of groups after a
      # successful authentication.
      groupName: "cn"

  # Specify the name of the Kubernetes Secret that contains your OpenLDAP
  # bind account credentials. This service account will be used by the
  # Supervisor to perform user and group searches on the LDAP server.
  bind:
    secretName: openldap-bind-account

---

apiVersion: v1
kind: Secret
metadata:
  name: openldap-bind-account
type: kubernetes.io/basic-auth
stringData:

  # The dn (distinguished name) of your OpenLDAP bind account. To keep
  # this example simple, we will use the OpenLDAP server's admin account
  # credentials, but best practice would be for this account to be a
  # read-only account with least privileges!
  username: "cn=admin,dc=pinniped,dc=dev"

  # The password of your OpenLDAP bind account.
  password: "admin123"
EOF
```

Once your LDAPIdentityProvider has been created, you can validate your configuration by running:

```sh
kubectl describe LDAPIdentityProvider -n pinniped-supervisor openldap
```

Look at the `status` field. If it was configured correctly, you should see `phase: Ready`.

## Next Steps

Now that you have configured the Supervisor to use OpenLDAP, you will want to [configure the Concierge to validate JWTs issued by the Supervisor]({{< ref "configure-concierge-supervisor-jwt" >}}).
Then you'll be able to log into those clusters as any of the users from the OpenLDAP directory.

@@ -5,11 +5,12 @@ cascade:
  layout: docs
menu:
  docs:
-    name: Configure Supervisor
-    weight: 35
+    name: Configure Supervisor as an OIDC Issuer
+    weight: 70
    parent: howtos
---
-The Supervisor is an [OpenID Connect (OIDC)](https://openid.net/connect/) issuer that supports connecting a single "upstream" OIDC identity provider to many "downstream" cluster clients.
+The Supervisor is an [OpenID Connect (OIDC)](https://openid.net/connect/) issuer that supports connecting a single
+"upstream" identity provider to many "downstream" cluster clients.

This guide shows you how to use this capability to issue [JSON Web Tokens (JWTs)](https://tools.ietf.org/html/rfc7519) that can be validated by the [Pinniped Concierge]({{< ref "configure-concierge-jwt" >}}).

@@ -109,7 +110,7 @@ spec:

### Configuring the Supervisor to act as an OIDC provider

-The Supervisor can be configured as an OIDC provider by creating `FederationDomain` resources
+The Supervisor can be configured as an OIDC provider by creating FederationDomain resources
in the same namespace where the Supervisor app was installed. For example:

```yaml
@@ -130,6 +131,9 @@ spec:
    secretName: my-tls-cert-secret
```

+You can create multiple FederationDomains as long as each has a unique issuer string.
+Each FederationDomain can be used to provide access to a set of Kubernetes clusters for a set of user identities.
+
#### Configuring TLS for the Supervisor OIDC endpoints

If you have terminated TLS outside the app, for example using an Ingress with TLS certificates, then you do not need to

@@ -6,7 +6,7 @@ cascade:
menu:
  docs:
    name: Install Supervisor
-    weight: 30
+    weight: 60
    parent: howtos
---
This guide shows you how to install the Pinniped Supervisor, which allows seamless login across one or many Kubernetes clusters.
@@ -26,7 +26,7 @@ You should have a supported Kubernetes cluster with working HTTPS ingress capabi
1. Install the Supervisor into the `pinniped-supervisor` namespace with default options:

   - `kubectl apply -f https://get.pinniped.dev/v0.8.0/install-pinniped-supervisor.yaml`

   *Replace v0.8.0 with your preferred version number.*

## With custom options
@@ -57,3 +57,7 @@ Pinniped uses [ytt](https://carvel.dev/ytt/) from [Carvel](https://carvel.dev/)
- *If you're using [`kapp` from Carvel](https://carvel.dev/kapp/):*

  `ytt --file . | kapp deploy --yes --app pinniped-supervisor --diff-changes --file -`
+
+## Next Steps
+
+Now that you have installed the Supervisor, you will want to [configure the Supervisor]({{< ref "configure-supervisor" >}}).

site/content/posts/2021-05-31-first-ldap-release.md (new file, 150 lines)
@@ -0,0 +1,150 @@
---
title: "Pinniped v0.9.0: Bring Your LDAP Identities to Your Kubernetes Clusters"
slug: bringing-ldap-identities-to-clusters
date: 2021-05-31
author: Ryan Richard
image: https://cdn.pixabay.com/photo/2018/08/05/15/06/seal-3585727_1280.jpg
excerpt: "With the release of v0.9.0, Pinniped now supports using LDAP identities to log in to Kubernetes clusters."
tags: ['Ryan Richard', 'release']
---

![seal swimming](https://cdn.pixabay.com/photo/2018/08/05/15/06/seal-3585727_1280.jpg)
*Photo from [matos11 on Pixabay](https://pixabay.com/photos/seal-animal-water-hairy-3585727/)*

Pinniped is a “batteries included” authentication system for Kubernetes clusters.
With the release of v0.9.0, Pinniped now supports using LDAP identities to log in to Kubernetes clusters.

This post describes how v0.9.0 fits into Pinniped’s quest to bring a smooth, unified login experience to all Kubernetes clusters.

## Support for LDAP Identities in the Pinniped Supervisor

Pinniped is made up of three main components:
- The Pinniped [_Concierge_]({{< ref "docs/howto/install-concierge.md" >}}) component implements cluster-level authentication.
- The Pinniped [_Supervisor_]({{< ref "docs/howto/install-supervisor.md" >}}) component implements authentication federation
across lots of clusters, which each run the Concierge, and makes it easy to bring your own identities using any OIDC or LDAP provider.
- The `pinniped` [_CLI_]({{< ref "docs/howto/install-cli.md" >}}) acts as an authentication plugin to `kubectl`.

The new LDAP support lives in the Supervisor component, along with enhancements to the CLI.

### Why LDAP? And why now?

From the start, the Pinniped Supervisor has supported getting your identities from OIDC providers. This was a strategic
decision for the project, and was made for three reasons:

1. OIDC is an established standard with good security properties
2. Many modern identity systems commonly used by enterprises implement OIDC, making it immediately useful for many Pinniped users
3. Other open source projects, such as [Dex](https://dexidp.io) and [UAA](https://github.com/cloudfoundry/uaa),
can act as a shim between OIDC and many other identity systems, and can provide a bridge between Pinniped and LDAP

This strategy has served us well for the initial launch of Pinniped to make it maximally useful for a minimal amount of code.

Although LDAP is a legacy identity protocol, and it is likely that nobody loves LDAP, the reality seems to be that a lot of enterprises keep using it anyway.
Luckily, these other technologies could bridge LDAP into earlier versions of Pinniped for us.

At this point you may be asking yourself: since other systems can be used as a shim between Pinniped and an LDAP provider,
why would Pinniped ever need to provide direct support for LDAP providers? Good question. One of our goals is to make Kubernetes
authentication as flexible and easy to use as possible. While some of the available identity shims are feature-rich technologies, they
are not necessarily easy to configure. Also, their deployment, initial configuration, and day-two reconfiguration are not necessarily
accomplished in a Kubernetes-native style using K8s APIs.

We felt it was worth the effort of building native LDAP support in order to reduce the number of moving parts in your
authentication system and to simplify the configuration of integrating your LDAP identity providers with Pinniped.
Although we contemplated including this feature from the beginning, we waited until we had other higher priority
features in place before prioritizing this effort.

### What about Active Directory's LDAP?

This release includes support for generic LDAP providers. When configured correctly for your provider,
it should work with any LDAP provider.

We recognize that legacy Active Directory systems are probably one of the most popular LDAP providers.

However, for this first release we have not specifically tested with Microsoft Active Directory.
Our generic LDAP implementation should work with Active Directory too.
We intend to add features in future releases to make it more convenient to integrate with Microsoft Active Directory
as an LDAP provider, and to include AD in our automated testing suite. Stay tuned.

In the meantime, please let us know if you run into any issues or concerns using your LDAP system.
Feel free to ask questions via [#pinniped](https://kubernetes.slack.com/archives/C01BW364RJA) on Kubernetes Slack,
or [create an issue](https://github.com/vmware-tanzu/pinniped/issues/new/choose) on our GitHub repository.

### Security Considerations

LDAP is inherently less secure than OIDC in one important way. In an OIDC login flow, your account credentials are only
handled by your web browser, which you generally trust, and by the OIDC provider itself. The Pinniped CLI and Pinniped
server-side components never handle your credentials. Unfortunately, LDAP does not work that way. LDAP authentication
requires that the client send the user's password on behalf of the user. This means that the Pinniped CLI and the
Pinniped Supervisor both see your LDAP password. If you have the choice between using an OIDC provider or an LDAP
provider as your source of identity, then you might want to lean toward the OIDC provider for this reason.

We've taken care to always use TLS encrypted communication channels
between the CLI and the Supervisor and between the Supervisor and the LDAP provider. We've also taken care to never
log your password or write it to any storage. The Supervisor is already a privileged component in your chain of trust
in the sense that if it were compromised by a bad actor, all of your clusters which are trusting it to provide authentication
would therefore also become vulnerable to intrusion. While in an ideal world we would prefer that no components handled
your LDAP password, at least the credential is only handled by components which are already assumed to be trusted.

Other clusters running the Concierge will never see your LDAP password. The Supervisor authenticates your users with
the LDAP provider, and then the Supervisor issues unique, short-lived, per-cluster tokens. These are the only credentials
transmitted to the clusters running the Concierge for authentication. Each token is only accepted by its target cluster,
so a token somehow stolen from one cluster has no value on other clusters. This limits the impact of a compromise on one
of those clusters.

You might notice that we have not implemented an API to configure LDAP as an identity provider directly in the Concierge
component, without requiring use of the Supervisor component. We may add this in the future, although it would be less secure
for the reasons described above. The reason that we would consider adding it would be for use cases where you are configuring
authentication only for one or a very small number of clusters, and you don't feel like incurring the overhead of running
a Supervisor such as configuring ingress, TLS certs, and usually a DNS entry. (Interested in having this feature? Reach out and
let us know!) Having the Concierge directly talk to the LDAP provider would imply that users would be handing their LDAP
passwords directly to the Concierge. If a bad actor were able to compromise that cluster as an admin-level user, then
they might interfere with the Concierge software on that cluster to find a way to see your password. Once they have your
password they could access other clusters, and even other unrelated systems which are also using LDAP authentication.
As a design consideration in Pinniped, we generally consider clusters to be untrustworthy to reduce the impact of a successful
attack on a cluster.

As an aside, this is a good time to remind you that whether you use OIDC or LDAP identity providers, it is important to
keep the Supervisor secure. We recommend running the Supervisor on a separate cluster, or a cluster that you use only to run other
similar security-sensitive components, which is appropriately secured and accessible to as few users as possible.
It is also important to ensure that your users are installing the authentic versions of the `kubectl` and `pinniped` CLI tools.
And it is important that your users are using authentic kubeconfig files handed out by a trusted source.

### How to use LDAP with your Pinniped Supervisor

Once you have [installed]({{< ref "docs/howto/install-supervisor.md" >}})
and [configured]({{< ref "docs/howto/configure-supervisor.md" >}}) the Supervisor, adding an LDAP provider is as easy as creating
an [LDAPIdentityProvider](https://github.com/vmware-tanzu/pinniped/blob/main/generated/1.20/README.adoc#ldapidentityprovider) resource.

We've provided examples of using [OpenLDAP]({{< ref "docs/howto/install-supervisor.md" >}})
and [JumpCloud]({{< ref "docs/howto/install-supervisor.md" >}}) as LDAP providers.
Stay tuned for examples of using Active Directory.

The `pinniped` CLI has also been enhanced to support LDAP authentication. Now when `pinniped get kubeconfig` sees
that your cluster's Concierge is configured to use a Supervisor which has an LDAPIdentityProvider, it
will emit the appropriate kubeconfig to enable LDAP logins. When that kubeconfig is used with `kubectl`,
the Pinniped plugin will directly prompt the user on the CLI for their LDAP username and password and
securely transmit them to the Supervisor for authentication.
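
For example, the flow described above looks roughly like this from a user's point of view (the file name is just a placeholder):

```sh
# An admin generates the cluster's kubeconfig and shares it with users.
pinniped get kubeconfig > ldap-cluster-kubeconfig.yaml

# Users run kubectl as usual; the Pinniped plugin prompts for their
# LDAP username and password and caches the resulting session.
kubectl --kubeconfig ldap-cluster-kubeconfig.yaml get namespaces
```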

### What about SAML?

Now that we support OIDC and LDAP identity providers, the obvious next question is whether we should also support the third
big enterprise authentication protocol: SAML.

We are currently undecided about the value of offering direct support for SAML. The protocol is complex and
[difficult to implement without mistakes or vulnerabilities in dependencies](https://github.com/dexidp/dex/discussions/1884).
Additionally, SAML seems to be waning in popularity in favor of OIDC, which provides a similar end-user experience.

What do you think? Do you still use SAML in your enterprise?
Do you need SAML for authentication into your Kubernetes clusters? Let us know!

### We'd love to hear from you!

We thrive on community feedback. Did you try our new LDAP features?
What else do you need from identity systems for your Kubernetes clusters?

Find us in [#pinniped](https://kubernetes.slack.com/archives/C01BW364RJA) on Kubernetes Slack,
[create an issue](https://github.com/vmware-tanzu/pinniped/issues/new/choose) on our GitHub repository,
or start a [Discussion](https://github.com/vmware-tanzu/pinniped/discussions).

Thanks for reading our announcement!

{{< community >}}