Merge remote-tracking branch 'upstream/main' into token-endpoint

Andrew Keesler 2020-12-04 08:58:18 -05:00
commit 2dc3ab1840
No known key found for this signature in database
GPG Key ID: 27CE0444346F9413
23 changed files with 49 additions and 849 deletions

View File

@@ -8,7 +8,7 @@ Please see the [Code of Conduct](./CODE_OF_CONDUCT.md).
## Project Scope
Learn about the [scope](doc/scope.md) of the project.
Learn about the [scope](https://pinniped.dev/docs/scope/) of the project.
## Meeting with the Maintainers

View File

@@ -1,4 +1,4 @@
<img src="doc/img/pinniped_logo.svg" alt="Pinniped Logo" width="100%"/>
<img src="site/content/docs/img/pinniped_logo.svg" alt="Pinniped Logo" width="100%"/>
## Overview
@@ -28,13 +28,13 @@ credential for a short-lived, cluster-specific credential. Pinniped supports var
IDP types and implements different integration strategies for various Kubernetes
distributions to make authentication possible.
To learn more, see [doc/architecture.md](doc/architecture.md).
To learn more, see [architecture](https://pinniped.dev/docs/architecture/).
<img src="doc/img/pinniped_architecture.svg" alt="Pinniped Architecture Sketch" width="300px"/>
<img src="site/content/docs/img/pinniped_architecture.svg" alt="Pinniped Architecture Sketch" width="300px"/>
## Trying Pinniped
Care to kick the tires? It's easy to [install and try Pinniped](doc/demo.md).
Care to kick the tires? It's easy to [install and try Pinniped](https://pinniped.dev/docs/demo/).
## Discussion

View File

@@ -79,7 +79,7 @@ kubectl get secret local-user-authenticator-tls-serving-certificate --namespace
When installing Pinniped on the same cluster, configure local-user-authenticator as an Identity Provider for Pinniped
using the webhook URL `https://local-user-authenticator.local-user-authenticator.svc/authenticate`
along with the CA bundle fetched by the above command. See [doc/demo.md](../../doc/demo.md) for an example.
along with the CA bundle fetched by the above command. See [demo](https://pinniped.dev/docs/demo/) for an example.
## Optional: Manually Testing the Webhook Endpoint After Installing

View File

@@ -1,75 +0,0 @@
# Architecture
The principal purpose of Pinniped is to allow users to access Kubernetes
clusters. Pinniped hopes to enable this access across a wide range of Kubernetes
environments with zero configuration.
This integration is implemented using a credential exchange API that takes a credential
from the external IDP as input and returns a credential understood by the host
Kubernetes cluster.
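As a rough sketch of what that exchange looks like on the wire (a hedged illustration: the API group below comes from the sequence diagram at the end of this document, but the version, kind, and field names are assumptions, not the exact API):
```bash
# The credential exchange endpoint is served as a Kubernetes aggregated API,
# so it can be called like any other Kubernetes resource.
cat <<EOF | kubectl create -o yaml -f -
apiVersion: pinniped.dev/v1alpha1  # group per the sequence diagram; version assumed
kind: CredentialRequest            # kind and field names assumed for illustration
spec:
  token:
    value: "${UPSTREAM_IDP_TOKEN}" # the credential from the external IDP
EOF
# The returned object's status carries the short-lived, cluster-specific credential.
```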
<img src="img/pinniped_architecture.svg" alt="Pinniped Architecture Sketch" width="300px"/>
Pinniped supports various IDP types and implements different integration strategies
for various Kubernetes distributions to make authentication possible.
## Supported Kubernetes Cluster Types
Pinniped supports the following types of Kubernetes clusters:
- Clusters where the Kube Controller Manager pod is accessible from Pinniped's pods.
Support for other types of Kubernetes distributions is coming soon.
## External Identity Provider Integrations
Pinniped will consume identity from one or more external identity providers
(IDPs). Administrators will configure external IDPs via Kubernetes custom
resources, allowing Pinniped to be managed using GitOps and standard Kubernetes tools.
Pinniped supports the following external IDP types.
1. Any webhook which implements the
[Kubernetes TokenReview API](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication).
In addition to allowing the integration of any existing IDP which implements this API, webhooks also
serve as an extension point for Pinniped by allowing for integration of arbitrary custom authenticators.
While a custom implementation may be in any language or framework, this project provides a
sample implementation in Golang. See the `ServeHTTP` method of
[cmd/local-user-authenticator/main.go](../cmd/local-user-authenticator/main.go).
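For a concrete feel of this contract, here is a hedged sketch of calling such a webhook directly from inside the cluster (the endpoint and token format are those of local-user-authenticator; the CA file path is an assumption):
```bash
# POST a TokenReview; the webhook replies with a TokenReview whose status says
# whether the token is valid and which user/groups it maps to.
curl --cacert /tmp/local-user-authenticator-ca.pem \
  https://local-user-authenticator.local-user-authenticator.svc/authenticate \
  -H "Content-Type: application/json" \
  -d '{
        "apiVersion": "authentication.k8s.io/v1beta1",
        "kind": "TokenReview",
        "spec": {"token": "pinny-the-seal:password123"}
      }'
```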
More IDP types are coming soon.
## Cluster Integration Strategies
Pinniped will issue a cluster credential by leveraging cluster-specific
functionality. In the near term, cluster integrations will happen via different
cluster-specific flows depending on the type of cluster. In the longer term,
Pinniped hopes to contribute and leverage upstream Kubernetes extension points that
cleanly enable this integration.
Pinniped supports the following cluster integration strategies.
1. Pinniped hosts a credential exchange API endpoint via a Kubernetes aggregated API server.
This API returns a new cluster-specific credential using the cluster's signing keypair to
issue short-lived cluster certificates. (In the future, when the Kubernetes CSR API
provides a way to issue short-lived certificates, then the Pinniped credential exchange API
will use that instead of using the cluster's signing keypair.)
More cluster integration strategies are coming soon, which will allow Pinniped to
support more Kubernetes cluster types.
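The idea behind that first strategy can be sketched with `openssl` (a minimal sketch, assuming you have copies of the cluster's signing keypair as `ca.crt`/`ca.key`): Kubernetes trusts any client certificate signed by the cluster CA, reading the username from the certificate's CN field and the groups from its O fields.
```bash
# Mint a short-lived client certificate signed by the cluster's CA.
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=pinny-the-seal/O=group1" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 1 -out client.crt  # short validity window, like Pinniped's issued certs
```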
## `kubectl` Integration
With any of the above IDPs and integration strategies, `kubectl` commands receive the
cluster-specific credential via a
[Kubernetes client-go credential plugin](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins).
Users may use the Pinniped CLI as the credential plugin, or they may use any proprietary CLI
built with the [Pinniped Go client library](../generated).
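Concretely, a generated kubeconfig points `kubectl` at the plugin via an `exec` stanza along these lines (a hedged sketch; the `exec` shape is defined by client-go, but the command arguments here are assumptions for illustration):
```bash
# For reference, print the shape of the "user" entry in a generated kubeconfig.
cat <<'EOF'
users:
- name: pinniped
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: pinniped
      args: [exchange-credential]  # assumed; the real args are written by the CLI
EOF
```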
## Example Cluster Authentication Sequence Diagram
This diagram demonstrates using `kubectl get pods` with the Pinniped CLI configured as the credential plugin,
and with a webhook IDP configured as the identity provider for the Pinniped server.
![example-cluster-authentication-sequence-diagram](img/pinniped.svg)

View File

@@ -1,198 +0,0 @@
# Trying Pinniped
## Prerequisites
1. A Kubernetes cluster of a type supported by Pinniped as described in [doc/architecture.md](../doc/architecture.md).
Don't have a cluster handy? Consider using [kind](https://kind.sigs.k8s.io/) on your local machine.
See below for an example of using kind.
1. An identity provider of a type supported by Pinniped as described in [doc/architecture.md](../doc/architecture.md).
Don't have an identity provider of a type supported by Pinniped handy? No problem, there is a demo identity provider
available. Start by installing local-user-authenticator on the same cluster where you would like to try Pinniped
by following the directions in [deploy/local-user-authenticator/README.md](../deploy/local-user-authenticator/README.md).
See below for an example of deploying this on kind.
1. A kubeconfig where the current context points to the cluster and has admin-like
privileges on that cluster.
## Overview
Installing and trying Pinniped on any cluster will consist of the following general steps. See the next section below
for a more specific example of installing onto a local kind cluster, including the exact commands to use for that case.
1. Install Pinniped. See [deploy/concierge/README.md](../deploy/concierge/README.md).
1. Download the Pinniped CLI from [Pinniped's GitHub Releases page](https://github.com/vmware-tanzu/pinniped/releases/latest).
1. Generate a kubeconfig using the Pinniped CLI. Run `pinniped get-kubeconfig --help` for more information.
1. Run `kubectl` commands using the generated kubeconfig. Pinniped will automatically be used for authentication during those commands.
## Example of Deploying on kind
[kind](https://kind.sigs.k8s.io) is a tool for creating and managing Kubernetes clusters on your local machine
which uses Docker containers as the cluster's "nodes". This is a convenient way to try out Pinniped on a local
non-production cluster.
The following steps will deploy the latest release of Pinniped on kind using the local-user-authenticator component
as the identity provider.
<!-- The following image was uploaded to GitHub's CDN using this awesome trick: https://gist.github.com/vinkla/dca76249ba6b73c5dd66a4e986df4c8d -->
<p align="center" width="100%">
<img
src="https://user-images.githubusercontent.com/25013435/95272990-b2ea9780-07f6-11eb-994d-872e3cb68457.gif"
alt="Pinniped Installation Demo"
width="80%"
/>
</p>
1. Install the tools required for the following steps.
- [Install kind](https://kind.sigs.k8s.io/docs/user/quick-start/), if not already installed, e.g. `brew install kind` on macOS.
- kind depends on Docker. If not already installed, [install Docker](https://docs.docker.com/get-docker/), e.g. `brew cask install docker` on macOS.
- This demo requires `kubectl`, which comes with Docker, or can be [installed separately](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
- This demo requires a tool capable of generating a `bcrypt` hash in order to interact with
the webhook. The example below uses `htpasswd`, which is installed on most macOS systems and can be
installed on some Linux systems via the `apache2-utils` package (e.g., `apt-get install apache2-utils`);
see the sanity-check sketch after this list.
- One of the steps below optionally uses `jq` to help find the latest release version number. It is not required.
Install `jq` if you would like, e.g. `brew install jq` on macOS.
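As a quick sanity check of the `bcrypt` tooling (the same invocation is used when creating the test user below):
```bash
htpasswd -nbBC 10 x password123
# prints: x:$2y$10$...  -- a bcrypt hash (cost 10) prefixed with the dummy
# username "x", which is stripped off with sed before the hash is stored
```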
1. Create a new Kubernetes cluster using `kind create cluster`. Optionally provide a cluster name using the `--name` flag.
kind will automatically update your kubeconfig to point to the new cluster as a user with admin-like permissions.
1. Query GitHub's API for the git tag of the latest Pinniped
[release](https://github.com/vmware-tanzu/pinniped/releases/latest).
```bash
pinniped_version=$(curl https://api.github.com/repos/vmware-tanzu/pinniped/releases/latest -s | jq .name -r)
```
Alternatively, [any release version](https://github.com/vmware-tanzu/pinniped/releases)
number can be manually selected.
```bash
# Example of manually choosing a release version...
pinniped_version=v0.2.0
```
1. Deploy the local-user-authenticator app. This is a demo identity provider. In production, you would use your
real identity provider, and therefore would not need to deploy or configure local-user-authenticator.
```bash
kubectl apply -f https://github.com/vmware-tanzu/pinniped/releases/download/$pinniped_version/install-local-user-authenticator.yaml
```
The `install-local-user-authenticator.yaml` file includes the default deployment options.
If you would prefer to customize the available options, please
see [deploy/local-user-authenticator/README.md](../deploy/local-user-authenticator/README.md)
for instructions on how to deploy using `ytt`.
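A hedged sketch of that customized flow (the data value name here is an assumption for illustration; the linked README documents the real options):
```bash
# Render the manifest with ytt overrides, then apply it.
ytt --file deploy/local-user-authenticator \
  --data-value image_tag=$pinniped_version \
  | kubectl apply -f -
```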
1. Create a test user named `pinny-the-seal` in the local-user-authenticator identity provider.
```bash
kubectl create secret generic pinny-the-seal \
  --namespace local-user-authenticator \
  --from-literal=groups=group1,group2 \
  --from-literal=passwordHash=$(htpasswd -nbBC 10 x password123 | sed -e "s/^x://")
```
1. Fetch the auto-generated CA bundle for the local-user-authenticator's HTTP TLS endpoint.
```bash
kubectl get secret local-user-authenticator-tls-serving-certificate --namespace local-user-authenticator \
  -o jsonpath={.data.caCertificate} \
  | tee /tmp/local-user-authenticator-ca-base64-encoded
```
1. Deploy the Pinniped concierge.
```bash
kubectl apply -f https://github.com/vmware-tanzu/pinniped/releases/download/$pinniped_version/install-pinniped-concierge.yaml
```
The `install-pinniped-concierge.yaml` file includes the default deployment options.
If you would prefer to customize the available options, please see [deploy/concierge/README.md](../deploy/concierge/README.md)
for instructions on how to deploy using `ytt`.
1. Create a `WebhookAuthenticator` object to configure Pinniped to authenticate using local-user-authenticator.
```bash
cat <<EOF | kubectl create --namespace pinniped-concierge -f -
apiVersion: authentication.concierge.pinniped.dev/v1alpha1
kind: WebhookAuthenticator
metadata:
  name: local-user-authenticator
spec:
  endpoint: https://local-user-authenticator.local-user-authenticator.svc/authenticate
  tls:
    certificateAuthorityData: $(cat /tmp/local-user-authenticator-ca-base64-encoded)
EOF
```
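To confirm the object was created (resource name and namespace as above):
```bash
kubectl get webhookauthenticator local-user-authenticator --namespace pinniped-concierge
```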
1. Download the latest version of the Pinniped CLI binary for your platform
from Pinniped's [latest release](https://github.com/vmware-tanzu/pinniped/releases/latest).
1. Move the Pinniped CLI binary to your preferred filename and directory. Add the executable bit,
e.g. `chmod +x /usr/local/bin/pinniped`.
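For example, on macOS/amd64 (the downloaded asset name here is an assumption; use whatever filename the release provides):
```bash
mv ~/Downloads/pinniped-cli-darwin-amd64 /usr/local/bin/pinniped
chmod +x /usr/local/bin/pinniped
```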
1. Generate a kubeconfig for the current cluster. Use `--token` to include a token which should
allow you to authenticate as the user that you created above.
```bash
pinniped get-kubeconfig \
  --pinniped-namespace pinniped-concierge \
  --token "pinny-the-seal:password123" \
  --authenticator-type webhook \
  --authenticator-name local-user-authenticator \
  > /tmp/pinniped-kubeconfig
```
If you are using macOS, you may get an error dialog that says
`“pinniped” cannot be opened because the developer cannot be verified`. Cancel this dialog, open System Preferences,
click on Security & Privacy, and click the Allow Anyway button next to the Pinniped message.
Run the above command again and another dialog will appear saying
`macOS cannot verify the developer of “pinniped”. Are you sure you want to open it?`.
Click Open to allow the command to proceed.
Note that the above command will print a warning to the screen. You can ignore this warning.
Pinniped tries to auto-discover the URL for the Kubernetes API server, but it is not able
to do so on kind clusters. The warning is just letting you know that the Pinniped CLI decided
to ignore the auto-discovery URL and instead use the URL from your existing kubeconfig.
1. Try using the generated kubeconfig to issue arbitrary `kubectl` commands as
the `pinny-the-seal` user.
```bash
kubectl --kubeconfig /tmp/pinniped-kubeconfig get pods -n pinniped-concierge
```
Because this user has no RBAC permissions on this cluster, the previous command
results in the error `Error from server (Forbidden): pods is forbidden: User "pinny-the-seal" cannot list resource "pods" in API group "" in the namespace "pinniped-concierge"`.
However, this does prove that you are authenticated and acting as the `pinny-the-seal` user.
1. As the admin user, create RBAC rules for the test user to give them permissions to perform actions on the cluster.
For example, grant the test user permission to view all cluster resources.
```bash
kubectl create clusterrolebinding pinny-can-read --clusterrole view --user pinny-the-seal
```
1. Use the generated kubeconfig to issue arbitrary `kubectl` commands as the `pinny-the-seal` user.
```bash
kubectl --kubeconfig /tmp/pinniped-kubeconfig get pods -n pinniped-concierge
```
The user has permission to list pods, so the command succeeds this time.
Pinniped has provided authentication into the cluster for your `kubectl` command! 🎉
1. Carry on issuing as many `kubectl` commands as you'd like as the `pinny-the-seal` user.
Each invocation will use Pinniped for authentication.
You may find it convenient to set the `KUBECONFIG` environment variable rather than passing `--kubeconfig` to each invocation.
```bash
export KUBECONFIG=/tmp/pinniped-kubeconfig
kubectl get namespaces
kubectl get pods -A
```
1. Profit! 💰

View File

@@ -1,12 +0,0 @@
# `doc/img` README
## How to Update these Images
- [pinniped.svg](pinniped.svg) was generated using [`plantuml`](https://plantuml.com/).
To regenerate the image, run `plantuml -tsvg pinniped.txt` from this directory.
- [pinniped_architecture.svg](pinniped_architecture.svg) was created on [draw.io](https://draw.io).
It can be opened again for editing on that site by choosing "File" -> "Open from" -> "Device".
Because it includes embedded icons it should be exported using "File" -> "Export as" -> "SVG",
with the "Transparent Background", "Embed Images", and "Include a copy of my diagram" options
checked. The icons in this diagram are from their "CAE" shapes set.

File diff suppressed because one or more lines are too long

Before: image, 43 KiB

View File

@@ -1,61 +0,0 @@
@startuml "pinniped"
!define K8S_BLUE #326CE5
!define K8S_SPRITES_URL https://raw.githubusercontent.com/michiel/plantuml-kubernetes-sprites/master/resource
!include K8S_SPRITES_URL/k8s-sprites-unlabeled-25pct.iuml
participant "User" as USER << ($pod{scale=0.30},K8S_BLUE) >> #LightGreen
participant "Kubectl" as KUBECTL << ($ing{scale=0.30},K8S_BLUE) >> #LightSteelBlue
participant "Proprietary CLI" as CLI << ($svc{scale=0.30},K8S_BLUE) >> #LightPink
participant "Pinniped" as PINNIPED << ($node{scale=0.30},K8S_BLUE) >> #LightGray
participant "TokenReview Webhook" as WEBHOOK << ($pod{scale=0.30},K8S_BLUE) >> #LightPink
participant "Kubernetes API" as API << ($node{scale=0.30},K8S_BLUE) >> #LightSteelBlue
legend
# <back:lightsalmon>Message contains upstream IDP credentials</back>
# <back:lightgreen>Message contains cluster-specific credentials</back>
end legend
USER -> KUBECTL : ""kubectl get pods""
activate KUBECTL
group Acquire cluster-specific credential
KUBECTL -> CLI : Get cluster-specific credential
activate CLI
CLI -> CLI : Retrieve upstream IDP credential in\norganization-specific way
CLI -> PINNIPED : <back:lightsalmon>""POST /apis/pinniped.dev/...""</back>
activate PINNIPED
PINNIPED -> WEBHOOK : <back:lightsalmon>""POST /authenticate""</back>
activate WEBHOOK
WEBHOOK -> PINNIPED : ""200 OK"" with user and group information
deactivate WEBHOOK
PINNIPED -> PINNIPED : Issue short-lived cluster-specific credential\nwith user and group information
PINNIPED -> CLI : <back:lightgreen>""200 OK""</back>
deactivate PINNIPED
CLI -> KUBECTL : Here is a cluster-specific credential
end
group Authenticate to cluster with cluster-specific credential
KUBECTL -> API : <back:lightgreen>""GET /api/v1/pods""</back>
activate API
API -> API : Glean user and group information from\ncluster-specific credential
API -> KUBECTL : ""200 OK"" with pods
deactivate API
deactivate KUBECTL
end
@enduml

File diff suppressed because one or more lines are too long

Before: image, 79 KiB

View File

@@ -1,68 +0,0 @@
<svg id="artwork" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 486 158"><metadata><?xpacket begin="" id="W5M0MpCehiHzreSzNTczkc9d"?>
<x:xmpmeta xmlns:x="adobe:ns:meta/" x:xmptk="Adobe XMP Core 6.0-c002 79.164352, 2020/01/30-15:50:38 ">
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
<rdf:Description rdf:about=""
xmlns:lr="http://ns.adobe.com/lightroom/1.0/"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:xmp="http://ns.adobe.com/xap/1.0/"
xmlns:xmpMM="http://ns.adobe.com/xap/1.0/mm/"
xmlns:stEvt="http://ns.adobe.com/xap/1.0/sType/ResourceEvent#">
<lr:hierarchicalSubject>
<rdf:Bag>
<rdf:li>open source identity</rdf:li>
<rdf:li>open source identity|636062</rdf:li>
<rdf:li>open source identity|Pinniped</rdf:li>
</rdf:Bag>
</lr:hierarchicalSubject>
<dc:subject>
<rdf:Bag>
<rdf:li>open source identity</rdf:li>
<rdf:li>636062</rdf:li>
<rdf:li>Pinniped</rdf:li>
</rdf:Bag>
</dc:subject>
<xmp:MetadataDate>2020-09-17T16:06:40-07:00</xmp:MetadataDate>
<xmpMM:InstanceID>xmp.iid:932334bf-97ee-471a-96c9-c4e5ff526fe4</xmpMM:InstanceID>
<xmpMM:DocumentID>xmp.did:38396587-b56b-42c3-8f3e-f8e9c91f532b</xmpMM:DocumentID>
<xmpMM:OriginalDocumentID>xmp.did:38396587-b56b-42c3-8f3e-f8e9c91f532b</xmpMM:OriginalDocumentID>
<xmpMM:History>
<rdf:Seq>
<rdf:li>
<rdf:Description>
<stEvt:action>saved</stEvt:action>
<stEvt:instanceID>xmp.iid:38396587-b56b-42c3-8f3e-f8e9c91f532b</stEvt:instanceID>
<stEvt:when>2020-09-17T16:06:35-07:00</stEvt:when>
<stEvt:softwareAgent>Adobe Bridge 2020 (Macintosh)</stEvt:softwareAgent>
<stEvt:changed>/metadata</stEvt:changed>
</rdf:Description>
</rdf:li>
<rdf:li>
<rdf:Description>
<stEvt:action>saved</stEvt:action>
<stEvt:instanceID>xmp.iid:932334bf-97ee-471a-96c9-c4e5ff526fe4</stEvt:instanceID>
<stEvt:when>2020-09-17T16:06:40-07:00</stEvt:when>
<stEvt:softwareAgent>Adobe Bridge 2020 (Macintosh)</stEvt:softwareAgent>
<stEvt:changed>/metadata</stEvt:changed>
</rdf:Description>
</rdf:li>
</rdf:Seq>
</xmpMM:History>
</rdf:Description>
</rdf:RDF>
</x:xmpmeta>
<?xpacket end="w"?></metadata>
<defs><style>.cls-1{fill:#717073;}.cls-2{fill:#727174;}.cls-3{fill:#fff;}.cls-4{fill:#78be43;}.cls-5{fill:#1abfd3;}.cls-6{fill:#79459b;}.cls-7{fill:#1e4488;}.cls-8{fill:#218fcf;}</style></defs><path class="cls-1" d="M170.52,58.21h16.74a17.43,17.43,0,0,1,7.68,1.68,13.45,13.45,0,0,1,5.49,4.71,12.54,12.54,0,0,1,0,13.62,13.45,13.45,0,0,1-5.49,4.71,17.43,17.43,0,0,1-7.68,1.68H175.2V99.43h-4.68Zm15.9,22a13.23,13.23,0,0,0,8.19-2.34,8.21,8.21,0,0,0,0-12.84,13.23,13.23,0,0,0-8.19-2.34H175.2V80.17Z"/><path class="cls-1" d="M209.82,58.21h4.68V99.43h-4.68Z"/><path class="cls-1" d="M225.78,58.21h4.68L256,91.75V58.21h4.68V99.43H256L230.46,65.89V99.43h-4.68Z"/><path class="cls-1" d="M272,58.21h4.68l25.56,33.54V58.21h4.68V99.43h-4.68L276.66,65.89V99.43H272Z"/><path class="cls-1" d="M318.18,58.21h4.68V99.43h-4.68Z"/><path class="cls-1" d="M334.14,58.21h16.74a17.46,17.46,0,0,1,7.68,1.68,13.51,13.51,0,0,1,5.49,4.71,12.54,12.54,0,0,1,0,13.62,13.51,13.51,0,0,1-5.49,4.71,17.46,17.46,0,0,1-7.68,1.68H338.82V99.43h-4.68Zm15.9,22a13.23,13.23,0,0,0,8.19-2.34,8.21,8.21,0,0,0,0-12.84A13.23,13.23,0,0,0,350,62.65H338.82V80.17Z"/><path class="cls-1" d="M378.18,62.65V76.21h22.38v4.44H378.18V95H403v4.44H373.44V58.21H403v4.44Z"/><path class="cls-1" d="M411.12,58.21H425a24.6,24.6,0,0,1,11.54,2.64,19.73,19.73,0,0,1,7.92,7.32,21.2,21.2,0,0,1,0,21.27,19.66,19.66,0,0,1-7.92,7.35A24.6,24.6,0,0,1,425,99.43H411.12Zm13.92,37a19.21,19.21,0,0,0,9.08-2.1,15.61,15.61,0,0,0,6.25-5.82,17,17,0,0,0,0-16.89,15.68,15.68,0,0,0-6.25-5.79,19.21,19.21,0,0,0-9.08-2.1h-9.25v32.7Z"/><path class="cls-2" d="M91.14,25.5A52.5,52.5,0,1,0,143.64,78,52.51,52.51,0,0,0,91.14,25.5Zm0,95.33A42.83,42.83,0,1,1,134,78,42.83,42.83,0,0,1,91.14,120.83Z"/><circle class="cls-3" cx="91.33" cy="77.84" r="45.75"/><circle class="cls-4" cx="91.16" cy="76.71" r="8"/><path class="cls-5" d="M118.92,58.45l1.66,6.89,5.12-.66-3-12.42-.15-.5v0l-11.73-5.08-1.53,5,6.48,2.8L101.26,66.65a14.14,14.14,0,0,1,2.9,4.24Z"/><path class="cls-6" d="M66.46,54.41,73,51.61l-1.53-5L59.68,51.73v0l-.15.5-3,12.42,5.13.66,1.65-6.89L78.13,70.94A14.23,14.23,0,0,1,81,66.68Z"/><path class="cls-7" d="M57.49,82.82,59.21,76l-4.87-1.8L51.23,86.56l0,0,.31.42,8,9.94,3.66-3.66-4.47-5.51,19.82-4.35A14.23,14.23,0,0,1,77,78.54Z"/><path class="cls-7" d="M128,74.17,123.11,76l1.72,6.85-19.54-4.28a14.23,14.23,0,0,1-1.56,4.89l19.81,4.35-4.46,5.51L122.73,97l8-9.94.31-.42,0,0Z"/><path class="cls-6" d="M103.35,109l-7.08-.33-.79,5.11,12.76.58h.56l8.14-9.86-4.34-2.85-4.5,5.45-8.43-19a14.36,14.36,0,0,1-4.58,2.28Z"/><path class="cls-5" d="M74.24,107.08l-4.5-5.45-4.34,2.85,8.14,9.86h.56l12.76-.58-.78-5.11L79,109l8.26-18.57a14.28,14.28,0,0,1-4.59-2.27Z"/><path class="cls-8" d="M93.78,62.7V43.84L100.12,47l2.79-4.35L91.49,37,91,36.74h0l-11.44,5.7,2.8,4.37,6.33-3.15v19a14.59,14.59,0,0,1,2.49-.23A15,15,0,0,1,93.78,62.7Z"/></svg>

Before: image, 5.4 KiB

View File

@@ -1,32 +0,0 @@
# Project Scope
The Pinniped project is guided by the following principles.
* Pinniped lets you plug any external identity providers into
Kubernetes. These integrations follow enterprise-grade security principles.
* Pinniped is easy to install and use on any Kubernetes cluster via
distribution-specific integration mechanisms.
* Pinniped uses a declarative configuration via Kubernetes APIs.
* Pinniped provides an optimal user experience when authenticating to many
clusters at one time.
* Pinniped provides enterprise-grade security posture via secure defaults and
revocable or very short-lived credentials.
* Where possible, Pinniped will contribute ideas and code to upstream
Kubernetes.
When contributing to Pinniped, please consider whether your contribution follows
these guiding principles.
## Out Of Scope
The following items are out of scope for the Pinniped project.
* Authorization.
* Standalone identity provider for general use.
* Machine-to-machine (service) identity.
* Running outside of Kubernetes.
## Roadmap
More details coming soon!
For more details on proposing features and bugs, check out our
[contributing](../CONTRIBUTING.md) doc.

View File

@@ -23,7 +23,10 @@ local_resource(
#
# Render the IDP installation manifest using ytt.
k8s_yaml(local(['ytt','--file', '../../../test/deploy/dex']))
k8s_yaml(local(['ytt',
    '--file', '../../../test/deploy/dex',
    '--data-value', 'supervisor_redirect_uri=https://pinniped-supervisor-clusterip.supervisor.svc.cluster.local/some/path/callback',
]))
# Tell tilt to watch all of those files for changes.
watch_file('../../../test/deploy/dex')

View File

@@ -184,6 +184,10 @@ if ! tilt_mode; then
log_note "Deploying Dex to the cluster..."
ytt --file . >"$manifest"
ytt --file . \
  --data-value "supervisor_redirect_uri=https://pinniped-supervisor-clusterip.supervisor.svc.cluster.local/some/path/callback" \
  >"$manifest"
kubectl apply --dry-run=client -f "$manifest" # Validate manifest schema.
kapp deploy --yes --app dex --diff-changes --file "$manifest"

View File

@@ -225,7 +225,7 @@ func addCSRFSetCookieHeader(w http.ResponseWriter, csrfValue csrftoken.CSRFToken
Name: oidc.CSRFCookieName,
Value: encodedCSRFValue,
HttpOnly: true,
SameSite: http.SameSiteStrictMode,
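// Lax (not Strict) so the CSRF cookie is still sent on the top-level redirect back from the upstream IDP.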
SameSite: http.SameSiteLaxMode,
Secure: true,
Path: "/",
})

View File

@@ -753,7 +753,7 @@ func TestAuthorizationEndpoint(t *testing.T) {
if test.wantCSRFValueInCookieHeader != "" {
require.Len(t, rsp.Header().Values("Set-Cookie"), 1)
actualCookie := rsp.Header().Get("Set-Cookie")
regex := regexp.MustCompile("__Host-pinniped-csrf=([^;]+); Path=/; HttpOnly; Secure; SameSite=Strict")
regex := regexp.MustCompile("__Host-pinniped-csrf=([^;]+); Path=/; HttpOnly; Secure; SameSite=Lax")
submatches := regex.FindStringSubmatch(actualCookie)
require.Len(t, submatches, 2)
captured := submatches[1]

View File

@@ -1,4 +1,4 @@
# doc/img README
# site/content/docs/img README
## How to Update these Images

View File

@@ -6,7 +6,7 @@
<div class="wrapper grid two">
<div class="col">
<p class="strong">Introduction to Pinniped</p>
<p><a href="https://github.com/vmware-tanzu/pinniped/blob/main/doc/demo.md">Learn how Pinniped</a> provides identity services to Kubernetes</p>
<p><a href="https://pinniped.dev/docs/demo/">Learn how Pinniped</a> provides identity services to Kubernetes</p>
</div>
<div class="col">
<p class="strong">How do you use Pinniped?</p>

View File

@@ -28,7 +28,7 @@ staticClients:
  name: 'Pinniped Supervisor'
  secret: pinniped-supervisor-secret
  redirectURIs:
  - https://pinniped-supervisor-clusterip.supervisor.svc.cluster.local/some/path/callback
  - #@ data.values.supervisor_redirect_uri
enablePasswordDB: true
staticPasswords:
- username: "pinny"

View File

@@ -20,6 +20,9 @@ spec:
      labels:
        app: proxy
    spec:
      volumes:
      - name: log-dir
        emptyDir: {}
      containers:
      - name: proxy
        image: docker.io/getpinniped/test-forward-proxy
@@ -34,6 +37,9 @@ spec:
          limits:
            cpu: "10m"
            memory: "64Mi"
        volumeMounts:
        - name: log-dir
          mountPath: "/var/log/squid/"
        readinessProbe:
          tcpSocket:
            port: http
@@ -41,6 +47,16 @@ spec:
          timeoutSeconds: 5
          periodSeconds: 5
          failureThreshold: 2
      - name: accesslogs
        image: debian:10.6-slim
        command:
        - "/bin/sh"
        - "-c"
        args:
        - tail -F /var/log/squid/access.log
        volumeMounts:
        - name: log-dir
          mountPath: "/var/log/squid/"
---
apiVersion: v1
kind: Service
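With the `accesslogs` sidecar in place, the proxy's access log becomes visible from outside the pod (namespace and label taken from the test changes further below):
```bash
kubectl logs --namespace dex --selector app=proxy --container accesslogs --tail=40
```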

View File

@@ -15,3 +15,5 @@ ports:
  #! External port where the proxy ends up exposed on localhost during tests. This value comes from
  #! our Kind configuration which maps 127.0.0.1:12346 to port 31235 on the Kind worker node.
  local: 12346

supervisor_redirect_uri: ""

View File

@@ -34,8 +34,9 @@ import (
func TestSupervisorLogin(t *testing.T) {
env := library.IntegrationEnv(t)
// If anything in this test crashes, dump out the supervisor pod logs.
defer library.DumpLogs(t, env.SupervisorNamespace)
// If anything in this test crashes, dump out the supervisor and proxy pod logs.
defer library.DumpLogs(t, env.SupervisorNamespace, "")
defer library.DumpLogs(t, "dex", "app=proxy")
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()
@@ -57,9 +58,13 @@ func TestSupervisorLogin(t *testing.T) {
TLSClientConfig: &tls.Config{RootCAs: ca.Pool()},
Proxy: func(req *http.Request) (*url.URL, error) {
if env.Proxy == "" {
t.Logf("passing request for %s with no proxy", req.URL)
return nil, nil
}
return url.Parse(env.Proxy)
proxyURL, err := url.Parse(env.Proxy)
require.NoError(t, err)
t.Logf("passing request for %s through proxy %s", req.URL, proxyURL.String())
return proxyURL, nil
},
}}
@@ -106,7 +111,7 @@ func TestSupervisorLogin(t *testing.T) {
assert.Eventually(t, func() bool {
discovery, err = oidc.NewProvider(oidc.ClientContext(ctx, httpClient), downstream.Spec.Issuer)
return err == nil
}, 60*time.Second, 1*time.Second)
}, 30*time.Second, 200*time.Millisecond)
require.NoError(t, err)
// Start a callback server on localhost.

View File

@@ -254,7 +254,7 @@ func CreateClientCredsSecret(t *testing.T, clientID string, clientSecret string)
env := IntegrationEnv(t)
return CreateTestSecret(t,
env.SupervisorNamespace,
"test-client-creds-",
"test-client-creds",
"secrets.pinniped.dev/oidc-client",
map[string]string{
"clientID": clientID,

View File

@@ -15,7 +15,7 @@ import (
)
// DumpLogs is meant to be called in a `defer` to dump the logs of components in the cluster on a test failure.
func DumpLogs(t *testing.T, namespace string) {
func DumpLogs(t *testing.T, namespace string, labelSelector string) {
// Only trigger on failed tests.
if !t.Failed() {
return
@@ -26,7 +26,7 @@ func DumpLogs(t *testing.T, namespace string) {
defer cancel()
logTailLines := int64(40)
pods, err := kubeClient.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
pods, err := kubeClient.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: labelSelector})
require.NoError(t, err)
for _, pod := range pods.Items {