ContainerImage.Pinniped/deploy/deployment.yaml
Ryan Richard 08961919b5 Fix a garbage collection bug
- Previously the golang code would create a Service and an APIService.
  The APIService would be given an owner reference which pointed to
  the namespace in which the app was installed.
- This prevented the app from being uninstalled: the namespace would
  never finish deleting, so `kapp delete` or `kubectl delete` would fail.
- The new approach is to statically define the Service and the APIService
  in deployment.yaml, except for the caBundle of the APIService.
  The golang code then performs an update at runtime to fill in the
  caBundle (see the sketch after this list).
- When the user runs `kapp deploy` or `kubectl apply`, either tool will
  notice that the caBundle is not declared in the yaml and will
  therefore avoid editing that field.
- When the user runs `kapp delete` or `kubectl delete`, either tool will
  destroy the objects because they are statically declared with names
  in the yaml, just like all of the other objects. No ownerReferences
  are used, so nothing should prevent the namespace from being deleted.
- This approach also leaves us with less golang code to maintain.
- In the future, if our golang controllers want to dynamically add an
  Ingress or other objects, they can still do that. An Ingress would
  point to our statically defined Service as its backend (see the
  second sketch after this list).
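
Below are two minimal sketches of the golang side, assuming client-go
v0.18+ context-taking signatures. The function names, error handling,
in-cluster config wiring, and the Ingress name are illustrative only,
not the actual controller code.

  package main

  import (
      "context"

      networkingv1beta1 "k8s.io/api/networking/v1beta1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/apimachinery/pkg/util/intstr"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/rest"
      aggregatorclient "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
  )

  // Sketch 1: at runtime, fill in the one field that deployment.yaml
  // deliberately leaves out.
  func updateAPIServiceCABundle(ctx context.Context, caPEM []byte) error {
      kubeConfig, err := rest.InClusterConfig()
      if err != nil {
          return err
      }
      client, err := aggregatorclient.NewForConfig(kubeConfig)
      if err != nil {
          return err
      }
      // This name matches the statically defined APIService below.
      apiService, err := client.ApiregistrationV1().APIServices().Get(
          ctx, "v1alpha1.placeholder.suzerain-io.github.io", metav1.GetOptions{})
      if err != nil {
          return err
      }
      // caBundle is not declared in the yaml, so updating it here will
      // not fight with `kapp deploy` or `kubectl apply`.
      apiService.Spec.CABundle = caPEM
      _, err = client.ApiregistrationV1().APIServices().Update(
          ctx, apiService, metav1.UpdateOptions{})
      return err
  }

  // Sketch 2: a future controller could still dynamically create an
  // Ingress whose backend is the statically defined Service. The
  // Ingress name here is hypothetical.
  func createIngress(ctx context.Context, client kubernetes.Interface, namespace string) error {
      ingress := &networkingv1beta1.Ingress{
          ObjectMeta: metav1.ObjectMeta{Name: "placeholder-name-ingress", Namespace: namespace},
          Spec: networkingv1beta1.IngressSpec{
              Backend: &networkingv1beta1.IngressBackend{
                  ServiceName: "placeholder-name-api", // the static Service below
                  ServicePort: intstr.FromInt(443),
              },
          },
      }
      _, err := client.NetworkingV1beta1().Ingresses(namespace).Create(
          ctx, ingress, metav1.CreateOptions{})
      return err
  }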

Signed-off-by: Andrew Keesler <akeesler@vmware.com>
2020-08-04 16:46:27 -07:00

#@ load("@ytt:data", "data")
---
apiVersion: v1
kind: Namespace
metadata:
  name: #@ data.values.namespace
  labels:
    name: #@ data.values.namespace
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: #@ data.values.app_name + "-service-account"
  namespace: #@ data.values.namespace
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: #@ data.values.app_name + "-config"
  namespace: #@ data.values.namespace
  labels:
    app: #@ data.values.app_name
data:
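  #! The ytt annotation below enables the (@= ... @) templating inside this multi-line string.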
  #@yaml/text-templated-strings
  placeholder-name.yaml: |
    discovery:
      url: (@= data.values.discovery_url or "null" @)
    webhook:
      url: (@= data.values.webhook_url @)
      caBundle: (@= data.values.webhook_ca_bundle @)
---
#! TODO set up healthy, ready, etc. probes correctly for our deployment
#! TODO set the priority-critical-urgent on our deployment to ask kube to never let it die
#! TODO set resource minimums (e.g. 512MB RAM) on the deployment to make sure we get scheduled onto a reasonable node
apiVersion: apps/v1
kind: Deployment
metadata:
  name: #@ data.values.app_name + "-deployment"
  namespace: #@ data.values.namespace
  labels:
    app: #@ data.values.app_name
spec:
  replicas: 1 #! TODO more than one replica for high availability, and share the same serving certificate among them (maybe using client-go leader election)
  selector:
    matchLabels:
      app: #@ data.values.app_name
  template:
    metadata:
      labels:
        app: #@ data.values.app_name
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
    spec:
      serviceAccountName: #@ data.values.app_name + "-service-account"
      containers:
      - name: placeholder-name
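        #! Prefer the immutable digest reference when one is provided; otherwise fall back to the tag.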
        #@ if data.values.image_digest:
        image: #@ data.values.image_repo + "@" + data.values.image_digest
        #@ else:
        image: #@ data.values.image_repo + ":" + data.values.image_tag
        #@ end
        imagePullPolicy: IfNotPresent
        command:
        - ./placeholder-name-server
        args:
        - --config=/etc/config/placeholder-name.yaml
        - --downward-api-path=/etc/podinfo
        - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
        - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
        - name: podinfo
          mountPath: /etc/podinfo
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
      volumes:
      - name: config-volume
        configMap:
          name: #@ data.values.app_name + "-config"
      - name: podinfo
        downwardAPI:
          items:
            - path: "labels"
              fieldRef:
                fieldPath: metadata.labels
            - path: "namespace"
              fieldRef:
                fieldPath: metadata.namespace
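      #! Mounts the cluster signing CA from the host; the nodeSelector and tolerations below keep the pod
      #! on control-plane nodes, where kubeadm places these files.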
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
      #! "system-cluster-critical" cannot be used outside the kube-system namespace until Kubernetes >= 1.17,
      #! so we skip setting this for now (see https://github.com/kubernetes/kubernetes/issues/60596).
      #! priorityClassName: system-cluster-critical
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
---
apiVersion: v1
kind: Service
metadata:
  name: placeholder-name-api #! the golang code assumes this specific name as part of the common name during cert generation
  namespace: #@ data.values.namespace
  labels:
    app: #@ data.values.app_name
spec:
  type: ClusterIP
  selector:
    app: #@ data.values.app_name
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.placeholder.suzerain-io.github.io
  labels:
    app: #@ data.values.app_name
spec:
  version: v1alpha1
  group: placeholder.suzerain-io.github.io
  groupPriorityMinimum: 2500 #! TODO what is the right value? https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#apiservicespec-v1beta1-apiregistration-k8s-io
  versionPriority: 10 #! TODO what is the right value? https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#apiservicespec-v1beta1-apiregistration-k8s-io
  #! caBundle: Do not include this key here. Starts out null, will be updated/owned by the golang code.
  service:
    name: placeholder-name-api
    namespace: #@ data.values.namespace
    port: 443