Add Terraform with Equinix Metal:

This gets the Tinkerbell Sandbox up and running with
Terraform on Equinix Metal.

Signed-off-by: Jacob Weinstock <jakobweinstock@gmail.com>
Jacob Weinstock 2021-08-19 10:46:16 -06:00
parent d6af9a49af
commit afc878ad88
14 changed files with 369 additions and 207 deletions

.gitignore vendored

@@ -8,3 +8,4 @@ deploy/compose/state/webroot/workflow/*
 !deploy/compose/state/webroot/workflow/.keep
 deploy/compose/state/webroot/*.gz
 workflow_id.txt
+compose.tar.gz


@@ -1,14 +0,0 @@
# Quick-Starts
The following quick-start guides will walk you through standing up the Tinkerbell stack.
There are a few options for this.
Pick the one that works best for you.
## Options
- [Vagrant and VirtualBox](docs/quickstarts/VAGRANTVBOX.md)
- [Vagrant and Libvirt](docs/quickstarts/VAGRANTLVIRT.md)
- [Docker Compose](docs/quickstarts/COMPOSE.md)
- [Terraform and Equinix Metal](docs/quickstarts/TERRAFORMEM.md)
- [Kubernetes](docs/quickstarts/KUBERNETES.md)
- [Multipass](docs/quickstarts/MULTIPASS.md)

README.md Normal file

@@ -0,0 +1,47 @@
# Quick-Starts
The following quick-start guides will walk you through standing up the Tinkerbell stack.
There are a few options for this.
Pick the one that works best for you.
## Options
- [Vagrant and VirtualBox](docs/quickstarts/VAGRANTVBOX.md)
- [Vagrant and Libvirt](docs/quickstarts/VAGRANTLVIRT.md)
- [Docker Compose](docs/quickstarts/COMPOSE.md)
- [Terraform and Equinix Metal](docs/quickstarts/TERRAFORMEM.md)
- [Kubernetes](docs/quickstarts/KUBERNETES.md)
- [Multipass](docs/quickstarts/MULTIPASS.md)
## Next Steps
Now that you have a Tinkerbell stack up and running, you can start provisioning machines.
Tinkerbell.org has a [list of guides](https://docs.tinkerbell.org/deploying-operating-systems/the-deployment/) for provisioning machines.
You can also create your own.
The following docs will help you get started.
1. [Create Hardware Data](https://docs.tinkerbell.org/setup/local-vagrant/#creating-the-workers-hardware-data)
2. [Create a Template](https://docs.tinkerbell.org/setup/local-vagrant/#creating-a-template)
3. [Create a Workflow](https://docs.tinkerbell.org/setup/local-vagrant/#creating-the-workflow)
### In the Sandbox
1. Create your own templates
```bash
docker exec -i compose_tink-cli_1 tink template create < ./custom-template.yaml
```
2. Upload any container images you want to use in the templates to the internal registry
```bash
docker run -it --rm quay.io/containers/skopeo copy --all --dest-tls-verify=false --dest-creds="admin":"Admin1234" docker://hello-world docker://192.168.50.4/hello-world
```
3. Create a workflow
```bash
docker exec -i compose_tink-cli_1 tink workflow create -t <TEMPLATE ID> -r '{"device_1":"08:00:27:00:00:01"}'
```
4. Restart the machine to provision (if using the Vagrant sandbox test machine, this is done by running `vagrant destroy -f machine1 && vagrant up machine1`)
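
The `-r` argument to `tink workflow create` is a JSON map from template device keys to MAC addresses. A minimal sketch of building that map in shell, using the example MAC from step 3:

```shell
# Build the device-to-MAC JSON map passed via -r
# (MAC address is the example value from step 3, not a real device)
mac="08:00:27:00:00:01"
printf '{"device_1":"%s"}\n' "$mac"
# → {"device_1":"08:00:27:00:00:01"}
```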


@@ -1,13 +1,13 @@
 {
-  "id": "${id}",
+  "id": "0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94",
   "metadata": {
     "facility": {
-      "facility_code": "${facility_code}",
-      "plan_slug": "${plan_slug}",
+      "facility_code": "onprem",
+      "plan_slug": "c2.medium.x86",
       "plan_version_slug": ""
     },
     "instance": {},
-    "state": ""
+    "state": "provisioning"
   },
   "network": {
     "interfaces": [
@@ -15,11 +15,11 @@
       "dhcp": {
         "arch": "x86_64",
         "ip": {
-          "address": "${address}",
-          "gateway": "192.168.1.1",
-          "netmask": "255.255.255.248"
+          "address": "192.168.50.43",
+          "gateway": "192.168.50.4",
+          "netmask": "255.255.255.0"
         },
-        "mac": "${mac}",
+        "mac": "08:00:27:9e:f5:3a",
         "uefi": false
       },
       "netboot": {


@@ -0,0 +1,91 @@
version: "0.1"
name: debian_Focal
global_timeout: 1800
tasks:
- name: "os-installation"
worker: "{{.device_1}}"
volumes:
- /dev:/dev
- /dev/console:/dev/console
- /lib/firmware:/lib/firmware:ro
actions:
- name: "stream-ubuntu-image"
image: image2disk:v1.0.0
timeout: 600
environment:
DEST_DISK: /dev/sda
IMG_URL: "http://192.168.50.4:8080/focal-server-cloudimg-amd64.raw.gz"
COMPRESSED: true
- name: "grow-partition"
image: cexec:v1.0.0
timeout: 90
environment:
BLOCK_DEVICE: /dev/sda1
FS_TYPE: ext4
CHROOT: y
DEFAULT_INTERPRETER: "/bin/sh -c"
CMD_LINE: "growpart /dev/sda 1 && resize2fs /dev/sda1"
- name: "fix-serial"
image: cexec:v1.0.0
timeout: 90
pid: host
environment:
BLOCK_DEVICE: /dev/sda1
FS_TYPE: ext4
CHROOT: y
DEFAULT_INTERPRETER: "/bin/sh -c"
CMD_LINE: "sed -e 's|ttyS0|ttyS1,115200|g' -i /etc/default/grub.d/50-cloudimg-settings.cfg ; update-grub"
- name: "install-openssl"
image: cexec:v1.0.0
timeout: 90
environment:
BLOCK_DEVICE: /dev/sda1
FS_TYPE: ext4
CHROOT: y
DEFAULT_INTERPRETER: "/bin/sh -c"
CMD_LINE: "apt -y update && apt -y install openssl"
- name: "create-user"
image: cexec:v1.0.0
timeout: 90
environment:
BLOCK_DEVICE: /dev/sda1
FS_TYPE: ext4
CHROOT: y
DEFAULT_INTERPRETER: "/bin/sh -c"
CMD_LINE: "useradd -p $(openssl passwd -1 tink) -s /bin/bash -d /home/tink/ -m -G sudo tink"
- name: "enable-ssh"
image: cexec:v1.0.0
timeout: 90
environment:
BLOCK_DEVICE: /dev/sda1
FS_TYPE: ext4
CHROOT: y
DEFAULT_INTERPRETER: "/bin/sh -c"
CMD_LINE: "ssh-keygen -A; systemctl enable ssh.service; sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config"
- name: "disable-apparmor"
image: cexec:v1.0.0
timeout: 90
environment:
BLOCK_DEVICE: /dev/sda1
FS_TYPE: ext4
CHROOT: y
DEFAULT_INTERPRETER: "/bin/sh -c"
CMD_LINE: "systemctl disable apparmor; systemctl disable snapd"
- name: "write-netplan"
image: writefile:v1.0.0
timeout: 90
environment:
DEST_DISK: /dev/sda1
FS_TYPE: ext4
DEST_PATH: /etc/netplan/config.yaml
CONTENTS: |
network:
version: 2
renderer: networkd
ethernets:
enp1s0f0:
dhcp4: true
UID: 0
GID: 0
MODE: 0644
DIRMODE: 0755
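
The `fix-serial` action rewrites the serial console device in GRUB's cloud-image settings. The sed expression can be exercised locally on a sample line (the sample content is an assumption resembling `50-cloudimg-settings.cfg`, not taken from the actual image):

```shell
# Sample line resembling /etc/default/grub.d/50-cloudimg-settings.cfg
line='GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0"'
# Same substitution as the fix-serial action's CMD_LINE
echo "$line" | sed -e 's|ttyS0|ttyS1,115200|g'
# → GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS1,115200"
```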


@@ -2,3 +2,4 @@
 terraform.tfstate
 terraform.tfstate.backup
 terraform.tfvars
+.terraform.lock.hcl


@@ -1,66 +0,0 @@
#!/usr/bin/env bash
YUM="yum"
APT="apt"
PIP3="pip3"
YUM_CONFIG_MGR="yum-config-manager"
WHICH_YUM=$(command -v $YUM)
WHICH_APT=$(command -v $APT)
YUM_INSTALL="$YUM install"
APT_INSTALL="$APT install"
PIP3_INSTALL="$PIP3 install"
declare -a YUM_LIST=("https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm"
"docker-ce"
"docker-ce-cli"
"epel-release"
"pass"
"python3")
declare -a APT_LIST=("docker"
"docker-compose" "pass")
add_yum_repo() (
$YUM_CONFIG_MGR --add-repo https://download.docker.com/linux/centos/docker-ce.repo
)
update_yum() (
$YUM_INSTALL -y yum-utils
add_yum_repo
)
update_apt() (
$APT update
DEBIAN_FRONTEND=noninteractive $APT --yes --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" upgrade
)
restart_docker_service() (
service docker restart
)
install_yum_packages() (
$YUM_INSTALL "${YUM_LIST[@]}" -y
)
install_pip3_packages() (
$PIP3_INSTALL docker-compose
)
install_apt_packages() (
$APT_INSTALL "${APT_LIST[@]}" -y
)
main() (
if [[ -n $WHICH_YUM ]]; then
update_yum
install_yum_packages
install_pip3_packages
restart_docker_service
elif [[ -n $WHICH_APT ]]; then
update_apt
install_apt_packages
restart_docker_service
else
echo "Unknown platform. Error while installing the required packages"
exit 1
fi
)
main


@@ -3,7 +3,7 @@ terraform {
   required_providers {
     metal = {
       source  = "equinix/metal"
-      version = "1.0.0"
+      version = "3.1.0"
     }
     null = {
       source = "hashicorp/null"
@@ -27,72 +27,9 @@ resource "metal_vlan" "provisioning_vlan" {
   project_id = var.project_id
 }
 
-# Create a device and add it to tf_project_1
-resource "metal_device" "tink_provisioner" {
-  hostname         = "tink-provisioner"
-  plan             = var.device_type
-  facilities       = [var.facility]
-  operating_system = "ubuntu_18_04"
-  billing_cycle    = "hourly"
-  project_id       = var.project_id
-  user_data        = file("install_package.sh")
-}
-
-resource "null_resource" "tink_directory" {
-  connection {
-    type = "ssh"
-    user = var.ssh_user
-    host = metal_device.tink_provisioner.network[0].address
-  }
-
-  provisioner "remote-exec" {
-    inline = [
-      "mkdir -p /root/tink/deploy"
-    ]
-  }
-
-  provisioner "file" {
-    source      = "../../setup.sh"
-    destination = "/root/tink/setup.sh"
-  }
-
-  provisioner "file" {
-    source      = "../../generate-env.sh"
-    destination = "/root/tink/generate-env.sh"
-  }
-
-  provisioner "file" {
-    source      = "../../current_versions.sh"
-    destination = "/root/tink/current_versions.sh"
-  }
-
-  provisioner "file" {
-    source      = "../../deploy"
-    destination = "/root/tink"
-  }
-
-  provisioner "file" {
-    source      = "nat_interface"
-    destination = "/root/tink/.nat_interface"
-  }
-
-  provisioner "remote-exec" {
-    inline = [
-      "chmod +x /root/tink/*.sh /root/tink/deploy/tls/*.sh"
-    ]
-  }
-}
-
-resource "metal_device_network_type" "tink_provisioner_network_type" {
-  device_id = metal_device.tink_provisioner.id
-  type      = "hybrid"
-}
-
 # Create a device and add it to tf_project_1
 resource "metal_device" "tink_worker" {
-  count            = var.worker_count
-  hostname         = "tink-worker-${count.index}"
+  hostname         = "tink-worker"
   plan             = var.device_type
   facilities       = [var.facility]
   operating_system = "custom_ipxe"
@@ -103,12 +40,36 @@ resource "metal_device" "tink_worker" {
 }
 
 resource "metal_device_network_type" "tink_worker_network_type" {
-  count     = var.worker_count
-  device_id = metal_device.tink_worker[count.index].id
+  device_id = metal_device.tink_worker.id
   type      = "layer2-individual"
 }
 
+# Attach VLAN to worker
+resource "metal_port_vlan_attachment" "worker" {
+  depends_on = [metal_device_network_type.tink_worker_network_type]
+  device_id  = metal_device.tink_worker.id
+  port_name  = "eth0"
+  vlan_vnid  = metal_vlan.provisioning_vlan.vxlan
+}
+
+# Create a device and add it to tf_project_1
+resource "metal_device" "tink_provisioner" {
+  hostname         = "tink-provisioner"
+  plan             = var.device_type
+  facilities       = [var.facility]
+  operating_system = "ubuntu_20_04"
+  billing_cycle    = "hourly"
+  project_id       = var.project_id
+  user_data        = file("setup.sh")
+}
+
+resource "metal_device_network_type" "tink_provisioner_network_type" {
+  device_id = metal_device.tink_provisioner.id
+  type      = "hybrid"
+}
+
 # Attach VLAN to provisioner
 resource "metal_port_vlan_attachment" "provisioner" {
   depends_on = [metal_device_network_type.tink_provisioner_network_type]
@@ -117,40 +78,30 @@ resource "metal_port_vlan_attachment" "provisioner" {
   vlan_vnid  = metal_vlan.provisioning_vlan.vxlan
 }
 
-# Attach VLAN to worker
-resource "metal_port_vlan_attachment" "worker" {
-  count      = var.worker_count
-  depends_on = [metal_device_network_type.tink_worker_network_type]
-  device_id  = metal_device.tink_worker[count.index].id
-  port_name  = "eth0"
-  vlan_vnid  = metal_vlan.provisioning_vlan.vxlan
-}
-
-data "template_file" "worker_hardware_data" {
-  count    = var.worker_count
-  template = file("${path.module}/hardware_data.tpl")
-  vars = {
-    id            = metal_device.tink_worker[count.index].id
-    facility_code = metal_device.tink_worker[count.index].deployed_facility
-    plan_slug     = metal_device.tink_worker[count.index].plan
-    address       = "192.168.1.${count.index + 5}"
-    mac           = metal_device.tink_worker[count.index].ports[1].mac
-  }
-}
-
-resource "null_resource" "hardware_data" {
-  count      = var.worker_count
-  depends_on = [null_resource.tink_directory]
+resource "null_resource" "setup" {
   connection {
     type = "ssh"
-    user = var.ssh_user
+    user = "root"
     host = metal_device.tink_provisioner.network[0].address
+    private_key = file("~/.ssh/id_rsa")
+  }
+
+  # need to tar the compose directory because the 'provisioner "file"' does not preserve file permissions
+  provisioner "local-exec" {
+    command = "cd ../ && tar zcvf compose.tar.gz compose"
   }
 
   provisioner "file" {
-    content     = data.template_file.worker_hardware_data[count.index].rendered
-    destination = "/root/tink/deploy/hardware-data-${count.index}.json"
+    source      = "../compose.tar.gz"
+    destination = "/root/compose.tar.gz"
+  }
+
+  provisioner "remote-exec" {
+    inline = [
+      "cd /root && tar zxvf /root/compose.tar.gz -C /root/sandbox",
+      "cd /root/sandbox/compose && TINKERBELL_CLIENT_MAC=${metal_device.tink_worker.ports[1].mac} TINKERBELL_TEMPLATE_MANIFEST=/manifests/template/ubuntu-equinix-metal.yaml TINKERBELL_HARDWARE_MANIFEST=/manifests/hardware/hardware-equinix-metal.json docker-compose up -d"
+    ]
   }
 }


@@ -1 +0,0 @@
bond0


@@ -1,15 +1,7 @@
-output "provisioner_dns_name" {
-  value = "${split("-", metal_device.tink_provisioner.id)[0]}.packethost.net"
-}
-
 output "provisioner_ip" {
   value = metal_device.tink_provisioner.network[0].address
 }
 
-output "worker_mac_addr" {
-  value = formatlist("%s", metal_device.tink_worker[*].ports[1].mac)
-}
-
 output "worker_sos" {
-  value = formatlist("%s@sos.%s.platformequinix.com", metal_device.tink_worker[*].id, metal_device.tink_worker[*].deployed_facility)
+  value = formatlist("%s@sos.%s.platformequinix.com", metal_device.tink_worker[*].id, metal_device.tink_worker.deployed_facility)
 }
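
The `worker_sos` output builds an SOS (serial-over-SSH) address from the device id and facility. A minimal preview of the format in shell, using the device id from the sample hardware data and a hypothetical facility code:

```shell
# Preview the worker_sos address format (values are illustrative:
# the id comes from the sample hardware data, the facility is hypothetical)
id="0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94"
facility="sv15"
printf '%s@sos.%s.platformequinix.com\n' "$id" "$facility"
# → 0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94@sos.sv15.platformequinix.com
```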

deploy/terraform/setup.sh Executable file

@@ -0,0 +1,66 @@
#!/usr/bin/env bash
set -xo pipefail
install_docker() {
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
update_apt
DEBIAN_FRONTEND=noninteractive apt install -y apt-transport-https ca-certificates curl gnupg-agent gnupg2 software-properties-common docker-ce docker-ce-cli containerd.io
}
install_docker_compose() {
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
}
update_apt() (
# use apt directly; the $APT variable was only defined in the removed install_package.sh
apt update
DEBIAN_FRONTEND=noninteractive apt --yes --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" upgrade
)
restart_docker_service() (
service docker restart
)
# get_second_interface_from_bond0 returns the second interface of the bond0 interface
get_second_interface_from_bond0() {
local return_value
return_value=$(cut -d' ' -f2 /sys/class/net/bond0/bonding/slaves | xargs)
echo "${return_value}"
}
# setup_layer2_network removes the second interface from bond0 and uses it for the layer2 network
# https://metal.equinix.com/developers/docs/layer2-networking/hybrid-unbonded-mode/
setup_layer2_network() {
local layer2_interface="$1"
#local ip_addr="$2"
ifenslave -d bond0 "${layer2_interface}"
#ip addr add ${ip_addr}/24 dev "${layer2_interface}"
ip addr add 192.168.50.4/24 dev "${layer2_interface}"
ip link set dev "${layer2_interface}" up
}
# make_host_gw_server makes the host a gateway server
make_host_gw_server() {
local incoming_interface="$1"
local outgoing_interface="$2"
iptables -t nat -A POSTROUTING -o "${outgoing_interface}" -j MASQUERADE
iptables -A FORWARD -i "${outgoing_interface}" -o "${incoming_interface}" -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i "${incoming_interface}" -o "${outgoing_interface}" -j ACCEPT
}
main() (
#local provisioner_ip="$1"
install_docker
install_docker_compose
restart_docker_service
mkdir -p /root/sandbox/compose
local layer2_interface
layer2_interface="$(get_second_interface_from_bond0)"
setup_layer2_network "${layer2_interface}" #"${provisioner_ip}"
make_host_gw_server "${layer2_interface}" "bond0"
)
main #"$1"


@@ -8,11 +8,6 @@ variable "project_id" {
   type = string
 }
 
-variable "worker_count" {
-  description = "Number of Workers"
-  type        = number
-  default     = 1
-}
-
 variable "facility" {
   description = "Packet facility to provision in"
   type        = string
@@ -24,9 +19,3 @@ variable "device_type" {
   description = "Type of device to provision"
   default     = "c3.small.x86"
 }
-
-variable "ssh_user" {
-  description = "Username that will be used to transfer file from your local environment to the provisioner"
-  type        = string
-  default     = "root"
-}


@@ -1,3 +1,108 @@
 # Quick start guide for Terraform on Equinix Metal
-> coming soon...
+This option will stand up the provisioner on a Bare Metal machine using Terraform with Equinix Metal.
This option will also show you how to create a machine to provision.
## Prerequisites
- [Terraform](https://www.terraform.io/downloads) is installed
## Steps
1. Clone this repository
```bash
git clone https://github.com/tinkerbell/sandbox.git
cd sandbox
```
2. Set your Equinix Metal project id and access token
```bash
cd deploy/terraform
cat << EOF > terraform.tfvars
metal_api_token = "awegaga4gs4g"
project_id = "235-23452-245-345"
EOF
```
3. Start the provisioner
```bash
terraform init
terraform apply
# This process will take about 5-10 minutes.
# Most of that time is spent downloading OSIE.
# OSIE is about 2GB in size; the Ubuntu Focal image is about 500MB.
```
4. Reboot the machine
In the [Equinix Metal Web UI](https://console.equinix.com), find the `tink_worker` and reboot it.
Or, if you have the [Equinix Metal CLI](https://github.com/equinix/metal-cli) installed, run the following:
```bash
metal device reboot -i $(terraform show -json | jq -r '.values.root_module.resources[1].values.id')
```
5. Watch the provision complete
```bash
# log in to the provisioner
ssh root@139.178.69.231
# watch the workflow events and status for workflow completion
# once the workflow is complete (see the expected output below for completion), move on to the next step
wid=$(cat sandbox/compose/manifests/workflow/workflow_id.txt); docker exec -it compose_tink-cli_1 watch "tink workflow events ${wid}; tink workflow state ${wid}"
```
<details>
<summary>expected output</summary>

```bash
+--------------------------------------+-----------------+---------------------+----------------+---------------------------------+---------------+
| WORKER ID | TASK NAME | ACTION NAME | EXECUTION TIME | MESSAGE | ACTION STATUS |
+--------------------------------------+-----------------+---------------------+----------------+---------------------------------+---------------+
| 0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94 | os-installation | stream-ubuntu-image | 0 | Started execution | STATE_RUNNING |
| 0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94 | os-installation | stream-ubuntu-image | 15 | finished execution successfully | STATE_SUCCESS |
| 0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94 | os-installation | install-openssl | 0 | Started execution | STATE_RUNNING |
| 0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94 | os-installation | install-openssl | 1 | finished execution successfully | STATE_SUCCESS |
| 0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94 | os-installation | create-user | 0 | Started execution | STATE_RUNNING |
| 0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94 | os-installation | create-user | 0 | finished execution successfully | STATE_SUCCESS |
| 0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94 | os-installation | enable-ssh | 0 | Started execution | STATE_RUNNING |
| 0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94 | os-installation | enable-ssh | 0 | finished execution successfully | STATE_SUCCESS |
| 0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94 | os-installation | disable-apparmor | 0 | Started execution | STATE_RUNNING |
| 0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94 | os-installation | disable-apparmor | 0 | finished execution successfully | STATE_SUCCESS |
| 0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94 | os-installation | write-netplan | 0 | Started execution | STATE_RUNNING |
| 0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94 | os-installation | write-netplan | 0 | finished execution successfully | STATE_SUCCESS |
+--------------------------------------+-----------------+---------------------+----------------+---------------------------------+---------------+
+----------------------+--------------------------------------+
| FIELD NAME | VALUES |
+----------------------+--------------------------------------+
| Workflow ID | 3107919b-e59d-11eb-bf99-0242ac120005 |
| Workflow Progress | 100% |
| Current Task | os-installation |
| Current Action | write-netplan |
| Current Worker | 0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94 |
| Current Action State | STATE_SUCCESS |
+----------------------+--------------------------------------+
```
</details>
6. Reboot the machine
Now reboot the `tink-worker` via the [Equinix Metal Web UI](https://console.equinix.com), or, if you have the [Equinix Metal CLI](https://github.com/equinix/metal-cli) installed, run the following:
```bash
metal device reboot -i $(terraform show -json | jq -r '.values.root_module.resources[1].values.id')
```
7. Login to the machine
The machine has been provisioned with Ubuntu Focal.
Wait for the reboot to complete and then you can SSH into it.
```bash
# ctrl-c to exit the watch
ssh tink@192.168.50.43 # user/pass => tink/tink
```

Binary file not shown (image, 39 KiB).