549e540671
This fixes the Vagrant-based sandbox, which was not working. It was particularly annoying to track down because `setup.sh` does not have `set -x`, yet stderr contained what looked like xtrace output. That xtrace output was actually from the `generate_certificates` container:

```
provisioner: 2021/04/26 21:22:32 [INFO] signed certificate with serial number 142120228981443865252746731124927082232998754394
provisioner: + cat
provisioner: server.pem
provisioner: ca.pem
provisioner: + cmp
provisioner: -s
provisioner: bundle.pem.tmp
provisioner: bundle.pem
provisioner: + mv
provisioner: bundle.pem.tmp
provisioner: bundle.pem
provisioner: Error: No such object:
==> provisioner: Clearing any previously set forwarded ports...
==> provisioner: Removing domain...
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
```

I ended up doubting the `if ! cmp` blocks until I added `set -euxo pipefail`, at which point the issue was pretty obviously in docker-compose land:

```
$ vagrant destroy -f; vagrant up provisioner
==> worker: Domain is not created. Please run `vagrant up` first.
==> provisioner: Domain is not created. Please run `vagrant up` first.
Bringing machine 'provisioner' up with 'libvirt' provider...
==> provisioner: Checking if box 'tinkerbelloss/sandbox-ubuntu1804' version '0.1.0' is up to date...
==> provisioner: Creating image (snapshot of base box volume).
==> provisioner: Creating domain with the following settings...
...
provisioner: 2021/04/27 18:20:13 [INFO] signed certificate with serial number 138080403356863347716407921665793913032297783787
provisioner: + cat server.pem ca.pem
provisioner: + cmp -s bundle.pem.tmp bundle.pem
provisioner: + mv bundle.pem.tmp bundle.pem
provisioner: + local certs_dir=/etc/docker/certs.d/192.168.1.1
provisioner: + cmp --quiet /vagrant/deploy/state/certs/ca.pem /vagrant/deploy/state/webroot/workflow/ca.pem
provisioner: + cp /vagrant/deploy/state/certs/ca.pem /vagrant/deploy/state/webroot/workflow/ca.pem
provisioner: + cmp --quiet /vagrant/deploy/state/certs/ca.pem /etc/docker/certs.d/192.168.1.1/tinkerbell.crt
provisioner: + [[ -d /etc/docker/certs.d/192.168.1.1/ ]]
provisioner: + cp /vagrant/deploy/state/certs/ca.pem /etc/docker/certs.d/192.168.1.1/tinkerbell.crt
provisioner: + setup_docker_registry
provisioner: + local registry_images=/vagrant/deploy/state/registry
provisioner: + [[ -d /vagrant/deploy/state/registry ]]
provisioner: + mkdir -p /vagrant/deploy/state/registry
provisioner: + start_registry
provisioner: + docker-compose -f /vagrant/deploy/docker-compose.yml up --build -d registry
provisioner: + check_container_status registry
provisioner: + local container_name=registry
provisioner: + local container_id
provisioner: ++ docker-compose -f /vagrant/deploy/docker-compose.yml ps -q registry
provisioner: + container_id=
provisioner: + local start_moment
provisioner: + local current_status
provisioner: ++ docker inspect '' --format '{{ .State.StartedAt }}'
provisioner: Error: No such object:
provisioner: + start_moment=
provisioner: + finish
provisioner: + rm -rf /tmp/tmp.ve3XJ7qtgA
```

Notice that `container_id` is empty. This turns out to be because `docker-compose` is an empty file!
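For the record, here is a minimal sketch of the strict-mode debugging setup that exposed the empty `container_id`, plus a hypothetical guard that would have failed fast on it. The function shape mirrors the trace above, but the guard and the `compose_file` variable are assumptions, not the repo's actual code:

```bash
set -euxo pipefail # exit on error, error on unset variables, fail pipelines, trace commands

compose_file=/vagrant/deploy/docker-compose.yml # path taken from the trace above

check_container_status() {
	local container_name=$1
	local container_id
	container_id=$(docker-compose -f "$compose_file" ps -q "$container_name")
	# Hypothetical guard: an empty id means the service never came up,
	# which is exactly the failure mode seen with the truncated binary.
	if [[ -z $container_id ]]; then
		echo "error: no container for service '$container_name'" >&2
		return 1
	fi
	docker inspect "$container_id" --format '{{ .State.StartedAt }}'
}
```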
Confirming directly on the box that the binary really is empty:

```
vagrant@provisioner:/vagrant/deploy$ docker-compose up --build registry
vagrant@provisioner:/vagrant/deploy$ which docker-compose
/usr/local/bin/docker-compose
vagrant@provisioner:/vagrant/deploy$ docker-compose -h
vagrant@provisioner:/vagrant/deploy$ file /usr/local/bin/docker-compose
/usr/local/bin/docker-compose: empty
```

So with the following test patch:

```diff
diff --git a/deploy/vagrant/scripts/tinkerbell.sh b/deploy/vagrant/scripts/tinkerbell.sh
index 915f27f..dcb379c 100644
--- a/deploy/vagrant/scripts/tinkerbell.sh
+++ b/deploy/vagrant/scripts/tinkerbell.sh
@@ -34,6 +34,14 @@ setup_nat() (
 main() (
 	export DEBIAN_FRONTEND=noninteractive
 
+	local name=docker-compose-$(uname -s)-$(uname -m)
+	local url=https://github.com/docker/compose/releases/download/1.26.0/$name
+	curl -fsSLO "$url"
+	curl -fsSLO "$url.sha256"
+	sha256sum -c <"$name.sha256"
+	chmod +x "$name"
+	sudo mv "$name" /usr/local/bin/docker-compose
+
 	if ! [[ -f ./.env ]]; then
 		./generate-env.sh eth1 >.env
 	fi
```

We can try again, and we're back to a working state:

```
$ vagrant destroy -f; vagrant up provisioner
==> worker: Domain is not created. Please run `vagrant up` first.
==> provisioner: Domain is not created. Please run `vagrant up` first.
Bringing machine 'provisioner' up with 'libvirt' provider...
==> provisioner: Checking if box 'tinkerbelloss/sandbox-ubuntu1804' version '0.1.0' is up to date...
==> provisioner: Creating image (snapshot of base box volume).
==> provisioner: Creating domain with the following settings...
...
provisioner: + setup_docker_registry
provisioner: + local registry_images=/vagrant/deploy/state/registry
provisioner: + [[ -d /vagrant/deploy/state/registry ]]
provisioner: + mkdir -p /vagrant/deploy/state/registry
provisioner: + start_registry
provisioner: + docker-compose -f /vagrant/deploy/docker-compose.yml up --build -d registry
provisioner: Creating network "deploy_default" with the default driver
provisioner: Creating volume "deploy_postgres_data" with default driver
provisioner: Building registry
provisioner: Step 1/7 : FROM registry:2.7.1
...
provisioner: Successfully tagged deploy_registry:latest
provisioner: Creating deploy_registry_1 ... Creating deploy_registry_1 ... done
provisioner: + check_container_status registry
provisioner: + local container_name=registry
provisioner: + local container_id
provisioner: ++ docker-compose -f /vagrant/deploy/docker-compose.yml ps -q registry
provisioner: + container_id=2e3d9557fd4c0d7f7e1c091b957a0033d23ebb93f6c8e5cdfeb8947b2812845c
...
provisioner: + sudo -iu vagrant docker login --username=admin --password-stdin 192.168.1.1
provisioner: WARNING! Your password will be stored unencrypted in /home/vagrant/.docker/config.json.
provisioner: Configure a credential helper to remove this warning. See
provisioner: https://docs.docker.com/engine/reference/commandline/login/#credentials-store
provisioner: Login Succeeded
provisioner: + set +x
provisioner: NEXT: 1. Enter /vagrant/deploy and run: source ../.env; docker-compose up -d
provisioner: 2. Try executing your fist workflow.
provisioner: Follow the steps described in https://tinkerbell.org/examples/hello-world/ to say 'Hello World!' with a workflow.
```

:toot:

Except that my results are not due to the way docker-compose is being installed at all: even when using a box built with the new install method, I was still seeing empty docker-compose files. I ran a bunch of experiments to figure out what was going on.
The issue is strictly in vagrant-libvirt, since vagrant-virtualbox works fine. It turns out data isn't being flushed back to disk at shutdown: either calling `sync` or writing multiple copies of the binary to the filesystem (3x at least) ended up working. Then I was informed of a known vagrant-libvirt issue that matches this behavior: https://github.com/vagrant-libvirt/vagrant-libvirt/issues/1013

Fixes #59

Signed-off-by: Manuel Mendez <mmendez@equinix.com>
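Given the vagrant-libvirt issue above, the durable fix is to force dirty pages to disk before the domain shuts down. A minimal sketch of the workaround, assuming it runs at the end of the provisioning script (the exact placement in the box-build scripts is an assumption):

```bash
# Flush pending writes to the backing volume before vagrant halts the domain;
# without this, freshly written files such as /usr/local/bin/docker-compose
# can come back truncated (empty) on the next boot.
sync
```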
This repository is a quick way to get the Tinkerbell stack up and running.
Currently it supports:
- Vagrant with libvirt and VirtualBox
- Terraform on Packet
Tinkerbell is made of different components: osie, boots, tink-server, tink-worker, and so on. They are currently under heavy development, and we are still working out the release process for all the components.
We need a way to serve a version of Tinkerbell that you can use while we know exactly what is running under the hood. Sandbox pins every component to a specific version via commit sha, so as a user you won't (ideally) be affected by new code that may change how Tinkerbell works.
We are keeping the number of breaking changes as low as possible, but in the current state they are to be expected.
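To illustrate the pinning approach, here is a hypothetical snippet in the spirit of `current_versions.sh`; the variable names and the tink sha are invented for illustration, though the boots sha matches the example used later in this README:

```bash
# Hypothetical pins; the real variable names and tags in current_versions.sh differ.
export BOOTS_IMAGE="quay.io/tinkerbell/boots:sha-9625559b"
export TINK_SERVER_IMAGE="quay.io/tinkerbell/tink:sha-0000000"
```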
Binary release
As part of a new sandbox release we want to push binaries to GitHub Releases so that the community can use them if needed.
We build Docker images across many architectures, each of them in its own repository: boots, hegel, tink and so on.
Sandbox is just a collection of those services and we follow the same pattern for getting binaries as well.
There is a Go program available in `./cmd/getbinariesfromquay/main.go`. You can run it with `go run` or build it with `go build`:
```
$ go run cmd/getbinariesfromquay/main.go -h
  -binary-to-copy string
        The location of the binary you want to copy from inside the image. (default "/usr/bin/hegel")
  -image string
        The image you want to download binaries from. It has to be a multi stage image. (default "docker://quay.io/tinkerbell/hegel")
  -out string
        The directory that will be used to store the release binaries (default "./out")
  -program string
        The name of the program you are extracing binaries for. (eg tink-worker, hegel, tink-server, tink, boots) (default "hegel")
```
By default it uses the image running on Quay for Hegel and gets the binary `/usr/bin/hegel` from there. The directory `./out` is used to store images, and binaries end up inside `./out/releases`.
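If you only need a single binary and have Docker available, a roughly equivalent manual extraction looks like the sketch below; the helper automates this across components and architectures, and the container name here is arbitrary:

```bash
# Create a stopped container from the image (pulling it if needed),
# copy the binary out, then clean up.
mkdir -p ./out/releases
docker create --name hegel-extract quay.io/tinkerbell/hegel
docker cp hegel-extract:/usr/bin/hegel ./out/releases/hegel
docker rm hegel-extract
```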
To get the binaries for boots, for example, you can run:

```
$ go run cmd/getbinariesfromquay/main.go \
	-binary-to-copy /usr/bin/boots \
	-image docker://quay.io/tinkerbell/boots:sha-9625559b \
	-program boots
```
You will find them in `./out/releases`.
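If you extract binaries for several components, building the helper once may be more convenient than repeated `go run` invocations; the flags below are the same ones shown above:

```bash
go build -o getbinariesfromquay ./cmd/getbinariesfromquay
./getbinariesfromquay \
	-binary-to-copy /usr/bin/boots \
	-image docker://quay.io/tinkerbell/boots:sha-9625559b \
	-program boots
```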