This makes the deploy directory cleaner by moving all
compose-related files and directories into the compose directory.
Signed-off-by: Jacob Weinstock <jakobweinstock@gmail.com>
This gets the refactored sandbox back on par with
the existing sandbox for vagrant-libvirt functionality.
Signed-off-by: Jacob Weinstock <jakobweinstock@gmail.com>
Only two main Vagrant calls are now needed (`vagrant up` and `vagrant up machine1`).
This PR only updates the Vagrant VirtualBox setup; the Vagrant Libvirt and Terraform
setups still need to be updated.
This uses docker-compose as the entry point for standing up the stack, which makes standing up
the sandbox more portable. Vagrant and Terraform are only responsible for standing up infrastructure
and then running docker-compose, not for running any glue scripts.
docker-compose calls out to single-shot services to do all the glue required to get the fully
functional Tinkerbell stack up and running. All the single-shot services are idempotent.
This improves portability and tightens the development iteration loop. It also simplifies the
steps needed to get a fully functioning sandbox up and running.
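To keep re-runs safe, each glue step only changes state when it has to. A rough sketch of the pattern (paths and names here are placeholders, not the sandbox's actual scripts):
```
#!/usr/bin/env bash
# Illustrative single-shot glue step: copy the CA cert into the webroot
# only when it differs, so re-running the service is a no-op.
set -euo pipefail

src=/certs/ca.pem             # placeholder path
dst=/webroot/workflow/ca.pem  # placeholder path

if ! cmp --quiet "$src" "$dst"; then
	mkdir -p "$(dirname "$dst")"
	cp "$src" "$dst"
fi
```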
This is intended to help people looking to get started by getting them to a provisioned
machine more quickly and easily.
Signed-off-by: Jacob Weinstock <jakobweinstock@gmail.com>
This fixes the vagrant-based sandbox, which was not working. This was particularly
annoying to track down because `setup.sh` does not have `set -x` enabled, yet
there was what looked like xtrace output on stderr. That xtrace output on stderr
was actually from the `generate_certificates` container:
```
provisioner: 2021/04/26 21:22:32 [INFO] signed certificate with serial number 142120228981443865252746731124927082232998754394
provisioner: + cat
provisioner: server.pem
provisioner: ca.pem
provisioner: + cmp
provisioner: -s
provisioner: bundle.pem.tmp
provisioner: bundle.pem
provisioner: + mv
provisioner: bundle.pem.tmp
provisioner: bundle.pem
provisioner: Error: No such object:
==> provisioner: Clearing any previously set forwarded ports...
==> provisioner: Removing domain...
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
```
I ended up doubting the `if ! cmp` blocks until I added `set -euxo pipefail`, at which point
the issue was pretty obviously in docker-compose land.
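For reference, the change amounted to a strict-mode line like this (the exact placement in `setup.sh` shown here is approximate):
```
#!/usr/bin/env bash
# Exit on error, treat unset variables as errors, fail pipelines early,
# and trace every command (-x) so the real failure point is visible.
set -euxo pipefail
```
With tracing enabled, the failure point became obvious: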
```
$ vagrant destroy -f; vagrant up provisioner
==> worker: Domain is not created. Please run `vagrant up` first.
==> provisioner: Domain is not created. Please run `vagrant up` first.
Bringing machine 'provisioner' up with 'libvirt' provider...
==> provisioner: Checking if box 'tinkerbelloss/sandbox-ubuntu1804' version '0.1.0' is up to date...
==> provisioner: Creating image (snapshot of base box volume).
==> provisioner: Creating domain with the following settings...
...
provisioner: 2021/04/27 18:20:13 [INFO] signed certificate with serial number 138080403356863347716407921665793913032297783787
provisioner: + cat server.pem ca.pem
provisioner: + cmp -s bundle.pem.tmp bundle.pem
provisioner: + mv bundle.pem.tmp bundle.pem
provisioner: + local certs_dir=/etc/docker/certs.d/192.168.1.1
provisioner: + cmp --quiet /vagrant/deploy/state/certs/ca.pem /vagrant/deploy/state/webroot/workflow/ca.pem
provisioner: + cp /vagrant/deploy/state/certs/ca.pem /vagrant/deploy/state/webroot/workflow/ca.pem
provisioner: + cmp --quiet /vagrant/deploy/state/certs/ca.pem /etc/docker/certs.d/192.168.1.1/tinkerbell.crt
provisioner: + [[ -d /etc/docker/certs.d/192.168.1.1/ ]]
provisioner: + cp /vagrant/deploy/state/certs/ca.pem /etc/docker/certs.d/192.168.1.1/tinkerbell.crt
provisioner: + setup_docker_registry
provisioner: + local registry_images=/vagrant/deploy/state/registry
provisioner: + [[ -d /vagrant/deploy/state/registry ]]
provisioner: + mkdir -p /vagrant/deploy/state/registry
provisioner: + start_registry
provisioner: + docker-compose -f /vagrant/deploy/docker-compose.yml up --build -d registry
provisioner: + check_container_status registry
provisioner: + local container_name=registry
provisioner: + local container_id
provisioner: ++ docker-compose -f /vagrant/deploy/docker-compose.yml ps -q registry
provisioner: + container_id=
provisioner: + local start_moment
provisioner: + local current_status
provisioner: ++ docker inspect '' --format '{{ .State.StartedAt }}'
provisioner: Error: No such object:
provisioner: + start_moment=
provisioner: + finish
provisioner: + rm -rf /tmp/tmp.ve3XJ7qtgA
```
Notice that `container_id` is empty. This turns out to be because
`docker-compose` is an empty file!
```
vagrant@provisioner:/vagrant/deploy$ docker-compose up --build registry
vagrant@provisioner:/vagrant/deploy$ which docker-compose
/usr/local/bin/docker-compose
vagrant@provisioner:/vagrant/deploy$ docker-compose -h
vagrant@provisioner:/vagrant/deploy$ file /usr/local/bin/docker-compose
/usr/local/bin/docker-compose: empty
```
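A simple guard in the provisioning scripts would have surfaced this much sooner; a possible sketch (not something the scripts currently do):
```
# Bail out early if the docker-compose binary is missing or zero bytes.
if ! [[ -s /usr/local/bin/docker-compose ]]; then
	echo "docker-compose binary is missing or empty" >&2
	exit 1
fi
```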
So with the following test patch:
```diff
diff --git a/deploy/vagrant/scripts/tinkerbell.sh b/deploy/vagrant/scripts/tinkerbell.sh
index 915f27f..dcb379c 100644
--- a/deploy/vagrant/scripts/tinkerbell.sh
+++ b/deploy/vagrant/scripts/tinkerbell.sh
@@ -34,6 +34,14 @@ setup_nat() (
 main() (
 	export DEBIAN_FRONTEND=noninteractive
 
+	local name=docker-compose-$(uname -s)-$(uname -m)
+	local url=https://github.com/docker/compose/releases/download/1.26.0/$name
+	curl -fsSLO "$url"
+	curl -fsSLO "$url.sha256"
+	sha256sum -c <"$name.sha256"
+	chmod +x "$name"
+	sudo mv "$name" /usr/local/bin/docker-compose
+
 	if ! [[ -f ./.env ]]; then
 		./generate-env.sh eth1 >.env
 	fi
fi
```
We can try again and we're back to a working state:
```
$ vagrant destroy -f; vagrant up provisioner
==> worker: Domain is not created. Please run `vagrant up` first.
==> provisioner: Domain is not created. Please run `vagrant up` first.
Bringing machine 'provisioner' up with 'libvirt' provider...
==> provisioner: Checking if box 'tinkerbelloss/sandbox-ubuntu1804' version '0.1.0' is up to date...
==> provisioner: Creating image (snapshot of base box volume).
==> provisioner: Creating domain with the following settings...
...
provisioner: + setup_docker_registry
provisioner: + local registry_images=/vagrant/deploy/state/registry
provisioner: + [[ -d /vagrant/deploy/state/registry ]]
provisioner: + mkdir -p /vagrant/deploy/state/registry
provisioner: + start_registry
provisioner: + docker-compose -f /vagrant/deploy/docker-compose.yml up --build -d registry
provisioner: Creating network "deploy_default" with the default driver
provisioner: Creating volume "deploy_postgres_data" with default driver
provisioner: Building registry
provisioner: Step 1/7 : FROM registry:2.7.1
...
provisioner: Successfully tagged deploy_registry:latest
provisioner: Creating deploy_registry_1 ...
Creating deploy_registry_1 ... done
provisioner: + check_container_status registry
provisioner: + local container_name=registry
provisioner: + local container_id
provisioner: ++ docker-compose -f /vagrant/deploy/docker-compose.yml ps -q registry
provisioner: + container_id=2e3d9557fd4c0d7f7e1c091b957a0033d23ebb93f6c8e5cdfeb8947b2812845c
...
provisioner: + sudo -iu vagrant docker login --username=admin --password-stdin 192.168.1.1
provisioner: WARNING! Your password will be stored unencrypted in /home/vagrant/.docker/config.json.
provisioner: Configure a credential helper to remove this warning. See
provisioner: https://docs.docker.com/engine/reference/commandline/login/#credentials-store
provisioner: Login Succeeded
provisioner: + set +x
provisioner: NEXT: 1. Enter /vagrant/deploy and run: source ../.env; docker-compose up -d
provisioner: 2. Try executing your fist workflow.
provisioner: Follow the steps described in https://tinkerbell.org/examples/hello-world/ to say 'Hello World!' with a workflow.
```
:toot:
Except that my results are not due to the way docker-compose is being installed
at all. After running into this issue again with a box built using the new
install method, I was still seeing empty docker-compose files. I ran a bunch of
experiments to try to figure out what was going on. The issue is strictly
in vagrant-libvirt, since vagrant-virtualbox works fine. It turns out data isn't
being flushed back to disk at shutdown. Either calling `sync` or writing multiple
copies of the binary to the fs (at least 3x) ended up working. Then I was informed
of a known vagrant-libvirt issue which matches this behavior: https://github.com/vagrant-libvirt/vagrant-libvirt/issues/1013!
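For anyone hitting the same behavior, the workaround boils down to forcing writes out to disk before the box is shut down; a sketch of the idea (where exactly it belongs in the box build scripts is an assumption on my part):
```
# Install the binary, then force a flush so the data actually lands on disk
# before the vagrant-libvirt box is shut down (see issue #1013 above).
sudo mv "docker-compose-$(uname -s)-$(uname -m)" /usr/local/bin/docker-compose
sync
```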
Fixes #59
Signed-off-by: Manuel Mendez <mmendez@equinix.com>
The tinkerbell.sh script ends up doing some other work after
calling setup.sh and has `set -x` enabled, so the whats_next message
is likely to be missed. So now it is saved for later reading as the last
thing done.
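A minimal sketch of the approach (function and file names here are illustrative, not necessarily what the scripts use):
```
# Capture the message instead of letting it scroll past in the set -x noise...
whats_next >/tmp/whats_next.txt

# ... remaining provisioning work ...

# ... and print it as the very last step, with tracing turned off.
set +x
cat /tmp/whats_next.txt
```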
Signed-off-by: Manuel Mendez <mmendez@equinix.com>
Both the [[ ]] and (( )) bashisms are better than their POSIX sh alternatives,
since they are builtins and don't suffer from quoting
or number-of-args issues.
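A quick illustration of the quoting and argument-count pitfalls (plain bash, for demonstration only):
```
var=""
[ -n $var ] && echo "[ ]  wrongly says non-empty"    # expands to [ -n ], which is true
[[ -n $var ]] || echo "[[ ]] correctly says empty"   # no word splitting inside [[ ]]

var="a b"
[ -n $var ] 2>/dev/null || echo "[ ]  errors on the unquoted expansion"
[[ -n $var ]] && echo "[[ ]] handles the space fine"

x=3
((x > 2)) && echo "(( )) does arithmetic without -gt/-lt"
```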
Signed-off-by: Manuel Mendez <mmendez@equinix.com>
Commit b504810 introduced NAT to make the worker capable of reaching the
public internet via the provisioner.
But it also introduced a bug: it only works for the Vagrant setup, as
Manny pointed out:
https://github.com/tinkerbell/sandbox/pull/33#issuecomment-759651035
This is an attempt to fix it.
Signed-off-by: Gianluca Arbezzano <gianarb92@gmail.com>
* Change ENV var check to only validate the existence of the
var in the local env
* Add VAGRANT_WORKER_SCALE env variable override to control
GUI scaling for virtualbox
Signed-off-by: James W. Brinkerhoff <jwb@paravolve.net>
Tinkerbell is made of different components, as we all know at this point.
Sandbox had those component versions scattered all over the place. This PR moves
them into the `envrc` file.
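Roughly, the versions become plain exports in one place; the variable names below are made up for illustration and are not the actual names used in `envrc`:
```
# Hypothetical example: component versions pinned in a single env file
# and consumed by docker-compose and the setup scripts.
export TINK_SERVER_IMAGE=quay.io/tinkerbell/tink-server:v0.x.y
export BOOTS_IMAGE=quay.io/tinkerbell/boots:v0.x.y
export OSIE_VERSION=v0.x.y
```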
Signed-off-by: Gianluca Arbezzano <gianarb92@gmail.com>