## Description
Allows for deploying the vagrant/libvirt setup without NAT and with multiple workers, which enables testing with cluster-api-provider-tink
## Why is this needed
Helps with testing CAPT
## How Has This Been Tested?
Testing is still in progress; all of it will consist of manual testing with vagrant/libvirt.
## How are existing users impacted? What migration steps/scripts do we need?
This could affect existing vagrant/libvirt users who have a worker running when they update; I am not sure there is a good way to avoid that, though.
## Checklist:
I have:
- [ ] updated the documentation and/or roadmap (if required)
- [ ] added unit or e2e tests
- [ ] provided instructions on how to upgrade
## Description
Use a custom endpoint instead of the default, filtered endpoint that `hegel` serves.
## Why is this needed
When using the sandbox in combination with tinkerbell's example [workflows](https://github.com/tinkerbell/workflows), [functions.sh](https://github.com/tinkerbell/workflows/blob/master/ubuntu_18_04/00-base/functions.sh) fails due to missing information in the metadata retrieved from `hegel`: the default endpoint filters out needed fields such as `plan_slug`. This PR removes that filtering.
I also think we are safe to expose this info (the full hardware spec) to the worker in the Sandbox setup, since it is mainly meant as an example setup, not a production one.
Fixes: #64
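For illustration, the change boils down to serving the unfiltered hardware data; the variable name and filter value below are assumptions, not the verbatim diff:
```
# hypothetical hegel override in the sandbox environment: an empty
# filter serves the full hardware record instead of the trimmed-down
# default, so fields like plan_slug survive
CUSTOM_ENDPOINTS='{"/metadata":""}'
```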
## How Has This Been Tested?
It was tested locally by setting the env var in the docker-compose.yml file and patching it into the sandbox setup.
## How are existing users impacted? What migration steps/scripts do we need?
## Checklist:
I have:
- [ ] updated the documentation and/or roadmap (if required)
- [ ] added unit or e2e tests
- [ ] provided instructions on how to upgrade
Apparently the idea of prefixing a package with an underscore is not as
smart as I thought. Yes, `go test` does not run it by default when you
run `go test ./...`, but other commands like `go mod tidy` do not work
consistently with it either.
Nothing changes in practice: by default only unit tests run. Setting the
new environment variable `TEST_WITH_VAGRANT` includes the test that
uses Vagrant.
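As a sketch of the intended invocation (the exact value the guard expects is an assumption here):
```
# default: unit tests only
go test ./...

# opt in to the Vagrant-backed test as well
TEST_WITH_VAGRANT=true go test ./...
```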
Signed-off-by: Gianluca Arbezzano <gianarb92@gmail.com>
## Description
Mount the `current_versions.sh` file on the target provisioner when installing Tinkerbell on Equinix using Terraform.
## Why is this needed
Because otherwise `generate-envrc.sh` fails and no Tinkerbell env file is created.
Fixes: #60
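A minimal sketch of the dependency, assuming the documented invocation (the interface name is a placeholder):
```
# generate-envrc.sh reads the pinned image versions from
# current_versions.sh; if terraform does not mount that file on the
# provisioner, the script fails and no env file is produced
./generate-envrc.sh eth0 > envrc
```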
## How Has This Been Tested?
Simply ran the provisioner on Equinix and the env file was created with all the needed info.
## How are existing users impacted? What migration steps/scripts do we need?
None
## Checklist:
I have:
- [ ] updated the documentation and/or roadmap (if required)
- [ ] added unit or e2e tests
- [ ] provided instructions on how to upgrade
I am not sure when this happened; it may have been when we removed
NGINX_IP, or when we checked that every service was using ports OR
network_mode, but we ended up exposing nginx and boots over the same port.
This commit fixes that.
Signed-off-by: Gianluca Arbezzano <gianarb92@gmail.com>
… container definition
## Description
Resolves #53
## Why is this needed
This conflict causes container creation to fail.
Fixes: #
## How Has This Been Tested?
I ran the setup and was able to run a workflow and deployment without issue.
## How are existing users impacted? What migration steps/scripts do we need?
No impact.
## Checklist:
I have:
- [ ] updated the documentation and/or roadmap (if required)
- [ ] added unit or e2e tests
- [ ] provided instructions on how to upgrade
This commit contains a new utility that helps to automate a version bump
for sandbox.
You can run this command to get the vibe of what it does.
```
$ go run cmd/bump-version/main.go -help
```
To try it out, run this command from the sandbox root. By default it
won't overwrite anything; it will print to stdout a new version of the
current_versions.sh file where all the image versions are calculated by
cloning the various repositories:
```
$ go run cmd/bump-version/main.go
```
If you want to overwrite the current_versions file you can use the flag
`-overwrite`.
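The same command with the flag:
```
$ go run cmd/bump-version/main.go -overwrite
```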
More will come, but for now that's the PoC. Ideally this can be hooked
to CI/CD and run periodically, opening a PR that can be evaluated and
merged.
Signed-off-by: Gianluca Arbezzano <gianarb92@gmail.com>
## Description
Updates [Packet Terraform](https://docs.tinkerbell.org/setup/packet-terraform/) plan to use the Equinix Metal provider.
## Why is this needed
Consistent with rebranding efforts across the organization.
Fixes: #
## How Has This Been Tested?
This plan validates and applies as expected (as it has previously) with the renamed resources and updated outputs.
## How are existing users impacted? What migration steps/scripts do we need?
Existing users may need to reinitialize their Terraform environment, but existing resources in state can be imported.
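A hedged sketch of that migration (the resource address and device UUID are placeholders, not taken from the plan):
```
terraform init
# re-import an existing device under its renamed resource address
terraform import metal_device.tink_provisioner <device-uuid>
```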
## Checklist:
I have:
- [ ] updated the documentation and/or roadmap (if required)
- [ ] added unit or e2e tests
- [ ] provided instructions on how to upgrade
This PR contains a provisioning mechanism for the Vagrant boxes we ship
as part of Sandbox.
To self-contain and distribute the required dependencies for Tinkerbell
and Sandbox without having to download all of them at runtime, we decided to use
[Packer.io](https://packer.io) to build boxes that you can use when provisioning
Tinkerbell on Vagrant.
Currently the generated boxes are available via [Vagrant
Cloud](https://app.vagrantup.com/tinkerbelloss).
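Fetching one of them looks roughly like this (the box name is hypothetical):
```
# hypothetical box name under the tinkerbelloss org
vagrant box add tinkerbelloss/sandbox-provisioner
```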
Signed-off-by: Gianluca Arbezzano <gianarb92@gmail.com>
Signed-off-by: Michael Richard <michael.richard.ing@gmail.com>
## Description
This configures NGINX to listen on port 8080 and removes the need to configure a second IP address on the host dedicated to NGINX.
## Why is this needed
Setting up a second IP address to host NGINX on the same host is not always easy, especially when running Tinkerbell on network devices like switches. The second IP address adds an unnecessary level of complexity. In the future, all the code required to identify the host operating system and configure the IP address could even be removed and left as a prerequisite, since the host is likely to have an IP address already configured.
## How Has This Been Tested?
The untouched vagrant_test.go test ran successfully.
## How are existing users impacted? What migration steps/scripts do we need?
Simply re-applying the docker-compose.yml should be sufficient (untested).
Additional firewall rules to allow traffic on port 8080 could be required, depending on the user's network configuration.
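A quick way to confirm traffic gets through (the host IP is a placeholder):
```
# nginx should now answer on the single host IP, port 8080
curl -I http://<provisioner-ip>:8080/
```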
## Checklist:
I have:
- [ ] updated the documentation and/or roadmap (if required)
- [ ] added unit or e2e tests
- [ ] provided instructions on how to upgrade
## Description
Fixes the vagrant configuration when using libvirt
## Why is this needed
Without this fix, the vagrant provisioner fails when using libvirt with the following error:
```sh
Error occurred while creating new network: {:iface_type=>:private_network, :netmask=>"255.255.255.0", :dhcp_enabled=>false, :forward_mode=>"none", :virtualbox__intnet=>"tink_network", :libvirt__dhcp_enabled=>false, :libvirt__forward_mode=>"none", :auto_config=>false, :protocol=>"tcp", :id=>"18e6fc6d-41b8-40c9-814d-ffc476bfd920"}.
```
## How Has This Been Tested?
Reran `vagrant up` after making the changes, and the provisioner machine was successfully created.
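Roughly, assuming the machine name used in the sandbox Vagrantfile:
```
vagrant destroy -f provisioner
vagrant up provisioner --provider=libvirt
```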
## How are existing users impacted? What migration steps/scripts do we need?
Should not affect existing users unless they were trying to use vagrant/libvirt and were unsuccessful before.
No migration should be needed.
Commit b504810 introduced a NAT to make the worker capable of reaching the
public internet via the provisioner.
But it also introduced a bug: it only works for the Vagrant setup, as
Manny pointed out:
https://github.com/tinkerbell/sandbox/pull/33#issuecomment-759651035
This is an attempt to fix it.
@mmlb I would like to avoid additional conditions as part of
setup.sh; we already have too many of them and they are not even easy to
discover. We have different entrypoints for those environments, let's use them.
Signed-off-by: Gianluca Arbezzano <gianarb92@gmail.com>
## Description
Update boots version.
## Why is this needed
This will get us not only proper binaries in the releases but also 64-bit boots
for 64-bit x86 machines!
## How Has This Been Tested?
Boots has been tested on Equinix Metal hardware.
## Description
Renames binaries to be internally consistent and also consistent with other Go projects that provide multi-arch binaries.
## Why is this needed
@gianarb asked me to rename the binaries in https://github.com/tinkerbell/boots/pull/122 to match this scheme, but I think that this PR is the better direction.
The old naming scheme seemed weird to me, so I went looking around at other
Go projects. None of the projects I found that ship multi-arch
release binaries used this scheme; instead they just append the variant
to the arch. Appending the variant to the arch also makes a lot of sense if
you think of the naming scheme as $binary-$os-$cpu with
$cpu=$arch$variant. Keeping arch and variant together as $cpu is also
more consistent, and consistency is great :D.
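For example, under this scheme hypothetical release artifacts would look like:
```
boots-linux-amd64
boots-linux-arm64
boots-linux-armv7   # $cpu = $arch + $variant = arm + v7
```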
Signed-off-by: Manuel Mendez <mmendez@equinix.com>
## Description
Documentation
## Why is this needed
This statement is confusing; I needed to log into the community Slack to get clarification.
Fixes: #
## How Has This Been Tested?
This is a documentation change and thus will not impact any software in this project.
## How are existing users impacted? What migration steps/scripts do we need?
They are not; newer users may find this a little easier to digest.
## Checklist:
I have:
- [ ] updated the documentation and/or roadmap (if required)
- [ ] added unit or e2e tests
- [ ] provided instructions on how to upgrade