Using Vagrant to provision machines
Introduction
In the first post in this series (Setting up a Consul cluster for testing and development with Vagrant (Part 1)) we looked at the Vagrant files and associated files required to automatically provision a Consul cluster for testing and development. We didn’t get as far as actually provisioning any machines, which is what we will do here. If you haven’t read the previous post it’s probably a good idea to do so before going any further with this one.
All the files we created in the previous post are available on BitBucket.
Provisioning the Consul servers
If you recall from the previous post, the Vagrantfile defined four machines: three hosting Consul servers and a fourth hosting a Consul client which runs the Consul Web UI.
I’m using Cygwin, so the first step is to change to the consul-cluster folder I created to contain all my working files, including the Vagrantfile. To provision all the machines in one go you can simply run the following command:
vagrant up

However, I don’t want to provision all the machines in one go. I want to use ConEmu to create a tab for each machine. To provision the machines one at a time you can simply pass in the name of the machine to provision. For example:
vagrant up consul1
If all goes well you should see Vagrant provision the machine: it downloads the box image (hashicorp/precise64) and runs the provisioner, which in this case is a shell provisioner that runs the provision.sh script.
The last thing in the provisioning script is starting the Consul agent in bootstrap mode. You should end up with something like this:
That’s it! You now have a virtual machine up-and-running with the Consul agent running in bootstrap mode.
You can now repeat this process for the remaining servers and the client. I open a new tab for each machine in ConEmu.
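The whole sequence can be sketched as a small helper function you can paste into your shell. The machine names are assumed to match the Vagrantfile from the previous post; in particular, “consul2” is my assumed name for the second server:

```shell
# Bring up the remaining machines one at a time (one per ConEmu tab,
# if you prefer to watch each provision separately).
# Machine names are assumed to match the Vagrantfile from part 1.
provision_remaining() {
  for machine in consul2 consul3 consulclient; do
    vagrant up "$machine"
  done
}
```

Running `provision_remaining` is equivalent to issuing the three “vagrant up” commands by hand.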
As each new instance is started you can see it joining the cluster:
Back on the consul1 instance you will see the new Consul instance joining the cluster too:
It’s just a case of repeating the process for the consul3 instance to get the completed server cluster up-and-running.
Provisioning the Consul client
You provision the Consul client in exactly the same way as the server instances:

vagrant up consulclient
The client will join the cluster just like the servers. Once it’s up-and-running you will be able to access the Consul Web UI with a browser from your host workstation (go to http://172.20.20.40:8500/ui/). You should see something like this:
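If you’d rather check from a terminal than a browser, the same information is available from Consul’s HTTP API via the client agent. This is a sketch, assuming the client is reachable on 172.20.20.40 as configured in the Vagrantfile:

```shell
# Query the cluster through the client agent's HTTP API.
check_cluster() {
  # The Raft peer set: should list all three server addresses.
  curl -s http://172.20.20.40:8500/v1/status/peers
  # Every node that has joined the cluster, servers and client alike.
  curl -s http://172.20.20.40:8500/v1/catalog/nodes
}
```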
Excellent! Note there’s a data center ‘DC1’ listed top-right. If you were observant you’d have noticed we gave each Consul instance a data center in its config.json file, and that is what’s reflected in the Consul Web UI. As a reminder, here’s consul1’s config:
{
  "bootstrap": true,
  "server": true,
  "datacenter": "dc1",
  "data_dir": "/var/consul",
  "encrypt": "Dt3P9SpKGAR/DIUN1cDirg==",
  "log_level": "INFO",
  "enable_syslog": true,
  "bind_addr": "172.20.20.10",
  "client_addr": "172.20.20.10"
}
Halting a virtual machine
To halt a virtual machine, open a command prompt (Cygwin in my case) in the directory containing the Vagrantfile and type “vagrant halt” followed by the name of the instance to stop. For example:

vagrant halt consul3
Once the instance has halted you should see this reflected in the Consul Web UI.
If you want to halt all virtual machines in one go just type “vagrant halt” but don’t specify a machine name. Vagrant will iterate through all the virtual machines you have defined in the Vagrantfile and halt each one in turn.
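A quick way to confirm what state everything is in afterwards is “vagrant status”, which reports each machine defined in the Vagrantfile. A small sketch combining the two:

```shell
# Halt a single machine (or all of them when no name is given),
# then report the state of every machine in the Vagrantfile
# (running, poweroff, or not created).
halt_and_check() {
  vagrant halt "$@"
  vagrant status
}
```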
Restarting a virtual machine
If you halt a virtual machine you can easily bring it back up again by typing “vagrant up” followed by the machine name. However, when you do this you’ll notice something different: the provisioner (and therefore provision.sh) doesn’t get run.
This makes perfect sense because the machine has already been provisioned, we’re just restarting it.
Never fear, Consul will be running because of the upstart script we created. We can check that by connecting to the virtual machine with SSH.
Connecting to a virtual machine with SSH
We can connect to a virtual machine with SSH by using another Vagrant command, “vagrant ssh”, followed by the machine name:

vagrant ssh consul1
This connects you to the machine using the ‘vagrant’ user that is automatically created for you. We can now verify that the Consul agent is up-and-running:
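Here's a sketch of a couple of checks to run from inside the VM. It assumes the consul binary is on the PATH and that the agent's RPC interface is listening on the client_addr from consul1's config.json:

```shell
# Run from inside the VM after "vagrant ssh consul1".
verify_agent() {
  # Is the consul process running under the upstart script?
  # The [c] trick stops grep from matching its own process.
  ps aux | grep '[c]onsul'
  # Ask the agent who its cluster members are. Because client_addr
  # is 172.20.20.10 rather than localhost, the RPC address must be
  # given explicitly.
  consul members -rpc-addr=172.20.20.10:8400
}
```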
Forcing the provisioner to be run again
If you want to restart a virtual machine that has already been provisioned but force the provisioning step to be rerun, you have a few options, including passing the “--provision” flag to “vagrant up”.
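A minimal sketch of the options, all standard Vagrant commands, using consul1 as the example machine:

```shell
# Three ways to re-run provisioning on an existing machine.
reprovision_examples() {
  vagrant up consul1 --provision      # boot (if halted) and re-provision
  vagrant provision consul1           # re-run provisioners on a running machine
  vagrant reload consul1 --provision  # restart the machine, then re-provision
}
```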
Refer to the Vagrant documentation for details.
Destroying virtual machines
One of the joys of using something like Vagrant is knowing you can completely remove a virtual machine from your system and be able to easily recreate it later, safe in the knowledge that it will be provisioned and configured exactly the same each time.
To completely remove a virtual machine from your system type “vagrant destroy” followed by the name of the machine to remove. As with most Vagrant commands if you omit the machine name Vagrant will iterate through all of the machines defined in the Vagrantfile and destroy them all.
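For example, sketched as a helper:

```shell
# Destroy a single machine (Vagrant asks for confirmation first),
# or destroy every machine in the Vagrantfile without prompting.
destroy_examples() {
  vagrant destroy consul3
  vagrant destroy --force
}
```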
Wrapping up
So that’s it for this brief introduction to using Vagrant to provision a Consul cluster for testing and development. Don’t forget, the source files can be found on BitBucket.