Local development environments with Terraform + LXD
Jun 1, 2023
As a Big Data Solutions Architect and InfraOps, I need development environments to install and test software. They have to be configurable, flexible, and performant. Working with distributed systems, the best-fitting setups for this use case are local virtualized clusters of several Linux instances.
For a few years, I have been using HashiCorp's Vagrant to manage libvirt/KVM instances. This works well, but I recently tried another setup that works better for me: LXD to manage instances and Terraform (another HashiCorp tool) to operate LXD. In this article, I explain the advantages of the latter and how to set up such an environment.
Glossary
Vagrant and Terraform
Vagrant enables users to create and configure lightweight, reproducible, and portable development environments. It is mostly used to provision virtual machines locally.
Terraform is a widely used Infrastructure as Code tool that allows provisioning resources on virtually any cloud. It supports many providers from public cloud (AWS, Azure, GCP) to private self-hosted infrastructure (OpenStack, Kubernetes, and LXD of course). With Terraform, InfraOps teams apply GitOps best practices to manage their infrastructure.
Linux virtualization/containerization
Here is a quick review of the various tools (and acronyms) used in this article and composing the crowded Linux virtualization/containerization ecosystem:
- Virtual machine tools:
  - QEMU (Quick EMUlator): an emulation and virtualization tool that acts as a VM hypervisor (an alternative to VirtualBox, for example).
  - KVM (Kernel-based Virtual Machine): a Linux kernel module that leverages hardware virtualization, notably CPU features made specifically for virtualization (e.g. Intel VT).
  - libvirt: a virtualization API that supports several VM hypervisors and simplifies the management of VMs from any programming language.
- Linux container tools (see LXD: The Missing Piece for more about Linux containers):
  - LXC (LinuX Containers): an interface to create and manage system or application containers.
  - LXD (LinuX container Daemon): a system container and VM manager that offers a unified user experience around full Linux systems. It operates on top of LXC (for containers) and QEMU (for VMs) to manage machines hosted by an individual host or by a cluster of federated hosts.
  - lxc (the other one): the CLI for LXD.
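To make these roles concrete, the same lxc CLI drives both containers and VMs through LXD (instance names are illustrative):
lxc launch images:rockylinux/8 c1        # system container, backed by LXC
lxc launch images:rockylinux/8 v1 --vm   # virtual machine, backed by QEMU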
Having Vagrant operate KVM guests is achieved with the vagrant-libvirt provider. See KVM machines for Vagrant on Archlinux for how to set up libvirt/KVM with Vagrant.
Why Terraform?
LXD is operated from the command line with the lxc command to manage its resources (containers and VMs, networks, storage pools, instance profiles). Being a command-based tool, it is by nature not Git friendly.
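For instance, creating and tuning a profile by hand looks like the following sketch; the resulting state lives only inside LXD, with nothing to version in Git (the profile name is illustrative):
lxc profile create xs_master
lxc profile set xs_master limits.cpu 1
lxc profile set xs_master limits.memory 1GiB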
Fortunately, there is a Terraform provider to manage LXD: terraform-provider-lxd. This enables versioning LXD infrastructure configuration alongside application code.
Note: Another tool to operate LXD could be Canonical’s Juju, but it seems a bit more complex to learn.
Why Terraform + LXD? Advantages over Vagrant + libvirt/KVM
Live resizing of instances
Linux containers are more flexible than VMs, which allows resizing instances without a reboot. This is a very convenient feature.
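As an illustration, a running container can be resized on the fly with the LXD CLI (the instance name is illustrative); the new limits apply immediately, without stopping the instance:
lxc config set xs-master-01 limits.cpu 4
lxc config set xs-master-01 limits.memory 4GiB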
Unified tooling from development to production
LXD can be installed on multiple hosts to make a cluster that can be used as the base layer of a self-hosted cloud. The Terraform + LXD couple can thus be used to manage local, integration, and production environments. This significantly eases testing and deploying infrastructure configurations.
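For instance, the provider can target a remote LXD cluster as well as the local daemon. A minimal sketch, assuming a remote named production reachable at lxd.example.com (name and address are illustrative):
provider "lxd" {
  remote {
    name    = "production"
    scheme  = "https"
    address = "lxd.example.com"
    port    = 8443
  }
}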
LXD support in Ansible
To install and configure software on the local instances, I often use Ansible. There are several connection plugins available to Ansible to connect to the target hosts, the main one being ssh.
When provisioning LXC instances, we can use the standard ssh plugin, but also a native plugin: lxc (which uses the LXC Python library) or lxd (which uses the lxc CLI). This is beneficial for two reasons:
- For security as we don’t have to start an OpenSSH server and open the SSH port on our instances
- For simplicity as we don’t have to manage SSH keys for Ansible
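As a quick sketch, an ad-hoc Ansible command can reach a running container through the lxd connection plugin from the community.general collection (assuming it is installed; the instance name is illustrative):
# No SSH server nor key management involved
ansible xs-master-01 -i xs-master-01, -c community.general.lxd -m ping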
Configuration changes preview
One of the main features of Terraform is the ability to preview the changes that a command would apply. This avoids unwanted configuration deployments and command errors.
Here is an example with the resizing of an LXD instance profile:
$ terraform plan
...
Terraform will perform the following actions:

  # lxd_profile.tdp_profiles["tdp_edge"] will be updated in-place
  ~ resource "lxd_profile" "tdp_profiles" {
      ~ config = {
          ~ "limits.cpu"    = "1" -> "2"
          ~ "limits.memory" = "1GiB" -> "2GiB"
        }
        id   = "tdp_edge"
        name = "tdp_edge"
    }

Plan: 0 to add, 1 to change, 0 to destroy.
Configuration readability and modularity
The Terraform language is declarative: it describes an intended goal rather than the steps to reach it. As such, it is more readable than the Ruby used in Vagrantfiles. Also, because Terraform parses all files in the current directory and allows defining modules with inputs and outputs, we can easily split the configuration to improve maintainability.
# Using multiple Terraform config files:
$ ls -1 | grep -P '\.tf(vars)?$'
local.auto.tfvars
main.tf
outputs.tf
provider.tf
terraform.tfvars
variables.tf
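As a sketch of the module mechanism (module name and paths are hypothetical), a root configuration can instantiate a module and consume its outputs:
# main.tf (root): instantiate a hypothetical cluster module
module "xs_cluster" {
  source      = "./modules/xs-cluster"
  xs_profiles = var.xs_profiles
}

# outputs.tf (root): re-expose an output declared by the module
output "master_ips" {
  value = module.xs_cluster.master_ips
}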
Performance gain
Using Terraform + LXD speeds up daily operations in local development environments which is always enjoyable.
Here is a performance benchmark when operating a local development cluster with the following specs:
- Host OS: Ubuntu 20.04
- Number of guest instances: 7
- Resources allocated: 24GiB of RAM and 24 vCPUs
Metric | Vagrant + libvirt/KVM | Terraform + LXD | Performance gain
---|---|---|---
Cluster creation (sec) | 56.5 | 51 | 1.1x faster
Cluster startup (sec) | 36.5 | 6 | 6x faster
Cluster shutdown (sec) | 46 | 13.5 | 3.4x faster
Cluster destroy (sec) | 9 | 17 | 2x slower
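For reference, such timings can be collected with standard commands, e.g. (a sketch; the exact benchmark harness is not shown here):
time terraform apply -auto-approve   # creation with Terraform + LXD
time vagrant up                      # creation with Vagrant + libvirt/KVM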
Setup of a minimal Terraform + LXD environment
Now let’s set up a minimal Terraform + LXD environment.
Prerequisites
Your computer needs:
- LXD (see Installation)
- Terraform >= 0.13 (see Install Terraform)
- Linux cgroup v2 (to run recent Linux containers like Rocky 8)
- 5 GB of RAM available
Also create a directory to work from:
mkdir terraform-lxd-xs
cd terraform-lxd-xs
Linux cgroup v2
To check if your host uses cgroup v2, run:
stat -fc %T /sys/fs/cgroup
# cgroup2fs => cgroup v2
# tmpfs => cgroup v1
Recent distributions use cgroup v2 by default (check the list here) but the feature is available on all hosts that run a Linux kernel >= 5.2 (e.g. Ubuntu 20.04). To enable it, see Enabling cgroup v2.
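For instance, on a systemd-based host booting with GRUB, cgroup v2 can usually be enabled through a kernel command-line flag (a sketch; adapt the procedure to your distribution and bootloader):
# Add the flag to GRUB_CMDLINE_LINUX in /etc/default/grub,
# then run: sudo update-grub && sudo reboot
GRUB_CMDLINE_LINUX="... systemd.unified_cgroup_hierarchy=1"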
Terraform provider
We will use the terraform-lxd/lxd Terraform provider to manage our LXD resources.
Create provider.tf:
terraform {
  required_providers {
    lxd = {
      source  = "terraform-lxd/lxd"
      version = "1.7.1"
    }
  }
}

provider "lxd" {
  generate_client_certificates = true
  accept_remote_certificate    = true
}
Variables definition
It is good practice to let users configure the Terraform environment through input variables. We enforce variable correctness by declaring their expected types.
Create variables.tf:
variable "xs_storage_pool" {
type = object({
name = string
source = string
})
}
variable "xs_network" {
type = object({
ipv4 = object({
address = string
})
})
}
variable "xs_profiles" {
type = list(object({
name = string
limits = object({
cpu = number
memory = string
})
}))
}
variable "xs_image" {
type = string
default = "images:rocky/8"
}
variable "xs_containers" {
type = list(object({
name = string
profile = string
ip = string
}))
}
The following variables are defined:
- xs_storage_pool: the LXD storage pool storing the disks of our containers
- xs_network: the LXD IPv4 network used by containers to communicate within a shared network
- xs_profiles: the LXD profiles created for our containers. Profiles allow the definition of a set of properties that can be applied to any container.
- xs_image: the LXD image, which essentially specifies which OS the containers run
- xs_containers: the LXD instances to create
Main
The main Terraform file defines all the resources configured through the variables. Developers rarely modify this file after its initial implementation for the project.
Create main.tf:
# Storage pools
resource "lxd_storage_pool" "xs_storage_pool" {
  name   = var.xs_storage_pool.name
  driver = "dir"
  config = {
    source = "${path.cwd}/${path.module}/${var.xs_storage_pool.source}"
  }
}

# Networks
resource "lxd_network" "xs_network" {
  name = "xsbr0"
  config = {
    "ipv4.address" = var.xs_network.ipv4.address
    "ipv4.nat"     = "true"
    "ipv6.address" = "none"
  }
}

# Profiles
resource "lxd_profile" "xs_profiles" {
  depends_on = [
    lxd_storage_pool.xs_storage_pool
  ]

  for_each = {
    for index, profile in var.xs_profiles :
    profile.name => profile.limits
  }

  name = each.key
  config = {
    "boot.autostart" = false
    "limits.cpu"     = each.value.cpu
    "limits.memory"  = each.value.memory
  }

  device {
    type = "disk"
    name = "root"
    properties = {
      pool = var.xs_storage_pool.name
      path = "/"
    }
  }
}

# Containers
resource "lxd_container" "xs_containers" {
  depends_on = [
    lxd_network.xs_network,
    lxd_profile.xs_profiles
  ]

  for_each = {
    for index, container in var.xs_containers :
    container.name => container
  }

  name  = each.key
  image = var.xs_image
  profiles = [
    each.value.profile
  ]

  device {
    name = "eth0"
    type = "nic"
    properties = {
      network        = lxd_network.xs_network.name
      "ipv4.address" = each.value.ip
    }
  }
}
The following resources are created by Terraform:
- lxd_storage_pool.xs_storage_pool: the storage pool backing the containers’ root disks
- lxd_network.xs_network: the network for all our instances
- lxd_profile.xs_profiles: several profiles that can be defined by the user
- lxd_container.xs_containers: the instances’ definitions (including the application of the profile and the network device attachment)
Variables file
Finally, we provide Terraform with the variables specific to our environment. We use the auto.tfvars extension to automatically load the variables when terraform is run.
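Note that apart from terraform.tfvars and *.auto.tfvars files, variable files must be passed explicitly, e.g. (staging.tfvars being a hypothetical file name):
terraform apply -var-file=staging.tfvars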
Create local.auto.tfvars:
xs_storage_pool = {
  name   = "xs_storage_pool"
  source = "lxd-xs-pool"
}

xs_network = {
  ipv4 = { address = "192.168.42.1/24" }
}

xs_profiles = [
  {
    name = "xs_master"
    limits = {
      cpu    = 1
      memory = "1GiB"
    }
  },
  {
    name = "xs_worker"
    limits = {
      cpu    = 2
      memory = "2GiB"
    }
  }
]

xs_image = "images:rockylinux/8"

xs_containers = [
  {
    name    = "xs-master-01"
    profile = "xs_master"
    ip      = "192.168.42.11"
  },
  {
    name    = "xs-master-02"
    profile = "xs_master"
    ip      = "192.168.42.12"
  },
  {
    name    = "xs-worker-01"
    profile = "xs_worker"
    ip      = "192.168.42.21"
  },
  {
    name    = "xs-worker-02"
    profile = "xs_worker"
    ip      = "192.168.42.22"
  },
  {
    name    = "xs-worker-03"
    profile = "xs_worker"
    ip      = "192.168.42.23"
  }
]
Environment provisioning
Now we have all the files needed to provision our environment:
# Install the provider
terraform init
# Create the directory for the storage pool
mkdir lxd-xs-pool
# Preview and apply the resource changes
terraform apply
Once the resources are created, we can check that everything is working fine:
# Check resources existence in LXD
lxc network list
lxc profile list
lxc list
# Connect to an instance
lxc shell xs-master-01
Et voilà!
Note: to destroy the environment, run terraform destroy.
More advanced example
You can take a look at tdp-lxd for a more advanced setup with:
- More profiles
- File templating (for an Ansible inventory)
- Outputs definition
Conclusion
The combination of Terraform and LXD brings a new way of managing local development environments that has several advantages over competitors (namely Vagrant). If you are often bootstrapping this kind of environment, I suggest you give it a try!