Big Picture

In the first lab, we managed our state locally on our filesystem. In the real world, this doesn't scale well and can be dangerous if you and other developers apply the same code to the same infrastructure. To avoid diverging states, Terraform lets you store the state centrally. In our case, we will use the Exoscale Object Store.

Creating a storage bucket

Set some environment variables (Linux / MacOS)

export AWS_ACCESS_KEY_ID=<EXOSCALE_API_KEY>
export AWS_SECRET_ACCESS_KEY=<EXOSCALE_API_SECRET>
export TF_VAR_access_key=${AWS_ACCESS_KEY_ID}
export TF_VAR_secret_key=${AWS_SECRET_ACCESS_KEY}

Set some environment variables (Windows)

set AWS_ACCESS_KEY_ID=<EXOSCALE_API_KEY>
set AWS_SECRET_ACCESS_KEY=<EXOSCALE_API_SECRET>
set TF_VAR_access_key=%AWS_ACCESS_KEY_ID%
set TF_VAR_secret_key=%AWS_SECRET_ACCESS_KEY%

Initialize the new state

terraform {
  required_providers {
    exoscale = {
      source  = "exoscale/exoscale"
      version = "0.48.0"
    }
  }
  backend "s3" {
    bucket                      = "<bucket-name>"
    region                      = "at-vie-1"
    key                         = "terraform.tfstate"
    endpoint                    = "https://sos-at-vie-1.exo.io"
    skip_credentials_validation = true
    skip_region_validation      = true
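    # Note: on newer Terraform versions (>= 1.6) you may additionally need
    # skip_requesting_account_id = true and skip_s3_checksum = true, and
    # "endpoint" moves into an endpoints { s3 = "..." } block.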
  }
}

provider "exoscale" {
  key    = var.access_key
  secret = var.secret_key
}

There are a couple of notable changes compared to the first lab: the backend "s3" block tells Terraform to store its state in the (S3-compatible) Exoscale Object Store, and the provider now takes its credentials from the input variables var.access_key and var.secret_key, which we still have to define.

Define Variables

As in many other programming languages, we have to define our variables in the code. In Terraform, variables usually live in a separate file, so create a new file called vars.tf in your configuration folder and add the following content:

variable "secret_key" {
  type = string
  sensitive = true
}

variable "access_key" {
  type = string
  sensitive = true
}

There are different types of variables in Terraform (string, number, bool, and complex types such as list, map, and object); in this lab we only need string.
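
For illustration only (not needed in this lab), a hypothetical variable with a non-string type and a default value:

variable "instance_count" {
  type    = number
  default = 2
}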

Using the sensitive = true argument, we ensure that the content of this variable is redacted in Terraform's output, for example in plan and apply logs.

Assigning Variables

There are various ways to assign variables: environment variables prefixed with TF_VAR_, -var arguments on the command line, or .tfvars files.

For this example, we already assigned them through the environment variables starting with TF_VAR_ that we exported earlier.
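
They could equally be passed on the command line, e.g. reusing the environment variables exported earlier:

terraform plan -var "access_key=$AWS_ACCESS_KEY_ID" -var "secret_key=$AWS_SECRET_ACCESS_KEY"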

Setting up the state

Now we can initialize the new state by running terraform init. If you then navigate to your bucket in the Exoscale console, you should see a terraform.tfstate object.
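
❯ terraform init
[...]
Terraform has been successfully initialized!

Next, add the template data source and the two compute instances to the configuration: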

# Data Source for getting the ubuntu template
data "exoscale_compute_template" "ubuntu_template" {
  zone = "at-vie-1"
  name = "Linux Ubuntu 22.04 LTS 64-bit"
}

# Resource for podtatohead-frontend
resource "exoscale_compute_instance" "podtatohead-frontend" {
  zone = "at-vie-1"
  name = "podtatohead-frontend"

  template_id = data.exoscale_compute_template.ubuntu_template.id
  type        = "standard.tiny"
  disk_size   = 10
}

# Resource for podtatohead-backend
resource "exoscale_compute_instance" "podtatohead-backend" {
  zone = "at-vie-1"
  name = "podtatohead-backend"

  template_id = data.exoscale_compute_template.ubuntu_template.id
  type        = "standard.tiny"
  disk_size   = 10
}

❯ terraform validate
Success! The configuration is valid.

❯ terraform plan
[...]
Plan: 2 to add, 0 to change, 0 to destroy.

───────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.
[...]

❯ terraform apply
[...]
Plan: 2 to add, 0 to change, 0 to destroy.
exoscale_compute_instance.podtatohead-frontend: Creating...
exoscale_compute_instance.podtatohead-backend: Creating...
exoscale_compute_instance.podtatohead-backend: Still creating... [10s elapsed]
exoscale_compute_instance.podtatohead-frontend: Still creating... [10s elapsed]
exoscale_compute_instance.podtatohead-frontend: Creation complete after 19s [id=i-0024742a855b87fe8]
exoscale_compute_instance.podtatohead-backend: Creation complete after 19s [id=i-095392d47383309ba]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Inspect our environment
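
Even with the state stored remotely, you can inspect what Terraform manages. terraform state list shows every resource in the state, and terraform show prints their attributes, including the instances' public IP addresses:

terraform state list
terraform show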

Set up the Cloud Init Templates

To install Docker and start the containers at boot time, we use a cloud-init script that Terraform renders from a template. Create templates/cloud_init.tpl with the following content; the ${...} placeholders and the %{ if } directive are Terraform template syntax, filled in later by the templatefile() function:

#!/bin/bash
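# Install Docker Engine from Docker's official apt repository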

sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y

sudo mkdir -p /etc/podtatohead
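# Write the services config; backend_ip is filled in by Terraform's templatefile()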
cat << EOF | sudo tee /etc/podtatohead/servicesConfig.yaml > /dev/null
hat:       "http://${backend_ip}:8080"
left-leg:  "http://${backend_ip}:8080"
left-arm:  "http://${backend_ip}:8080"
right-leg: "http://${backend_ip}:8080"
right-arm: "http://${backend_ip}:8080"
EOF
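
# Start the podtato-head container; only the frontend mounts the services config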

%{ if component == "frontend" }
sudo docker run -p 8080:8080 -e PORT=8080 -e PODTATO_COMPONENT=${component} -e SERVICES_CONFIG_FILE_PATH=/etc/podtatohead/servicesConfig.yaml -v /etc/podtatohead/servicesConfig.yaml:/etc/podtatohead/servicesConfig.yaml -d ${container_image}:v${podtato_version}
%{ else }
sudo docker run -p 8080:8080 -e PORT=8080 -d ${container_image}:v${podtato_version}
%{ endif }

Use the Cloud Init Templates

Render the template with the templatefile() function and pass the result to the instances as user_data. The frontend needs the backend's public IP address; for the backend, backend_ip stays empty:

  user_data = templatefile("${path.module}/templates/cloud_init.tpl", { container_image = "ghcr.io/podtato-head/podtato-server", podtato_version=var.podtato_version, backend_ip = exoscale_compute_instance.podtatohead-backend.public_ip_address, component = "frontend" })
  user_data = templatefile("${path.module}/templates/cloud_init.tpl", { container_image = "ghcr.io/podtato-head/podtato-server", podtato_version=var.podtato_version, backend_ip = "", component = "backend" })

The complete frontend resource now looks like this (the backend resource is extended analogously with its own user_data line):

# Resource for podtatohead-frontend
resource "exoscale_compute_instance" "podtatohead-frontend" {
  zone = "at-vie-1"
  name = "podtatohead-frontend"

  template_id = data.exoscale_compute_template.ubuntu_template.id
  type        = "standard.tiny"
  disk_size   = 10
  user_data = templatefile("${path.module}/templates/cloud_init.tpl", { container_image = "ghcr.io/podtato-head/podtato-server", podtato_version=var.podtato_version, backend_ip = exoscale_compute_instance.podtatohead-backend.public_ip_address, component = "frontend" })
}

Validate the configuration

Running terraform validate now fails, because the template call references an input variable that has not been declared yet:

│ Error: Reference to undeclared input variable
│
│   on main.tf line 25, in resource "exoscale_compute_instance" "podtatohead-backend":
│   25:   user_data = templatefile("${path.module}/templates/cloud_init.tpl", { container_image = "ghcr.io/podtato-head/podtato-server", podtato_version=var.podtato_version, backend_ip = "", component = "backend" })
│
│ An input variable with the name "podtato_version" has not been declared. This variable can be declared with a variable "podtato_version" {} block.

Defining Variables

Add the missing variable to vars.tf:

variable "podtato_version" {
  type = string
}

Assigning Variables

Assign a value to it, for example in a terraform.tfvars file, which Terraform loads automatically:

podtato_version = "0.3.2"
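
Alternatively, set it through an environment variable, like the credentials earlier:

export TF_VAR_podtato_version=0.3.2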

Tainting Resources

Exoscale will not restart the containers on the already-running instances by itself, so we have to force Terraform to re-create them. As a workaround, taint both instance resources:
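
terraform taint exoscale_compute_instance.podtatohead-frontend
terraform taint exoscale_compute_instance.podtatohead-backend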

This will instruct Terraform to re-create the resources.

Plan and apply the configuration

❯ terraform plan
[...]
Plan: 2 to add, 0 to change, 2 to destroy.

───────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Open a shell to your instances

Add your SSH Key to the configuration

To log in via SSH, reference an SSH key that is already registered in your Exoscale account in both instance resources:

  ssh_key = "<your-key>"

To reach the instances from the outside, we also need a security group that allows inbound SSH (port 22) and traffic to the application port (8080):
resource "exoscale_security_group" "public_ingress" {
  name = "public-ingress"
}

resource "exoscale_security_group_rule" "rule_http" {
  security_group_id = exoscale_security_group.public_ingress.id
  type              = "INGRESS"
  protocol          = "TCP"
  cidr              = "0.0.0.0/0" # "::/0" for IPv6
  start_port        = 8080
  end_port          = 8080
}

resource "exoscale_security_group_rule" "rule_ssh" {
  security_group_id = exoscale_security_group.public_ingress.id
  type              = "INGRESS"
  protocol          = "TCP"
  cidr              = "0.0.0.0/0" # "::/0" for IPv6
  start_port        = 22
  end_port          = 22
}

Assign the Security Groups

  security_group_ids = [ exoscale_security_group.public_ingress.id ]

Add this line to both instance resources; the complete frontend resource now looks like this:

resource "exoscale_compute_instance" "podtatohead-frontend" {
  zone = "at-vie-1"
  name = "podtatohead-frontend"

  template_id = data.exoscale_compute_template.ubuntu_template.id
  type        = "standard.tiny"
  disk_size   = 10
  ssh_key="exoscale"
  user_data = templatefile("${path.module}/templates/cloud_init.tpl", { container_image = "ghcr.io/podtato-head/podtato-server", podtato_version=var.podtato_version, backend_ip = exoscale_compute_instance.podtatohead-backend.public_ip_address, component = "frontend" })
  security_group_ids = [ exoscale_security_group.public_ingress.id ]
}
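
Once everything is applied, you should be able to open a shell on an instance. A sketch, assuming the Ubuntu template's default login user ubuntu and your instance's public IP:

ssh ubuntu@<instance-ip>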

Apply the configuration

Finally, add an output so that Terraform prints the application's URL after each apply:

output "podtato-url" {
  value = "http://${exoscale_compute_instance.podtatohead-frontend.public_ip_address}:8080"
}
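
After the next terraform apply, the URL appears at the end of the output; it can also be queried at any time with:

terraform output podtato-url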