
In the first lab, we managed our state locally in our filesystem. In the real world, this won't scale very well and might be dangerous if you and other developers are using the same code on the same infrastructure. To overcome the issue of diverging states, Terraform gives you the possibility to store the state centrally. In our case, we will use the Exoscale Object Store.
First, set your Exoscale API credentials as environment variables. On Linux/macOS:
export AWS_ACCESS_KEY_ID=<EXOSCALE_API_KEY>
export AWS_SECRET_ACCESS_KEY=<EXOSCALE_API_SECRET>
export TF_VAR_access_key=${AWS_ACCESS_KEY_ID}
export TF_VAR_secret_key=${AWS_SECRET_ACCESS_KEY}
On Windows (cmd):
set AWS_ACCESS_KEY_ID=<EXOSCALE_API_KEY>
set AWS_SECRET_ACCESS_KEY=<EXOSCALE_API_SECRET>
set TF_VAR_access_key=%AWS_ACCESS_KEY_ID%
set TF_VAR_secret_key=%AWS_SECRET_ACCESS_KEY%
Create a file called terraform.tf with the following content:

terraform {
  required_providers {
    exoscale = {
      source  = "exoscale/exoscale"
      version = "0.48.0"
    }
  }

  backend "s3" {
    bucket                      = "<bucket-name>"
    region                      = "at-vie-1"
    key                         = "terraform.tfstate"
    endpoint                    = "https://sos-at-vie-1.exo.io"
    skip_credentials_validation = true
    skip_region_validation      = true
  }
}

provider "exoscale" {
  key    = var.access_key
  secret = var.secret_key
}
There are a few interesting things in there, and some things changed between the first lab and this one: the s3 backend reads its credentials from the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Note that the credentials for backends cannot be stored in Terraform variables.
As in many other programming languages, we have to declare our variables in the code. Most of the time, we create a separate file for our variables in Terraform. Therefore, we will create a new file called vars.tf in our configuration folder and add the following content:
variable "secret_key" {
type = string
sensitive = true
}
variable "access_key" {
type = string
sensitive = true
}
There are different types of variables in Terraform: simple types such as string, number, and bool, and complex types such as list, map, and object. Using the sensitive = true argument, we ensure that the content of a variable is not exposed in any output.
There are various ways to assign values to variables: environment variables prefixed with TF_VAR_, -var arguments on the command line, *.tfvars files, or defaults in the declaration itself. For this example, we already assigned environment variables; the ones which started with TF_VAR_.
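For illustration, a declaration can also carry a description and a default value; the variable name instance_type below is a made-up example for this sketch, not something this lab uses:

# Hypothetical variable with a type, description, and default value
variable "instance_type" {
  type        = string
  description = "Exoscale instance type for the lab machines"
  default     = "standard.tiny"
}

With a default in place the variable becomes optional; without one, Terraform prompts for a value unless it finds one via TF_VAR_instance_type, a -var argument, or a .tfvars file.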
Now we can initialize our new state using terraform init. If you navigate to your bucket in the Exoscale console, you should see a terraform.tfstate object now.
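If terraform init complains that the bucket does not exist, create it first, for example with the Exoscale CLI (a sketch assuming you have exo installed and configured; <bucket-name> is the same placeholder as in the backend configuration):

# create the state bucket in the same zone the backend points at
exo storage mb sos://<bucket-name> --zone at-vie-1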
Next, describe the two virtual machines in your main.tf:

# Data source for looking up the Ubuntu template
data "exoscale_compute_template" "ubuntu_template" {
  zone = "at-vie-1"
  name = "Linux Ubuntu 22.04 LTS 64-bit"
}

# Resource for podtatohead-frontend
resource "exoscale_compute_instance" "podtatohead-frontend" {
  zone        = "at-vie-1"
  name        = "podtatohead-frontend"
  template_id = data.exoscale_compute_template.ubuntu_template.id
  type        = "standard.tiny"
  disk_size   = 10
}

# Resource for podtatohead-backend
resource "exoscale_compute_instance" "podtatohead-backend" {
  zone        = "at-vie-1"
  name        = "podtatohead-backend"
  template_id = data.exoscale_compute_template.ubuntu_template.id
  type        = "standard.tiny"
  disk_size   = 10
}
Run terraform validate:
❯ terraform validate
Success! The configuration is valid.
Run terraform plan and inspect the output:
Plan: 2 to add, 0 to change, 0 to destroy.
───────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.
Now run terraform apply and confirm the prompt with yes. Alternatively, you can run terraform apply with the parameter --auto-approve (but be sure what you're doing).
[...]
Plan: 2 to add, 0 to change, 0 to destroy.
exoscale_compute_instance.podtatohead-frontend: Creating...
exoscale_compute_instance.podtatohead-backend: Creating...
exoscale_compute_instance.podtatohead-backend: Still creating... [10s elapsed]
exoscale_compute_instance.podtatohead-frontend: Still creating... [10s elapsed]
exoscale_compute_instance.podtatohead-frontend: Creation complete after 19s [id=i-0024742a855b87fe8]
exoscale_compute_instance.podtatohead-backend: Creation complete after 19s [id=i-095392d47383309ba]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
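Both machines are now tracked in the shared remote state; you can verify what Terraform knows about with terraform state list, which should print something like:

❯ terraform state list
data.exoscale_compute_template.ubuntu_template
exoscale_compute_instance.podtatohead-backend
exoscale_compute_instance.podtatohead-frontend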

Create a directory called templates in your Terraform directory. Then create a file called cloud_init.tpl in the templates directory and add the following:
#!/bin/bash
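# Install Docker from Docker's official apt repository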
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
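# Write the podtato-head services config; backend_ip is substituted by templatefile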
sudo mkdir -p /etc/podtatohead
cat << EOF | sudo tee /etc/podtatohead/servicesConfig.yaml > /dev/null
hat: "http://${backend_ip}:8080"
left-leg: "http://${backend_ip}:8080"
left-arm: "http://${backend_ip}:8080"
right-leg: "http://${backend_ip}:8080"
right-arm: "http://${backend_ip}:8080"
EOF
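# Start the podtato-head container; only the frontend mounts the services config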
%{ if component == "frontend" }
sudo docker run -p 8080:8080 -e PORT=8080 -e PODTATO_COMPONENT=${component} -e SERVICES_CONFIG_FILE_PATH=/etc/podtatohead/servicesConfig.yaml -v /etc/podtatohead/servicesConfig.yaml:/etc/podtatohead/servicesConfig.yaml -d ${container_image}:v${podtato_version}
%{ else }
sudo docker run -p 8080:8080 -e PORT=8080 -d ${container_image}:v${podtato_version}
%{ endif }
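The ${...} and %{ if ... } markers are Terraform template syntax, not shell syntax. If you want to inspect the rendered script before applying, you can evaluate the template in terraform console; the values below are placeholders for this sketch (203.0.113.10 is a documentation IP, not a real backend address):

❯ terraform console
> templatefile("${path.module}/templates/cloud_init.tpl", { container_image = "ghcr.io/podtato-head/podtato-server", podtato_version = "0.3.2", backend_ip = "203.0.113.10", component = "frontend" })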
Now open the main.tf file in your Terraform folder. Add the following line to the podtatohead-frontend resource:
user_data = templatefile("${path.module}/templates/cloud_init.tpl", { container_image = "ghcr.io/podtato-head/podtato-server", podtato_version=var.podtato_version, backend_ip = exoscale_compute_instance.podtatohead-backend.public_ip_address, component = "frontend" })
and this line to the podtatohead-backend resource:
user_data = templatefile("${path.module}/templates/cloud_init.tpl", { container_image = "ghcr.io/podtato-head/podtato-server", podtato_version=var.podtato_version, backend_ip = "", component = "backend" })
Afterwards, the podtatohead-frontend resource should look like this:

# Resource for podtatohead-frontend
resource "exoscale_compute_instance" "podtatohead-frontend" {
  zone        = "at-vie-1"
  name        = "podtatohead-frontend"
  template_id = data.exoscale_compute_template.ubuntu_template.id
  type        = "standard.tiny"
  disk_size   = 10
  user_data   = templatefile("${path.module}/templates/cloud_init.tpl", { container_image = "ghcr.io/podtato-head/podtato-server", podtato_version=var.podtato_version, backend_ip = exoscale_compute_instance.podtatohead-backend.public_ip_address, component = "frontend" })
}
If you run terraform validate now, it will fail:
│ Error: Reference to undeclared input variable
│
│ on main.tf line 25, in resource "exoscale_compute_instance" "podtatohead-backend":
│ 25: user_data = templatefile("${path.module}/templates/cloud_init.tpl", { container_image = "ghcr.io/podtato-head/podtato-server", podtato_version=var.podtato_version, backend_ip = "", component = "backend" })
│
│ An input variable with the name "podtato_version" has not been declared. This variable can be declared with a variable "podtato_version" {} block.
To fix this, declare the variable in the vars.tf in your Terraform folder:

variable "podtato_version" {
  type = string
}
We use version 0.3.2 here. Create a file called terraform.tfvars in your Terraform directory and add the following content:
podtato_version="0.3.2"
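terraform.tfvars is picked up automatically by plan and apply; the same value could also be passed explicitly on the command line, which takes precedence over the file:

terraform plan -var 'podtato_version=0.3.2'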
Changing only the user data will not cause Exoscale to restart the instances, so the containers will not pick up the change; we have to force the re-creation ourselves. As a workaround:
terraform taint exoscale_compute_instance.podtatohead-frontend
terraform taint exoscale_compute_instance.podtatohead-backend
This will instruct Terraform to re-create the resources.
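On Terraform 0.15.2 and newer, the -replace plan option is the recommended alternative to taint and achieves the same re-creation in a single step:

terraform apply -replace=exoscale_compute_instance.podtatohead-frontend -replace=exoscale_compute_instance.podtatohead-backend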
terraform validate should pass now. Run terraform plan and inspect the output:
[...]
Plan: 2 to add, 0 to change, 2 to destroy.
───────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.
Apply the changes with terraform apply. To be able to log in to the instances via SSH, add ssh_key = "<your-key>" (the name of an SSH key registered with Exoscale) to both compute instance resources.
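If you have no key registered with Exoscale yet, the key itself can also be managed in Terraform; a minimal sketch, assuming an existing local public key (the resource name lab_key and the key path are assumptions; the key name matches the ssh_key = "exoscale" used below):

# Hypothetical: register a local public key with Exoscale
resource "exoscale_ssh_key" "lab_key" {
  name       = "exoscale"
  public_key = file(pathexpand("~/.ssh/id_ed25519.pub"))
}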
By default, the instances are not reachable from the outside. To open the required ports, we create a security_group resource and assign it to an instance. Add the following to your main.tf file:

resource "exoscale_security_group" "public_ingress" {
  name = "public-ingress"
}

resource "exoscale_security_group_rule" "rule_http" {
  security_group_id = exoscale_security_group.public_ingress.id
  type              = "INGRESS"
  protocol          = "TCP"
  cidr              = "0.0.0.0/0" # "::/0" for IPv6
  start_port        = 8080
  end_port          = 8080
}

resource "exoscale_security_group_rule" "rule_ssh" {
  security_group_id = exoscale_security_group.public_ingress.id
  type              = "INGRESS"
  protocol          = "TCP"
  cidr              = "0.0.0.0/0" # "::/0" for IPv6
  start_port        = 22
  end_port          = 22
}
Then add the following attribute to both compute instance resources:
security_group_ids = [ exoscale_security_group.public_ingress.id ]
Afterwards, the podtatohead-frontend resource should look like this:
resource "exoscale_compute_instance" "podtatohead-frontend" {
zone = "at-vie-1"
name = "podtatohead-frontend"
template_id = data.exoscale_compute_template.ubuntu_template.id
type = "standard.tiny"
disk_size = 10
ssh_key="exoscale"
user_data = templatefile("${path.module}/templates/cloud_init.tpl", { container_image = "ghcr.io/podtato-head/podtato-server", podtato_version=var.podtato_version, backend_ip = exoscale_compute_instance.podtatohead-backend.public_ip_address, component = "frontend" })
security_group_ids = [ exoscale_security_group.public_ingress.id ]
}
Run terraform validate, terraform plan, and terraform apply again to roll out the changes. To find out the address of the frontend, create a file called outputs.tf in your Terraform directory with the following contents:

output "podtato-url" {
  value = "http://${exoscale_compute_instance.podtatohead-frontend.public_ip_address}:8080"
}
Run terraform refresh to update the state and display the outputs. You will get a podtato-url which can be put in a browser; simply try to do this.
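You can also read the value directly from the shell, for example to script against it; the -raw flag strips the surrounding quotes:

❯ terraform output -raw podtato-url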