
This article is part 2 in a series: HA K3s with terraform and ansible



In this post, I’ll go through how to set up a new network in Hetzner Cloud with the help of Terraform.
This is usually a good first step on the road to a cluster: without a network, we would have to use external traffic for all communication, and even though Hetzner is very generous with the egress limits on the VMs, it just feels dumb.

So, to start off, we need to get hold of an API key for the Hetzner provider.

Hetzner project and API key

When you first enter the Hetzner Cloud dashboard, you will be able to create a new project.
The API key we will use is tied to a single project, so it won’t have access to your other projects. Since the key is specific to the project, it is all that is needed to actually provision machines.

After creating the project, select ‘Security’ in the sidebar, switch to the ‘API tokens’ tab and generate a new token.
Make sure you select “read/write” rather than just read, otherwise the token won’t allow Terraform to actually provision anything.


Terraform providers

As you probably know, you can use either Terraform or OpenTofu for provisioning. Both tools are good, and you should choose the one that fits your preferences. I will be using terraform in most commands throughout the series; if you use OpenTofu, simply swap terraform for tofu in the commands.

After creating our token, we can start setting up our tf project.

The initial file structure could look something like this:

network/
  _providers.tf
  main.tf
  variables.tf

Inside the _providers.tf file, we add the setup code for the providers we will be using. In this case, just the Hetzner provider:

terraform {
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = "1.49.1"
    }
  }
}

provider "hcloud" {
  token = var.hcloud_token
}

Make sure you check the terraform registry and change the version of the provider accordingly.
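If you would rather not chase exact versions, Terraform’s version constraint syntax lets you pin a range instead. For example (just a sketch; adjust the lower bound to whatever the registry currently lists):

    hcloud = {
      source  = "hetznercloud/hcloud"
      version = "~> 1.49" # any 1.x release at or above 1.49
    }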

As you can see, we reference a variable in the hcloud provider block. We will declare this variable in the variables.tf file and force the “user” to provide it when provisioning:

variable "hcloud_token" {
  sensitive = true
}

The sensitive flag will hide the token from Terraform output, though it is worth knowing that it will not hide it from the state file. So if you intend to share state, I would recommend looking at the ephemeral flag, which was recently introduced in Terraform.
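As a rough sketch of that (it requires Terraform 1.10 or newer, so double-check the docs for your version), the variable could look like this, keeping the value out of both plan and state files:

variable "hcloud_token" {
  type      = string
  sensitive = true
  ephemeral = true # Terraform >= 1.10: the value is never written to plan or state
}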

When this is done, we can initialize the project:

terraform init
# or
tofu init

This will download the provider from the registry and allow us to use it.

Add a network

We will use one big network for the cluster, while the different resources will live in different subnets. This is to make sure we can set up firewalls accordingly, and it makes the IP addresses a bit nicer to look at ;)

The first part we need to consider is the size of the network.
If you want to read up on how IP CIDRs work, feel free to read my post about this here.

In my case, I prefer to use a /16 network for my project, which gives me 65,536 internal IP addresses.
This might feel quite over the top, and surely it is, but each network can pretty much use a full private IP range, so better to go too big than too small!

Add the following code to the variables file:

variable "core_cidr" {
  type = string
  default = "10.1.0.0/16"
  description = "CIDR used by the core network"
  
  validation {
    condition = try(
      "" != var.core_cidr,
      regex("(\\d{1,3}[.]\\d{1,3}[.]\\d{1,3}[.]\\d{1,3}[/]\\d{1,2})", var.core_cidr)
    )
    error_message = "Must be a valid cidr / subnet (example: 10.1.0.0/16)."
  }
}

The above configuration gives us an IP range from 10.1.0.0 to 10.1.255.255.

The variable above contains quite a bit more than the one we added for the token. This is because I prefer it to be assignable at the provisioning step (I usually go with the default value, but I might want to use another network later on).

The big difference is the validation block, which takes the variable and makes sure it’s actually a valid-ish CIDR. As you may see (if you know a bit of regex), I’ve been lazy, and something like 999.999.999.999/99 would still be allowed, so if you want more safety, make sure to tighten the regex!
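If you want something stricter without wrestling with the regex, Terraform’s built-in cidrhost function errors out on anything that is not a valid CIDR block, so wrapping it in can() also works as a validation (my own alternative, not what I use above):

  validation {
    # cidrhost() fails on anything that is not a valid CIDR prefix,
    # and can() turns that failure into a plain false.
    condition     = can(cidrhost(var.core_cidr, 0))
    error_message = "Must be a valid cidr / subnet (example: 10.1.0.0/16)."
  }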

Main.tf

Our next step is to add the network resource to the main.tf terraform file.

This is quite a simple resource with just a few required parameters, and it looks like this:

resource "hcloud_network" "core" {
  name     = "my-core-network"
  ip_range = var.core_cidr

  labels = {
    "usage" = "kubernetes"
    "identifier"  = "core"
  }
}

We want to use a specific name here, as we will want to be able to import the network as a data source later on. I have also added a few labels to the resource, so that I can select on it if I don’t want to use the name.
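For example, the hcloud data sources accept a label selector (as far as I know; check the provider docs for the exact argument name), so the network could later be looked up by label instead of by name:

data "hcloud_network" "core" {
  # Hetzner label selector syntax: "key=value"
  with_selector = "identifier=core"
}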

With this, we actually have everything we need to create a network in the project at Hetzner.

You can plan and apply the resource with the following commands:

terraform plan # Will show you what will be created in hetzner.
terraform apply # Will actually provision the resources.
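Since hcloud_token has no default, Terraform will prompt you for it when you run these commands. If you would rather not type it every time, you can export it as the TF_VAR_hcloud_token environment variable, or put it in a terraform.tfvars file (a sketch below; keep this file out of version control):

# terraform.tfvars -- never commit this file
hcloud_token = "your-hetzner-api-token"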

Subnets

As I earlier mentioned, we will want a few subnets in the network.
The following networks are the ones that I will be using:

Master nodes: 10.1.10.0/27 (10.1.10.0 - 10.1.10.31), 32 IP addresses.
Minion nodes: 10.1.20.0/23 (10.1.20.0 - 10.1.21.255), 512 IP addresses.
Misc: 10.1.5.0/24 (10.1.5.0 - 10.1.5.255), 256 IP addresses.

These networks can be increased when we need to, and currently they use only a very small part of the internal network.
As you can see, there are 32 IP addresses in the master node network (and, excluding the gateway and such, only 30 usable), but I don’t think my cluster will ever use more than 30 master nodes, and if that happens, I will increase the size!

The Misc network will be used for things like load balancers and other resources that we might need later on.
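If you prefer to derive these ranges from the core CIDR instead of hard-coding them, Terraform’s built-in cidrsubnet function can do the math for you. A small sketch (the local names and netnum offsets are my own, chosen so they line up with the ranges listed above):

locals {
  # cidrsubnet(prefix, newbits, netnum) carves a smaller block out of the core network.
  # With core_cidr = "10.1.0.0/16":
  master_cidr = cidrsubnet(var.core_cidr, 11, 80) # 16 + 11 = /27 -> 10.1.10.0/27
  minion_cidr = cidrsubnet(var.core_cidr, 7, 10)  # 16 + 7  = /23 -> 10.1.20.0/23
  misc_cidr   = cidrsubnet(var.core_cidr, 8, 5)   # 16 + 8  = /24 -> 10.1.5.0/24
}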

Set up master subnet

I don’t want to jump ahead too far yet, but we might as well set up the master subnet right away, so that it is ready for when we want to provision our master nodes…

A subnet in Hetzner is just another resource to be provisioned, and it looks like the following:

resource "hcloud_network_subnet" "master-net" {
  ip_range     = var.master_cidr
  network_zone = "eu-central"
  type         = "cloud"
  network_id   = hcloud_network.core.id
}

In its current form, I decided to put it in the same main.tf file as the core network, but if you want to split things up, you can retrieve the core network as a data source:

data "hcloud_network" "core" {
  name = "my-core-network"
}

resource "hcloud_network_subnet" "master-net" {
  ip_range     = var.master_cidr
  network_zone = "eu-central"
  type         = "cloud"
  network_id   = data.hcloud_network.core.id
}

Just as with the core network CIDR, we will want to add the master network CIDR to the variables file:

variable "master_cidr" {
  type = string
  default = "10.1.10.0/27"

  description = "IP Address range for Master nodes"

  validation {
    condition = try(
      "" != var.master_cidr,
      regex("(\\d{1,3}[.]\\d{1,3}[.]\\d{1,3}[.]\\d{1,3}[/]\\d{1,2})", var.master_cidr)
    )
    error_message = "Must be a valid cidr / subnet (example: 10.1.10.0/27)."
  }
}

When this is done, we can run the apply command again, and then check the Hetzner dashboard to make sure our network and the new subnet exist.
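If you want a quick confirmation in the terminal as well, a couple of output blocks (my own addition, not needed for the rest of the series) will print the network details after apply:

output "core_network_id" {
  value = hcloud_network.core.id
}

output "core_network_range" {
  value = hcloud_network.core.ip_range
}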

Final thoughts

Using Terraform is quite simple for smaller projects, but as a project grows, it becomes more and more important to set up a good file structure, as well as structure inside the files.
When we later add Ansible playbooks and various templates and files to the project, this will become even more obvious.
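Just to illustrate where this could be heading (not necessarily the exact layout used later in the series), the repository might end up looking something like this:

infra/
  network/
    _providers.tf
    main.tf
    variables.tf
  cluster/
    ...
  ansible/
    ...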

As always, let me know in the comments below if you find any oddities or issues!