Etcd cluster for Kubernetes
This post is part of the UpCloud k8s series. Some of it is quite specific to UpCloud (the Terraform scripts, for example), but most of it is fairly generic. It does, though, presuppose that you already have an internal DNS set up and some understanding of Terraform and Ansible.
If you read the SkyDNS and Etcd post, a lot of this will be quite familiar. This part goes a bit more in-depth when it comes to etcd clustering, though.
Etcd and K8S
If you have read anything else about kubernetes, you have probably seen that it uses etcd for service discovery and to store the state of the cluster.
When running k8s it's possible to run etcd in two ways, and both have their positive and negative sides.
Etcd is, as mentioned in an earlier post, a key-value store which is quite fast and easy to run as a cluster, i.e., making it a "HA" (high availability) cluster. When you start a kubernetes cluster without specifying an etcd server, it will deploy one itself, inside kubernetes.
The upside of this is that you don't have to set up an etcd cluster yourself, which of course requires a bit more work. The downside is that if your cluster breaks down for some reason, etcd doesn't persist the data, and if you have no backups you will have to re-create all your deployments and other resources again.
The other approach has the plus side that it persists the services you ran on the earlier installation, while the "negative" side is that you have to provision, run and monitor the cluster yourself.
Because the in-cluster etcd is set up automatically whenever you don't specify an external etcd service, I will skip that option. In this part, we will focus on the etcd cluster, which we run outside of kubernetes.
Creating certificates
To keep an etcd service secure, I prefer to force the clients, peers and servers to communicate over a secure protocol, TLS. This makes it a lot harder for someone to listen in on the communication, which is good. So the first thing we want to do is to create the certificates. The certificates that we have to create are the following:
- Certificate Authority
- ca.crt
- Server
- server.crt
- server.key
- Client
- client.crt
- client.key
- Peer
- peer.crt
- peer.key
I recommend that you create an intermediate certificate for the cluster itself and, preferably, if you want to go even further, another one for the etcd service itself.
If you wish to know how to set up your own certificate authority (using cfssl) and create an intermediate certificate to be able to generate further keys, please read this post.
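As a rough sketch of what generating the leaf certificates can look like with cfssl, assuming you already have the intermediate ca.crt/ca.key and a ca-config.json with server, client and peer profiles from the post mentioned above, plus a CSR JSON file per certificate (the file names here are just examples):
# Server certificate and key (output: server.pem / server-key.pem, rename to server.crt / server.key).
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=ca-config.json -profile=server server-csr.json | cfssljson -bare server
# Peer certificate and key, used for etcd member-to-member traffic.
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=ca-config.json -profile=peer peer-csr.json | cfssljson -bare peer
# Client certificate and key, handed to anything that talks to etcd.
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=ca-config.json -profile=client client-csr.json | cfssljson -bare client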
Setup
As usual we will use Terraform to provision the servers and then install all software with Ansible. The OS of choice in this tutorial is Ubuntu 18.04 (LTS); CoreOS would probably be the best possible way to go, but I have more experience with Ubuntu, so I'll stick to that for now!
The Ubuntu repository has etcd in it (a 3.x version), and that will be good enough. We could build it from source, but the apt installation is easier and faster.
Etcd is intended to run in cluster mode. It's possible to run a single etcd instance, but to make it as stable as possible, 3 or 5 etcd servers are best (or even more!); we will use three in this tutorial. Each server will be installed and set up to use the TLS certificates so that it can communicate over secure channels. This also requires every service which communicates with the etcd cluster to present a client certificate in order to authenticate.
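If you want to double check which etcd version the Ubuntu repository will give you before committing to the apt route, a quick query on any Ubuntu 18.04 machine will tell you:
apt-get -qq update
apt-cache policy etcd   # shows the candidate version, which should be a 3.x release on Bionic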
Terraform
The initial setup script to provision the servers is a lot like the earlier Terraform examples. The first thing we do is to create the vars.tf file that the cluster will read its variables from.
vars.tf
variable "dns_domain" {
type = "string"
default = "your-internal.domain"
}
# You can choose which zone you want to use, this one is set to Germany Frankfurt.
variable "zone" {
type = "string"
default = "de-fra1"
}
# Plans used by the etcd servers, as you see, it's a list of 3 similar setups,
# etcd will love more resources if possible, so choose a plan you prefer.
variable "etcd_plans" {
type = "list"
default = [
"1xCPU-2GB",
"1xCPU-2GB",
"1xCPU-2GB"
]
}
# An SSH key is of course required on the servers, so that we can access them.
variable "ssh_keys" {
type = "list"
default = [
"... Your provision computers ssh key here (public key)."
]
}
# The private key mapped to the public key so that ansible can connect.
variable "ssh_private_key" {
type = "string"
default = "~/path/to/private/ssh/key"
}
The initial “upcloud_server” resource could look something like this:
etcd.tf
resource "upcloud_server" "etcd" {
# This will provision as many plans as there are indexes in the `etcd_plans` array,
# the `length` method built in to terraforms HCL makes this an easy task.
count = "${length(var.etcd_plans)"
zone = "${var.zone}}"
hostname = "etcd-${count.index}}" # By using count.index we fetch the current index of the iterator.
# Ip is needed for access via ssh, we can use ipv6 if we wish, if you don't need ipv4, just set it to false.
private_networking = true
ipv4 = true
ipv6 = true
login {
user = "root"
keys = "${var.ssh_keys}"
# No password!
create_password = false
}
storage_devices = [
# One storage device is enough for now, if you expect very very much data, more can be added.
{
tier = "maxiops"
size = 25
action = "clone" # Clone will create a new server from the 'storage' image.
storage = "Ubuntu Server 18.04 LTS (Bionic Beaver)"
}
]
provisioner "remote_exec" {
inline = [
"apt-get -qq update",
"apt-get -qq install python -y"
]
connection {
host = "${self.ipv4_address}" # If you are using ipv6, change this.
type = "ssh"
user = "root"
private_key = "${file(var.ssh_key_private)}"
}
}
}
# Ansible initiator resource.
resource "null_resource" "etcd-ansible" {
triggers = {
cluster_ids = "${join(",", sort(upcloud_server.etcd.*.id))}"
# The join function creates a string from the resources ids, we make sure that they are
# sorted, so that the trigger only changes if there is a new ID in the list, making ansible not
# run in case the servers already exists.
}
depends_on = ["upcloud_server.etcd"]
provisioner "local-exec" {
# In the environment object, we pass any env variables ansible should
# be able to reach.
environment {
DNS_IP = "internal.ip.address.to.your.dns.server"
ANSIBLE_HOST_KEY_CHECKING = "False" # This makes sure that provisioning does not fail because the host key is not yet known.
# And the join-string is a string that will be sent to each etcd server so that it knows what servers to join!
ETCD_JOIN_STRING = "${join(",", formatlist("%s=https://%s.%s:2380", upcloud_server.etcd.*.hostname, upcloud_server.etcd.*.hostname, var.dns_domain))}"
# The `formatlist` builtin function is much like a sprintf function, taking a format, variables and returning a formatted string.
}
working_dir = "../Ansible" # path to the ansible main folder where we put the ansible files.
command = "ansible-playbook -u root --private-key ${var.ssh_key_private} etcd.yml --vault-password-file = vault-password.txt -i ${join(upcloud_server.etcd.*.ipv4_address)},"
# The command you should recognize by now, the difference here is the `vault-password-file` argument, which we will talk more about soon...
}
}
The above scripts are enough to provision the servers and start the Ansible playbook, so when you have the files where you want them, we can move on to the next step: installing and setting up the software!
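To make the ETCD_JOIN_STRING a bit more concrete: with the three servers from the plans list (hostnames etcd-0 to etcd-2) and the example dns_domain, the string that Ansible receives would look roughly like this:
$ echo "$ETCD_JOIN_STRING"
etcd-0=https://etcd-0.your-internal.domain:2380,etcd-1=https://etcd-1.your-internal.domain:2380,etcd-2=https://etcd-2.your-internal.domain:2380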
Ansible
We have talked about Ansible variables and roles earlier. We will create two new roles, but before that, we will update the variables files a bit, so that we can commit our files directly to our repository without having to be scared that we will lose our private certificate keys!
In the earlier tutorial about SkyDNS we created a common role. This role will have to be updated: the files in the files directory will be removed, and instead of storing the certificates directly in the files directory we add a vars directory. Inside this directory two files should be created:
certificates.yml
certificates.vault.yml
Content of the certificates.vault.yml file:
vault:
  certificates:
    ca:
      crt: |
        -----BEGIN CERTIFICATE-----
        # ... (I left out the certificate data, this should be the whole cert!)
        -----END CERTIFICATE-----
    client:
      crt: |
        -----BEGIN CERTIFICATE-----
        # ...
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        # ...
        -----END RSA PRIVATE KEY-----
    server:
      crt: |
        -----BEGIN CERTIFICATE-----
        # ...
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        # ...
        -----END RSA PRIVATE KEY-----
The | character after each key is the YAML way to use multi-line variable values; make sure you indent the values correctly or it will not work as intended.
The file above is a bit messy and, depending on the size of the certificates, a bit large. That does not matter though, as we probably won't have to look at it again for a long time!
Each entry in the map is a certificate in clear text. That is okay, because we will not commit a clear-text certificate key!
When we have created the file, we will use an Ansible tool called ansible-vault. The vault tool will encrypt the file and protect it with a password, making it more secure!
In all honesty, I would not recommend adding the encrypted file to the repository if you can avoid it. But if you are bold and don't think it will be a big deal, you can do that…
The command used to encrypt the file is the following:
ansible-vault encrypt certificates.vault.yml
If you want to, you can actually create encrypted strings with Ansible right away using the encrypt_string argument, but I will leave that out for now, as I personally prefer to have a *.vault.yml file for my secure data!
After the vault file is encrypted, we ought to set up our other file. We could actually skip this part, but to make the Ansible scripts easier to read and our variables easier to remember, we edit the certificates.yml file.
When you run your Ansible playbook, you will now have to specify a password or a file which contains the password. That is what we did in the Terraform command earlier: we pointed the script to a specific file named vault-password.txt. This file should not be added to the repository, as it will contain the clear-text password for the Ansible vault, so make sure you add it to .gitignore right away!
All this file requires is the password to the vault, nothing more, nothing less. With that in place, Ansible will pick up the password from the file and then, while running the playbook, decrypt the encrypted files and read the data.
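A minimal sketch of what that can look like (the password here is obviously just a placeholder):
# Store the vault password in a file and keep it out of the repository.
echo 'my-secret-vault-password' > vault-password.txt
echo 'vault-password.txt' >> .gitignore
# Verify that the password works by viewing the encrypted file with it.
ansible-vault view roles/common/vars/certificates.vault.yml --vault-password-file vault-password.txt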
Content of the certificates.yml file:
certificates:
  ca:
    crt: "{{ vault.certificates.ca.crt }}"
  client:
    crt: "{{ vault.certificates.client.crt }}"
    key: "{{ vault.certificates.client.key }}"
  server:
    crt: "{{ vault.certificates.server.crt }}"
    key: "{{ vault.certificates.server.key }}"
As you can see in the above variables file, we don't add the certificates themselves (as in the vault file) but rather references to the corresponding keys in the vault file. That way, we can use the non-vault variable paths and see everything in "clear text" while we work, while the actual certificates are stored inside the encrypted vault file!
This of course requires us to rewrite the roles/common/tasks/main.yml file slightly too; now it should look something like this:
- name: Create certificate directories.
  file:
    path: "{{ item }}"
    state: directory
    mode: 0600
  with_items:
    - /usr/local/share/ca-certificates
    - /home/root/.certs/dns

- name: Add certificates to machine.
  copy:
    # Change: CONTENT instead of src takes a string and puts it inside the file, instead of copying a file from a local path.
    content: "{{ item.certificate }}"
    dest: "{{ item.dest }}"
    owner: root
    mode: 0600
  # In the loop we changed `file` to `certificate` and we
  # reference the certificate variable instead of the certificate file.
  loop:
    - { certificate: "{{ certificates.ca.crt }}", dest: "/usr/local/share/ca-certificates/dns-ca.crt" }
    - { certificate: "{{ certificates.ca.crt }}", dest: "/home/root/.certs/dns/ca.crt" }
    - { certificate: "{{ certificates.client.crt }}", dest: "/home/root/.certs/dns/client.crt" }
    - { certificate: "{{ certificates.client.key }}", dest: "/home/root/.certs/dns/client.key" }

- name: Import root cert.
  shell: /usr/sbin/update-ca-certificates

# I also like to put the machine's own IP and hostname into its hosts file, to make sure that it resolves to itself.
- name: Add ip hostname to hosts.
  lineinfile:
    path: /etc/hosts
    line: "{{ ansible_eth1.ipv4.address }} {{ item }}"
  with_items:
    - "{{ ansible_hostname }}.your.domain" # This should be changed to your own internal domain name.
    - "{{ ansible_hostname }}"

- name: Update resolv.conf head.
  copy:
    content: "nameserver {{ dns_ip }}" # We will pass this in as a variable later on.
    dest: /etc/resolvconf/resolv.conf.d/head

- name: DNS refresh
  shell: resolvconf -u

- name: Restart networking.
  service:
    name: networking
    enabled: true
    state: restarted

# The last thing that we do is to send the machine's data to the DNS so that it can register it and allow
# other servers to access it.
- name: Send hostname and ip to DNS.
  uri:
    body: "value='{\"host\": \"{{ ansible_eth1.ipv4.address }}\"}'"
    client_cert: "/home/root/.certs/dns/client.crt"
    client_key: "/home/root/.certs/dns/client.key"
    url: "https://{{ dns_ip }}:2379/v2/keys/skydns/tld/domain/{{ ansible_hostname }}" # Change this to your internal DNS namespace.
    status_code: 201,202,200
    method: PUT
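If you want to verify that the registration task actually worked, you can read the key back from the DNS etcd with curl from the machine itself, using the same client certificate (adjust the IP placeholder and the key path to your own namespace):
curl --cacert /home/root/.certs/dns/ca.crt \
     --cert /home/root/.certs/dns/client.crt \
     --key /home/root/.certs/dns/client.key \
     "https://<your-dns-ip>:2379/v2/keys/skydns/tld/domain/$(hostname)"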
Now that we have updated the certificates and fixed the role so that our server can add itself to the DNS, we can move on to our new roles! Initially we create two new role directories:
roles/etcd-server
roles/etcd-client
As the names indicate, one is for the servers themselves while the other is for any client which uses the etcd cluster. We need to create a new set of vault files and corresponding variable files:
roles/etcd-server/vars/certificates.vault.yml
roles/etcd-server/vars/certificates.yml
roles/etcd-client/vars/certificates.vault.yml
roles/etcd-client/vars/certificates.yml
The server version of the certificates.vault file should contain the following certificates in a map:
vault:
  certificates:
    ca:
      crt: |
        # ...
    server:
      crt: |
        # ...
      key: |
        # ...
    peer:
      crt: |
        # ...
      key: |
        # ...
While the client version should have the following:
vault:
  certificates:
    ca:
      crt: |
        # ...
    client:
      crt: |
        # ...
      key: |
        # ...
Create the corresponding non-vault files, without the vault key as the initial key and with references to the above values, and we can move on to the roles.
Server / Peer
The server role is the first thing we will work with. The things it will do are the following:
- Install etcd
- Update the service file
- Restart etcd
Sounds simple enough, and it kind of is… The role should look something like this:
- name: Create directories.
  file:
    path: "{{ item }}"
    state: directory
    mode: 0600
  with_items:
    - "/home/root/.certs/etcd"
    - "/home/root/etcd"

- name: Add certificates to machine.
  copy:
    content: "{{ item.certificate }}"
    dest: "{{ item.dest }}"
    owner: root
    mode: 0600
  loop:
    - { certificate: "{{ certificates.ca.crt }}", dest: "/usr/local/share/ca-certificates/etcd-ca.crt" }
    - { certificate: "{{ certificates.ca.crt }}", dest: "/home/root/.certs/etcd/ca.crt" }
    - { certificate: "{{ certificates.server.crt }}", dest: "/home/root/.certs/etcd/server.crt" }
    - { certificate: "{{ certificates.server.key }}", dest: "/home/root/.certs/etcd/server.key" }
    - { certificate: "{{ certificates.peer.crt }}", dest: "/home/root/.certs/etcd/peer.crt" }
    - { certificate: "{{ certificates.peer.key }}", dest: "/home/root/.certs/etcd/peer.key" }

- name: Install etcd
  apt:
    name: etcd
    state: present

- name: Create service file
  template:
    src: etcd.service.j2
    dest: /lib/systemd/system/etcd.service
    mode: 0600
    owner: root

- name: Restart etcd
  service:
    daemon_reload: yes
    name: etcd
    enabled: yes
    state: restarted
It's quite an easy installation process. The only thing we ought to look at a bit closer is the template directive, where we create a file from a j2 template.
Inside the role directory we create a templates directory in which we put an etcd.service.j2 file. This file will then be copied by Ansible to the system directory, replacing all the placeholders with the data defined as variables in the Ansible playbook.
The file looks like this:
etcd.service.j2
[Unit]
After=network.target
Description=etcd - highly-available key value store
[Service]
RestartSec=10
TimeoutStartSec=3min
LimitNOFILE=65536
Restart=on-failure
Type=notify
User=root
ExecStart=/usr/bin/etcd --name {{ ansible_hostname }} \
--initial-advertise-peer-urls https://{{ ansible_hostname }}.your.domain:2380 \
--data-dir /home/root/etcd \
--client-cert-auth \
--peer-client-cert-auth \
--trusted-ca-file=/home/root/.certs/etcd/ca.crt \
--cert-file=/home/root/.certs/etcd/server.crt \
--key-file=/home/root/.certs/etcd/server.key \
--peer-trusted-ca-file=/home/root/.certs/etcd/ca.crt \
--peer-cert-file=/home/root/.certs/etcd/peer.crt \
--peer-key-file=/home/root/.certs/etcd/peer.key \
--listen-peer-urls=https://0.0.0.0:2380 \
--listen-client-urls=https://0.0.0.0:2379 \
--advertise-client-urls=https://{{ ansible_eth1.ipv4.address }}:2379 \
--initial-cluster={{ cluster_join_string }} \
--initial-cluster-state=new
[Install]
WantedBy=multi-user.target
As you can see in the file, we use the server, CA and peer certificates so that the etcd service knows which certs to use to authenticate, and as the certificates are the same for each of the peers, it will work just as we want! All the variables will be replaced with the correct data from the Ansible variables when the file is copied, so it will end up just as systemd expects it.
When the Ansible scripts are done, the etcd service will start and wait for the other members to connect; then a leader will be elected and the cluster will be up and running!
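When the playbook has finished on all three servers, you can check the cluster from any machine that has the client certificates (for example one provisioned with the etcd-client role described below). A small sketch using etcdctl, which should default to the v2 API for the version shipped in the Ubuntu repository; the endpoint hostname is just an example, use one of your own members:
etcdctl --endpoints https://etcd-0.your.domain:2379 \
        --ca-file /home/root/.certs/etcd/ca.crt \
        --cert-file /home/root/.certs/etcd/client.crt \
        --key-file /home/root/.certs/etcd/client.key \
        cluster-health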
Client
The next part is the client role. We won't do much more than add the certificates in this one, as we can't really know which services on the client will use them, so the playbook script will look like this:
- name: Create directory for etcd certs.
  file:
    path: /home/root/.certs/etcd
    state: directory
    mode: 0600

- name: Add certificates to machine.
  copy:
    content: "{{ item.certificate }}"
    dest: "{{ item.dest }}"
    owner: root
    mode: 0600
  loop:
    - { certificate: "{{ certificates.ca.crt }}", dest: "/usr/local/share/ca-certificates/etcd-ca.crt" }
    - { certificate: "{{ certificates.ca.crt }}", dest: "/home/root/.certs/etcd/ca.crt" }
    - { certificate: "{{ certificates.client.crt }}", dest: "/home/root/.certs/etcd/client.crt" }
    - { certificate: "{{ certificates.client.key }}", dest: "/home/root/.certs/etcd/client.key" }
As we use a certificate authority which is signed by our root certificate (the one that we copy and install with the common role), we should be able to connect to the etcd servers without the ca.crt public cert, but we copy it either way.
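As a quick smoke test from a client machine, you can write and read a key over TLS with curl against the v2 keys API; the key name and endpoint hostname here are just examples:
# Write a test key to the cluster.
curl --cacert /home/root/.certs/etcd/ca.crt \
     --cert /home/root/.certs/etcd/client.crt \
     --key /home/root/.certs/etcd/client.key \
     -X PUT -d value="hello" \
     https://etcd-0.your.domain:2379/v2/keys/smoke-test
# Read it back again.
curl --cacert /home/root/.certs/etcd/ca.crt \
     --cert /home/root/.certs/etcd/client.crt \
     --key /home/root/.certs/etcd/client.key \
     https://etcd-0.your.domain:2379/v2/keys/smoke-test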
Playbook
The client role will not be used in this tutorial; we will use it later on when we provision our Kubernetes master servers, but we keep it in our file structure so that it is ready when we need it. The server role, though, will be used right now, to create the etcd servers!
root/etcd.yml
---
- name: Etcd peer playbook
  become: true
  hosts: all
  gather_facts: yes
  roles:
    - { role: common, dns_ip: "{{ lookup('env', 'DNS_IP') }}" }
    - { role: etcd-server, cluster_join_string: "{{ lookup('env', 'ETCD_JOIN_STRING') }}" }
    - { role: firewall }
The above playbook will run the common role (passing DNS_IP as a defined variable fetched from the environment), then the etcd-server role, to which we pass ETCD_JOIN_STRING so that etcd knows which servers to expect to join with. At the end, we run a not-yet-defined role called firewall. That one we will look at in the next part of the tutorial, so that our servers are closed to the outside but exposed to the internal network!
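With all the files in place, the whole chain is kicked off from the Terraform directory; a minimal sketch, assuming the UpCloud provider is configured as in the earlier parts of the series:
terraform init    # downloads the providers
terraform plan    # shows what will be created
terraform apply   # provisions the three servers and triggers the ansible playbook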