In my last post I wrote about docker buildx / buildkit and how to build multi-arch images. In this part I will take it a bit further and describe how I compile code for use in my multi-arch images.
I will start with a short problem description, so that you may understand why I would want to do this instead of just using buildx and compiling inside the images…
So… QEMU is great and powerful, but it has a drawback: it is quite a lot slower than a real machine.
From my research and understanding, QEMU does not use more than one “real” thread at a time, basically making it single-threaded. Compiling something like PHP therefore takes hours, which is quite annoying, especially when you are working on a new image that fails three hours into its build…
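For reference, the emulated approach from my last post looks roughly like this; every compiler invocation inside the non-native build steps runs under QEMU, which is where the hours go (the image tag below is just a placeholder):

```shell
# Register the QEMU binfmt handlers so the daemon can run foreign binaries.
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# Build for several platforms at once. The amd64 steps run natively,
# while the arm64 steps run under QEMU emulation, hence the slowness.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag example/php:latest \
  .
```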
So how to solve this?
Well, I figured that there are at least two possible solutions for this.
The first is to cross-compile, that is, use a compiler that compiles code for another architecture than the one that is running it.
Most of my machines are AMD64, so the cross-compiler should run on an AMD64 build machine, while it should compile for ARM64 and possibly i386, armv7, ppc64 and s390x.
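As a minimal sketch of what cross-compiling means in practice (the `aarch64-linux-musl` toolchain prefix is an assumption based on common musl cross toolchains, and `hello.c` is just a throwaway hello-world):

```shell
# The cross toolchain runs on the AMD64 build machine
# but emits ARM64 (aarch64) binaries.
aarch64-linux-musl-gcc -static -o hello hello.c

# The resulting binary targets aarch64, not the build machine:
file hello    # reports: ELF 64-bit LSB executable, ARM aarch64, ...

# Running it directly on AMD64 fails with "Exec format error"
# unless QEMU binfmt handlers are registered.
./hello
```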
The second possible solution would be to spin up a new instance which can build natively for the platform that I wish to use. My first real target besides AMD64 is ARM64, so that would require a new ARM machine or VPS.
I started out with the first option, cross-compiling.
For someone like me, who had never actually worked with cross-compiling in this way before, this was something new. The first thing I tried (as I build primarily for musl-libc, not glibc) was to download pre-built and pre-packaged toolchains for cross-platform compiling.
I got it to work… basically… I could build a few packages without any issues, but when it came to linking, configuring and all of that (that is, putting the packages together into the final binary that I wanted), it became quite hard.
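The tricky part is teaching every package’s build system about the target. For autotools-based packages this roughly means something like the following sketch (the toolchain prefix and sysroot path are assumptions, not my exact setup):

```shell
# Point the build at the cross toolchain and at a sysroot
# containing the target's headers and libraries.
export CC=aarch64-linux-musl-gcc
export SYSROOT=/opt/cross/aarch64-linux-musl/sysroot

# --build is the machine doing the compiling, --host is the
# machine the binaries should run on.
./configure \
  --build=x86_64-linux-musl \
  --host=aarch64-linux-musl \
  CFLAGS="--sysroot=$SYSROOT" \
  LDFLAGS="--sysroot=$SYSROOT"

make -j"$(nproc)"
```

Every dependency needs the same treatment, and any library the configure script finds on the build machine instead of in the sysroot will quietly poison the result, which is where it gets hard.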
After working with it for quite a while, I was able to compile PHP (which was the first thing I wanted to compile). Yay, I thought…
Did it run on aarch64 then? No… it did not…
Most of the issues I experienced with the initial toolchains were probably not caused by the toolchains themselves, but I decided to build my own just to be sure that the problem was not there.
So I built my toolchains… This took a while too, as building toolchains like this is kind of complicated if you are not used to it…
In the end, I ended up at the same place as before: I could compile the binaries, but they would not run…
So I actually gave up. I did learn a lot, quite a lot, but the actual thing I wanted to get working was not working at all.
Third try (is the charm!)
My third try was to go with the second option. I deployed a new ARM64 instance and started setting up all the software that I needed…
Now, when I build my docker images I use GitLab and the CI that GitLab provides, with my own runners. So I thought the best way to start was to set up a native aarch64 GitLab runner, which would mainly handle the heavy lifting when it comes to images. That is, compiling and things like that.
GitLab has not provided binaries for aarch64 since around version 10…
I was lucky though: someone on the internet has been nice enough to produce a GitLab runner docker image for aarch64 at version 11.2. It’s good enough for now, though I would prefer a newer version…
If you would like to try the image out, it can be found at GitHub!
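Getting such a runner going is roughly a matter of running the image and registering it against GitLab. A sketch, assuming the docker executor (the image name and token below are placeholders, not the actual ones):

```shell
# Start the aarch64 gitlab-runner container.
# "some-user/gitlab-runner:aarch64" is a placeholder image name.
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  some-user/gitlab-runner:aarch64

# Register it, tagged so that only jobs meant for native
# aarch64 builds are scheduled onto it.
docker exec -it gitlab-runner gitlab-runner register \
  --url https://gitlab.com/ \
  --registration-token <YOUR-TOKEN> \
  --executor docker \
  --docker-image alpine:latest \
  --tag-list aarch64
```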
I finally got the runner up and I started trying it out.
It actually worked (with some limitations due to it being an old version of the runner) and I was happy!
When I first tested I used quite a small VPS, one with 2 GB RAM and 4 ARM64 CPUs. This was really not enough for me, so I had to upgrade to a larger instance. ARM machines are quite cheap though, so I would recommend using a decently sized one instead of the cheapest.
While this is working fine, I would rather not run multiple GitLab runners if not really needed. So my next step is to add my new ARM instance (maybe more in the future?) to my buildx node. This will enable me to use a single runner which can build x86_64 on one machine and aarch64 on another, while still using the same docker daemon (and the exact same CI jobs), instead of having to use tags on my runners to switch between contexts.
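The plan above can be sketched with buildx’s multi-node support; something like the following, where the host name and context name are assumptions:

```shell
# Create a docker context pointing at the remote ARM64 machine over SSH.
docker context create arm-node --docker "host=ssh://user@my-arm-host"

# Create a builder on the local (amd64) daemon, then append the
# ARM64 node so each platform is built natively on its own machine.
docker buildx create --name multi --platform linux/amd64
docker buildx create --name multi --append arm-node --platform linux/arm64
docker buildx use multi

# A single build command now fans the platforms out across both machines.
docker buildx build --platform linux/amd64,linux/arm64 -t example/image .
```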
The current setup works okay, but it’s not perfect, and that’s what I want!
Why do I want to build for aarch64 anyways? What’s the reason?
ARM machines are cheap, and they are just as easy to work with as x86_64 machines. The only issue is that not all software runs on them.
Being able to run docker, and my own docker images, on them removes that issue though. That way I can use aarch64 to lower costs for both myself and my customers.
If you wish to take a look at my attempts at cross-compilation, something I might revisit some day, feel free to check out the repositories I created for it:
Package building scripts: https://gitlab.com/jitesoft/pre-compiled/musl
I succeeded in adding the aarch64 machine to my buildx instance; it runs flawlessly, and I will likely write a post about it later!