Talos is awesome.
An OS for Kubernetes that’s completely API-driven and tailor-made just for running Kubernetes?
I’d wanted that long before I actually started working on it with the team.
I want to run it everywhere.
But the desire to run it everywhere is precisely where things get interesting.
In the spirit of the OS itself being API-driven, I start asking myself things like, “How can I totally automate the creation of Talos + Kubernetes clusters?”
As I work through this exercise, I realize that in the cloud this is fairly straightforward. There are a million tools I can use to upload a Talos image to AWS, create some VMs, and profit. Talos works beautifully in AWS.
Or in GCP.
Or in Azure.
The OS pulls down a machine config from the cloud’s metadata server, configures Kubernetes, and I’m off to the races.
But what about this old laptop I’ve got in the back of a closet? What about my homelab where I’ve got 4 small boxes to prototype with?
Oh, and what about when Talos finds itself in a datacenter?
These are the thoughts that led us to create Sidero here at Talos Systems.
“Si-what-now”?
Sidero. It’s Greek for “Iron”.
And it’s a new project from the Talos team.
It’s designed from the ground up to provide lightweight, composable tools that can be used to create bare metal Kubernetes clusters.
We created it because the other tools simply weren’t what we wanted.
We wanted something that was fully aware of our intention to use Talos, Kubernetes, and Cluster API.
We wanted to take all of the metal we could find and create our own “cloud” out of it.
We wanted to provision any number of Talos + Kubernetes clusters out of said “cloud”.
We wanted to define all the pieces as Kubernetes resources and be able to write them in YAML.
Sidero is that tool.
The Architecture
Sidero centers around the idea of having a long-standing “management plane” that orchestrates the full lifecycle of the bare metal you wish to control. This “management plane” is built upon Kubernetes, so it takes advantage of all the features K8s has.
A deployment of Sidero may look something like:
The Pieces
Because Sidero is built upon the “management plane,” and thus Kubernetes, we gain all the great things Kubernetes offers, like custom resources (CRDs).
Resources provided by Sidero are mentioned below.
Sidero itself is currently made up of three components:
- Sidero Metadata Server: Provides a Cluster API (CAPI)-aware metadata server
- Sidero Controller Manager: Provides Environment, Server, and ServerClass resources + controllers for managing the lifecycle of metal machines. Using these resources, the Sidero Controller Manager also provides PXE and TFTP servers.
- Cluster API Provider Sidero (CAPS): A Cluster API infrastructure provider that makes use of the pieces above to spin up Kubernetes clusters. Provides MetalMachine and MetalMachineTemplate resources.
Sidero also needs these co-requisites in order to be useful:
- Cluster API (CAPI) core components
- Cluster API Bootstrap Provider Talos (CABPT)
- Cluster API Control Plane Provider Talos (CACPPT)
These components and Sidero are all installed using Cluster API’s clusterctl tool.
A management plane node may look like:
How Does It Work?
Without getting too far down into the nitty-gritty details, let’s talk about the flow of creating a cluster with Sidero.
(I will be cutting some videos over the next couple of weeks that deep dive into how to do this.)
Create Management Plane
The first step is to get a management plane up and running.
The Sidero management plane can run on any Kubernetes cluster.
Installing this management plane is a simple clusterctl command:
clusterctl init --bootstrap talos --controlplane talos --infrastructure sidero
Setup Networking
Once the management plane is up and running, DHCP settings should be updated to make sure that the newly created PXE server is used for any bare metal nodes that PXE boot.
This is largely left as an exercise for the user, since the DHCP server and network settings in use can be wildly different.
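As one concrete illustration, if dnsmasq happens to be the DHCP server on the network, pointing PXE clients at Sidero can be a small config change. The IP address and loader filenames below are placeholders, not values Sidero dictates; adjust them for your environment and check the Sidero docs for the filenames its TFTP server actually serves:

```
# /etc/dnsmasq.conf — hypothetical snippet; 192.168.1.50 stands in for
# the address where Sidero's PXE/TFTP service is reachable.

# Send legacy BIOS clients an iPXE loader from Sidero's TFTP server.
dhcp-boot=undionly.kpxe,,192.168.1.50

# UEFI clients need a different loader; match on the client architecture.
dhcp-match=set:efi64,option:client-arch,7
dhcp-boot=tag:efi64,ipxe.efi,,192.168.1.50
```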
Create Environment
Environments tell the PXE server what to return to a given server that boots against it.
They allow tweaking the kernel and initrd images, as well as the kernel arguments the server will use during boot-up.
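To make that concrete, an Environment is plain YAML like any other Kubernetes resource. The sketch below is illustrative only: the Talos release URLs and kernel args are placeholders, and the exact fields should be checked against the Sidero reference docs:

```yaml
apiVersion: metal.sidero.dev/v1alpha1
kind: Environment
metadata:
  name: default
spec:
  kernel:
    url: "https://github.com/talos-systems/talos/releases/latest/download/vmlinuz-amd64"
    args:
      - console=tty0
      - talos.platform=metal
  initrd:
    url: "https://github.com/talos-systems/talos/releases/latest/download/initramfs-amd64.xz"
```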
Register Servers and Create ServerClass
Once everything is configured above, adding servers to the environment is as simple as booting them up. The servers will boot against the PXE service provided by Sidero, receive a “registration” image that will gather basic info about the hardware, and then add that info to a new “Server” object in the management plane.
Once the servers have been registered, they can then be grouped via a “ServerClass”. The ServerClass supports several qualifiers that give a user infinite ways of combining the bare metal into a cloud of their own, based on CPU resources, memory capacity, or other attributes.
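For example, a ServerClass that selects the beefier machines in the pool might look roughly like this. The qualifier values are hypothetical (they would need to match what the registration step actually reported for your hardware), and the full list of supported qualifiers lives in the Sidero docs:

```yaml
apiVersion: metal.sidero.dev/v1alpha1
kind: ServerClass
metadata:
  name: big-iron
spec:
  qualifiers:
    cpu:
      - manufacturer: "Intel(R) Corporation"
    systemInformation:
      - manufacturer: "Dell Inc."
```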
Create Cluster
The last step is to finally create a cluster.
Anyone who has used Cluster API before will find this step very familiar.
Generating a cluster manifest is as easy as issuing:
clusterctl config cluster my-cluster -i sidero
Once generated (and tweaked if necessary), this YAML file can then be applied with a simple kubectl apply.
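The top of the generated manifest looks much like any other Cluster API cluster; the Sidero-specific part is that the infrastructure reference points at a MetalMachine-backed MetalCluster. A trimmed, illustrative excerpt (API versions and field names may vary with your CAPI release):

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: my-cluster
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: MetalCluster
    name: my-cluster
```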
Delete Cluster
Cleaning up a cluster is a simple kubectl delete cluster my-cluster.
Once deleted, Sidero handles everything behind the scenes to completely wipe the servers that were backing the Kubernetes cluster and return them to the pool of servers that are available for allocation.
If you want to read further, take a look at our bootstrapping guide for more info.
In Action
Seán McCord, one of our engineers here at Talos Systems, gave a great talk and demo on Sidero to the CNCF.
Check it out if you want to see Sidero in action creating a Kubernetes cluster on bare metal!
If you want to try Talos or Sidero yourself, check out https://talos.dev and https://sidero.dev.
You can also join the Talos Slack to chat with me in real-time about either project.