It bugs me that this implementation detail of containerd has leaked to such an extent. This should be part of the containerd distribution, and should not be pulled at runtime.
Instead of just swapping out the registry, try baking it into your machine image.
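For example, something like this in an image-build (Packer/cloud-init style) step. A rough sketch; the tag and the exact config key should match whatever your containerd version actually expects:

    # pre-pull the sandbox image so nodes never fetch it at runtime
    sudo ctr -n k8s.io images pull registry.k8s.io/pause:3.9

    # and point containerd at that exact ref (containerd 1.x layout):
    #   /etc/containerd/config.toml
    #   [plugins."io.containerd.grpc.v1.cri"]
    #     sandbox_image = "registry.k8s.io/pause:3.9"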
Relying on a hosted image also caused some disruptions for Nomad (the scheduler from HashiCorp), because the default pause image was hosted at gcr.io, which Google shut down, and it moved to registry.k8s.io.
The Nomad team made this configurable afterwards.
It's an implementation detail of the CRI plugin.
> This should be part of the containerd distribution
containerd is not the only CRI runtime out there.
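For comparison, CRI-O exposes the same knob in its own config; a sketch, with an example drop-in path and tag:

    cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/01-pause-image.conf
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    EOF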
More generally, one would wish that Kubernetes had a few extra ways to get images, so you could grow on a scale from "minimal infrastructure" to "fully CI/CD": starting with just sending the image in the RPC itself, or even just having it on local disk (you figure out how to get it there), all the way up to registries with tightly controlled versioning.
It's possible to do that, as Kubernetes only passes the image reference to the CRI.
You can also set up a separate service to "push" images directly to your container runtime; someone even demoed one in a Show HN post some time ago, I think.
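A minimal sketch of the "local disk" end of that scale, assuming containerd as the runtime and a made-up image name:

    # export the image wherever you built it
    docker save myapp:1.0 -o myapp.tar

    # import it into containerd's k8s namespace on the node
    sudo ctr -n k8s.io images import myapp.tar

    # then reference it in the pod spec without ever hitting a registry:
    #   image: myapp:1.0
    #   imagePullPolicy: Never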
I went down this rabbit hole not so long ago too.
There was a discussion open on containerd's GitHub on removing the dependency on the pause image but it has been closed as won't fix: https://github.com/containerd/containerd/issues/10505
Also, if you are using kubeadm to create your cluster, beware that kubeadm may be pre-pulling a different pause image if it does not match your containerd configuration: https://github.com/kubernetes/kubeadm/issues/2020
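A quick way to spot the mismatch on a node (sketch; paths and keys vary a bit by version):

    # what kubeadm thinks the pause image should be
    kubeadm config images list | grep pause

    # what containerd is actually configured to use
    grep sandbox_image /etc/containerd/config.toml

    # containerd's own default, for reference
    containerd config default | grep sandbox_image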
Just to save someone 5 minutes of research: if you are using the EKS AMIs based on AL2023 or Bottlerocket, this is already done for you by pointing to an image on ECR. On Bottlerocket at least (I haven't checked AL2023), the image is baked into the AMI, so you don't even need to pull it from ECR.
We removed the image registry dependency on AL2023 as well. :)
https://github.com/awslabs/amazon-eks-ami/pull/2000
Thank you, I was just about to task my team with figuring out how affected we are by this.
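(For anyone doing the same audit: assuming crictl and ctr are available on the node, something like this shows whether the pause image is already present locally rather than being pulled:)

    sudo crictl images | grep pause
    sudo ctr -n k8s.io images ls | grep pause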
O/T, but I'm getting a cert error on this page - wonder if it's just me or if this site is just serving a weird cert. Looks like it's signed by some Fortinet appliance - maybe I'm getting MITMed? Would be kind of exciting/frightening if so.
EDIT: I loaded the page from a cloud box, and wow, I'm getting MITMed! Seems to only be for this site, wonder if it's some kind of sensitivity to the .family TLD.
Ooft. If it helps, this is the PEM I'm getting. Let's Encrypt signed.
I believe this has been patched time and time again in on-premises variants like OpenShift. Curious to check if it’s there in small variants like microk8s, k3s, etc., as I’m considering moving a few offline services to Talos.
Talos' KubeSpan is backed by a Sidero-hosted discovery service that cannot be self-hosted without a commercial license.
I've used k8s a lot, at several companies. I am convinced that 99.9% of the people who use it should not be. But it's more fun than deploying VM images, at least.
I'm running k3s at home on a single node with local storage. A few blogs, a forum, MinIO.
Very easy, reliable.
Without k3s I would have used Docker, but k3s really adds important features: easier-to-manage networking, more declarative configuration, bundled Traefik...
So, I'm convinced that quite a few people can happily and efficiently use k8s.
In the past I used another k8s distro (Harvester), which was much more complicated to use and fragile to maintain.
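As a concrete example of the declarative side: k3s applies anything dropped into its manifests directory, so a small service is just a file. A sketch with made-up names and an example image tag:

    # k3s watches this directory and applies/updates whatever it finds
    sudo tee /var/lib/rancher/k3s/server/manifests/whoami.yaml <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: whoami
      namespace: default
    spec:
      replicas: 1
      selector:
        matchLabels: {app: whoami}
      template:
        metadata:
          labels: {app: whoami}
        spec:
          containers:
          - name: whoami
            image: traefik/whoami:v1.10
    EOF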
Check out Talos Linux if you haven't already, it's pretty cool (if you want k8s).
With Talos, how do you manage node settings that k8s does not yet handle?
Talos has its own API that you interact with primarily through the talosctl command line. You apply a declarative machineconfig.yaml, with which custom settings can be set per node if you wish.
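For example, a per-node tweak can go in as a small patch; a sketch where the node IP and the sysctl are just illustrations:

    cat > patch.yaml <<'EOF'
    machine:
      sysctls:
        net.core.somaxconn: "65535"
    EOF

    talosctl -n 10.0.0.5 patch machineconfig -p @patch.yaml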
I use k3s at home and for dev envs; I think it's completely fine, especially when it comes to deployment documentation.
I am way more comfortable managing a system that is k3s rather than something that is still using tmux that gets wiped every reboot.
Well... it's what I would have said until Bitnami pulled the rug and pretty much ruined the entire ecosystem: now you don't have a way to pull something that you know is trusted, with similar configuration and all, from a single repository, which makes deployments a pain in the ass.
However, on the plus side, I've just been creating my own every time I need one with the help of Claude, using Bitnami as a reference, and honestly it doesn't take that much more time; keeping them up to date is relatively easy as well with CI automations.
> I am way more comfortable managing a system that is k3s rather than something that is still using tmux that gets wiped every reboot.
Thoughts on tmux-resurrect[1]? It can even resurrect programs running inside of it as well. It feels like it could reduce complexity from something like k3s back to plain tmux. What are your thoughts on it?
[1]:https://github.com/tmux-plugins/tmux-resurrect?tab=readme-ov...
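For anyone curious, the setup is roughly this in ~/.tmux.conf, assuming TPM as the plugin manager (the process list is just an example):

    set -g @plugin 'tmux-plugins/tmux-resurrect'
    # optionally also restore specific running programs
    set -g @resurrect-processes 'ssh "tail -f"'
    # then: prefix + Ctrl-s to save, prefix + Ctrl-r to restore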
I had it break enough times to where I just don't bother.
Well, firstly I would love to know more about your workflow and where it actually broke, because I feel like the tmux-resurrect team could help or something, for sure.
I haven't used the tool itself, so I am curious, as I was thinking of a similar workflow myself some time ago.
Now please do answer the questions above, but I am also going to assume that you are right about tmux-resurrect; even then, there are other ways of doing the same thing.
https://www.baeldung.com/linux/process-save-restore
This mentions either CRIU, if you want a process to persist after a shutdown, or the shutdown utility's flags if you want to do it temporarily.
I have played around with CRIU and Docker; Docker can even use CRIU with things like docker checkpoint, and I have played with that as well (I used it to shut down mid-compression of a large file and resume the compression exactly from where I left off).
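The flow I mean is roughly this (docker checkpoint is still behind the daemon's experimental flag, and the names here are made up):

    # checkpoint a running container to disk (this stops it)
    docker checkpoint create bigjob cp1

    # later, resume it exactly where it left off
    docker start --checkpoint cp1 bigjob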
What are your thoughts on using CRIU + Docker or CRIU + tmux? I think that by itself might be an easier thing than k3s for your workflow.
Plus, I have seen some people mention VPSes where they run processes for 300 days or more without a single shutdown, IIRC, and I feel like modern VPS providers are insanely good at uptime, sometimes even more so than the cloud providers.
Same here. I went through a few projects since 2021 where doing Kubernetes setups was part of my role on the consulting project, and I would say I prefer managed container solutions, e.g. Azure Web Apps, or, when running locally, plain systemd or Docker Compose.
For anything else: most companies aren't web-scale enough to set up their own full Kubernetes clusters with failover regions from scratch.
I like Docker (Compose) + Portainer for small deployments.
What makes you come to that conclusion?
They’ve never worked on a real SOA/multi-team/microservices project with more than 20 separate deployments before, and assume no one else does.
Yeah, that pause image was really annoying when I was hosting a k8s cluster on Hetzner, since the `registry.k8s.io` registry was blocking some Hetzner IPs, as it's hosted on Google.
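One workaround is a registry mirror; a sketch assuming containerd 1.6+ with config_path enabled and a placeholder mirror URL:

    # /etc/containerd/config.toml needs:
    #   [plugins."io.containerd.grpc.v1.cri".registry]
    #     config_path = "/etc/containerd/certs.d"

    sudo mkdir -p /etc/containerd/certs.d/registry.k8s.io
    cat <<'EOF' | sudo tee /etc/containerd/certs.d/registry.k8s.io/hosts.toml
    server = "https://registry.k8s.io"

    [host."https://mirror.example.com"]
      capabilities = ["pull", "resolve"]
    EOF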
Nice to know, though I wonder how many companies are really using all private images? I've certainly had a client running their own Harbor instance, but almost all the others pulled from Docker Hub or GitHub (ghcr.io).
Lots of medical and governmental organisations are not allowed to run in public cloud environments. It's part of my job to help them get set up. However, in reality that often boils down to devs whining about adding a registry to Harbor to cache; nobody is going to recompile base images and read through millions of lines of third-party code.
A lot of security is posturing and posing to legally cover your ass by following an almost arbitrary set of regulations. In practice, most end up running the same code as the rest of us anyway. People need to get stuff done.
Pretty much all enterprises are using their own ECR/GCR/ACR.
The Public Sector and anyone concerned with compliance under the Cyber Resilience Act should really use their own private image store. Some do, some don't.
I work on the container registry team at my current company running a custom container registry service!
How does this require a whole team? Unless you're working at a hyperscaler
Maybe they work for Docker.
Easy. Don't use Kubernetes. You'll thank me later.