I have read this article about using Kubernetes to run small projects multiple times now, and I thought I could share why I think that might not be a great idea.

The article is called "Kubernetes: The Surprisingly Affordable Platform for Personal Projects". Read it! It's good! I think it works great as an introduction to what Kubernetes can do, even though I didn't agree with the point it is trying to make.

Before we start, I want to clarify that I'm assuming here that you are not already running a Kubernetes cluster. If you already have a cluster, it might make perfect sense to run a project there. After all, you already have all the infra ready. If you don't, though, I don't think it is a good idea to build a cluster just for one project. Let's see why.


Let's start by saying that the article assumes you are already using Docker containers: it doesn't really mention that, to run your project on Kubernetes, you will need to build a container for it in the first place. You might actually need more than one (dev, prod, and service dependencies).

I'm working on a side project now. I don't run any containers. I've just installed everything on my laptop, and I compile and run everything from there. You could argue that one day I will move to a different laptop and will have to fight my way through all the issues I find while doing so, and you would be right. That is the price I pay for not having to deal with anything related to containers. Luckily, my project is not really that complicated, so while I might hit some issues with some dependencies, I will probably be alright.

The article then goes on to explain some of the decisions you will need to make when you start thinking about running your project, with the idea of showing how Kubernetes helps with every one of them. I will try to give some answers as to how I think about each of these decisions for a small project.

How do you deploy your application? Just rsync it to the server?

Why not? Sounds good to me. It is not very fancy, to be sure, but on the other hand why do you really need something more powerful?

What about dependencies? If you use python or ruby you're going to have to install them on the server. Do you intend to just run the commands manually?

Maybe. Maybe not. I happen to know Ansible already, so I might consider writing an Ansible playbook. Installing packages from the official repos sounds good to me as well.
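A playbook for this doesn't have to be big either. A sketch, assuming a Debian-ish server; the host alias and package names are placeholders:

```yaml
# playbook.yml — install runtime dependencies from the distro repos
- hosts: myserver
  become: true
  tasks:
    - name: Install packages
      apt:
        name: [python3, python3-venv, nginx]
        state: present
```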

How are you going to run the application? Will you simply start the binary in the background and nohup it? That's probably not great, so if you go the service route, do you need to learn systemd?

I would say this depends on what I will be running, but systemd doesn't sound that bad.
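A unit file for a small app is about ten lines. A sketch; the paths, user, and binary name are assumptions:

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=My side project
After=network.target

[Service]
User=myapp
WorkingDirectory=/srv/myapp
ExecStart=/srv/myapp/bin/server
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now myapp`, and you get restarts on failure and logs in journald for free.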

How will you handle running multiple applications with different domain names or http paths? (you'll probably need to setup haproxy or nginx)

Nginx virtual hosts. The article later on actually deploys Nginx containers in Kubernetes as a DaemonSet and creates a virtual host for the app anyway (because, apparently, HTTP load balancing in GCE is expensive). Is a single Nginx virtual host more complex or expensive than deploying an Nginx DaemonSet and virtual host in Kubernetes? I don't think so.
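For comparison, this is roughly what that single virtual host looks like — a sketch, assuming the app listens on a local port; the names and port are made up:

```nginx
# /etc/nginx/sites-available/myapp.example.com
server {
    listen 80;
    server_name myapp.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```

Add one more `server` block per extra domain or app.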

Suppose you update your application. How do you rollout the change? Stop the service, deploy the code, restart the service? How do you avoid downtime?

I would do that, probably. There are tools that actually do zero-downtime deployments (puma in Ruby, for instance), although they won't work for every project. I would also wonder whether zero-downtime deployments are actually a requirement. My blog would simply go down if I did any maintenance on it, and I would be fine with that.

What if you screw up the deployment? Any way to rollback? (Symlink a folder...? this simple script isn't sounding so simple anymore)

I would probably just deploy an older version again.
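And if you want the symlink trick anyway, it is still just a few lines of shell. A sketch, assuming releases live in lexically sortable (e.g. timestamped) directories under a hypothetical app dir:

```shell
# rollback: repoint the "current" symlink at the previous release.
# Assumed layout: $app/releases/<sortable-name>/ and $app/current -> releases/<name>
rollback() {
  app=$1
  # Pick the second-newest release by name.
  prev=$(ls -1 "$app/releases" | sort | tail -n 2 | head -n 1)
  ln -sfn "releases/$prev" "$app/current"
}

# After repointing you would restart the service, e.g.:
#   rollback /srv/myapp && sudo systemctl restart myapp
```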

Does your application use other services like redis? How do you configure those services?

Install them from the official distro repos and edit the main config file. I might move this to an Ansible playbook at some point as well, especially if they need some specific or unusual config.
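The redis part often ends up being a couple of lines in the distro's default config file. A sketch, assuming Debian's /etc/redis/redis.conf; the memory numbers are made up:

```conf
# /etc/redis/redis.conf (fragment)
bind 127.0.0.1
maxmemory 256mb
maxmemory-policy allkeys-lru
```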

I think the point I'm trying to make is: do you actually need all of this? I wonder why someone running a $5 Kubernetes cluster even needs to be concerned about all of this stuff.

I believe that someone who is paying $5 a month to run a side project shouldn't be concerned about infra (yet). I would recommend you focus on what you are building. Every minute you spend improving the infra (for use cases you might never have), you could be spending coding a new feature, fixing that bug you've just discovered, or writing more content for your website.

Most of the time, I think I would prefer fighting with my crappy deployment script every once in a while over dealing with all the complexity that running Kubernetes involves.


Complexity is not only the price you pay to learn the tools. It is also the cost of maintaining them and fixing issues with them, now and in the future.

I think the article made a good point about the kinds of things that usually go wrong if you go the "traditional" way (say, a bad deployment). It doesn't mention anything that can go wrong when running Kubernetes, though. It seems like everything is always going to work.

Think of all the moving pieces that a Kubernetes cluster involves. All of them can fail at any point (and some of them are considerably more complicated than a script that scp's some files over). You are basically increasing the surface area of potential issues for your service.

When you make a change to your Kubernetes cluster in six months, will you remember all the information you have today? What about the DNS records, or the Go app you had to build to maintain them? Will they work all the time? Will you need to maintain them? Are you thinking about how to update Kubernetes or some of its components (like the Nginx controller)? What about the RBAC rules? I remember when RBAC was introduced and you were basically forced to create RBAC roles for most things. Will something like that happen again in the future? How long will it take you to deal with it? ...

It's true that GCE maintains the master nodes of the cluster, and that helps, but it doesn't really eliminate any of the issues mentioned above. In my experience, master nodes were not even that problematic, since they don't run any user workloads.

Kubernetes scales

It does scale to a large number of services and machines, but why would you need that for a small project?

How many services are you planning on running there? Is your service going to grow? When it actually needs to grow, will your service actually be able to work in a distributed way?

I completely understand the logic behind leaving the door open in case you need it in the future. My point is: when leaving that door open is slowing you down and making everything more complex, I would recommend closing it.

You can always move to Kubernetes or something similar later on if you need to, and it shouldn't be difficult to do so. Why pay the full price to get nearly none of the advantages? Keep it simple: you (probably) aren't gonna need it.

Maybe if you are running tens of side projects across a considerable number of machines it might be worth it? Who knows. I see the advantage of having a consistent "production" strategy (rollbacks, deployments, etc.) for every project, but I'm still not sure it's worth it even in this case.

The price tag

The article mentions that the whole cluster costs only $5 for three micro servers, the same price as a single DigitalOcean droplet. It doesn't mention, though, that the specs of these two types of machines are of course completely different, so the comparison is not very helpful.

Did you get from the article what the price of a single micro server in GCE is? $0. It's hard to beat that deal. And you will actually get the same performance as on the Kubernetes cluster (or better, since you won't be running the Kubernetes components).


I don't think I can personally recommend running a Kubernetes cluster specifically for this kind of workload. I don't think that Kubernetes was designed with this kind of use case in mind, anyway.

The advantages of Kubernetes really become visible when you are running different services, with different dependencies, on the same infrastructure, sharing a considerable number of servers. Developers get a nice, consistent API that they can use for any project, which I think is a great experience.

On top of that, in my opinion, a big advantage of Kubernetes for service developers is not having to deal with the infrastructure. That usually means a different team is managing and developing the cluster; otherwise, you are just making service developers run their whole production environment anyway.

I think the key advantage for developers using Kubernetes is that they can build containers, use YAML files and the HTTP API, and basically forget about all the infrastructure details. An actual service that provides just this could be pretty good for developers running this kind of small personal project. Maybe AWS Fargate? I will check it out at some point!

Do you want to do all of this because you think it is fun? Or because you want to learn the technology? Or just because? Please, be my guest! But really, would I do all of this just to run a personal project? No, thanks.