Kubernetes: Two Years In, Myths, and the Complexity Conundrum

It’s been two years since I stepped into the Kubernetes world, and during that time I’ve been labeled the “Kubernetes Engineer” among friends. It’s no surprise: most people either have zero knowledge of K8s or simply recognize the name, usually followed by “that overly complex system.”

As I hit this two-year milestone, I wanted to share my reflections on working in this field. This isn’t specific to GKE but more about Kubernetes as a whole. I’m still a relative newbie, but here are some stereotypes, myths, and pain points I’ve encountered:

The Customization Conundrum: Complexity is King

Kubernetes was designed to orchestrate containerized workloads, scaling them and allocating resources seamlessly. But over its ten years, Kubernetes has evolved into a giant, complex beast. Years of user requests and feature additions have made it adaptable to various scenarios, but at the cost of simplicity.

One example that comes to mind is container types. Originally there were two kinds of containers: init containers handle a pod’s setup work before the main containers run, and the main containers are the workloads themselves, containerized to serve users’ needs. Later, ephemeral containers and sidecar containers (a special kind of init container) were introduced for more specialized needs. These changes significantly increase how much the container lifecycle can be customized, but at what cost? The pod lifecycle becomes more complex, and the new features serve only a handful of advanced cases while confusing “orthodox” users.
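To make the distinction concrete, here is a minimal sketch of a Pod spec showing all three declared flavors; the names, images, and commands are placeholders, not anything from a real workload:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo            # hypothetical example pod
spec:
  initContainers:
    - name: setup                 # classic init container: runs to completion before the main container
      image: busybox:1.36
      command: ["sh", "-c", "echo preparing && sleep 2"]
    - name: log-shipper           # sidecar: an init container kept running for the pod's lifetime
      image: busybox:1.36
      command: ["sh", "-c", "tail -f /dev/null"]
      restartPolicy: Always       # this field is what turns an init container into a sidecar
  containers:
    - name: app                   # the "main" container serving user traffic
      image: nginx:1.27
  # ephemeral containers are not declared in the spec at all; they are injected
  # into a running pod, e.g. via `kubectl debug -it lifecycle-demo --image=busybox`
```

Even this toy example hints at the lifecycle complexity: four container concepts, three places they can appear, and one of them only exists at debug time.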

The Dream Gap: Kubernetes for the Big Leagues

Kubernetes is so hard for small teams and individual users. Even minikube is way too complex! I believe everyone who touches Kubernetes, whether newcomer or expert, complains about writing every workload in YAML. I’ve seen YAML files running to thousands of lines that still don’t include a single custom resource. Honestly, just writing the spec files is enough to push away anyone who only wants a taste.
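To illustrate the boilerplate, here is roughly the minimum YAML needed to run one trivial stateless web app and expose it inside the cluster; the names and image are illustrative only:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web                 # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web              # must match the pod template labels below
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.27
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  selector:
    app: hello-web                # routes traffic to the pods above
  ports:
    - port: 80
      targetPort: 80
```

Forty-odd lines, three copies of the same label, and we haven’t touched ingress, config, secrets, or resource limits yet. It is easy to see how real systems balloon into thousands of lines.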

Since Kubernetes was originally designed to solve orchestration problems for large systems, we shouldn’t complain that only big companies use it. But the story changed over time. For most users, the existing system already satisfies their basic needs; the situation is totally different for bigger companies, which always have new requirements and lead the development of the Kubernetes open source project. Things turn ugly when cloud providers start competing on differentiation. Because their lower-level compute resources differ, more and more Kubernetes optimizations become trade secrets. Companies keep their own code bases and only merge the necessary pieces upstream. This makes the system hard to understand, let alone clean and elegant.

The Infrastructure Investment Illusion

Most Kubernetes users who rely on cloud providers share the same strange belief that they can cut their infrastructure investment. The logic seems to be: since the cloud provider handles all the orchestration, they can save the time and resources they would have spent maintaining their own infrastructure. This turns out to be completely wrong. All the top customers use the Kubernetes engineers the cloud provider offers, but they also keep their own tech stack and in-house experts. Sometimes I don’t understand why they don’t just build on top of the cloud provider’s bare-metal machines, since they already have all the K8s expertise themselves.

But things are completely different for some large-cap, non-tech companies. Instead of infra experts, they rely on hiring “expert K8s users.” These folks can certainly deploy workloads and do some simple debugging, but once a major issue hits, they can do nothing except wait for the cloud provider to support them. In their mindset, using Kubernetes transfers the investment: they pay slightly more than the raw cost of cloud VMs and expect the cloud provider to support them in exchange. Sadly, even cloud providers can’t always deliver that support, and I’ve observed these large-cap non-tech companies starting to hire K8s developers to compensate. Overall, it still ends up costly.


These are my reflections after two years. With deeper understanding, I’m sure my perspective will evolve. As I spend more time in this domain, I hope to share insights on K8s’ evolution and projects that might bridge the gap towards a more accessible Kubernetes.

Also, please share your thoughts with me.

