So, you finally decided to secure your Helm installation? That's great! Sounds easy enough, right? And as a bonus, all sources seem to tell the exact same story: “All you have to do is follow these steps, and you are good to go.” Right? Wrong. Well, not really wrong: it is a fairly simple procedure, but brace yourself for more typing. Much more. But have no fear, the solution is here (in this post, if that was unclear).
According to the docs, in Kubernetes, ConfigMap resources “allow you to decouple configuration artifacts from image content to keep containerized applications portable.” Used with pods, ConfigMaps can dynamically add or change files used by containers.
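As a sketch of what that looks like, here is a minimal ConfigMap manifest; the name `app-config` and the keys inside `data` are made up for illustration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # hypothetical name
data:
  # Each key becomes a file when the ConfigMap is mounted as a volume.
  app.properties: |
    log.level=info
    cache.ttl=30s
```

Mounted as a volume in a pod, this would surface as a file `app.properties` inside the container, which can be swapped out without rebuilding the image.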
In Kubernetes, pods are the smallest deployable units of computing that can be created and managed. A pod is a group of one or more containers (Docker, rkt, etc.), with shared storage/network, and a specification for how to run the containers.
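A minimal manifest makes the "group of containers with shared storage" part concrete; the pod, container, and volume names below are invented for the example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-demo          # hypothetical name
spec:
  volumes:
    - name: shared-data      # one volume, visible to both containers
      emptyDir: {}
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

Both containers mount the same `emptyDir` volume and share a network namespace, so `reader` can see the file `writer` created and could reach it over `localhost`.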
When learning a new technology like Istio, it’s always a good idea to take a look at sample apps. The Istio repo has a few sample apps, but they fall short in various ways. BookInfo is covered in the docs and is a good first step. However, it is too verbose, with too many services for my taste, and the docs seem to focus on managing the BookInfo app rather than building it from the ground up. There’s a smaller helloworld sample, but it’s more about autoscaling than anything else.
Rate limiting is an effective and simple way to mitigate cascading failure and shared-resource exhaustion. Envoy is a feature-rich proxy which allows for the easy addition of rate limiting to any service. This post walks through configuring Envoy to enforce rate limiting without changing any application-level configuration.
This is part 4 of a multi-part series covering the programmability of the Kubernetes API using the official clients. This post covers the use of the Kubernetes Go client, client-go, to implement a simple PVC watch tool, which I implemented in Java and Python in my previous posts.
Deploying an application is traditionally the most challenging part of the software delivery process. No two machines are the same, the person who usually does the deployments is on vacation, and the risk of disrupting production is ever looming. Without proper automation and safety checks, it can be a very daunting process.
One of my bugbears about Kubernetes (K8s), even with managed versions such as GKE, is that you still need to twist way too many knobs and get into the weeds way too quickly. So much so that I am pretty sure my teammates are tired of me saying, “The old me would love K8s (managed version or not)”, because coming from an operational background, it was designed to keep the old me happy!