Kubernetes is the world’s most popular container orchestration tool. This open-source system provides a common platform for deploying containerized applications across hybrid-cloud environments. And if you choose a managed version like Google Kubernetes Engine (GKE), you don’t even need to configure the cluster itself: the cloud provider does it for you.
Recently, the Kubernetes community has added support for running large stateful applications such as databases, analytics, and machine learning. For example, you can use the StatefulSet workload controller to maintain a stable identity for each pod, and Persistent Volumes to persist data so it survives a service restart. If your workload depends on local storage, you can use PersistentVolumes with Local SSDs, and you can also use an SSD persistent disk as the boot disk for improved performance across different kinds of workloads.
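As a minimal sketch of how these pieces fit together, the StatefulSet manifest below gives each replica a stable identity and its own PersistentVolume via a volume claim template. All names here (`web`, `www`, the `nginx` image, the 1Gi size) are illustrative, not from the original text:

```yaml
# Hypothetical StatefulSet: each pod gets a stable name (web-0, web-1, web-2)
# and its own PersistentVolumeClaim, so data survives pod restarts.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "web"        # headless Service providing stable network identity
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:     # one PersistentVolumeClaim per replica
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

Because the claim template creates a separate PersistentVolumeClaim per pod, deleting and recreating a pod reattaches the same volume rather than starting from empty storage.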
In the past few years, developers have moved en masse to containers for their ease of use, portability, and performance. Today, we’re excited to announce that Google Cloud Platform (GCP) now offers container-native load balancing for applications running on Google Kubernetes Engine (GKE) and Kubernetes on Compute Engine, reaffirming containers as first-class citizens on GCP.
In the four years since we launched Google Kubernetes Engine (GKE), it has become a fast favorite among enterprises running large container-based applications in production. But even the biggest, most mission-critical system was once just a prototype, and even the largest organization was once just a startup.