Kubernetes supports the concept of ‘impersonation’, and we’re going to look at the user and group configuration we created using impersonation to enable least-privilege access to the cluster, even as an administrator. The goal was to make it harder to accidentally perform unwanted actions while keeping the complexity low.
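For a rough sense of what impersonation looks like from the client side: kubectl exposes it through the --as and --as-group flags, and client-go supports it directly on the REST config. The sketch below is illustrative only; the kubeconfig path, user name and group are placeholders rather than the configuration described in the post.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the admin kubeconfig, then impersonate a less-privileged
	// user/group so day-to-day requests run with reduced rights.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cfg.Impersonate = rest.ImpersonationConfig{
		UserName: "jane",                      // placeholder user
		Groups:   []string{"readonly-admins"}, // placeholder group
	}

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// This request is authorized as "jane", not as the admin credential.
	pods, err := client.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("saw %d pods as the impersonated user\n", len(pods.Items))
}
```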
As edge computing continues to gain momentum as a way to deal with the streaming data generated by numerous IoT devices, several challenges have emerged around remote management of software deployment and updates: latency, pre-processing of data, orchestration of different workloads, and end-to-end orchestration of compute resources. Kubernetes has emerged as a compelling solution for service providers and enterprises that have deployed, or want to deploy, edge nodes. It brings a cloud native approach to edge use cases, along with the large feature set it already offers for public cloud, private cloud and core datacenters.
This ebook focuses on current adoption of Kubernetes for edge use cases, existing Kubernetes-at-the-edge case studies, approaches to deployment, and open source and commercial solutions.
Running large Kubernetes clusters serving high volumes of traffic (thousands of nodes serving thousands of requests per second) requires tackling scaling challenges in both the control plane and the data plane. This talk will present options for performant networking as the number of nodes, services, endpoints and the volume of traffic grow in your Kubernetes cluster. Laurent and Manjot will cover how CNI plugins can provide efficient routing without requiring overlays, how kube-proxy can be configured to handle clusters with thousands of services and endpoints, and how ingress controllers can route traffic directly to pods without requiring NodePorts. Many of these solutions are at an early stage, and the talk will dive into the issues faced and how they were addressed. Finally, the talk will discuss upcoming technologies that will allow Kubernetes to scale even further.
At Namely we’ve been running Istio for a year now. Yes, that’s pretty much when it first came out. We had a major performance regression with a Kubernetes cluster, wanted distributed tracing, and used Istio to bootstrap Jaeger and investigate. We immediately saw the potential of a service mesh as it relates to our infrastructure and decided to make an investment in the tool.
The first release of Kubernetes in 2019 brings a highly anticipated feature - production-level support for Windows workloads. Up until now Windows node support in Kubernetes has been in beta, allowing many users to experiment and see the value of Kubernetes for Windows containers. While in beta, developers in the Kubernetes community and Windows Server team worked together to improve the container runtime, build a continuous testing process, and complete features needed for a good user experience. Kubernetes now officially supports adding Windows nodes as worker nodes and scheduling Windows containers, enabling a vast ecosystem of Windows applications to leverage the power of our platform.
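Purely as an illustration of what scheduling a Windows container looks like, the sketch below builds a pod that targets Windows workers via the standard kubernetes.io/os node label and prints the resulting manifest; the pod name and image are placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod that can only land on Windows workers: the standard
	// kubernetes.io/os node label is used as a node selector.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "iis-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/os": "windows"},
			Containers: []corev1.Container{{
				Name:  "iis",
				Image: "mcr.microsoft.com/windows/servercore/iis", // illustrative image
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```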
The Local Persistent Volumes feature has been promoted to GA in Kubernetes 1.14. It was first introduced as alpha in Kubernetes 1.7, and then beta in Kubernetes 1.10. The GA milestone indicates that Kubernetes users may depend on the feature and its API for production use. GA features are protected by the Kubernetes deprecation policy.
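For readers who haven’t used the feature, a local PV differs from a plain hostPath volume mainly in that it carries node affinity, so the scheduler knows which node owns the disk. A minimal sketch, with an assumed storage class, node name and disk path:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A local PV is bound to a specific node, so it must declare node
	// affinity telling the scheduler where the disk actually lives.
	pv := corev1.PersistentVolume{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "PersistentVolume"},
		ObjectMeta: metav1.ObjectMeta{Name: "example-local-pv"}, // illustrative name
		Spec: corev1.PersistentVolumeSpec{
			StorageClassName: "local-storage", // assumed storage class
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("100Gi"),
			},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/mnt/disks/ssd1"}, // illustrative path
			},
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"node-1"}, // illustrative node name
						}},
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(pv, "", "  ")
	fmt.Println(string(out))
}
```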
In a best-practice Kubernetes cluster, every request to the Kubernetes API server is authenticated and authorized. Authorization is usually implemented by the RBAC authorization module, but there are alternatives, and this blog post explains how to implement advanced authorization policies with Open Policy Agent (OPA) by leveraging the Webhook authorization module.
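For context, the Webhook authorization module POSTs a SubjectAccessReview to an external HTTP(S) service and acts on the allow/deny/no-opinion answer it gets back. The sketch below is a generic webhook responder with a made-up placeholder policy, not the OPA integration the post describes.

```go
package main

import (
	"encoding/json"
	"net/http"

	authzv1 "k8s.io/api/authorization/v1"
)

func main() {
	http.HandleFunc("/authorize", func(w http.ResponseWriter, r *http.Request) {
		var review authzv1.SubjectAccessReview
		if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}

		// Placeholder policy: forbid deleting anything in kube-system;
		// leave every other decision to the next authorizer (e.g. RBAC).
		attrs := review.Spec.ResourceAttributes
		if attrs != nil && attrs.Namespace == "kube-system" && attrs.Verb == "delete" {
			review.Status = authzv1.SubjectAccessReviewStatus{
				Denied: true,
				Reason: "deletes in kube-system are not allowed by policy",
			}
		}
		// Leaving Allowed=false and Denied=false means "no opinion",
		// so the API server falls through to the next authorizer.

		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(review)
	})

	// The API server is pointed at this endpoint via --authorization-webhook-config-file.
	http.ListenAndServe(":8443", nil) // TLS omitted for brevity; a real webhook should serve HTTPS
}
```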
kubeadm is a tool that enables Kubernetes administrators to quickly and easily bootstrap minimum viable clusters that are fully compliant with Certified Kubernetes guidelines. It’s been under active development by SIG Cluster Lifecycle since 2016 and we’re excited to announce that it has now graduated from beta to stable and generally available (GA)!
Early on Monday December 3rd, a boulder splashed into the placidly silent Kubernetes security channels. A potentially high severity authentication bypass was disclosed with scant explanation the same day that K8s version 1.13 went golden master. For Kubernetes administrators with PTSD from 2014’s Heartbleed, the CVE blast and its 37-line fix triggered palpitations in anticipation of sleepless patchfests to come.
Kubernetes is a great orchestrator for containers, but it does not manage the network for Pod-to-Pod communication. That is the job of Container Network Interface (CNI) plugins, which are a standardized way to provide network abstraction for container clustering tools (Kubernetes, Mesos, OpenShift, etc.).
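As a concrete, purely illustrative example, a CNI plugin is selected and configured through a small JSON document dropped into /etc/cni/net.d/ on each node. The sketch below emits a minimal configuration for the reference bridge plugin with host-local IPAM; the network name, bridge name and subnet are placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal CNI network configuration for the reference "bridge" plugin
// with "host-local" IPAM.
type netConf struct {
	CNIVersion string                 `json:"cniVersion"`
	Name       string                 `json:"name"`
	Type       string                 `json:"type"`
	Bridge     string                 `json:"bridge,omitempty"`
	IsGateway  bool                   `json:"isGateway,omitempty"`
	IPMasq     bool                   `json:"ipMasq,omitempty"`
	IPAM       map[string]interface{} `json:"ipam,omitempty"`
}

func main() {
	conf := netConf{
		CNIVersion: "0.3.1",
		Name:       "example-pod-network", // placeholder network name
		Type:       "bridge",
		Bridge:     "cni0",
		IsGateway:  true,
		IPMasq:     true,
		IPAM: map[string]interface{}{
			"type":   "host-local",
			"subnet": "10.244.0.0/16", // placeholder subnet
		},
	}

	out, _ := json.MarshalIndent(conf, "", "  ")
	// Placed in /etc/cni/net.d/, this tells the container runtime which
	// plugin wires up each Pod's network namespace.
	fmt.Println(string(out))
}
```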
So, finally decided to secure your Helm installation? That’s great! Sounds easy enough, right? And as an extra, all sources seem to be telling the exact same story: “All you have to do is follow these steps, and you are good to go”. Right? Wrong. Well, not really wrong, it is a fairly simple procedure, but brace yourself for more typing. Much more. A bit more. There you are. But have no fear, the solution is here (in this post, if that was unclear).
Rook is designed to run as a native Kubernetes service – it scales alongside your apps.
Rook offers storage for your Kubernetes app through persistent volumes.
Rook takes advantage of many benefits of the platform, such as streamlined resource management, health checks, failover, upgrades, and networking, to name just a few.
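To illustrate the persistent-volume angle, an application would typically request Rook-backed storage through a PersistentVolumeClaim against a Rook storage class. The sketch below assumes the rook-ceph-block class name used in Rook’s Ceph block-storage examples; your install may use a different name.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	storageClass := "rook-ceph-block" // assumed class name from Rook's block storage examples

	pvc := corev1.PersistentVolumeClaim{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "PersistentVolumeClaim"},
		ObjectMeta: metav1.ObjectMeta{Name: "app-data"}, // illustrative name
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &storageClass,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("10Gi"),
				},
			},
		},
	}

	out, _ := json.MarshalIndent(pvc, "", "  ")
	// Rook's operator provisions a volume behind this claim; the app just mounts the PVC.
	fmt.Println(string(out))
}
```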