Kubernetes liveness probes can be dangerous. I recommend avoiding them unless you have a clear use case and understand the consequences. This post looks at both liveness and readiness probes and describes some "DOs" and "DON'Ts".
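To illustrate the trade-off the post discusses, here is a minimal sketch of a pod spec that favors a readinessProbe over a livenessProbe; the image, endpoint path, and timings are placeholders, not recommendations:

```yaml
# Hypothetical pod spec: a readinessProbe stops routing traffic to an
# unhealthy pod, while a livenessProbe restarts the container. Ill-tuned
# liveness probes can trigger cascading restarts under load, so starting
# with readiness only is the safer default.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # placeholder image
      ports:
        - containerPort: 8080
      readinessProbe:
        httpGet:
          path: /healthz           # assumed health endpoint
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
        failureThreshold: 3        # tolerate transient slowness before unrouting
```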
Understanding, controlling, and securing external service access is one of the key benefits you get from a service mesh like Istio. From a security and operations point of view, it is critical to monitor which external service traffic is getting blocked, as it might surface a misconfiguration or a security vulnerability: an application attempting to communicate with a service it should not be allowed to reach. Similarly, if you currently allow any external service access, it is beneficial to monitor the traffic so you can incrementally add explicit Istio configuration to allow access and better secure your cluster. In either case, visibility into this traffic via telemetry enables you to create alerts and dashboards and to reason better about your security posture. This was a highly requested feature among production users of Istio, and we are excited that support for it was added in release 1.3.
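For context, the block-by-default pattern this telemetry supports is setting the mesh's outboundTrafficPolicy mode to REGISTRY_ONLY and then allowing hosts explicitly with a ServiceEntry; a sketch with an illustrative host name:

```yaml
# Illustrative ServiceEntry: explicitly allow one external host while the
# mesh's outboundTrafficPolicy.mode is REGISTRY_ONLY. Calls to hosts with
# no ServiceEntry are blocked and, as of Istio 1.3, surface in telemetry.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
    - api.example.com        # hypothetical external service
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
    - number: 443
      name: https
      protocol: TLS
```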
Setting Kubernetes requests and limits effectively has a major impact on application performance, stability, and cost. And yet working with many teams over the past year has shown us that determining the right values for these parameters is hard. For this reason, we have created this short guide and are launching a new product to help teams more accurately set Kubernetes requests and limits for their applications.
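For reference, requests and limits are set per container; the values below are placeholders rather than recommendations:

```yaml
# Per-container resource settings: the scheduler places pods based on
# requests, while limits cap actual usage (CPU above the limit is
# throttled; memory above the limit gets the container OOM-killed).
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: app
      image: example.com/api:1.0   # placeholder image
      resources:
        requests:
          cpu: "250m"      # share the scheduler reserves on a node
          memory: "256Mi"
        limits:
          cpu: "500m"      # hard ceiling; excess CPU is throttled
          memory: "512Mi"  # exceeding this OOM-kills the container
```

Setting requests too low risks noisy-neighbor contention and OOM kills; setting them too high wastes cluster capacity, which is the cost/stability tension the guide addresses.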
In this post, we’ll be outlining how to easily upgrade Istio control planes to 1.3 with the Banzai Cloud Istio operator, within a single-mesh multi-cluster topology or across a multi-cloud or hybrid-cloud service mesh.
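With an operator-based install, an upgrade is typically driven by editing the custom resource and letting the operator reconcile the change. A minimal sketch, assuming the operator's Istio custom resource exposes a version field as described in its documentation; verify the apiVersion and field names against the Banzai Cloud Istio operator's actual CRD:

```yaml
# Sketch of an operator-driven upgrade: bump the control plane version in
# the Istio custom resource; the operator rolls out the new components.
# apiVersion and field names here are assumptions, not a verified schema.
apiVersion: istio.banzaicloud.io/v1beta1
kind: Istio
metadata:
  name: istio
  namespace: istio-system
spec:
  version: "1.3.0"   # target control plane version
```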
If you have used Kubernetes for any length of time, you will have heard the term Service Mesh. Several big companies are backing service mesh projects, such as Google with Istio and the Cloud Native Computing Foundation with Linkerd.
So what is a Service Mesh, and how is it different from the standard Service and Ingress resources native to Kubernetes?
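As a point of comparison, the native primitives operate at the connection level. A plain Service like the sketch below gives you simple load balancing across pod endpoints, and a mesh layers per-request routing, mutual TLS, retries, and telemetry on top of it (names are illustrative):

```yaml
# A native Kubernetes Service: L4 load balancing across pods matching the
# selector. A service mesh adds L7 routing, mTLS, retries, and telemetry
# on top of this primitive via sidecar proxies.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend        # illustrative label
  ports:
    - port: 80
      targetPort: 8080
```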
Kubernetes supports the concept of ‘impersonation’. We’re going to look at the user and group configuration we created using impersonation to enable least-privilege access to the cluster, even for administrators, so that it is harder to accidentally perform unwanted actions while keeping the complexity level low.
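The RBAC side of this pattern is granting the impersonate verb on specific users and groups; a minimal sketch with illustrative names:

```yaml
# Allow impersonating one low-privilege user and group. A user bound to
# this ClusterRole does day-to-day work with reduced rights, e.g.:
#   kubectl get pods --as=readonly-user --as-group=readonly
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: impersonate-readonly
rules:
  - apiGroups: [""]
    resources: ["users"]
    verbs: ["impersonate"]
    resourceNames: ["readonly-user"]   # illustrative user name
  - apiGroups: [""]
    resources: ["groups"]
    verbs: ["impersonate"]
    resourceNames: ["readonly"]        # illustrative group name
```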
As edge computing continues to gain momentum for dealing with the streaming data generated by numerous IoT devices, several challenges have shown up around remote management of software deployment and updates: latency, pre-processing of data, orchestration of different workloads, and end-to-end orchestration of compute resources. Kubernetes has emerged as a strong solution for service providers and enterprises who want to deploy, or have already deployed, edge nodes. It brings a cloud native approach to edge use cases, along with the large feature set it offers for public cloud, private cloud, and core datacenters.
This ebook covers current scenarios for adopting Kubernetes in edge use cases, Kubernetes-at-the-edge case studies, deployment approaches, and open source and commercial solutions.
Running large Kubernetes clusters serving high volumes of traffic (thousands of nodes serving thousands of requests/second) requires tackling scaling challenges in both the control plane and the data plane. This talk will present options for performant networking as the number of nodes, services, endpoints, and traffic grows in your Kubernetes cluster. Laurent and Manjot will cover how CNI plugins enable efficient routing without requiring overlays, how kube-proxy can be configured to handle clusters with thousands of services and endpoints, and how ingress controllers can route traffic directly to pods without requiring NodePorts. Many of these solutions are at an early stage, and the talk will dive into the issues faced and how they were addressed. Finally, the talk will discuss upcoming technologies that will allow Kubernetes to scale even further.
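One concrete example on the kube-proxy side is switching from the default iptables mode to IPVS, which scales better as the number of services grows; a sketch of the relevant component configuration:

```yaml
# kube-proxy component config: IPVS mode uses in-kernel hash tables for
# service lookup instead of sequentially evaluated iptables rules, keeping
# lookup cost roughly constant as services and endpoints reach the thousands.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin; other schedulers (lc, sh, ...) are available
```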
At Namely we’ve been running with Istio for a year now. Yes, that’s pretty much when it first came out. We had hit a major performance regression in a Kubernetes cluster, wanted distributed tracing, and used Istio to bootstrap Jaeger to investigate. We immediately saw the potential of a service mesh for our infrastructure and decided to make an investment in the tool.
The first release of Kubernetes in 2019 brings a highly anticipated feature - production-level support for Windows workloads. Up until now Windows node support in Kubernetes has been in beta, allowing many users to experiment and see the value of Kubernetes for Windows containers. While in beta, developers in the Kubernetes community and Windows Server team worked together to improve the container runtime, build a continuous testing process, and complete features needed for a good user experience. Kubernetes now officially supports adding Windows nodes as worker nodes and scheduling Windows containers, enabling a vast ecosystem of Windows applications to leverage the power of our platform.
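Scheduling onto Windows worker nodes relies on the standard OS node label that the kubelet sets; a minimal sketch, with an illustrative image:

```yaml
# Pin a pod to Windows nodes via the well-known OS label. Without a
# selector (or taint) strategy, Linux images could land on Windows nodes,
# and vice versa, and fail to start.
apiVersion: v1
kind: Pod
metadata:
  name: iis
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: iis
      image: mcr.microsoft.com/windows/servercore/iis   # illustrative image
```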
The Local Persistent Volumes feature has been promoted to GA in Kubernetes 1.14. It was first introduced as alpha in Kubernetes 1.7, and then beta in Kubernetes 1.10. The GA milestone indicates that Kubernetes users may depend on the feature and its API for production use. GA features are protected by the Kubernetes deprecation policy.
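For reference, a local PersistentVolume exposes a disk on one specific node, so it must declare nodeAffinity and is usually paired with a StorageClass that delays binding until a pod is scheduled; a minimal sketch with illustrative paths and node names:

```yaml
# The nodeAffinity tells the scheduler which node owns the disk, and
# WaitForFirstConsumer delays volume binding until pod scheduling so the
# pod and its volume land on the same node.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1          # illustrative device mount path
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["node-1"]   # illustrative node name
```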