Fast Feedback Loop for Kubernetes Product Development in a Production Environment
Published by: ap
As DoorDash continues its rapid growth, product development must keep pace, moving new features into production faster and with high reliability. Shipping features without previewing them in the production Kubernetes environment is risky and could slow down the product development process, because any bugs or defects would send everything back to the drawing board. In a Kubernetes environment, developers must build, push, and deploy Docker images in the cluster to preview their features before they are pushed to production. This previewing process was too slow for DoorDash’s needs, so the development productivity team had to find a way to build a faster feedback loop.
The availability of elastic cloud computing, scalable cloud storage, and Infrastructure as a Service (IaaS) from a variety of cloud providers presents unique opportunities for many companies looking to stay competitive in a new age where software is conquering the world. However, the 24/7 nature of modern software systems, which are expected to be highly available, responsive, and infinitely scalable, presents unique challenges to traditional development and operations teams.
With regard to these opportunities and challenges, companies tend to operate in one of two modes: innovation mode or firefighting mode. Innovation mode is when the development and operations teams are focused on creating new solutions, moving fast from ideation to production, and generating business value. Firefighting mode is when the development and operations teams are not in sync (occasionally even hostile toward each other), projects are delayed by constant interruptions of engineering teams, and a continuous stream of production issues and instabilities leaves customers unsatisfied.
What Is Governance?
Governance refers to the ability of the operations team to verify and enforce certain policies and standards across the entire organization or within specific clusters. By reducing variations in the infrastructure, you reduce your maintenance cost and attack surface. Standardization also enables automation of common tasks and improves efficiency at an organizational level. The policies and standards you want to enforce come from your organization’s established guidelines or agreed-upon conventions, and from best practices within the industry. They can also be derived from tribal knowledge that has accumulated over the years within your operations and development teams.
Why Is Governance Important?
As a business, you want to focus on innovation that differentiates your business and generates value for your customers, rather than churning time and resources on maintaining your infrastructure. Basically, you want to be in innovation mode rather than firefighting mode. In a cloud-native ecosystem, decisions are decentralized and are tackled at a rapid pace. Having a governance framework will allow your company to move fast but at the same time minimize risk, control costs, and drive efficiency, transparency, and accountability across the organization.
There are three key dimensions that need to be defined in order to establish a governance framework for your organization:
Targets: the clusters, workloads, or entities where you want to apply governance
Policies: the rules or standards you want to validate against your specified targets
Triggers: the catalyst, i.e., when the policy should be checked (e.g., after a git push, before a Kubernetes deployment, every 24 hours, every time an object spec changes in the cluster)
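As an illustration of how these three dimensions come together (the article does not prescribe a specific tool), here is a minimal sketch using OPA Gatekeeper, one common policy engine. The constraint below targets Deployments, applies a required-label policy, and is triggered by Kubernetes admission on every object change. All names are hypothetical, and it assumes the common "required labels" ConstraintTemplate has been installed:

```yaml
# Hypothetical Gatekeeper constraint: the "match" block picks the targets,
# the "parameters" block expresses the policy, and the admission webhook
# evaluating every incoming object spec acts as the trigger.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels        # assumes the required-labels ConstraintTemplate exists
metadata:
  name: deployments-must-have-owner
spec:
  match:                       # Targets: all Deployments in the cluster
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment"]
  parameters:                  # Policy: every Deployment must carry an "owner" label
    labels: ["owner"]
```

The same three-part shape applies whether the trigger is an admission webhook, a CI check after git push, or a periodic audit scan.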
Recently, I configured Kubernetes secrets for a Rails app to be stored in git for deploying the app using the GitOps approach for one of our clients. This blog post is about the approach I followed to store the Kubernetes secrets for GitOps.
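Plain Kubernetes Secrets are only base64-encoded, so they should never be committed to git as-is. One common approach for GitOps (an assumption here; the post excerpt does not name its tool) is Bitnami Sealed Secrets, where only the encrypted form lives in the repository. A minimal sketch with hypothetical names and illustrative, truncated ciphertext:

```yaml
# Hypothetical SealedSecret for a Rails app; the encryptedData values are
# ciphertext produced by the kubeseal CLI, so this file is safe to commit.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: rails-app-secrets      # hypothetical name
  namespace: production
spec:
  encryptedData:
    SECRET_KEY_BASE: AgBy3i4OJSWK...   # truncated ciphertext (illustrative)
    DATABASE_URL: AgCtr7fh92Jd...      # truncated ciphertext (illustrative)
```

The controller running in the cluster decrypts this into a regular Secret that the Rails Deployment can reference, so the git repository never holds plaintext credentials.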
Two security vulnerabilities have been discovered in Kubernetes that can enable recoverable denial-of-service attacks. Security researchers found these issues in Kubernetes’s Kubelet and API server components. The issues have been rated medium severity.
Platform9 announces new Managed Kubernetes (PMK) plans that start at zero cost and scale as customers grow
Published by: Jeremy
Since we launched our enterprise Platform9 Managed Kubernetes (PMK) three years ago, we learned a lot about real-world Kubernetes deployments. Large enterprise customers like Juniper have battle-tested our enterprise PMK product at scale running on hundreds of bare metal nodes across data centers. Many customers have benefited from our SaaS management capabilities, including automated deployments, upgrades, security patching, and SLA management.
In the last 12 months, we had several successful Kubernetes deployments around the world, highlighting enterprise momentum in Kubernetes and validating our industry-leading SaaS Management model.
Operating Kubernetes at scale is extremely challenging
However, we also found that a vast majority of companies are struggling with the complexity of operating Kubernetes in production. Kubernetes is complex and notoriously difficult to manage, particularly in on-premises or multi-cloud environments. Day 2 Operations are incredibly challenging: how do you handle upgrades to your clusters when there’s a new version or a security patch? How do you do the monitoring? HA? Scaling? Compliance? And more.
The operational pain is compounded by the industry-wide talent scarcity and skills gap. Most companies are struggling to hire the much sought-after Kubernetes experts, and they lack advanced Kubernetes experience to ensure smooth operations at scale.
Delivering production-grade Kubernetes in a way that doesn’t make your existing staff run for the hills (or get left holding the bag) is tough.
Many companies are still learning the ropes with Kubernetes
Furthermore, not everybody is ready to go into production right away. For many companies, Kubernetes is still new, and they are kicking the tires to figure out if, why, and when they want to use it. Companies want the room to start small, learn, test, and then scale to production on their terms.
Therefore, we decided to make our enterprise Kubernetes product more accessible across the board no matter where the customer is on their Kubernetes journey. We wanted DevOps teams and developers everywhere to enjoy the freedom of using Kubernetes at their own pace and in any environment of their choice so they can innovate for the business without having to deal with the day-to-day complexities of running Kubernetes in production.
Announcing new “Freedom” and “Growth” PMK plans
We are excited to announce today the launch of two new PMK plans (“Freedom” and “Growth”) that allow DevOps, ITOps, Platform Engineering, and cloud architects to:
Sign up online and instantly create upstream open-source Kubernetes clusters in under 5 minutes
Deploy clusters in any environment, ranging from developer laptops, on-premises VMs, or bare metal servers to edge infrastructure or public clouds
Eliminate the constraints of scarce Kubernetes skills, long implementation times, and day-2 operational activities such as upgrades, security patching, and monitoring
Gain the flexibility to start small, learn, test, and scale to production on their terms and pace.
Sign-up now to deploy your free cluster: https://www.platform9.com/signup
The Freedom plan is great for anyone getting started with Kubernetes and allows users to instantly install Kubernetes clusters of up to 20 nodes (800 vCPUs).
The Growth plan starts under $500/month, including an option for month-to-month payments, and provides a 99.9% SLA and 24×7 support for up to 50 nodes (2,000 vCPUs).
For more details on these pricing options, check out https://www.platform9.com/pricing
An extensive set of core features and support options unmatched anywhere
Before we dive into the details, first an important distinction:
Those of us who want to leverage Kubernetes in the enterprise know that words like “managed” and “service” (or “as-a-service”) are often thrown around with enterprise Kubernetes solutions. But they describe VERY different levels – and philosophies – of “management,” and of “service.”
What we mean is a fully managed Kubernetes service, where Platform9 does all of the heavy lifting and ongoing operations, so you don’t have to deal with any of the operational complexity. Don’t mistake “managed service” to mean a lot of people on keyboards manually managing your environment. Platform9 delivers a public-cloud-like service in on-premises, edge, and multi-cloud environments. This service is provided through a SaaS delivery model, developed with thousands of person-years of software automation engineering work. Moreover, the service is backed by an additional layer of Kubernetes-certified experts and customer success teams who also monitor and remediate the environment.
Both the Freedom and Growth plans use the same battle-tested and proven enterprise edition of Platform9 Managed Kubernetes (PMK) and provide the following set of core capabilities:
A SaaS Management Plane that remotely monitors, optimizes, and heals your clusters and underlying infrastructure, across all of your environments
Self-service, instant cluster creation (under 5 minutes) with native integrations across private and public clouds
1-click, in-place cluster upgrades to the latest version of Kubernetes
Automatic security patches – when a new CVE is discovered and fixed, a patch is automatically applied to all clusters
Built-in monitoring and alerts to ensure cluster health, including etcd cluster quorum loss, etcd node down, etcd repair failure, infrastructure resource utilization, node storage issues, network connectivity between nodes, Docker daemon down, and more
Managed observability (Prometheus and more) included by default. Users can configure these tools for their specific needs on each cluster (e.g., connect a different persistent storage or data visualization tool); a Grafana dashboard is integrated by default
Centrally manage all clusters from a single pane of glass
Control access to resources with fine-grained Kubernetes RBAC management
And much, much more.
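To make the RBAC capability above concrete, here is a minimal sketch of fine-grained access control in plain Kubernetes terms (the namespace, role, and group names are hypothetical, not part of PMK):

```yaml
# Hypothetical read-only Role and its binding: grants a developer group
# permission to view, but not modify, workloads in the "staging" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workload-viewer
  namespace: staging
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "deployments", "replicasets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-can-view
  namespace: staging
subjects:
  - kind: Group
    name: developers             # hypothetical group name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: workload-viewer
  apiGroup: rbac.authorization.k8s.io
```

Scoping roles to namespaces like this keeps teams isolated while a cluster-wide ClusterRole can be reserved for operators.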
For a more detailed list of capabilities and a comparison of these plans, check out: https://www.platform9.com/pricing/comparison
We believe these plans will make Kubernetes a no-brainer for DevOps, ITOps, and cloud platform teams in any company, no matter how large or small and no matter where they are in their Kubernetes journey, providing everyone with a superior Kubernetes experience on their infrastructure (on-prem, in the cloud, or at the edge).
Making it easier to migrate your apps into Kubernetes: Partnership with HyScale
Getting stable Kubernetes clusters deployed and operational is something that most DevOps and ITOps teams struggle with, but what about containerizing your existing complex apps? This migration can be a long and complicated endeavor in and of itself, which can be further hampered by all the new Kubernetes concepts that developers need to learn. How can we simplify this process?
HyScale is an application delivery platform that abstracts the complexities of containers and Kubernetes so that your application teams can quickly deliver containers and IT teams can drive up Kubernetes adoption.
We have partnered with HyScale to help our customers accelerate Kubernetes adoption and get developers excited about containerization and moving their apps to Kubernetes. Read this blog for more details on this partnership, including step-by-step instructions on how you can get your apps migrated over to PMK.
Managing your container images in a private registry, for FREE: Partnership with JFrog
If you have used or heard of Artifactory, then you know JFrog. JFrog has recently introduced the JFrog Container Registry, which is the most comprehensive and advanced container registry in the market today, and it is available for free.
Whether you are producing containerized software or merely running it, a private registry to store and manage your images is vital. A private registry can protect you from upstream changes, network failures, and third-party sources you have no control over. If you are producing images, you need a private registry to version your software, track its dependencies, and allow for reproducible builds. This is where the JFrog Container Registry (JCR) comes in. It’s easy to deploy JCR on top of PMK and get a free registry running on a free PMK cluster anywhere.
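Once a private registry is running, pods pull images from it using a registry credential. A minimal sketch (the registry hostname, image, and Secret name below are hypothetical, not JCR defaults):

```yaml
# Hypothetical Pod pulling from a private registry; "jcr-credentials" is a
# docker-registry type Secret created beforehand with the registry login.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  imagePullSecrets:
    - name: jcr-credentials                      # hypothetical Secret name
  containers:
    - name: app
      image: jcr.example.com/team/demo-app:1.0   # hypothetical registry host
```

The same imagePullSecrets pattern works for Deployments and other workload types, since they embed this pod spec.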
So what are you waiting for? Give our “Freedom” plan a spin. It’s free forever, no credit card required. Really!
Here’s the link again: https://www.platform9.com/signup/
You can get going with a single node Kubernetes on your laptop. Here are the step-by-step tutorials for deploying on your laptop:
Deploy on Apple macOS with VirtualBox
Deploy on Windows with VirtualBox
Once you have deployed your cluster, here are more tutorials to help you get started with container applications:
Set up your NGINX Ingress Controller
Get your first container up and running
Deploy a complex microservices app
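As a taste of what those tutorials cover, here is a minimal Ingress sketch that routes traffic to a Service through the NGINX ingress controller (the hostname and service name are hypothetical placeholders):

```yaml
# Hypothetical Ingress: routes requests for demo.example.com to the
# "demo-app" Service on port 80 via the NGINX ingress controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app     # hypothetical Service name
                port:
                  number: 80
```

With the ingress controller installed, applying this manifest exposes the Service under the given hostname without assigning it a load balancer of its own.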
Kubernetes adds many enhancements and feature sets for edge-based network infrastructure. It:
Streamlines workload and resource management using policy-based scheduling.
Adds security and networking features.
Enables auto-scaling and traffic shaping for better resource utilization and workload prioritization.
Apart from the Kubernetes IoT Edge working group community, many companies have key developments in progress to integrate and harness the power of Kubernetes for edge and IoT. I will cover more details about Kubernetes for edge in upcoming articles.
Kubernetes 1.14 consists of 31 enhancements: 10 moving to stable, 12 in beta, and 7 net new. The main themes of this release are extensibility and supporting more workloads on Kubernetes with three major features moving to general availability, and an important security feature moving to beta.
More enhancements graduated to stable in this release than any prior Kubernetes release. This represents an important milestone for users and operators in terms of setting support expectations. In addition, there are notable Pod and RBAC enhancements in this release, which are discussed in the “additional notable features” section below.
The technology world is looking for flexible IT infrastructure that will easily evolve to meet changing data and performance requirements in support of the onslaught of upcoming and lucrative use cases. Kmesh addresses data management and data sovereignty concerns while decreasing costs associated with storage and network resources.
Kubernetes works on the principle of assigning IP addresses to pods, known as the “IP-per-pod” model. The IPAM (IP address management) task is left to third-party solutions, including Docker networking, Flannel, IPvlan, Contiv, Open vSwitch, GCE, and others.
The Kubernetes architecture consists of a master node and a replication controller, in conjunction with the nodes used to host the pods. Before we go ahead, here is a review of Kubernetes terms.
In this blog post we will deploy OpenFaaS – Serverless Functions Made Simple for Kubernetes – on AWS using Amazon Elastic Container Service for Kubernetes (Amazon EKS). We will start by installing CLIs to manage EKS, Kubernetes, and Helm, and then move on to deploy OpenFaaS using its Helm chart repo.
Linkerd 2.0 was recently announced as generally available (GA), signaling its readiness for production use. In this tutorial, we’ll walk you through how to get Linkerd 2.0 up and running on your Kubernetes cluster in a matter of seconds.