Containers

Tag: Karpenter

Life360’s journey to a multi-cluster Amazon EKS architecture to improve resiliency

This post was coauthored by Jesse Gonzalez, Sr. Staff Site Reliability Engineer, and Naveen Puvvula, Sr. Eng Manager, Reliability Engineering, at Life360. Life360 offers advanced driving, digital, and location safety features and location sharing for the entire family. Since its launch in 2008, it has become an essential solution for modern life around the world, […]

How Sentra manages data workflows using Amazon EKS, Dagster, and Karpenter to maximize cost-efficiency with minimal operational overhead

By Yael Grossman, Sr. Compute Specialist Solutions Architect at AWS, and Roei Jacobovich, Software Engineer at Sentra. In this post, we'll illustrate how Sentra utilizes Amazon Elastic Kubernetes Service (Amazon EKS), AWS Fargate, EC2 Spot, Karpenter, and an open-source version of Dagster, a cloud-native orchestrator, to run efficient and scalable data processing workloads on […]

Scaling Kubernetes with Karpenter: Advanced Scheduling with Pod Affinity and Volume Topology Awareness

This post was co-written by Lukonde Mwila, Principal Technical Evangelist at SUSE, an AWS Container Hero, and a HashiCorp Ambassador. Cloud-native technologies are becoming increasingly ubiquitous, and Kubernetes is at the forefront of this movement. Today, Kubernetes is seeing widespread adoption across organizations in a variety of industries. When implemented properly, Kubernetes can […]

Managing Pod Scheduling Constraints and Groupless Node Upgrades with Karpenter in Amazon EKS

Feb 2024: This blog has been updated for Karpenter version v0.33.1 and the v1beta1 specification. Karpenter is an open-source node lifecycle management project built for Kubernetes. It observes the aggregate resource requests of unschedulable pods and makes decisions to launch new nodes and terminate them to reduce scheduling latencies and infrastructure costs, sending commands to […]
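
For context on how that node lifecycle management is configured in the v1beta1 API mentioned above, the following is a minimal sketch of a Karpenter NodePool. It is illustrative only: the NodePool name, the referenced EC2NodeClass named "default", and the CPU limit are assumptions, not values taken from the post.

```yaml
# Minimal illustrative NodePool for Karpenter's v1beta1 API.
# Assumes an EC2NodeClass named "default" has already been created.
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        # Allow Karpenter to choose between Spot and On-Demand capacity
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        # Restrict provisioning to amd64 instance types
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:
        name: default
  # Consolidate underutilized nodes to reduce infrastructure cost
  disruption:
    consolidationPolicy: WhenUnderutilized
  # Cap the total resources this NodePool may provision
  limits:
    cpu: "1000"
```

With a manifest like this applied (for example, via kubectl apply), Karpenter reacts to unschedulable pods by launching right-sized nodes within the stated limits and later consolidating or terminating them when they are no longer needed.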