Understanding Karpenter in AWS: The Game-Changing Kubernetes Autoscaler

Itskmyoo
3 min read · Dec 28, 2024

Kubernetes has revolutionized the way applications are deployed and scaled, but provisioning infrastructure dynamically and cost-effectively has remained a challenge. Enter Karpenter, an open-source Kubernetes node autoscaler built by AWS that launches right-sized compute for your workloads as they appear. Here’s why Karpenter is a tool every cloud engineer should be aware of.

What is Karpenter?

Karpenter is a high-performance Kubernetes cluster autoscaler developed by AWS. Unlike traditional node scaling solutions, Karpenter provides:

  • Dynamic Node Provisioning: It launches nodes optimized for your workloads in seconds.
  • Cost Efficiency: It analyzes workload requirements and mixes spot and on-demand instances dynamically to keep costs down.
  • Enhanced Performance: Karpenter ensures minimal scheduling latency and optimal workload distribution.
  • Flexibility: Its core is cloud-agnostic by design, although the AWS/EC2 integration is by far the most mature.

Why Use Karpenter?

  • Reduced Over-Provisioning: Traditional cluster autoscalers often result in over-provisioning. Karpenter only provisions what your workloads need.
  • Rapid Scaling: Karpenter provisions nodes faster than traditional solutions, adapting seamlessly to sudden traffic spikes.
  • Spot Instance Optimization: It can blend spot and on-demand capacity to cut costs while keeping workloads reliable (a small example of how this is expressed follows this list).
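
To make that concrete, here is a minimal sketch of how these choices are expressed: a NodePool (covered in full later in this post) lists the capacity types and instance families Karpenter is allowed to draw from. The specific categories below are just one possible policy, not a recommendation.

# Fragment of a NodePool spec; the full resource is shown later in this post.
# Karpenter may choose spot or on-demand capacity from the c, m, and r families.
requirements:
  - key: karpenter.sh/capacity-type
    operator: In
    values: ["spot", "on-demand"]
  - key: karpenter.k8s.aws/instance-category
    operator: In
    values: ["c", "m", "r"]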

Key Use Case: Autoscaling for E-commerce Platforms

Consider an e-commerce platform experiencing traffic spikes during sales events. With Karpenter, you can:

  1. Dynamically provision additional nodes to handle increased traffic (a quick way to simulate this is shown after this list).
  2. Optimize costs using spot instances during non-peak hours.
  3. Keep the platform available by replacing interrupted or unhealthy nodes automatically.
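
A rough way to rehearse this scenario is to scale a deployment sharply and watch Karpenter absorb the pending pods; the "storefront" deployment name below is only a placeholder for your own service.

# Simulate a sale-event spike against a hypothetical "storefront" deployment.
kubectl scale deployment storefront --replicas=200

# Watch Karpenter launch extra nodes to fit the pending pods.
kubectl get nodes -w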

Setting Up Karpenter on AWS

Prerequisites

  • An AWS account with IAM permissions for EC2 and EKS.
  • An existing Kubernetes cluster on Amazon EKS.
  • The AWS CLI and kubectl configured on your local machine (a quick verification is shown below).
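
A quick sanity check of these prerequisites, assuming default AWS CLI and kubectl setups, looks like this:

# Confirm the AWS CLI is authenticated.
aws sts get-caller-identity

# Confirm kubectl points at the intended EKS cluster and can reach it.
kubectl config current-context
kubectl get nodes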

Step 1: Install Karpenter Helm Chart

We’ll follow the official Karpenter documentation, which is more than enough to set it up in your environment. Note that recent releases publish the Helm chart to an OCI registry and move the node instance profile out of the Helm values and into the EC2NodeClass, so pin a chart version and double-check the values for the release you install.

helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --version "<KARPENTER_VERSION>" \
  --namespace karpenter \
  --create-namespace \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=<YOUR_IAM_ROLE> \
  --set settings.clusterName=<EKS_CLUSTER_NAME>
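
Before moving on, confirm the controller came up cleanly in the karpenter namespace used above.

# The Karpenter controller pods should be Running before you create any resources.
kubectl get pods -n karpenter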

Step 2: Configure EC2NodeClass and NodePools

Define EC2NodeClass

Create a YAML file (ec2-node-class.yaml) that tells Karpenter which AMIs, subnets, security groups, and instance profile to use. The selectors below assume your subnets and security groups are tagged with karpenter.sh/discovery = <EKS_CLUSTER_NAME>, and the v1beta1 API is shown (field names shift slightly in the newer v1 API):

apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default-ec2-nodeclass
spec:
  amiFamily: AL2
  instanceProfile: "<INSTANCE_PROFILE>"
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "<EKS_CLUSTER_NAME>"
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "<EKS_CLUSTER_NAME>"

Apply the EC2NodeClass:

kubectl apply -f ec2-node-class.yaml
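
If the apply succeeds, the CRD is installed and the node class is registered; a quick check:

# Confirm the EC2NodeClass was created.
kubectl get ec2nodeclass default-ec2-nodeclass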

Define NodePool

Create a YAML file (nodepool.yaml) to define which nodes may be launched, the total capacity limit, and how quickly empty nodes are removed:

apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default-nodepool
spec:
  template:
    spec:
      nodeClassRef:
        apiVersion: karpenter.k8s.aws/v1beta1
        kind: EC2NodeClass
        name: default-ec2-nodeclass
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
  limits:
    cpu: "2000"
  disruption:
    consolidationPolicy: WhenEmpty
    consolidateAfter: 60s

Apply the NodePool:

kubectl apply -f nodepool.yaml
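
As with the node class, it is worth confirming the NodePool was accepted before deploying workloads:

# Confirm the NodePool was created.
kubectl get nodepool default-nodepool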

Real-Life Implementation Code

Sample Application Deployment

Deploy a sample application to see Karpenter in action.

Step 1: Deploy a Sample Workload

apiVersion: apps/v1
kind: Deployment
metadata:
  name: karpenter-sample
spec:
  # 100 small pods ensure the existing nodes run out of room, forcing Karpenter to act.
  replicas: 100
  selector:
    matchLabels:
      app: karpenter-sample
  template:
    metadata:
      labels:
        app: karpenter-sample
    spec:
      containers:
        - name: sample-app
          image: public.ecr.aws/nginx/nginx:latest
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"

Apply the workload:

kubectl apply -f sample-app.yaml

Step 2: Monitor Scaling

Watch the nodes scale dynamically to accommodate the workload:

kubectl get pods -o wide
kubectl get nodes
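
To see Karpenter’s own view of the scale-up, inspect its NodeClaims and controller logs, then delete the workload and watch the empty nodes go away once consolidateAfter expires. The label selector below assumes the Helm chart’s default labels.

# NodeClaims map to the EC2 instances Karpenter launched for the NodePool.
kubectl get nodeclaims

# Follow provisioning and consolidation decisions in the controller logs.
kubectl logs -n karpenter -l app.kubernetes.io/name=karpenter -f

# Remove the sample workload and watch empty nodes get cleaned up.
kubectl delete -f sample-app.yaml
kubectl get nodes -w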

Conclusion

Karpenter simplifies Kubernetes cluster scaling with an emphasis on performance and cost optimization. By dynamically provisioning resources based on workload demands, it ensures that your infrastructure adapts to changes efficiently.

Whether you’re running high-traffic applications or optimizing costs with spot instances, Karpenter is a must-know tool for Kubernetes engineers. Start leveraging it today to unlock the full potential of your Kubernetes clusters.

For more details, visit the Karpenter documentation at https://karpenter.sh and the GitHub repository at https://github.com/aws/karpenter-provider-aws.
