This Helm chart deploys Karpenter Optimizer on a Kubernetes cluster. Karpenter Optimizer analyzes your cluster usage and provides cost-optimized NodePool recommendations.
Add the Helm repository, then install the chart:

```bash
helm repo add karpenter-optimizer https://charts.karpenter-optimizer.io
helm repo update

helm install karpenter-optimizer karpenter-optimizer/karpenter-optimizer \
  --namespace karpenter-optimizer \
  --create-namespace
```
With Ollama (Legacy):
```bash
helm install karpenter-optimizer karpenter-optimizer/karpenter-optimizer \
  --namespace karpenter-optimizer \
  --create-namespace \
  --set config.ollama.enabled=true \
  --set config.ollama.url=http://ollama:11434 \
  --set config.ollama.model=granite4:latest
```
With IRSA (IAM Roles for Service Accounts) on EKS:
```bash
helm install karpenter-optimizer karpenter-optimizer/karpenter-optimizer \
  --namespace karpenter-optimizer \
  --create-namespace \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::ACCOUNT_ID:role/karpenter-optimizer-role \
  --set config.aws.region=us-east-1
```
See IRSA Setup Guide for detailed instructions.
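The same IRSA settings can also be kept in a values file instead of `--set` flags, which avoids the dot-escaping in the annotation key. A minimal sketch (the role ARN is a placeholder for your own):

```yaml
# values-irsa.yaml — equivalent of the --set flags above
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/karpenter-optimizer-role

config:
  aws:
    region: us-east-1
```

Install with `helm install karpenter-optimizer karpenter-optimizer/karpenter-optimizer --namespace karpenter-optimizer --create-namespace -f values-irsa.yaml`.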
With LiteLLM:
```bash
helm install karpenter-optimizer karpenter-optimizer/karpenter-optimizer \
  --namespace karpenter-optimizer \
  --create-namespace \
  --set config.llm.enabled=true \
  --set config.llm.provider=litellm \
  --set config.llm.url=http://litellm-service:4000 \
  --set config.llm.model=gpt-3.5-turbo \
  --set config.llm.apiKey=your-api-key
```
Karpenter Optimizer supports multiple LLM providers for AI-enhanced explanations:
LiteLLM provides a unified interface to multiple LLM providers (OpenAI, Anthropic, Azure OpenAI, etc.):
```yaml
config:
  llm:
    enabled: true
    provider: "litellm"
    url: "http://litellm-service:4000"
    model: "gpt-3.5-turbo"
    apiKey: "your-api-key"  # Optional, if LiteLLM requires authentication
```
Ollama is still supported for backward compatibility:
```yaml
config:
  ollama:
    enabled: true
    url: "http://ollama-service:11434"
    model: "granite4:latest"
```
Or use the new unified LLM configuration:
Or use the new unified LLM configuration:

```yaml
config:
  llm:
    enabled: true
    provider: "ollama"
    url: "http://ollama-service:11434"
    model: "granite4:latest"
```
The following table lists the configurable parameters and their default values:
| Parameter | Description | Default |
|---|---|---|
| `replicaCount` | Number of replicas | `1` |
| `image.repository` | Image repository | `ghcr.io/kaskol10/karpenter-optimizer` |
| `image.tag` | Image tag | `""` (uses appVersion) |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `service.type` | Service type | `ClusterIP` |
| `service.port` | Service port | `8080` |
| `ingress.enabled` | Enable ingress | `false` |
| `config.llm.enabled` | Enable LLM integration | `false` |
| `config.llm.provider` | LLM provider (`ollama` or `litellm`) | `ollama` |
| `config.llm.url` | LLM service URL | `""` |
| `config.llm.model` | Model name | `""` |
| `config.llm.apiKey` | API key (for LiteLLM) | `""` |
| `config.ollama.enabled` | Enable Ollama integration (legacy) | `false` |
| `config.ollama.url` | Ollama URL (legacy) | `""` |
| `config.ollama.model` | Ollama model (legacy) | `granite4:latest` |
| `config.aws.region` | AWS region | `us-east-1` |
| `resources.limits.cpu` | CPU limit | `500m` |
| `resources.limits.memory` | Memory limit | `512Mi` |
| `resources.requests.cpu` | CPU request | `100m` |
| `resources.requests.memory` | Memory request | `128Mi` |
See values.yaml for all available configuration options.
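Any of the parameters above can also be overridden in a custom values file rather than with repeated `--set` flags. A sketch with illustrative values:

```yaml
# my-values.yaml — override a few defaults
replicaCount: 2

resources:
  requests:
    cpu: 200m
    memory: 256Mi
```

Pass it at install or upgrade time with `-f my-values.yaml`.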
To remove the release:

```bash
helm uninstall karpenter-optimizer --namespace karpenter-optimizer
```
To upgrade to the latest chart version:

```bash
helm repo update
helm upgrade karpenter-optimizer karpenter-optimizer/karpenter-optimizer \
  --namespace karpenter-optimizer
```
Check that the pods are running and inspect the logs if something looks off:

```bash
# Pod status
kubectl get pods -n karpenter-optimizer

# Application logs
kubectl logs -n karpenter-optimizer -l app.kubernetes.io/name=karpenter-optimizer

# Service details
kubectl get svc -n karpenter-optimizer
```
The chart creates a ClusterRole and ClusterRoleBinding with the following permissions:
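The authoritative rules are in the chart's templates, but an optimizer that analyzes cluster usage typically needs read-only access along these lines (a sketch, not the chart's actual manifest; the resource list is an assumption):

```yaml
# Illustrative only — see the chart's templates for the real rules
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: karpenter-optimizer
rules:
  - apiGroups: [""]
    resources: ["nodes", "pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["karpenter.sh"]
    resources: ["nodepools", "nodeclaims"]
    verbs: ["get", "list", "watch"]
```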
For issues and questions: