Production Deployment with Helm
Learn how to deploy containerized applications to Kubernetes using Helm charts. Set up production monitoring, implement GitOps workflows, and configure autoscaling and high availability.
Overview #
In this final step, you’ll deploy your containerized application to Kubernetes using Helm charts. You’ll implement production-ready features including monitoring, logging, autoscaling, and continuous deployment workflows.
Step 1: Create Helm Chart Structure #
Initialize Helm Chart #
# Create a new Helm chart
helm create cloud-native-app-chart
cd cloud-native-app-chart
# Remove example files
rm -rf templates/tests/
rm templates/NOTES.txt
Customize Chart.yaml #
Edit Chart.yaml:
apiVersion: v2
name: cloud-native-app
description: A production-ready cloud-native web application
type: application
version: 1.0.0
appVersion: "1.0.0"
keywords:
  - web
  - nodejs
  - cloud-native
home: https://example.com/cloud-native-app
sources:
  - https://github.com/example/cloud-native-app
maintainers:
  - name: Your Name
    email: your.email@example.com
dependencies:
  - name: redis
    version: "17.3.7"
    repository: "https://charts.bitnami.com/bitnami"
    condition: redis.enabled
Step 2: Configure Application Values #
Edit values.yaml:
# Application configuration
app:
  name: cloud-native-app
  version: "1.0.0"
# Container image configuration
image:
  repository: cloud-native-app
  tag: "v1.0.0"
  pullPolicy: IfNotPresent
# Deployment configuration
replicaCount: 3
# Resource limits and requests
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 256Mi
# Service configuration
service:
  type: ClusterIP
  port: 3000
  targetPort: 3000
# Ingress configuration
ingress:
  enabled: true
  className: "nginx"
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/rate-limit: "100"
  hosts:
    - host: cloud-native-app.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: cloud-native-app-tls
      hosts:
        - cloud-native-app.example.com
# Horizontal Pod Autoscaler
autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 80
# Pod Disruption Budget
podDisruptionBudget:
  enabled: true
  minAvailable: 2
# Health checks
healthcheck:
  livenessProbe:
    httpGet:
      path: /health
      port: 3000
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 3
  readinessProbe:
    httpGet:
      path: /health
      port: 3000
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 3
    failureThreshold: 3
# Environment variables
env:
  NODE_ENV: production
  PORT: "3000"
# Container security context
securityContext:
  runAsNonRoot: true
  runAsUser: 1001
  runAsGroup: 1001
  readOnlyRootFilesystem: true
# Redis dependency
redis:
  enabled: true
  auth:
    enabled: false
  architecture: standalone
  master:
    persistence:
      enabled: false
# Monitoring and observability
monitoring:
  enabled: true
  serviceMonitor:
    enabled: true
    interval: 30s
    path: /metrics
# Logging
logging:
  level: info
  format: json
Step 3: Create Kubernetes Manifests #
Deployment Template #
Create templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "cloud-native-app.fullname" . }}
  labels:
    {{- include "cloud-native-app.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "cloud-native-app.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
      labels:
        {{- include "cloud-native-app.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ .Values.service.targetPort }}
              protocol: TCP
          env:
            {{- range $key, $value := .Values.env }}
            - name: {{ $key }}
              value: {{ $value | quote }}
            {{- end }}
          livenessProbe:
            {{- toYaml .Values.healthcheck.livenessProbe | nindent 12 }}
          readinessProbe:
            {{- toYaml .Values.healthcheck.readinessProbe | nindent 12 }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: var-cache
              mountPath: /var/cache
      volumes:
        - name: tmp
          emptyDir: {}
        - name: var-cache
          emptyDir: {}
Service Template #
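The deployment's checksum/config annotation hashes templates/configmap.yaml, so the chart will not render until that file exists. A minimal sketch that publishes the env map as a ConfigMap (adapt the data section to your needs):

```yaml
# templates/configmap.yaml -- required by the deployment's checksum/config
# annotation, which hashes this file so config changes trigger a rollout
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "cloud-native-app.fullname" . }}
  labels:
    {{- include "cloud-native-app.labels" . | nindent 4 }}
data:
  {{- range $key, $value := .Values.env }}
  {{ $key }}: {{ $value | quote }}
  {{- end }}
```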
Create templates/service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "cloud-native-app.fullname" . }}
  labels:
    {{- include "cloud-native-app.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.targetPort }}
      protocol: TCP
      name: http
  selector:
    {{- include "cloud-native-app.selectorLabels" . | nindent 4 }}
HPA Template #
Create templates/hpa.yaml:
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "cloud-native-app.fullname" . }}
  labels:
    {{- include "cloud-native-app.labels" . | nindent 4 }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "cloud-native-app.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- end }}
    {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
    {{- end }}
{{- end }}
Step 4: Deploy to Kubernetes #
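values.yaml enables a Pod Disruption Budget, but no template renders one. A minimal templates/pdb.yaml sketch that honors those values:

```yaml
# templates/pdb.yaml -- keeps at least minAvailable pods running
# during voluntary disruptions such as node drains
{{- if .Values.podDisruptionBudget.enabled }}
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: {{ include "cloud-native-app.fullname" . }}
  labels:
    {{- include "cloud-native-app.labels" . | nindent 4 }}
spec:
  minAvailable: {{ .Values.podDisruptionBudget.minAvailable }}
  selector:
    matchLabels:
      {{- include "cloud-native-app.selectorLabels" . | nindent 6 }}
{{- end }}
```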
Install Dependencies #
# Add Bitnami repository for Redis
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
# Update chart dependencies
helm dependency update
Deploy to Development #
# Create development namespace
kubectl create namespace development
# Deploy to development environment
helm upgrade --install cloud-native-app . \
--namespace development \
--set image.tag=v1.0.0 \
--set ingress.hosts[0].host=dev.cloud-native-app.local \
--set replicaCount=1 \
--set resources.requests.cpu=100m \
--set resources.requests.memory=128Mi
Deploy to Production #
# Create production namespace
kubectl create namespace production
# Deploy to production with production values
helm upgrade --install cloud-native-app . \
--namespace production \
--values values-production.yaml \
--wait
Step 5: Set Up Monitoring and Observability #
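The command above references values-production.yaml, which is not shown in this tutorial. A hypothetical starting point (every value here is an illustrative override of the defaults in values.yaml, so adjust hostnames and sizes to your environment):

```yaml
# values-production.yaml -- example production overrides (illustrative)
replicaCount: 3
image:
  tag: "v1.0.0"
ingress:
  enabled: true
  hosts:
    - host: cloud-native-app.example.com
      paths:
        - path: /
          pathType: Prefix
autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
redis:
  enabled: true
  master:
    persistence:
      enabled: true   # persist Redis data in production
      size: 8Gi
monitoring:
  enabled: true
```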
Install Prometheus and Grafana #
# Add Prometheus Helm repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# Install Prometheus stack
helm upgrade --install prometheus prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--create-namespace \
--set grafana.enabled=true \
--set grafana.adminPassword=admin123
Create ServiceMonitor #
Create templates/servicemonitor.yaml:
{{- if and .Values.monitoring.enabled .Values.monitoring.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ include "cloud-native-app.fullname" . }}
  labels:
    {{- include "cloud-native-app.labels" . | nindent 4 }}
spec:
  selector:
    matchLabels:
      {{- include "cloud-native-app.selectorLabels" . | nindent 6 }}
  endpoints:
    - port: http
      interval: {{ .Values.monitoring.serviceMonitor.interval }}
      path: {{ .Values.monitoring.serviceMonitor.path }}
{{- end }}
Step 6: Implement GitOps with ArgoCD #
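One common gotcha: by default, kube-prometheus-stack only discovers ServiceMonitors that carry its Helm release label. If metrics never appear in Prometheus, try adding the release label to the ServiceMonitor metadata (assuming the stack was installed under the release name prometheus, as in Step 5):

```yaml
# In templates/servicemonitor.yaml metadata -- the release label must
# match the kube-prometheus-stack Helm release name for discovery
  labels:
    {{- include "cloud-native-app.labels" . | nindent 4 }}
    release: prometheus
```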
Install ArgoCD #
# Install ArgoCD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# Get admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
Create Application Manifest #
Create argocd-application.yaml:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cloud-native-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/cloud-native-app-chart
    targetRevision: HEAD
    path: .
    helm:
      valueFiles:
        - values-production.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
Register it with ArgoCD by running kubectl apply -f argocd-application.yaml.
Step 7: Production Verification #
Health Check Commands #
# Check deployment status
kubectl get deployments -n production
kubectl get pods -n production
kubectl get services -n production
# Check HPA status
kubectl get hpa -n production
# Check application logs
kubectl logs -f deployment/cloud-native-app -n production
# Port forward for local testing
kubectl port-forward service/cloud-native-app 8080:3000 -n production
Load Testing #
# Install hey for load testing
go install github.com/rakyll/hey@latest
# Run load test
hey -n 10000 -c 100 http://localhost:8080/
# Monitor HPA during load test
watch kubectl get hpa -n production
Monitoring Dashboards #
Access Grafana dashboard:
# Port forward Grafana
kubectl port-forward service/prometheus-grafana 3000:80 -n monitoring
# Login with admin/admin123
# Import dashboard ID: 315 (Kubernetes cluster monitoring)
Step 8: Production Best Practices #
Security Checklist #
- Network policies implemented to restrict pod communication
- RBAC configured with least privilege access
- Pod security policies or Pod security standards enforced
- Secrets stored in Kubernetes secrets or external secret management
- Image scanning integrated into CI/CD pipeline
- TLS certificates automatically managed with cert-manager
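The first checklist item can be started with an allowlist-style NetworkPolicy. This hypothetical manifest (names and namespace labels are illustrative; adjust the selectors to your cluster) admits traffic to the app pods only from the ingress controller:

```yaml
# Hypothetical NetworkPolicy sketch: only the ingress-nginx namespace
# may reach the application pods on the HTTP port
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cloud-native-app-allow-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: cloud-native-app
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 3000
```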
Operational Checklist #
- Backup strategy implemented for persistent data
- Disaster recovery procedures documented and tested
- Monitoring and alerting configured for critical metrics
- Log aggregation set up with centralized logging
- Performance testing automated in CI/CD pipeline
- Documentation updated and accessible to the team
Performance Optimization #
# Check resource utilization
kubectl top pods -n production
kubectl top nodes
# Analyze network policies
kubectl get networkpolicies -n production
# Review resource quotas
kubectl describe resourcequota -n production
Conclusion #
Congratulations! You’ve successfully completed the cloud-native development tutorial. You now have:
✅ Development Environment: Docker, Kubernetes, and essential tools configured
✅ Containerized Application: Production-ready container with security best practices
✅ Production Deployment: Kubernetes deployment with Helm charts
✅ Monitoring & Observability: Prometheus, Grafana, and application metrics
✅ GitOps Workflow: ArgoCD for automated deployment management
✅ Autoscaling & HA: Horizontal pod autoscaling and high availability
Next Steps #
To continue your cloud-native journey, consider exploring:
- Service mesh technologies like Istio or Linkerd
- Advanced security with tools like Falco and OPA Gatekeeper
- Multi-cluster management with tools like Rancher or Crossplane
- Chaos engineering with tools like Chaos Monkey or Litmus
- Cost optimization strategies and FinOps practices
Resources #
- Kubernetes Documentation
- Helm Documentation
- Cloud Native Computing Foundation
- 12-Factor App Methodology
- Container Security Best Practices
You’re now ready to build and deploy production-grade cloud-native applications!
Key Takeaways #
- Helm charts provide templated, reusable Kubernetes deployments
- Horizontal Pod Autoscaling ensures applications scale with demand
- Monitoring and observability are critical for production systems
- GitOps workflows enable automated, auditable deployments
- Load testing validates application performance under stress
Troubleshooting #
- Helm chart deployment fails: check template syntax with 'helm template', verify image availability, and ensure proper RBAC permissions.
- Pods not starting after deployment: check pod logs with 'kubectl logs', verify resource limits, and ensure container images are accessible.
- HPA not scaling pods: verify metrics-server is installed, check that resource requests are defined, and monitor HPA status with 'kubectl get hpa'.