Overview:
This intensive workshop is designed for experienced technology professionals with prior Kubernetes knowledge who want to master advanced use cases.
You will gain practical, hands-on experience deploying, managing, and scaling AI/ML workloads in a GitOps framework, and explore how Kubernetes can serve as a runtime for agentic AI systems.
Key Objectives:
By the end of the workshop, participants will:
✅ Deploy and scale GenAI and ML workloads on Kubernetes
✅ Implement observability and autoscaling with custom Prometheus metrics
✅ Package and deliver workloads using Helm and ArgoCD
✅ Extend Kubernetes with Operators and Custom Resource Definitions (CRDs)
✅ Use Kubernetes as a runtime for agentic AI systems
✅ Apply Kubernetes-native support for GPUs and AI enhancements
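For the GPU objective above, here is a minimal sketch of Kubernetes-native GPU scheduling. It assumes the NVIDIA device plugin is installed so that `nvidia.com/gpu` is an allocatable resource; the Pod name and image are placeholders.

```yaml
# Minimal sketch: requesting a GPU for a Pod.
# Assumes the NVIDIA device plugin is installed on the cluster so that
# nvidia.com/gpu is advertised as an allocatable (extended) resource.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-demo                  # placeholder name
spec:
  restartPolicy: Never
  containers:
    - name: check-gpu
      image: nvidia/cuda:12.4.1-runtime-ubuntu22.04   # placeholder image
      command: ["nvidia-smi"]     # prints the GPUs visible to the container, then exits
      resources:
        limits:
          nvidia.com/gpu: 1       # schedules the Pod onto a node with one free GPU
```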
Hands-on Labs and Demos:
**Hands-on Lab**: Run a containerized LLM inference service on Kubernetes (see the Deployment sketch below)
**Hands-on Lab**: Monitor and auto-scale AI APIs (see the HorizontalPodAutoscaler sketch below)
**Hands-on Lab**: Automate GenAI delivery across environments (see the Argo CD Application sketch below)
**Hands-on Lab**: Use LLM-powered agents to *assist* platform users with operational tasks, YAML generation, error resolution, and lifecycle automation.
**Hands-on Lab**:
- Using `kubectl-ai`
- Deploying a Kubernetes MCP Server (see the sketch below)
- Running agent-based workflows (not tied to Argo)
**Demo + Visuals**:
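For the LLM inference lab, a minimal sketch of a Deployment and Service, assuming the vLLM OpenAI-compatible server image; the image tag, model name, and GPU request are placeholders to adapt to your cluster.

```yaml
# Sketch: a single-replica containerized LLM inference service.
# Assumes the vLLM OpenAI-compatible server image and a GPU node;
# the image tag and model name are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-inference
spec:
  replicas: 1
  selector:
    matchLabels:
      app: llm-inference
  template:
    metadata:
      labels:
        app: llm-inference
    spec:
      containers:
        - name: vllm
          image: vllm/vllm-openai:latest                   # placeholder tag
          args: ["--model", "Qwen/Qwen2.5-1.5B-Instruct"]  # placeholder model, downloaded at startup
          ports:
            - containerPort: 8000                          # OpenAI-compatible HTTP API
          resources:
            limits:
              nvidia.com/gpu: 1                            # drop for a CPU-only model server
---
apiVersion: v1
kind: Service
metadata:
  name: llm-inference
spec:
  selector:
    app: llm-inference
  ports:
    - port: 80
      targetPort: 8000
```

Assuming the vLLM image above, in-cluster clients can then call the OpenAI-style endpoints (e.g. `/v1/chat/completions`) through the `llm-inference` Service.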
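For the monitoring and autoscaling lab, a sketch of an `autoscaling/v2` HorizontalPodAutoscaler driven by a custom metric. It assumes Prometheus plus the Prometheus Adapter expose a hypothetical per-pod `inference_requests_per_second` metric through the custom metrics API.

```yaml
# Sketch: scale the llm-inference Deployment on a custom Prometheus metric.
# Assumes the Prometheus Adapter exposes inference_requests_per_second
# (a hypothetical per-pod metric) via the custom.metrics.k8s.io API.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: llm-inference
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: llm-inference
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Pods
      pods:
        metric:
          name: inference_requests_per_second
        target:
          type: AverageValue
          averageValue: "10"      # target roughly 10 req/s per replica
```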
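For the GenAI delivery lab, a sketch of an Argo CD `Application` that deploys a Helm chart with per-environment values; the repository URL, chart path, and values file are placeholders.

```yaml
# Sketch: GitOps delivery of a GenAI workload with Argo CD.
# The repo URL, chart path, and values file are placeholders;
# one Application per target environment is a common pattern.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: llm-inference-staging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/genai-platform.git   # placeholder repo
    targetRevision: main
    path: charts/llm-inference                                   # placeholder chart path
    helm:
      valueFiles:
        - values-staging.yaml                                    # per-environment values
  destination:
    server: https://kubernetes.default.svc
    namespace: genai-staging
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```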
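For the agentic AI module, a sketch of running a Kubernetes MCP server in-cluster behind a least-privilege ServiceAccount. The image is a placeholder for whichever MCP server implementation the workshop uses, and the read-only RBAC is a deliberately conservative starting point.

```yaml
# Sketch: an in-cluster Kubernetes MCP server with read-only RBAC.
# The image, namespace, and port are placeholders.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mcp-server
  namespace: platform-tools
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: mcp-server-readonly
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "events", "namespaces"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mcp-server-readonly
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mcp-server-readonly
subjects:
  - kind: ServiceAccount
    name: mcp-server
    namespace: platform-tools
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcp-server
  namespace: platform-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mcp-server
  template:
    metadata:
      labels:
        app: mcp-server
    spec:
      serviceAccountName: mcp-server          # scopes the agent to the read-only role above
      containers:
        - name: mcp-server
          image: registry.example.com/kubernetes-mcp-server:latest   # placeholder image
          ports:
            - containerPort: 8080                                    # placeholder port
```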