Own uptime, reliability, and performance of services running on AWS + Kubernetes (EKS).
Design and implement self-healing infrastructure using automation and AI agents.
Build LLM-powered operational tooling using APIs such as the OpenAI API for intelligent alert triage, incident summarization, root cause analysis, and runbook automation.
Manage and scale Kubernetes workloads including deployments, autoscaling, and resource optimization.
Build and evolve observability systems involving metrics (Prometheus), dashboards (Grafana), logs (ELK / OpenSearch), and tracing (OpenTelemetry).
Define and enforce SLOs, SLAs, and error budgets tied to business metrics.
Automate infrastructure using Terraform and CI/CD pipelines.
Lead incident response, postmortems, and continuous reliability improvements.
Introduce chaos engineering practices to proactively test system resilience.
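To make the LLM-tooling responsibility above concrete, here is a minimal sketch of alert triage against the OpenAI chat completions REST endpoint. The alert fields, model name, and prompt wording are illustrative assumptions, not part of this role's stack.

```python
import json
import os
import urllib.request

# Real OpenAI REST endpoint; model and alert schema below are assumptions.
OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_triage_payload(alert: dict, model: str = "gpt-4o-mini") -> dict:
    """Turn a Prometheus-style alert into a chat-completion request body."""
    prompt = (
        "Summarize this alert in two sentences and suggest a likely root cause.\n"
        f"Alert: {alert.get('alertname')}\n"
        f"Severity: {alert.get('severity')}\n"
        f"Description: {alert.get('description')}"
    )
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def triage_alert(alert: dict) -> str:
    """Send the alert to the API and return the model's triage summary."""
    req = urllib.request.Request(
        OPENAI_URL,
        data=json.dumps(build_triage_payload(alert)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

In practice this would be wired into the alerting pipeline (e.g., as an Alertmanager webhook receiver) rather than called ad hoc.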
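The error-budget responsibility reduces to simple arithmetic: a 99.9% availability SLO over a 30-day window allows 0.1% of 43,200 minutes, i.e. 43.2 minutes of downtime. A small sketch of that math (the window and SLO values are examples):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime in minutes for an availability SLO over a window."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo) * total_minutes

def budget_remaining(slo: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative = budget blown)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

# e.g. 99.9% SLO over 30 days -> 43.2 minutes of budget;
# 21.6 minutes of downtime leaves half the budget.
```

Tying alerts to burn rate of this budget, rather than raw error counts, is what connects the SLO to business impact.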
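A lightweight entry point into chaos engineering is randomly terminating a pod and verifying the system recovers. A minimal, hedged sketch (pod names, namespace, and the dry-run wrapper are illustrative; real experiments belong behind guardrails such as Litmus or Chaos Mesh):

```python
import random
import subprocess

def pick_victim(pods: list, seed=None) -> str:
    """Choose a random pod to terminate (seed only for reproducible tests)."""
    return random.Random(seed).choice(pods)

def kill_pod_command(pod: str, namespace: str = "default") -> list:
    """Build the kubectl command that deletes the chosen pod."""
    return ["kubectl", "delete", "pod", pod, "-n", namespace]

def run_experiment(pods: list, namespace: str = "default",
                   dry_run: bool = True) -> list:
    """Return (and optionally execute) the fault-injection command."""
    cmd = kill_pod_command(pick_victim(pods), namespace)
    if not dry_run:
        subprocess.run(cmd, check=True)  # requires live cluster access
    return cmd
```

Running this against a deployment with multiple replicas should be invisible to users; if it is not, the experiment has found a resilience gap.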
Requirements
5+ years in SRE / DevOps / Platform Engineering.
Strong hands-on experience with AWS infrastructure at scale.
Experience operating production-grade Kubernetes clusters.
Proven ability to debug complex distributed systems under pressure.
Strong coding skills (Python or Go).
Experience implementing monitoring, alerting, and incident management systems.
Bonus (AI / LLM Focus)
Experience working with LLM APIs such as the OpenAI API.
Familiarity with agent frameworks like LangChain and AutoGen.
Experience building or experimenting with AI agents for DevOps / SRE workflows.
Exposure to AIOps or intelligent automation systems.