ElevenLabs is an AI research and product company transforming how we interact with technology. They are seeking an experienced AI Safety Engineer to drive the deployment and operationalization of automated moderation and guardrail systems that protect their platform and users.
Responsibilities:
- Design and build scalable backend infrastructure for content moderation, abuse detection, and agent guardrails, deploying AI/ML models into production systems
- Architect robust APIs, data pipelines, and service architectures supporting real-time and batch moderation workflows
- Implement comprehensive monitoring, alerting, and observability systems; establish SLIs, SLOs, and performance benchmarks
- Partner with ML engineers to translate research models into production-ready systems and integrate them across the product suite
- Drive technical decisions and shape the safety roadmap, defining how the next generation of platform guardrails should be built for scale and precision
Requirements:
- 6+ years of backend software engineering experience building production systems at scale
- Strong production backend experience: distributed systems, APIs, data pipelines, and Python expertise (asynchronous Python, backend frameworks)
- Infrastructure & DevOps proficiency: cloud platforms (AWS/GCP), containerization (Docker/K8s), CI/CD pipelines
- Observability mindset: experience with monitoring tools (e.g., Prometheus, Grafana) and building observable systems
- Track record of taking products or systems from 0→1 with measurable impact, including deploying or working alongside ML/AI systems in production
- Trust & Safety, Content Moderation, or Integrity engineering experience
- MLOps experience: deployment, monitoring, and versioning of ML models
- Experience with SQL, data analysis tools, real-time streaming systems (e.g., Kafka, Redis), or event-driven architectures
- Familiarity with React or modern frontend frameworks