Responsibilities
Integrate AI with engineering systems, repositories, CI/CD platforms, documentation, observability tools, and operational data sources
Deploy and maintain production-grade AI services
Ensure observability, monitoring, evaluation, and operational reliability
Optimize AI systems for latency, cost, and scalability
Maintain secure and maintainable integrations
Requirements
Strong software engineering fundamentals
Strong Python skills
Experience building backend systems, APIs, and automation tooling
Experience with distributed systems and Linux environments
Practical experience with LLM systems and AI tooling
Experience with RAG architectures, embeddings, vector search, workflow orchestration, and AI evaluation
Ability to build production systems, not just prototypes
Experience with CI/CD systems
Familiarity with Kubernetes, containers, observability tooling, infrastructure automation, and cloud-native environments
Understanding of operational workflows and engineering lifecycle challenges
Nice-to-Have
Experience with OpenStack ecosystems
Experience with large-scale monorepo or multi-repository environments
Experience with engineering analytics and developer productivity metrics
Familiarity with infrastructure observability and incident management systems
Tech Stack
Cloud
Distributed Systems
Kubernetes
Linux
OpenStack
Python
Benefits
Flexible hours and remote work options
Competitive compensation, with benefits varying by location and contract type
Recognition programs
Space for creativity and experimentation within the company’s goals
Supportive, engineering-driven culture with minimal bureaucracy
The chance to influence infrastructure decisions from day one
A smart, friendly team that values reliability, simplicity, and automation