Twilio is a communications platform company focused on shaping the future of customer communications. They are seeking a Staff Machine Learning Engineer to join their Trust Intelligence Platform team, a role centered on designing and operating cloud-native data and ML infrastructure that enhances customer interactions.
Responsibilities:
- Architect, implement, and maintain scalable data pipelines and feature stores for batch and real-time workloads
- Build reproducible ML training, evaluation, and inference workflows using modern orchestration and MLOps tooling
- Integrate event streams from Twilio products (e.g., Messaging, Voice, Segment) into unified, analytics-ready datasets
- Monitor, test, and improve data quality, model performance, latency, and cost
- Partner with product, data science, and security teams to ship resilient, compliant services
- Automate deployment with CI/CD, infrastructure-as-code, and container orchestration best practices
- Produce clear documentation, dashboards, and runbooks; share knowledge through code reviews and brown-bag sessions
- Embrace Twilio’s “We are Builders” values by taking ownership of problems and driving them to completion
Requirements:
- B.S. in Computer Science, Data Engineering, Electrical Engineering, Mathematics, or related field—or equivalent practical experience
- 4–8 years of experience building and operating data or ML systems in production
- Proficiency in Python and SQL; comfort with software engineering fundamentals (testing, version control, code reviews)
- Hands-on experience with ETL/ELT orchestration tools (e.g., Airflow, Dagster) and cloud data warehouses (Snowflake, BigQuery, or Redshift)
- Familiarity with ML lifecycle tooling such as MLflow, SageMaker, Vertex AI, or similar
- Working knowledge of Docker and Kubernetes and at least one major cloud platform (AWS, GCP, or Azure)
- Understanding of data modeling, distributed computing concepts, and streaming frameworks (Spark, Flink, or Kafka Streams)
- Strong analytical thinking, communication skills, and a demonstrated sense of ownership, curiosity, and continuous learning
Nice to have:
- Experience with Twilio Segment, Kafka/Kinesis, or other high-throughput event buses
- Exposure to infrastructure-as-code (Terraform, Pulumi) and GitHub-based CI/CD pipelines
- Practical knowledge of generative AI workflows, foundation-model fine-tuning, or vector databases
- Contributions to open-source data/ML projects or published technical presentations/blogs
- Domain experience in communications, marketing automation, or customer engagement analytics