Jobot is a renowned organization at the forefront of technological innovation, committed to delivering cutting-edge data solutions and digital products. The company is seeking a highly skilled Senior Data Engineer to architect, deploy, and maintain next-generation data-driven platforms, bridging the gap between data science and production engineering.
Responsibilities:
- Design, build, and maintain robust data ingestion and transformation pipelines to support machine learning workloads and analytics
- Leverage modern orchestration tools to ensure reliable, observable, and scalable data flows
- Develop clean, efficient, and well-tested Python code for automation, infrastructure tooling, and service integration
- Create shared libraries and glue services that connect cloud-native components to streamline data operations
- Design, develop, and deploy high-performance APIs using frameworks like FastAPI or Flask to serve ML predictions and core application logic (a minimal illustrative sketch follows this list)
- Own end-to-end MLOps pipelines, including continuous training, deployment, versioning, and monitoring of machine learning models such as recommendation or personalization systems
- Architect and maintain scalable, fault-tolerant infrastructure utilizing Kubernetes (GKE) within Google Cloud Platform, ensuring high reliability and cost efficiency
- Collaborate closely with data scientists, software engineers, and platform teams to deliver integrated solutions
- Provide technical leadership and mentorship to junior engineers, fostering a culture of continuous learning and improvement
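To make the API-serving responsibility above concrete, here is a minimal, hypothetical sketch of a FastAPI prediction endpoint. The request/response fields and the `score()` helper are illustrative placeholders standing in for a real trained model, not Jobot's actual service.

```python
# Minimal sketch of a FastAPI endpoint serving a model prediction.
# `score()` and the field names are hypothetical placeholders; a real
# service would run inference with a model loaded from a registry.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictionRequest(BaseModel):
    user_id: str
    features: list[float]

class PredictionResponse(BaseModel):
    user_id: str
    score: float

def score(features: list[float]) -> float:
    # Stand-in for real model inference.
    return sum(features) / max(len(features), 1)

@app.post("/predict", response_model=PredictionResponse)
def predict(request: PredictionRequest) -> PredictionResponse:
    return PredictionResponse(user_id=request.user_id, score=score(request.features))
```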
Requirements:
- Over 5 years of professional experience in Data Engineering, Software Engineering, or Cloud Engineering
- Extensive expertise in Python for application development, data processing, and automation tasks
- Proven experience in building and deploying production-grade backend services and APIs using frameworks like FastAPI, Flask, or Django
- Strong SQL skills, including designing and optimizing schemas for relational and analytical data stores such as BigQuery and Cloud SQL
- Hands-on experience with data orchestration tools like Dagster or Airflow (see the pipeline sketch after this list)
- Deep knowledge of Google Cloud Platform services including BigQuery, Pub/Sub, Vertex AI, and Compute Engine
- Expertise in containerization and orchestration technologies such as Docker and Kubernetes, including Helm deployments
- Experience with Infrastructure as Code tools like Terraform or Crossplane
- CI/CD pipeline experience using GitHub Actions or similar tools
- Knowledge of observability stacks such as Prometheus, Grafana, and Cloud Logging
- Understanding of cloud security principles and enterprise compliance standards
- Experience supporting production MLOps workflows, including model monitoring, drift detection, and automated retraining
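As a rough illustration of the orchestration experience listed above, the following is a minimal Dagster asset pipeline: one ingestion asset feeding one transformation asset. The asset names and the in-memory DataFrame are hypothetical stand-ins for real sources such as Pub/Sub or BigQuery.

```python
# Minimal sketch of a Dagster asset pipeline. Names and data are
# illustrative placeholders, not a production configuration.
import pandas as pd
from dagster import Definitions, asset

@asset
def raw_events() -> pd.DataFrame:
    # In a real pipeline this would read from Pub/Sub, GCS, or BigQuery.
    return pd.DataFrame({"user_id": [1, 1, 2], "clicks": [3, 2, 5]})

@asset
def daily_click_totals(raw_events: pd.DataFrame) -> pd.DataFrame:
    # Simple transformation feeding downstream analytics or ML features.
    return raw_events.groupby("user_id", as_index=False)["clicks"].sum()

defs = Definitions(assets=[raw_events, daily_click_totals])
```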