Jobot is a leading organization dedicated to leveraging cutting-edge technology, data, and digital products to drive innovation and deliver impactful solutions. They are seeking a highly motivated Senior Data Engineer to lead the architecture, deployment, and operation of next-generation, data-driven platforms.
Responsibilities:
- Design, build, and maintain robust data ingestion and transformation pipelines to support machine learning workloads
- Leverage modern orchestration tools to ensure reliable, observable data flows across systems
- Develop clean, efficient, and well-tested Python code for automation, infrastructure tooling, and service integration
- Create shared libraries and glue services that connect various cloud-native components
- Design, develop, and deploy high-performance Python APIs (FastAPI / Flask) to serve machine learning predictions and core application logic
- Own end-to-end pipelines for continuous training, deployment, versioning, and monitoring of ML models such as recommendation or personalization systems
- Architect and maintain scalable, fault-tolerant infrastructure using Kubernetes (GKE) within Google Cloud Platform, ensuring optimal performance and cost efficiency
- Collaborate closely with data scientists, software engineers, and platform teams to align engineering efforts and provide technical leadership
- Mentor junior engineers, fostering a culture of continuous learning and technical excellence
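The ingestion-and-transformation responsibility above can be illustrated with a minimal, self-contained sketch. All names here (`Event`, `ingest`, `transform`, the field names, and the threshold) are hypothetical illustrations, not details from the posting; a production pipeline would route bad rows to a dead-letter sink rather than silently skipping them.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, List

@dataclass
class Event:
    user_id: str
    score: float

def ingest(raw_rows: Iterable[dict]) -> Iterator[Event]:
    """Validate raw rows and yield typed events, skipping malformed input."""
    for row in raw_rows:
        try:
            yield Event(user_id=str(row["user_id"]), score=float(row["score"]))
        except (KeyError, TypeError, ValueError):
            continue  # illustrative only: production code would dead-letter these

def transform(events: Iterable[Event], threshold: float = 0.5) -> List[Event]:
    """Keep events at or above a score threshold, normalized by the max kept score."""
    kept = [e for e in events if e.score >= threshold]
    max_score = max((e.score for e in kept), default=1.0)
    return [Event(e.user_id, e.score / max_score) for e in kept]

raw = [
    {"user_id": "a", "score": "0.9"},
    {"user_id": "b"},                    # malformed: missing score, dropped
    {"user_id": "c", "score": 0.6},
]
result = transform(ingest(raw))
```

An orchestration layer such as Dagster or Airflow would typically wrap `ingest` and `transform` as separate, observable steps.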
Requirements:
- 5+ years of professional experience in Data Engineering, Software Engineering, or Cloud Engineering
- Deep expertise in Python for application development, data processing, and automation tasks
- Proven experience in building and deploying production-grade backend services and APIs using frameworks such as FastAPI, Flask, or Django
- Strong SQL skills with experience designing and optimizing schemas for relational and analytical data stores like BigQuery and Cloud SQL
- Hands-on experience with data orchestration tools such as Dagster or Airflow
- Extensive experience designing and operating services within Google Cloud Platform, including BigQuery, Pub/Sub, Vertex AI, and Compute Engine
- Expert-level knowledge of Docker and Kubernetes, including Helm-based deployments
- Experience with Infrastructure as Code tools such as Terraform or Crossplane
- CI/CD experience using GitHub Actions or similar tools
- Familiarity with observability stacks like Prometheus, Grafana, and Cloud Logging
- Understanding of cloud security principles and enterprise compliance requirements
- Direct experience supporting production MLOps workflows, including model monitoring, drift detection, and automated retraining
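As a rough illustration of the drift-detection requirement, here is a minimal sketch of a population stability index (PSI) check over model score distributions. The bucket count, the 1e-6 floor, and the sample data are illustrative assumptions; the ~0.2 threshold is a common rule of thumb, not a standard.

```python
import math
from typing import Sequence

def psi(expected: Sequence[float], actual: Sequence[float], buckets: int = 10) -> float:
    """Population Stability Index between baseline and live score samples.
    Values above roughly 0.2 are commonly read as significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / buckets or 1.0

    def frac(xs: Sequence[float], i: int) -> float:
        left, right = lo + i * width, lo + (i + 1) * width
        n = sum(1 for x in xs if left <= x < right or (i == buckets - 1 and x == hi))
        return max(n / len(xs), 1e-6)  # floor avoids log(0) on empty buckets

    return sum(
        (frac(actual, i) - frac(expected, i)) * math.log(frac(actual, i) / frac(expected, i))
        for i in range(buckets)
    )

baseline = [i / 100 for i in range(100)]       # uniform training-time scores
shifted = [0.5 + i / 200 for i in range(100)]  # live scores shifted upward
```

In an MLOps workflow, a PSI value above the chosen threshold would typically trigger an alert or an automated retraining run.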