Storable is redefining property management for specialty real estate. They are seeking a Senior Data DevOps Engineer to enhance their data platform and automation capabilities on AWS, with a focus on scalability, reliability, and automation.
Responsibilities:
- Architect, implement, and maintain AWS-based data infrastructure leveraging DMS, Redshift, Glue, and Athena
- Build and manage Airflow DAGs for orchestrating ETL/ELT workflows
- Automate data migrations and integrations with AWS DMS and Glue
- Develop and maintain Terraform modules for data infrastructure
- Design and manage CI/CD pipelines for data workflows and infrastructure changes
- Optimize Redshift clusters, Glue jobs, and Athena queries for performance and cost efficiency
- Ensure reliable data delivery pipelines that power Looker dashboards and reports
- Implement observability (logging, monitoring, alerting) for data pipelines and services
- Enforce IAM, security policies, and compliance standards across the data ecosystem
- Collaborate with data engineers and analysts to improve data reliability, scalability, and governance
- Mentor junior engineers and disseminate best practices in DevOps/DataOps
Requirements:
- 7+ years of experience in DevOps, DataOps, or Cloud Data Engineering
- Strong hands-on expertise with AWS data services:
  - DMS for database migrations/replication
  - Redshift for warehousing and performance tuning
  - Glue for ETL workflows
  - Athena for query optimization over S3
- Strong experience with Apache Airflow for pipeline orchestration
- Proficiency in Terraform for infrastructure as code (IaC)
- Scripting/automation expertise in Python and Bash
- Strong CI/CD knowledge (GitLab CI, GitHub Actions, Jenkins, or ArgoCD)
- Familiarity with Looker (modeling best practices, ensuring reliable data delivery to dashboards)
- Strong experience with monitoring and observability for data platforms (Grafana, Prometheus, Datadog, ELK)
- Solid understanding of cloud security, IAM, and networking in AWS
- Experience with streaming data platforms (Kafka, Kinesis)
- Exposure to MLOps tools (SageMaker, ML pipelines)
- Strong SQL skills and LookML knowledge for Looker optimization
- Prior experience leading or mentoring DevOps/DataOps engineers
- Knowledge of cost optimization and data governance frameworks in AWS