Storable, a company redefining property management for specialty real estate, is seeking a Senior Data DevOps Engineer to enhance its data platform and automation capabilities on AWS. The role focuses on designing, implementing, and optimizing data pipelines and infrastructure while collaborating with teams across the company to establish a secure, high-performance data ecosystem.
Responsibilities:
- Architect, implement, and maintain AWS-based data infrastructure leveraging DMS, Redshift, Glue, and Athena
- Build and manage Airflow DAGs for orchestrating ETL/ELT workflows
- Automate data migrations and integrations with AWS DMS and Glue
- Develop and maintain Terraform modules for data infrastructure
- Design and manage CI/CD pipelines for data workflows and infrastructure changes
- Optimize Redshift clusters, Glue jobs, and Athena queries for performance and cost efficiency
- Ensure reliable data delivery pipelines that power Looker dashboards and reports
- Implement observability (logging, monitoring, alerting) for data pipelines and services
- Enforce IAM, security policies, and compliance standards across the data ecosystem
- Collaborate with data engineers and analysts to improve data reliability, scalability, and governance
- Mentor junior engineers and disseminate best practices in DevOps/DataOps
Requirements:
- 7+ years of experience in DevOps, DataOps, or Cloud Data Engineering
- Strong hands-on expertise with AWS data services: DMS for database migrations/replication, Redshift for warehousing and performance tuning, Glue for ETL workflows, Athena for query optimization over S3
- Strong experience with Apache Airflow for pipeline orchestration
- Proficiency in Terraform for infrastructure as code (IaC)
- Scripting/automation expertise in Python and Bash
- Strong CI/CD knowledge (GitLab CI, GitHub Actions, Jenkins, or ArgoCD)
- Familiarity with Looker (modeling best practices, ensuring reliable data delivery to dashboards)
- Strong experience with monitoring and observability for data platforms (Grafana, Prometheus, Datadog, ELK)
- Solid understanding of cloud security, IAM, and networking in AWS
- Experience with streaming data platforms (Kafka, Kinesis)
- Exposure to MLOps tools (SageMaker, ML pipelines)
- Strong SQL skills and LookML knowledge for Looker optimization
- Prior experience leading or mentoring DevOps/DataOps engineers
- Knowledge of cost optimization and data governance frameworks in AWS