Advance Local is looking for a Data Engineer to build and maintain data pipelines and integration solutions for the cloud data platform. This role involves implementing data ingestion, transformation, and quality processes within Snowflake, AWS, and other platforms to support analytics and business decision-making.
Responsibilities:
- Implement data integration solutions, working with platform owners across business units to ensure seamless data flow
- Build and maintain data pipelines that ingest data from various sources
- Develop scalable data preparation pipelines that serve ML modeling needs, reducing manual data engineering work by the data science team
- Build and maintain ML feature pipelines and model deployment workflows in Snowflake, enabling efficient model iteration and production deployment
- Support rapid prototyping of new data products by building flexible pipeline components and proof-of-concepts, enabling quick iteration and validation of ideas
- Develop solutions for data audience modeling, leveraging advanced data engineering techniques to enhance targeting and personalization
- Collaborate with data product managers, data scientists, analysts, and other stakeholders to understand data requirements and translate them into technical solutions
- Develop data transformations and quality checks to ensure reliable, clean data for downstream analytics and business intelligence
- Develop and maintain documentation for data engineering processes and systems
- Implement monitoring, alerting, and logging for data pipelines and ML workflows to ensure reliability and quick issue resolution
- Troubleshoot and resolve data pipeline issues, escalating complex problems as needed
- Stay up to date with the latest data engineering technologies and industry trends
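To make the pipeline, quality-check, and monitoring responsibilities above concrete, here is a minimal illustrative sketch of that kind of work in Python. The field names and validation rules are assumptions made up for the example, not part of the role description:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("pipeline")

# Hypothetical required fields for an ingested record; the names are
# illustrative only, not taken from the job description.
REQUIRED_FIELDS = {"user_id", "event_type", "timestamp"}


def validate(record: dict) -> bool:
    """Quality check: every required field is present and non-null."""
    present = {k for k, v in record.items() if v is not None}
    missing = REQUIRED_FIELDS - present
    if missing:
        logger.warning("Dropping record, missing fields: %s", sorted(missing))
        return False
    return True


def transform(record: dict) -> dict:
    """Illustrative transformation: normalize event_type to lowercase."""
    out = dict(record)
    out["event_type"] = out["event_type"].lower()
    return out


def run_pipeline(records):
    """Validate and transform records, logging counts for monitoring."""
    clean = [transform(r) for r in records if validate(r)]
    logger.info("Processed %d of %d records", len(clean), len(records))
    return clean
```

In a production setting the same validate/transform/monitor pattern would typically run inside an orchestrated job (e.g., an Airflow task or a Snowflake task) rather than a standalone script.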
Requirements:
- Bachelor's degree in computer science, data engineering, information systems, or a related field
- Minimum three years' experience in data engineering, with demonstrated proficiency in SQL and data modeling
- Experience with ETL tools, data integration frameworks and building data pipelines in Snowflake (SQL, stored procedures, streams, tasks)
- Hands-on experience with AWS services (S3, Lambda, Glue) or similar cloud data services
- Experience working with data scientists to operationalize ML models and build model training/inference pipelines
- Strong proficiency in Python for data processing and automation
- Familiarity with version control and CI/CD practices
- Understanding of audience segmentation, analytics and business use cases
- Understanding of data quality, testing and validation approaches
- Knowledge of data orchestration tools (Airflow, dbt or similar)
- Familiarity with ML workflows and model deployment patterns
- Strong problem-solving and analytical abilities with attention to detail
- Ability to work collaboratively in cross-functional teams
- Excellent communication skills for working with both technical and non-technical stakeholders