Develop integrations between data sources and target systems, including ERPs, CRMs, and data warehouses, using Databricks and AWS-native services (Glue, Step Functions, Lambda)
Configure, customize, and deploy data engineering applications that support multiple data domains reliably and at scale
Leverage the Databricks Lakehouse platform — Delta Lake, Unity Catalog, and Spark-based processing — to optimize pipeline performance and maintainability
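For context on the Databricks work above, a minimal sketch of a bronze-layer Delta Lake ingestion step, assuming a Databricks runtime; the S3 path, key column, and Unity Catalog table name are hypothetical placeholders, not references to an actual client environment:

```python
from pyspark.sql import SparkSession, functions as F

# getOrCreate() returns the pre-configured session on Databricks.
spark = SparkSession.builder.getOrCreate()

# Read raw JSON from a hypothetical S3 landing path.
raw = (
    spark.read
    .format("json")
    .load("s3://example-bucket/raw/orders/")
)

# Light standardization: dedupe on a hypothetical natural key and
# stamp each row with its ingestion time.
cleaned = (
    raw
    .dropDuplicates(["order_id"])
    .withColumn("ingested_at", F.current_timestamp())
)

# Append into a Delta table under a three-level Unity Catalog name.
(
    cleaned.write
    .format("delta")
    .mode("append")
    .saveAsTable("main.sales.orders_bronze")
)
```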
Establish and Uphold Data Quality
Develop and enforce data cleansing and standardization guidelines that keep data consistent and trustworthy across systems
Use strong SQL skills to validate, troubleshoot, and resolve data issues before they surface downstream (a sketch of such checks follows this list)
Partner with data architects to set quality standards that the broader team can operate against
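To make the SQL validation work concrete, a hedged sketch of fail-fast quality checks; the table and column names continue the hypothetical example above:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Each check is a SQL query that returns a single count of bad rows.
checks = {
    "null_order_ids": """
        SELECT COUNT(*) AS n
        FROM main.sales.orders_bronze
        WHERE order_id IS NULL
    """,
    "duplicate_order_ids": """
        SELECT COUNT(*) AS n
        FROM (
            SELECT order_id
            FROM main.sales.orders_bronze
            GROUP BY order_id
            HAVING COUNT(*) > 1
        ) AS dupes
    """,
}

failures = {}
for name, query in checks.items():
    n = spark.sql(query).first()["n"]
    if n > 0:
        failures[name] = n

# Fail fast so bad data never reaches downstream consumers.
if failures:
    raise ValueError(f"Data quality checks failed: {failures}")
```

In practice, checks like these would run as a pipeline task between ingestion and publication, so a failure stops the pipeline rather than surfacing in reports.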
Connect Systems Through Integration
Build integrations using APIs and modern pipeline approaches to connect systems that weren't designed to work together (see the sketch after this list)
Align pipeline design with enterprise data flows in close collaboration with data scientists and analysts
Ensure integrations are built for durability, not just initial delivery
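As one illustration of this kind of integration, a hypothetical AWS Lambda handler that pulls records from a CRM REST API and lands them as raw JSON in S3; the environment variables, endpoint, and S3 key layout are illustrative assumptions:

```python
import json
import os
import urllib.request

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Hypothetical configuration, injected via Lambda environment variables.
    url = os.environ["CRM_API_URL"]
    bucket = os.environ["LANDING_BUCKET"]

    # Pull the current batch of records from the CRM's REST endpoint.
    with urllib.request.urlopen(url, timeout=30) as resp:
        records = json.load(resp)

    # Land the raw payload in S3, keyed by request ID so reruns never
    # overwrite earlier batches.
    key = f"raw/crm/{context.aws_request_id}.json"
    s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(records))

    return {"records": len(records), "key": key}
```

Durability concerns such as retries, backoff, and alerting would typically live in the Step Functions state machine that invokes this handler rather than in the handler itself.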
Support Data Platform Users
Work directly with business users and data stewards to diagnose and resolve data issues within the platform
Translate technical pipeline behavior into clear explanations for non-engineering stakeholders
Contribute to documentation and processes that make the data platform easier to use and maintain over time
Requirements
Bachelor's degree in Computer Science, Information Systems, or a related field — or equivalent professional experience
5+ years of hands-on experience in data engineering development or implementation
Strong SQL skills across data analysis, validation, and troubleshooting
Hands-on experience with Databricks (Delta Lake, Unity Catalog, Spark) and AWS data services (Glue, Redshift, S3, Lambda, or Step Functions)
Familiarity with APIs and integration methods for connecting systems across an enterprise
Nice to Have
Experience with Databricks MLflow or Feature Store supporting AI/ML pipeline workflows
Familiarity with marketing data ecosystems: CRM platforms, CDP architectures, or Martech/Adtech data flows
Exposure to data observability or governance tooling (lineage tracking, data cataloging, pipeline monitoring)
Experience in a digital agency, marketing services, or consulting environment with multiple concurrent client data environments
Working knowledge of streaming data pipelines or event-driven architectures (e.g., Kafka, Kinesis)
Tech Stack
Amazon Redshift
AWS
Kafka
Spark
SQL
Unity Catalog
Benefits
Medical, dental, and vision coverage
A 401(k) retirement plan
Paid holidays
Flexible Time Off (FTO)
Additional programs focused on wellness, financial security, and professional growth