OneMagnify is an AI-native, platform-enabled B2B digital agency operating at the intersection of data, technology, and creativity. As a Data Engineer, you'll build and maintain the data pipelines and integrations that power enterprise analytics and client-facing solutions, ensuring data reliability and quality for large B2B clients.
Responsibilities:
- Develop integrations between data sources and target systems, including ERPs, CRMs, and data warehouses, using Databricks and AWS-native services (Glue, Step Functions, Lambda)
- Configure, customize, and deploy data engineering applications that support multiple data domains reliably and at scale
- Leverage the Databricks Lakehouse platform — Delta Lake, Unity Catalog, and Spark-based processing — to optimize pipeline performance and maintainability
- Develop and enforce data cleansing and standardization guidelines that keep data consistent and trustworthy across systems
- Use strong SQL skills to validate, troubleshoot, and resolve data issues before they surface downstream
- Partner with data architects to set quality standards that the broader team can operate against
- Build integrations using APIs and modern pipeline approaches to connect systems that weren't designed to work together
- Align pipeline design with enterprise data flows in close collaboration with data scientists and analysts
- Ensure integrations are built for durability, not just initial delivery
- Work directly with business users and data stewards to diagnose and resolve data issues within the platform
- Translate technical pipeline behavior into clear explanations for non-engineering stakeholders
- Contribute to documentation and processes that make the data platform easier to use and maintain over time
Requirements:
- Bachelor's degree in Computer Science, Information Systems, or a related field — or equivalent professional experience
- 5+ years of hands-on experience developing or implementing data engineering solutions
- Strong SQL skills across data analysis, validation, and troubleshooting
- Hands-on experience with Databricks (Delta Lake, Unity Catalog, Spark) and AWS data services (Glue, Redshift, S3, Lambda, or Step Functions)
- Familiarity with APIs and integration methods for connecting systems across an enterprise
- Experience using Databricks MLflow or Feature Store to support AI/ML pipeline workflows
- Familiarity with marketing data ecosystems: CRM platforms, CDP architectures, or Martech/Adtech data flows
- Exposure to data observability or governance tooling (lineage tracking, data cataloging, pipeline monitoring)
- Experience in a digital agency, marketing services, or consulting setting, working across multiple concurrent client data environments
- Working knowledge of streaming data pipelines or event-driven architectures (e.g., Kafka, Kinesis)