Fairway Independent Mortgage Corporation is a nationwide leader in the mortgage industry, committed to delivering personalized loan solutions. The company is seeking a Data Engineer to architect and scale its next-generation big data ecosystem, with a focus on designing ETL pipelines, integrating diverse data sources, and ensuring compliance with global standards.
Responsibilities:
- Build and maintain high-performance, scalable ETL/ELT pipelines using Databricks to ensure seamless data flow
- Architect robust integrations across a diverse ecosystem, including internal/external APIs, relational databases, and real-time streaming sources
- Implement transformation logic, data enrichment, and cleansing protocols to deliver analytics-ready datasets for downstream visualization
- Develop high-efficiency data models and schemas, applying advanced indexing and partitioning techniques to maximize query performance and scalability
- Oversee and optimize cloud-based infrastructure, leveraging containerization and distributed computing to process massive datasets
- Utilize big data frameworks to empower Machine Learning (ML) initiatives and collaborate with Data Science teams to deploy predictive models and statistical algorithms
- Design and implement zero-touch CI/CD automation, encompassing build, test, and release processes to ensure rapid, reliable deployments
- Execute rigorous data governance, lineage tracking, and access controls to ensure compliance with global privacy regulations and internal quality standards
- Maintain comprehensive technical documentation for system configurations, data mapping, and architectural processes
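The transformation and cleansing responsibilities above can be sketched in plain Python. This is an illustrative example only: the record fields (`loan_id`, `amount`, `origination_date`) and the enrichment rule are hypothetical, chosen to show the shape of an analytics-ready output rather than any actual Fairway schema.

```python
from datetime import date

def clean_and_enrich(raw_records):
    """Drop incomplete rows, normalize types, and add a derived field.

    Hypothetical schema: each record has 'loan_id', 'amount', and
    'origination_date' (ISO string); rows missing any field are dropped.
    """
    cleaned = []
    for rec in raw_records:
        if not all(rec.get(k) for k in ("loan_id", "amount", "origination_date")):
            continue  # cleansing: skip incomplete records
        origination = date.fromisoformat(rec["origination_date"])
        cleaned.append({
            "loan_id": str(rec["loan_id"]).strip().upper(),  # normalize key
            "amount": round(float(rec["amount"]), 2),        # normalize type
            "origination_year": origination.year,            # enrichment
        })
    return cleaned

raw = [
    {"loan_id": " fw-101 ", "amount": "250000", "origination_date": "2023-04-15"},
    {"loan_id": "FW-102", "amount": None, "origination_date": "2023-05-01"},
]
print(clean_and_enrich(raw))
# → [{'loan_id': 'FW-101', 'amount': 250000.0, 'origination_year': 2023}]
```

In a Databricks pipeline the same logic would typically be expressed as Spark DataFrame transformations, but the cleanse-then-enrich structure is the same.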
Requirements:
- 6+ years of proven experience architecting and developing enterprise-scale data solutions
- Extensive hands-on experience with the Azure data stack, including Synapse, Data Factory, Databricks, Cosmos DB, and Azure SQL
- Expert-level command of SQL and Python (required); additional proficiency in JavaScript, R, VBA, or web technologies (HTML, CSS, JSON) is highly valued
- Demonstrated success in building and managing complex ETL/ELT pipelines within both Databricks and modern Business Intelligence environments (e.g., Power BI, Sisense, Tableau, or Domo)
- Deep experience leveraging API calls to ingest, synchronize, and automate data flow between disparate third-party and internal systems
- Ability to bridge the gap between backend data engineering and front-end visualization, specifically optimizing Power BI for high-performance reporting
- A track record of designing, testing, and scaling automated data systems that drive operational efficiency, measurable cost savings, and improved business outcomes
- Experience working directly with business partners to translate high-level goals into technical workflows that enhance decision-making
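As a hedged illustration of the API-driven ingestion and synchronization skills listed above, the sketch below walks a paginated endpoint and upserts results into a local store keyed by record id. Here `fetch_page` is a stand-in for a real HTTP client call (e.g. via `requests`), and the page size and payload shape are assumptions:

```python
def fetch_page(page, page_size=2):
    """Stand-in for a real paginated API call; returns one page of records."""
    data = [
        {"id": 1, "status": "open"},
        {"id": 2, "status": "open"},
        {"id": 3, "status": "closed"},
    ]
    start = page * page_size
    return data[start:start + page_size]

def sync_all(store):
    """Pull every page and upsert records into `store` keyed by id."""
    page = 0
    while True:
        batch = fetch_page(page)
        if not batch:
            break  # an empty page signals the end of the result set
        for rec in batch:
            store[rec["id"]] = rec  # upsert: insert new or overwrite existing
        page += 1
    return store

store = sync_all({})
print(sorted(store))
# → [1, 2, 3]
```

Keying the store by id makes repeated syncs idempotent, which is what lets this pattern keep disparate third-party and internal systems consistent across runs.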