Quilt Software builds tools that help local businesses compete in retail. The company is seeking a Senior Data Engineer to design and optimize its data platforms, enabling teams to make data-driven decisions while ensuring data quality and performance.
Responsibilities:
- Design, build, maintain, and optimize ETL/ELT data pipelines on Databricks and Spark
- Integrate data from multiple internal and external sources into a centralized data platform
- Design and maintain robust data models (e.g., star/snowflake schemas, data vault, dimensional models) to support analytics and self-service BI
- Establish and enforce data modeling standards and documentation
- Implement data quality checks, validation frameworks, and monitoring
- Tune queries and jobs for performance and cost efficiency in Databricks and downstream systems
- Partner with data analysts, data scientists, and product/engineering teams to understand data needs and translate them into technical solutions
- Provide technical leadership and mentorship to other data engineers; help review designs and code
- Contribute to and refine our data governance, security, and access control practices
- Drive best practices around version control, CI/CD for data, and code standards
Requirements:
- 7+ years of professional experience as a Data Engineer, Software Engineer, or similar role
- Strong hands-on experience with Databricks (or a very similar cloud data platform) including cluster management, jobs, and notebooks
- Advanced experience with Apache Spark for batch and/or streaming data processing
- Expert-level SQL skills (complex joins, window functions, query optimization)
- Strong Python skills for data engineering (e.g., PySpark, data processing libraries, scripting)
- Proven experience in data modeling and designing schemas for analytics and reporting
- Experience building and maintaining data pipelines in a cloud environment (AWS, Azure, or GCP)
- Strong understanding of data warehousing concepts, ETL/ELT best practices, and data lifecycle
- Solid software engineering fundamentals: version control (git), testing, code reviews, and CI/CD
- Excellent communication skills and the ability to collaborate with technical and non-technical stakeholders
- Experience in payments, fintech, banking, or broader financial services (e.g., transaction data, ledgers, risk, fraud, reconciliation)
- Experience with streaming technologies (e.g., Spark Structured Streaming, Kafka, Kinesis, or similar)
- Familiarity with dbt or similar transformations-as-code frameworks
- Experience with orchestration tools (e.g., Airflow, Databricks Workflows)
- Knowledge of BI tools (e.g., Power BI, Tableau, Looker) and how data models power them
- Exposure to machine learning workflows and supporting data science teams
- Experience implementing data governance, lineage, and catalog tools