A-Line Staffing Solutions is seeking a Senior Data Engineer to support Data Warehousing in a Databricks and Azure SQL environment. The role involves designing, building, and maintaining scalable data pipelines and models to enable analytics and reporting, while ensuring data quality and compliance with regulations.
Responsibilities:
- Create and maintain optimal data pipeline patterns and architecture
- Assemble large, complex data sets that meet functional and non-functional business requirements
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability
- Build data pipelines that extract, transform, and load data from a wide variety of data sources using Databricks and Azure technologies (a minimal sketch of this pattern follows this list)
- Design and implement data models
- Create automated tests to continuously monitor the quality of the data models
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
- Keep data separated and secure across Azure regions, ensuring compliance with HIPAA, HITECH, and other applicable regulations
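To illustrate the pipeline pattern referenced above, the following is a minimal sketch only, assuming a Databricks runtime with Delta Lake available; the storage path, schema, table name, and column names (claims, claim_id, claim_amount) are hypothetical placeholders rather than details of this role.

```python
# Minimal sketch, assuming a Databricks runtime with Delta Lake.
# Path, table, and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Extract: read raw files landed in ADLS Gen2 (path is an assumption)
raw = (
    spark.read.format("csv")
    .option("header", True)
    .option("inferSchema", True)
    .load("abfss://raw@examplelake.dfs.core.windows.net/claims/")
)

# Transform: standardize types, drop malformed rows, add an ingest date for partitioning
cleaned = (
    raw.withColumn("claim_amount", F.col("claim_amount").cast("decimal(18,2)"))
       .filter(F.col("claim_id").isNotNull())
       .withColumn("ingest_date", F.current_date())
)

# Load: append to a Delta table (assumes the bronze schema exists);
# Delta enforces the existing table schema on write
(
    cleaned.write.format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .saveAsTable("bronze.claims")
)
```

Appending to a partitioned Delta table keeps writes incremental, while Delta's schema enforcement rejects records that do not match the table definition.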
Requirements:
- Bachelor's degree in computer science, statistics, informatics, information systems, or another quantitative field
- 10+ years of experience in a Data Engineer or BI Developer role
- Extensive SQL experience, including complex query development, SSIS, performance tuning, and optimization in Azure SQL and distributed query environments
- Experience with Azure cloud services and related technologies: Databricks, Azure Data Factory, Azure SQL Database, Azure Data Lake Storage Gen2, PySpark, and Logic Apps
- Experience with object-oriented/functional scripting languages: Python, Scala
- Experience with structured, semi-structured, and unstructured data
- Strong hands-on experience designing and building ETL/ELT pipelines using Databricks, leveraging Apache Spark (PySpark/Spark SQL) for large-scale data ingestion, transformation, and processing
- Deep understanding of data lake and lakehouse architectures, including structured, semi-structured, and incremental data ingestion patterns using Delta Lake, partitioning, and schema enforcement (see the incremental-ingestion sketch after this list)
- Proven ability to design and implement analytics-ready data models (star and snowflake dimensional models) to support Power BI and other BI/analytics consumption patterns
- Experience managing metadata, data lineage, dependencies, and workload orchestration, ensuring reliable and repeatable data pipelines across development and production environments
- Strong analytical and troubleshooting skills, with the ability to perform root cause analysis across source systems, data pipelines, and downstream reporting to resolve data quality and performance issues
- Ability to collaborate closely with data architects, DBAs, BI developers, and data scientists to align engineering solutions with enterprise architecture and analytics standards
- Proactive mindset with the ability to identify data quality risks, pipeline failures, scalability constraints, and performance bottlenecks, and escalate or remediate them early
- Detail-oriented, with strong documentation, versioning, and governance discipline, supporting maintainable, auditable, and compliant data solutions
- Comfortable working in an Agile, cross-functional environment, managing multiple pipelines and priorities while maintaining production stability
- Customer-focused approach to delivering reliable, scalable, and business-ready data assets that support operational and strategic decision making
- Strategic thinker who understands how data engineering choices impact cost, performance, scalability, and business outcomes
- Ability and willingness to mentor junior data engineers, promote Spark and Databricks best practices, and contribute to team technical standards
- Experience designing and engineering Databricks Delta tables, Spark Declarative Pipelines, and Jobs/Workflows
- Experience developing and maintaining APIs and web services
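As a sketch of the incremental-ingestion pattern mentioned in the requirements, the snippet below assumes Delta Lake's Python API on Databricks; the table names (bronze.customer_updates, silver.customers) and the key column (customer_id) are hypothetical examples, not systems referenced by this posting.

```python
# Minimal sketch, assuming Delta Lake's Python API on a Databricks runtime.
# Table and key names below are hypothetical placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incremental batch of changed rows, e.g. produced by an upstream extract
updates = spark.table("bronze.customer_updates")

# Target Delta table holding the current, analytics-ready state
target = DeltaTable.forName(spark, "silver.customers")

# Upsert: update rows whose keys already exist, insert the rest
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

A MERGE-based upsert keeps repeated pipeline runs idempotent: existing keys are updated in place and new keys are inserted, which supports reliable, repeatable loads across development and production environments.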