NuView Health helps growth-stage organizations accelerate insights from their data through analytics and data science. The firm is seeking a Data Engineer to build scalable data infrastructure, design pipelines, and ensure data quality for its clients. This role combines technical execution with strategic thinking and requires close collaboration with analysts and data scientists.
Responsibilities:
- Design, build, and maintain scalable data pipelines for clients across industries
- Architect and optimize cloud data warehouse solutions, adapting to each client's stack, which may include Snowflake, BigQuery, Redshift, Microsoft Fabric, or similar platforms
- Lead data integration projects from source system to analytical layer, including scoping, delivery, and handoff
- Work fluidly across a range of modern data tools and platforms as client engagements demand, picking up new technologies quickly and applying best practices regardless of the toolset
- Collaborate with analysts and data scientists to ensure data is clean, reliable, and well-modeled
- Champion data quality, testing, and observability best practices across client engagements
- Produce and maintain clear technical documentation including pipeline architecture, data dictionaries, lineage maps, and runbooks so clients can understand and own their infrastructure long-term
- Document engineering decisions, standards, and workflows in a way that supports knowledge transfer to both clients and junior team members
- Research and evaluate new technologies and advocate for tooling investments that benefit the firm
- Train and mentor junior team members on engineering standards, pipeline design, and best practices
- Participate in client-facing communication, including requirements gathering and progress updates
- Flex into analyst-side deliverables when capacity allows, such as Power BI dashboard development, ad-hoc reporting, or data visualization; we're a lean team and value versatility
Example Projects:
- ETL/ELT pipeline development and optimization
- Data warehouse modeling (dimensional, medallion/lakehouse architectures)
- Data integration across client source systems such as CRM, ERP, marketing, and operational platforms
- Infrastructure setup across the modern data stack (ingestion, transformation, orchestration)
- Implementations across platforms such as Microsoft Fabric, Databricks, and Snowflake, meeting clients where they are
- Data modeling and deployment across medallion architecture layers (bronze, silver, gold)
- Data quality frameworks and automated pipeline testing
- Cloud infrastructure provisioning and cost optimization (Azure, AWS, GCP)
- Technical documentation projects including data dictionaries, ER diagrams, lineage documentation, and metrics catalogs
- Power BI semantic model development and dashboard support when business needs require it
Requirements:
- Bachelor's Degree in Computer Science, Engineering, Mathematics, or a related field
- 2–5+ years of relevant data engineering or software engineering experience
- SQL Expert: complex query authoring, query optimization, stored procedures
- Python Required: pipeline scripting, automation, data processing
- Transformation Tools: dbt required; Spark experience a plus
- Ingestion Tools: Fivetran, Airbyte, Rivery, Microsoft Fabric Data Factory, or similar
- Orchestration: Airflow, Prefect, Azure Data Factory, Microsoft Fabric, or equivalent
- Cloud Platforms: Azure (preferred), AWS, or GCP experience
- Data Warehouses: Snowflake, BigQuery, Redshift, Microsoft Fabric, Azure Synapse, or equivalent
- Version Control: Git required; branching strategies, pull requests, and code review workflows
- Strong communication skills with the ability to translate technical concepts for non-technical stakeholders
- Self-starter who thrives in a remote environment and can manage multiple client workstreams
- Player-coach mindset: capable of leading projects while growing junior teammates
- Intellectually curious about evolving data tooling, architecture patterns, and AI-augmented engineering