TEKsystems, a leading provider of business and technology services, is seeking a data engineer to support a large healthcare client. The role involves designing scalable data pipelines, developing integration and transformation jobs, and providing technical guidance to junior engineers.
Responsibilities:
- Drive scope definition, requirements analysis, data and technical design, pipeline build, product configuration, unit testing, and production deployment
- Design scalable ingestion processes to bring on‑prem, API‑driven, third‑party, and end‑user‑generated data into a common cloud infrastructure
- Design reusable assets, components, standards, frameworks, and processes to accelerate and streamline data integration projects
- Develop data integration and transformation jobs using Python, SQL, and ETL/ELT tools (a minimal sketch of this kind of job appears after this list)
- Build infrastructure for optimal extraction, transformation, and loading of data from a wide variety of sources
- Build processes supporting data transformation, data structures, metadata management, dependency tracking, and workload orchestration
- Develop and implement scripts for data process maintenance, monitoring, and performance tuning
- Test and document data processes through data validation and verification procedures
- Ensure delivered solutions meet technical, functional, and non‑functional requirements
- Provide technical guidance and mentorship to junior engineers, promoting best practices in data engineering
- Design, build, and maintain scalable data pipelines using HVR and Fivetran
- Develop and optimize Snowflake data models, warehouses, and transformations
- Write clean, efficient SQL for data extraction, transformation, and analysis
- Monitor pipeline performance and troubleshoot issues in real time
- Contribute to best practices around data governance, documentation, and security
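To give a concrete sense of the pipeline and Snowflake work listed above, the sketch below shows a minimal Python + SQL transformation job. It is illustrative only and not part of the client's actual stack: the table names (stg_claims, dim_claims), column names, file path, and connection placeholders are hypothetical, it assumes the snowflake-connector-python package, and a production job would normally land data via Fivetran/HVR and run under an orchestrator rather than as a standalone script.

```python
"""Minimal sketch of a Python + SQL transformation job (illustrative only).

Assumptions: a CSV already landed by an ingestion tool, a Snowflake account,
and hypothetical table/column names. Credentials are placeholders.
"""
import csv
from datetime import datetime, timezone

import snowflake.connector  # pip install snowflake-connector-python


def read_landed_file(path):
    """Read a landed CSV and apply light cleanup/standardization."""
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            yield {
                "claim_id": row["claim_id"].strip(),
                "member_id": row["member_id"].strip().upper(),
                "amount": float(row["amount"] or 0),
                "loaded_at": datetime.now(timezone.utc).isoformat(),
            }


def load_to_snowflake(rows):
    """Stage rows and merge them into a target table (idempotent upsert)."""
    conn = snowflake.connector.connect(
        account="<account>", user="<user>", password="<password>",
        warehouse="<warehouse>", database="<database>", schema="<schema>",
    )
    try:
        cur = conn.cursor()
        cur.execute(
            "CREATE TEMPORARY TABLE stg_claims "
            "(claim_id STRING, member_id STRING, amount NUMBER(12,2), loaded_at TIMESTAMP_TZ)"
        )
        rows = list(rows)
        if rows:
            cur.executemany(
                "INSERT INTO stg_claims VALUES "
                "(%(claim_id)s, %(member_id)s, %(amount)s, %(loaded_at)s)",
                rows,
            )
        # MERGE keeps the load re-runnable without duplicating claims.
        cur.execute("""
            MERGE INTO dim_claims t
            USING stg_claims s ON t.claim_id = s.claim_id
            WHEN MATCHED THEN UPDATE SET t.amount = s.amount, t.loaded_at = s.loaded_at
            WHEN NOT MATCHED THEN INSERT (claim_id, member_id, amount, loaded_at)
                VALUES (s.claim_id, s.member_id, s.amount, s.loaded_at)
        """)
        conn.commit()
    finally:
        conn.close()


if __name__ == "__main__":
    load_to_snowflake(read_landed_file("landed/claims.csv"))
```

The MERGE keeps the load idempotent, which matters once a job is scheduled and retried by an orchestrator; the same pattern applies whether the data arrives from HVR, Fivetran, or an API-driven feed.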
Requirements:
- Python
- SQL
- Snowflake
- Databricks and dbt
- Data replication tools (Qlik Replicate, Oracle GoldenGate)
- HVR
- Fivetran
- Healthcare experience
- Azure Fabric
- Epic Clarity and Caboodle exposure
- Healthcare provider experience