TEKsystems is a leading provider of business and technology services, seeking a Data Engineer to support two squads focused on data migration and integration projects. The role involves working with Snowflake, dbt, and Python to manage data pipelines, ensure data quality, and build and maintain data transformation processes.
Responsibilities:
- Connect GAM and AOS: migrate data from the homegrown system to AOS, transform data as needed, normalize data across platforms from a standards/data-quality perspective, and automate flows and reporting
- Overhaul the campaign management system; this workstream calls for strong communication skills and custom API development on top of the standard dbt and Snowflake data engineering skill set
- Snowflake — advanced modeling, performance tuning, warehouse optimization, semi-structured handling, curated/Gold-layer design
- dbt (Data Build Tool) — complex transformations, tests (unique/not null/relationships/custom), metrics modeling, CI integration, documentation
- Python — core language for API consumption/custom connectors, automation jobs, data transformations, and reusable libraries and validation frameworks (a minimal connector sketch follows this list)
- Airflow — production DAGs, dependency management, SLAs, retries, backfills, data quality checks, environment-aware configs (a sample DAG sketch follows this list)
- AWS Data Stack — especially S3, EMR, Lambda, Kinesis, Glue/Athena, IAM basics for secure pipeline operation
- Fivetran — managing connectors, sync strategies, and multi-system ingestion
- Semi-structured data — JSON, Parquet, gzip; flattening, metadata-driven ingestion (a flattening sketch follows this list)
- Validation frameworks — source-to-target reconciliation, boundary checks, null audits, row-level testing (a reconciliation sketch follows this list)
- PII protection / access governance — experience with tools like Immuta, Atlan, or tokenization libraries (e.g., Protegrity)
- Documentation discipline — data dictionaries, lineage, flow diagrams, runbooks
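
For illustration, a minimal sketch of the custom-connector pattern referenced above, written in Python; the endpoint, environment variable, and pagination scheme are hypothetical and not part of the role's actual systems.

```python
import os

import requests  # assumes the requests package is available

API_URL = "https://api.example.com/v1/campaigns"  # hypothetical endpoint


def fetch_campaigns(page_size: int = 100) -> list[dict]:
    """Pull all records from a paginated REST API, one page at a time."""
    records: list[dict] = []
    page = 1
    while True:
        resp = requests.get(
            API_URL,
            params={"page": page, "per_page": page_size},
            headers={"Authorization": f"Bearer {os.environ['API_TOKEN']}"},
            timeout=30,
        )
        resp.raise_for_status()  # fail fast on HTTP errors
        batch = resp.json()
        if not batch:  # an empty page signals the end of the dataset
            break
        records.extend(batch)
        page += 1
    return records
```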
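Similarly, a sketch of a production-style Airflow DAG showing retries, an SLA, backfill support, and environment-aware config; the DAG id, Variable name, and schedule are illustrative assumptions.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.models import Variable
from airflow.operators.python import PythonOperator

# Environment-aware config: the Variable name "deploy_env" is illustrative.
ENV = Variable.get("deploy_env", default_var="dev")

default_args = {
    "owner": "data-eng",
    "retries": 3,                         # automatic retry on task failure
    "retry_delay": timedelta(minutes=5),
    "sla": timedelta(hours=2),            # alert if a task run exceeds 2 hours
}


def run_quality_checks(**context):
    """Placeholder for row-count and null-audit checks against the warehouse."""
    ...


with DAG(
    dag_id=f"gam_to_aos_sync_{ENV}",      # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                    # Airflow 2.4+; older versions use schedule_interval
    catchup=True,                         # allows backfills over the full date range
    default_args=default_args,
) as dag:
    PythonOperator(
        task_id="data_quality_checks",
        python_callable=run_quality_checks,
    )
```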
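A small sketch of flattening semi-structured JSON into tabular form with pandas; the record shape is invented for illustration.

```python
import pandas as pd

# Nested JSON records of the kind an ingestion job might receive.
raw = [
    {"campaign": {"id": 1, "name": "spring_promo"}, "stats": {"clicks": 42}},
    {"campaign": {"id": 2, "name": "summer_promo"}, "stats": {"clicks": 17}},
]

# json_normalize flattens nested keys into columns such as campaign_id.
flat = pd.json_normalize(raw, sep="_")
print(flat.columns.tolist())  # ['campaign_id', 'campaign_name', 'stats_clicks']
```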
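Finally, a sketch of the simple source-to-target checks a validation framework automates, assuming the snowflake-connector-python package; the table names and the `id` column are placeholders.

```python
import snowflake.connector  # assumes snowflake-connector-python is installed

# Audit queries; {table} is filled in per side. The "id" column is a placeholder.
CHECKS = {
    "row_count": "SELECT COUNT(*) FROM {table}",
    "null_ids": "SELECT COUNT(*) FROM {table} WHERE id IS NULL",
}


def reconcile(conn, source_table: str, target_table: str) -> dict[str, bool]:
    """Run each audit query against both tables and compare the results."""
    results = {}
    with conn.cursor() as cur:
        for name, sql in CHECKS.items():
            cur.execute(sql.format(table=source_table))
            source_value = cur.fetchone()[0]
            cur.execute(sql.format(table=target_table))
            target_value = cur.fetchone()[0]
            results[name] = source_value == target_value
    return results
```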
Requirements:
- Data engineering expertise
- Snowflake & dbt
- SQL & Python
- Core understanding of data warehousing and data pipelining