Torque Technologies LLC is seeking a Senior Snowflake Data Engineer to build and operate reliable, scalable data pipelines and curated data products on the Snowflake Data Cloud. This is a hands-on engineering role focused on Python-driven data engineering, robust ETL/ELT, and modern transformation practices; you will collaborate with analytics, data science, platform, and security teams to deliver high-quality datasets.
Responsibilities:
- Build and maintain batch and/or near-real-time ETL/ELT pipelines landing data into Snowflake (raw → curated → consumption layers)
- Develop Python data engineering components (connectors, orchestration logic, framework utilities, testing tools, and automation) supporting BI and ML use cases
- Implement transformation frameworks in dbt Core: project structure standards, modular models, macros, tests, documentation, and environment-based deployments
- Use OpenFlow to build and operationalize ingestion/flow patterns, including configuration, scheduling, troubleshooting, and performance tuning
- Design data models optimized for consumption: curated marts for BI, and ML-ready datasets/features with repeatable refresh patterns
- Apply data quality and reliability practices: automated testing, schema drift handling, idempotent loads, backfills, and reconciliation checks
- Tune Snowflake performance and cost for pipelines: warehouse sizing, clustering keys where appropriate (Snowflake manages micro-partitioning automatically), incremental processing, and query optimization
- Enable cross-account patterns aligned with our multi-account strategy (promotion between environments, sharing curated datasets, deployment consistency)
- Build operational excellence: pipeline observability, alerting, runbooks, incident response participation, and root-cause analysis
- Collaborate with platform/security teams to align pipelines with governance controls (RBAC, secure data access patterns) without blocking delivery
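To give candidates a flavor of the day-to-day work, here is a minimal sketch of one pattern named above: an idempotent incremental load expressed as a Snowflake MERGE. The table and column names (`curated.orders`, `raw.orders_stg`, `order_id`, etc.) are illustrative only, not part of our actual schema.

```python
# Hypothetical sketch: generate an idempotent MERGE so that re-running a
# load (e.g. during a backfill) updates existing rows instead of
# duplicating them. Names are illustrative, not a real schema.

def build_merge_sql(target: str, staging: str, keys: list[str], cols: list[str]) -> str:
    """Build a Snowflake MERGE statement for an upsert from a staging table."""
    # Join target and staging rows on the business key(s)
    on = " AND ".join(f"t.{k} = s.{k}" for k in keys)
    # Update every non-key column when a match is found
    set_clause = ", ".join(f"t.{c} = s.{c}" for c in cols if c not in keys)
    insert_cols = ", ".join(cols)
    insert_vals = ", ".join(f"s.{c}" for c in cols)
    return (
        f"MERGE INTO {target} t USING {staging} s ON {on} "
        f"WHEN MATCHED THEN UPDATE SET {set_clause} "
        f"WHEN NOT MATCHED THEN INSERT ({insert_cols}) VALUES ({insert_vals})"
    )

sql = build_merge_sql(
    target="curated.orders",
    staging="raw.orders_stg",
    keys=["order_id"],
    cols=["order_id", "status", "updated_at"],
)
print(sql)
```

Because the statement keys on `order_id`, replaying the same staging batch leaves the target unchanged, which is what makes backfills and retries safe.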
Requirements:
- 12+ years of data engineering experience, including significant delivery on Snowflake in production
- Strong Python skills (clean, testable code; packaging; logging/metrics; performance-aware data processing)
- Strong SQL and data modeling fundamentals (dimensional and/or domain-oriented modeling)
- Hands-on experience with dbt Core (models, macros, tests, docs, deployments, CI practices)
- Hands-on experience with OpenFlow (building/running flows, operational support, troubleshooting)
- Proven experience designing and operating ETL/ELT pipelines (incremental loads, CDC concepts, error handling, and backfills)
- Experience working in cloud environments (AWS/Azure/GCP) and production operations (monitoring, on-call/incident response, SLAs)
- Comfortable working across teams (analytics, ML, platform/security) and translating requirements into deliverable datasets
- Experience supporting BI workloads (semantic-friendly marts, performance considerations, consumption patterns)
- Experience supporting ML workflows (feature-ready datasets, reproducible training data, lineage and governance)
- Familiarity with Snowflake governance features (masking/row access policies, secure views) and multi-account deployment patterns
- Experience with CI/CD and automation (Git workflows, build pipelines, infrastructure-as-code such as Terraform)
- Experience with common ingestion/orchestration tools (Airflow, Dagster, Prefect, etc.) or ELT tools (Fivetran/Matillion/Informatica)