Elios Talent is seeking a Senior Platform Data Engineer to support a high-visibility, year-long platform transformation initiative. The role focuses on building and operating a centralized, enterprise-grade data platform, with end-to-end ownership of data models and pipelines: design, build, test, deployment, monitoring, and continuous improvement.
Responsibilities:
- Build and maintain ingestion pipelines from raw Bronze landing through curated Gold domain models
- Develop transformation models using dbt with testing, Slim CI, and exposure tracking
- Author and manage production-grade Airflow DAGs with SLA monitoring and backfill strategies (see the Airflow sketch after this list)
- Design and implement streaming and event-driven pipelines (Kafka, Glue Streaming, Delta Live Tables)
- Collaborate with data scientists, platform engineers, and product teams to ship reliable data products
- Implement data quality frameworks and enforce governance standards
- Containerize workloads and deploy to Kubernetes (EKS)
- Contribute to infrastructure as code (Terraform provisioning for S3, IAM, Glue, etc.)
- Document lineage, metadata, ownership, and sensitivity classification
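As a concrete illustration of the Airflow responsibility above, here is a minimal sketch of a production-style DAG with retries, a per-task SLA, and catchup enabled for backfills. It assumes Airflow 2.x (2.4+ for the `schedule` argument); the DAG, task, and partition names are illustrative placeholders, not part of this posting.

```python
# Minimal sketch: a daily Bronze-to-Gold DAG with retries, a per-task SLA,
# and catchup enabled so historical intervals can be backfilled.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_bronze(**context):
    # Land raw source data partitioned by the logical date so that
    # reruns and backfills stay idempotent per interval.
    print(f"loading bronze partition {context['ds']}")


def build_gold(**context):
    # Rebuild the curated Gold model for the same logical interval.
    print(f"building gold models for {context['ds']}")


with DAG(
    dag_id="bronze_to_gold_daily",  # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=True,  # enables `airflow dags backfill` over past intervals
    default_args={
        "retries": 2,
        "retry_delay": timedelta(minutes=5),
        "sla": timedelta(hours=2),  # breaches surface in SLA-miss monitoring
    },
) as dag:
    bronze = PythonOperator(task_id="load_bronze", python_callable=load_bronze)
    gold = PythonOperator(task_id="build_gold", python_callable=build_gold)
    bronze >> gold
```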
Requirements:
- Bachelor's degree or equivalent experience
- 5–8+ years of experience developing data products and platforms
- Strong background in Agile software development
- Demonstrated success in prior engineering roles
- Strong analytical and problem-solving capabilities
- Production-level Python standards (type hints, Pydantic, pytest, Ruff, mypy); a brief sketch follows this list
- Advanced analytical SQL (window functions, query optimization)
- Experience with Redshift, Snowflake, or Databricks SQL
- dbt Gold-layer modeling and testing, including Slim CI, exposures, and meta configurations
- Apache Airflow (DAG authoring, task dependencies, SLAs, backfills)
- Kafka topic consumption
- Hands-on experience with at least one cloud data platform (e.g., Databricks, AWS)
- Data Quality frameworks (Great Expectations, dbt tests, or equivalent)
- Containerized jobs with Docker and Kubernetes
- Familiarity with Terraform
- Data Modeling (Dimensional modeling, Slowly Changing Dimensions, Medallion architecture design)
- Experience with Prefect or Dagster
- Financial or time-series modeling experience
- SaaS ingestion via Fivetran, Airbyte, or custom connectors (REST, SFTP, JDBC)
- Change Data Capture (Debezium or connector-based CDC modes)
- Handling semi-structured and unstructured data (PDF, Excel, JSON)
- Designing Bronze schemas for fidelity and downstream flexibility
- Vector embedding pipelines (chunking strategies, embeddings, pgvector/OpenSearch)
- Data lineage tooling (OpenLineage, DataHub, Unity Catalog)
- Governance standards: PII handling, sensitivity classification, column masking, partition-level access
- Migration of legacy data assets (Excel, Access, shared drives)
- ML feature engineering and point-in-time-correct pipelines
- Exposure to financial reporting, chart of accounts normalization, or diligence data
- Experience in private equity, investment banking, asset management, or professional services
- Contributions to open-source data tooling
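To illustrate the production-level Python bar named in the requirements, here is a minimal sketch combining type hints, a Pydantic model for record validation, and a pytest test. It assumes Pydantic v2; the `Trade` record and its fields are hypothetical examples, not a schema from this role.

```python
# Minimal sketch: typed ingestion validation with Pydantic v2 plus a pytest
# test, in the style of the standards listed above (type hints, Pydantic,
# pytest; Ruff and mypy would run over this same code in CI).
from datetime import date

import pytest
from pydantic import BaseModel, ValidationError, field_validator


class Trade(BaseModel):
    """One hypothetical Bronze-layer record, validated on ingestion."""

    trade_id: str
    amount_usd: float
    trade_date: date

    @field_validator("amount_usd")
    @classmethod
    def amount_must_be_positive(cls, v: float) -> float:
        if v <= 0:
            raise ValueError("amount_usd must be positive")
        return v


def parse_trade(raw: dict[str, object]) -> Trade:
    """Parse one raw record, raising ValidationError on bad data."""
    return Trade.model_validate(raw)


def test_parse_trade_rejects_negative_amount() -> None:
    raw = {"trade_id": "T-1", "amount_usd": -5.0, "trade_date": "2024-01-01"}
    with pytest.raises(ValidationError):
        parse_trade(raw)
```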