Designing, building, and owning next-generation data product engineering patterns on modern cloud platforms, including Snowflake and Databricks.
Providing technical guidance and mentorship to other data engineers, promoting consistent engineering practices and high-quality solutions.
Developing reusable engineering assets such as frameworks, build kits, CI/CD templates, and performance optimization approaches.
Partnering with Enablement and Execution teams to operationalize and scale data engineering patterns across delivery teams, serving as a technical point of reference for adoption and implementation.
Evaluating, testing, and experimenting with emerging data and AI tools, platforms, and services.
Participating in technical proofs of concept, comparing alternative solutions, and making data-driven recommendations for platform and tool rationalization.
Documenting project outcomes, transition plans, adoption guides, and solution usage scripts to support enterprise rollout.
Supporting platform modernization efforts through hands-on development, tuning, and optimization.
Collaborating with data product owners, architects, and platform teams to align engineering solutions with enterprise data strategy.
Requirements
Bachelor’s Degree in a quantitative field such as computer science, data science, mathematics, or statistics.
6 to 8+ years of statistical and/or analytical experience.
Typically, 8+ years of experience in data engineering, analytics engineering, or platform engineering roles.
Deep understanding of financial institution and banking concepts.
Strong understanding of modern data engineering concepts, including batch and streaming data processing, data modeling, and data product design.
Experience building scalable data solutions on cloud-based data platforms.
Familiarity with enterprise data ecosystems and shared platform models.
Ability to assess tradeoffs across tools, architectures, and implementation approaches.
Strong analytical and problem-solving skills with a focus on root cause analysis and optimization.
Proficiency with big data technologies (Spark, Airflow, Hadoop, Hive).
Hands-on experience with Snowflake and Databricks, including performance tuning.
Proficiency in SQL and Python, with experience building production-grade data pipelines.
Experience with CI/CD pipelines and infrastructure-as-code patterns for data platforms.
Familiarity with orchestration and workflow management tools.
Experience developing reusable libraries, templates, or internal frameworks.
Exposure to cloud platforms such as Azure, AWS, or GCP and cloud-native data services.
Understanding of data quality, observability, and monitoring practices.
Familiarity with AI and ML tooling as it relates to data engineering and platform enablement is a plus.
Tech Stack
Airflow
AWS
Azure
Cloud
Google Cloud Platform
Hadoop
Python
Spark
SQL
Benefits
Healthcare (medical, dental, vision)
Basic term and optional term life insurance
Short-term and long-term disability
Pregnancy disability and parental leave
401(k) and employer-funded retirement plan
Paid vacation (from two to five weeks depending on salary grade and tenure)
Up to 11 paid holiday opportunities
Adoption assistance
Sick and Safe Leave accruals of one hour for every 30 hours worked, up to 80 hours per calendar year unless otherwise provided by law