Responsibilities
Collaborate with stakeholders across the organization to design and implement scalable, cloud-based data solutions, integrating generative AI to drive innovation.
Work closely with cross-functional stakeholders (finance, product, marketing, customer support, tech, data science) to enable trusted data products for internal decision making and external-facing tools.
Take a leading role in developing a data lake to complement our existing data warehouse.
Work with AWS services, automation tools, machine learning, and generative AI to enhance efficiency, stability, security, and performance.
Operate and evolve our Postgres data warehouse: schema design, performance tuning, indexing, and access controls.
Build analytics-ready datasets supporting sustainability measurement, supply-chain insights, and business metrics.
Deploy and maintain multiple instances of Cube.dev semantic layers with standardized configuration, CI/CD workflows, and governance practices.
Support integration and deployment of genAI-enabled workflows, especially NLP-based use cases (classification, extraction, normalization, embeddings/similarity).
In collaboration with data scientists, research and develop practical transition plans for evolving selected relational/warehouse data structures into a graph-based knowledge base.
Requirements
5+ years of professional experience in data engineering, analytics engineering, or data platform engineering.
Advanced SQL expertise and strong experience with relational databases, especially Postgres.
Strong Python development skills applied to data pipelines, automation, and operational tooling.
Strong Git-based development practices (branching, PRs, code review).
Demonstrated experience developing and supporting dbt transformations and operational workflows.
Hands-on experience building AWS ingestion/ETL workflows using services such as S3, IAM, Glue, Lambda, CloudFormation (or other IaC), and AppFlow.
Experience with analytics data modeling and metric definition practices.
Experience implementing automated monitoring/alerting and data quality controls for pipelines and critical datasets.
Experience operating production data systems (including data quality tests, regression checks, validation frameworks, incident triage, root-cause analysis, runbooks, reliability improvements).
Experience working closely with analytics teams and cross-functional stakeholders; familiarity with Jira/Confluence and Agile delivery.
Familiarity with data security practices (PII protection, encryption controls, access management).
Tech Stack
AWS
Cloud
Docker
ETL
Postgres
Python
SQL
Benefits
Medical, Dental, and Vision Insurance are offered through multiple PPO options. Worldly covers 90% of the employee premium and 60% of the spouse/dependent premium.
Company-sponsored 401k with up to 4% match for US employees.
Incentive Stock Options.
100% Paid Parental Leave.
12 paid company holidays.
Competitive salary and performance-based bonuses.
Office stipend to get the supplies you need.
No-meeting Fridays to combat Zoom fatigue.
Flexible time off: take the time you need to recharge. Our culture encourages team members to explore and rest to be their best selves.
Join the culture committee, coffee chats, or a variety of other interest groups.