Podimetrics is a rapidly growing virtual care management company focused on improving patient lives through early detection and prevention of diabetic foot ulcers. The Sr. Data Engineer will be responsible for building and maintaining the systems, pipelines, and models that support analytics and strategic decision-making, ensuring data accuracy and reliability.
Responsibilities:
- Design, build, and maintain critical infrastructure, including but not limited to enterprise data warehouse structures and pipelines in BigQuery
- Develop and maintain dbt models, tests, and documentation to standardize and transform data in line with our data strategy
- Ensure consistent, reliable data structures by owning data modeling standards, documentation, and best practices
- Monitor, troubleshoot, and improve existing pipelines with an emphasis on automation, maintainability, and data quality
- Partner with analysts, data scientists, software engineers, and business stakeholders to design datasets that are high quality, well-modeled, and optimized
- Design and implement data ingestion patterns from event streams, translating operational events into analytical structures in BigQuery
- Implement best practices for data testing, data lineage, and data governance within dbt and downstream tools
- Investigate and resolve data quality issues by identifying root causes and delivering sustainable, technically sound solutions
- Proactively identify opportunities to improve data reliability, scalability, and usability across the organization
- Use AI coding tools (LLMs, code generation, etc.) to accelerate development and set the standard for AI-native engineering practices on the team
Requirements:
- 8+ years of experience in data engineering or a related technical role
- Strong SQL skills and experience working with cloud-based warehouses (BigQuery strongly preferred)
- Strong software engineering fundamentals in Python - you write production-quality, maintainable code, not just scripts
- Hands-on experience with dbt - building, testing, deploying, and documenting models
- Experience with Git/GitHub and modern CI/CD practices for data (GitHub Actions, Cloud Build, Cloud Run, etc.)
- Experience with structured logging, monitoring, and observability tools
- Experience working with or adjacent to ML/data science workflows - feature stores, training pipelines, or analytical modeling
- Demonstrated use of AI/LLM tools in an engineering workflow (Claude Code, Cursor, Copilot, etc.)
- Experience with event-driven architectures and message bus systems (GCP Pub/Sub, Kafka, or equivalent), and a clear understanding of stream processing, event schemas, and consumer patterns