Responsibilities
Architect and own scalable, maintainable, and extensible data models for reliability, automation, lab, and telemetry data.
Lead the design and evolution of database schemas that support cross-test analytics, traceability, repeatability, and long-term platform growth.
Design and implement robust ingestion and transformation pipelines across automated test systems, CI/CD workflows, lab infrastructure, and supporting engineering tools.
Define and standardize shared identifiers, metadata strategies, and data contracts that enable reliable correlation across runs, sessions, devices, builds, environments, and programs.
Design complex or novel data solutions in areas where standards, tooling, or historical patterns are limited, fragmented, or outdated.
Provide technical guidance and advisory support to reliability engineers, lab engineers, software/test automation engineers, and DevOps partners on data architecture, observability design, and scalable reporting patterns.
Influence and help establish engineering standards for schema design, ingestion patterns, telemetry structure, data governance, and dashboard consumption.
Enable observability workflows by structuring and integrating metrics, logs, events, and related telemetry into fit-for-purpose systems that support both operational debugging and strategic analysis.
Support the development of dashboards and reporting experiences for both technical users and leadership stakeholders, including engineering deep dives, program health reporting, and executive-level views.
Drive improvements in data quality, performance, consistency, integrity, and usability across the PSR ecosystem.
Identify architectural bottlenecks and lead remediation strategies across data storage, schema design, pipeline reliability, and visualization workflows.
Create and maintain documentation, standards, and best practices for schema design, ingestion patterns, data governance, and dashboard enablement.
Mentor and influence other engineers through design reviews, technical recommendations, and practical guidance on scalable data solutions.
Contribute to and help shape a longer-term data and observability strategy that scales with evolving Pod programs, new test types, and future platform needs.
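As a concrete illustration of the shared-identifier and schema work described above, the sketch below shows how stable identifiers (run, device, build) can act as a data contract that makes telemetry correlatable across runs, devices, and builds. This is a minimal, hypothetical example: all table and column names are invented, and SQLite stands in for a production database such as PostgreSQL.

```python
import sqlite3

# Hypothetical minimal schema: shared identifiers (run_id, device_id, build_id)
# form the contract that lets telemetry correlate across runs, devices, and builds.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE device (
    device_id TEXT PRIMARY KEY,
    program   TEXT NOT NULL           -- e.g. which program the unit belongs to
);
CREATE TABLE build (
    build_id  TEXT PRIMARY KEY,
    version   TEXT NOT NULL
);
CREATE TABLE test_run (
    run_id     TEXT PRIMARY KEY,
    device_id  TEXT NOT NULL REFERENCES device(device_id),
    build_id   TEXT NOT NULL REFERENCES build(build_id),
    started_at TEXT NOT NULL          -- ISO-8601 timestamp
);
CREATE TABLE telemetry_event (
    run_id  TEXT NOT NULL REFERENCES test_run(run_id),
    metric  TEXT NOT NULL,
    value   REAL NOT NULL,
    ts      TEXT NOT NULL
);
""")

# Sample rows, then a join that traces one metric back to its device and build.
conn.execute("INSERT INTO device VALUES ('dev-001', 'pod-a')")
conn.execute("INSERT INTO build VALUES ('bld-42', '1.4.2')")
conn.execute("INSERT INTO test_run VALUES "
             "('run-1', 'dev-001', 'bld-42', '2024-01-01T00:00:00Z')")
conn.execute("INSERT INTO telemetry_event VALUES "
             "('run-1', 'temp_c', 41.5, '2024-01-01T00:01:00Z')")

row = conn.execute("""
    SELECT d.program, b.version, e.metric, e.value
    FROM telemetry_event e
    JOIN test_run r ON r.run_id    = e.run_id
    JOIN device d   ON d.device_id = r.device_id
    JOIN build b    ON b.build_id  = r.build_id
""").fetchone()
print(row)  # ('pod-a', '1.4.2', 'temp_c', 41.5)
```

Because every event carries only a `run_id`, the device and build context lives in one place; correlating a new dimension (environment, session, program) means adding one keyed table rather than reshaping every event.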
Requirements
Bachelor’s degree in Computer Science, Software Engineering, Data Engineering, Information Systems, or a related technical field required; Master’s degree preferred.
7+ years of experience in data engineering, analytics engineering, platform engineering, or related roles; or 5+ years of experience with an advanced degree in a related field.
Strong experience architecting relational database schemas and modeling structured engineering or operational data for scale, maintainability, and long-term reuse.
Deep SQL expertise and hands-on experience with relational databases such as PostgreSQL or equivalent platforms.
Strong experience building and evolving ETL/ELT or other data ingestion and transformation pipelines in production environments.
Experience using Python or another programming language for data processing, automation, or integration tasks.
Strong understanding of observability and telemetry concepts, including metrics, logs, events, and time-series data.
Experience creating scalable reporting and visualization solutions in tools such as Grafana, Power BI, Tableau, or similar platforms.
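The Python and pipeline experience listed above might look like the following in miniature: a transform step that normalizes heterogeneous raw telemetry into a canonical shape, followed by a per-run aggregation as a stand-in for the load step. A hypothetical sketch (event fields, units, and run names are invented):

```python
from statistics import mean

# Hypothetical raw telemetry as it might arrive from heterogeneous lab tools:
# inconsistent keys and mixed units that a transformation step must normalize.
raw_events = [
    {"run": "run-1", "metric": "temp", "val": "41.5",  "unit": "C"},
    {"run": "run-1", "metric": "temp", "val": "106.7", "unit": "F"},
    {"run": "run-2", "metric": "temp", "val": "39.0",  "unit": "C"},
]

def transform(event):
    """Normalize one raw event into the canonical (run_id, metric, value) shape."""
    value = float(event["val"])
    if event["unit"] == "F":
        value = (value - 32) * 5 / 9   # convert Fahrenheit to Celsius
    return {"run_id": event["run"], "metric": event["metric"],
            "value": round(value, 2)}

def load(events):
    """Aggregate normalized events per run, standing in for a database load."""
    by_run = {}
    for e in events:
        by_run.setdefault(e["run_id"], []).append(e["value"])
    return {run: round(mean(vals), 2) for run, vals in by_run.items()}

normalized = [transform(e) for e in raw_events]
summary = load(normalized)
print(summary)  # {'run-1': 41.5, 'run-2': 39.0}
```

Keeping transform and load as separate, pure steps is what makes such pipelines testable and evolvable as new tools and event shapes appear.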