Pano AI is a growth-stage, hybrid-remote startup focused on early wildfire detection and intelligence. The company is seeking a Senior/Staff Design Quality Engineer to serve as the independent quality and systems-risk steward across hardware, firmware, software, and AI subsystems, ensuring the safety and reliability of its products.
Responsibilities:
- Serve as the Design Quality / Systems Quality representative embedded in one or more product teams — influence requirements, architecture, and release criteria from concept to production and sustaining
- Lead and maintain system-level risk artifacts (e.g., system FMEAs/DFMEAs) and ensure traceability among risks, requirements, tests, and mitigations
- Define and own the verification & validation strategy at the system level: coordinate test plans, field trials, test method validation, statistical acceptance criteria, and objective evidence for release
- Drive risk-based decision making — prioritize mitigations given ambiguous tradeoffs and document rationale for release decisions and residual risk
- Select and drive execution of reliability and stress tests that emulate real-world field conditions (e.g., power/thermal cycling, packet-loss/high-latency networks, environmental exposure, OTA update failure modes)
- Collaborate with AI and product analytics to define observability & telemetry needed for model performance monitoring in the field and tie those signals back into verification and risk mitigation strategies
- Mentor engineers and product teams on structured problem-solving (5-Whys, DMAIC, etc.) and quality best practices that scale across fast-evolving hardware/AI products
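The FMEA work described above typically ranks failure modes by a risk priority number (RPN = severity × occurrence × detection) to prioritize mitigations. A minimal Python sketch of that ranking, using hypothetical failure modes and scores invented for illustration:

```python
from dataclasses import dataclass

# FMEA-style risk ranking: each failure mode gets severity, occurrence,
# and detection scores (commonly 1-10); RPN = S * O * D.
@dataclass
class FailureMode:
    name: str
    severity: int    # impact if the failure occurs
    occurrence: int  # how likely it is to occur
    detection: int   # 10 = very hard to detect before reaching the field

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

# Hypothetical failure modes for a camera-based detection node.
modes = [
    FailureMode("camera thermal shutdown", severity=8, occurrence=4, detection=3),
    FailureMode("OTA update bricks node", severity=9, occurrence=2, detection=2),
    FailureMode("packet loss delays alert", severity=7, occurrence=6, detection=5),
]

# Prioritize mitigations by descending RPN.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name}: RPN={m.rpn}")
```

In practice an RPN table like this would be traced to requirements and verification tests rather than printed, but the ranking logic is the same.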
Requirements:
- BS in Engineering (Mechanical, Electrical, Computer, Systems or related) or equivalent experience
- 10+ years' experience in design quality, reliability engineering, systems engineering, or similar roles for products that combine hardware, firmware and software (embedded + cloud + AI)
- Proven experience owning system-level risk artifacts and applying risk-based decision making
- Hands-on experience planning and executing verification & validation strategies, including test method selection/development and use of statistical techniques for acceptance criteria
- Deep product empathy and demonstrable experience driving tradeoffs to improve field outcomes or customer experience
- Excellent written and verbal communication skills — able to present technical findings and risk decisions to both engineers and executive stakeholders
- Prior experience with camera systems, imaging pipelines, sensor fusion, or edge AI deployments
- Experience with OTA update strategies, installed base telemetry, and telemetry-driven iteration of design and AI models
- Experience with statistical tools (Minitab, Python data stack) and reliability models (Weibull, MTBF/MTTF analysis)
- Experience mentoring team members and scaling quality processes in a growing company
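As an illustration of the reliability models named in the requirements, the sketch below fits a two-parameter Weibull to simulated time-to-failure data and derives the mean time to failure via MTTF = scale · Γ(1 + 1/shape). All failure times and parameter values are synthetic, chosen only to demonstrate the workflow:

```python
import numpy as np
from math import gamma
from scipy.stats import weibull_min

# Simulated time-to-failure data in hours (hypothetical values).
rng = np.random.default_rng(42)
true_shape, true_scale = 1.8, 5000.0
failures = weibull_min.rvs(true_shape, loc=0, scale=true_scale,
                           size=200, random_state=rng)

# Fit a two-parameter Weibull (location pinned at 0).
shape_hat, _, scale_hat = weibull_min.fit(failures, floc=0)

# Mean time to failure for a Weibull distribution:
# MTTF = scale * Gamma(1 + 1/shape).
mttf = scale_hat * gamma(1 + 1 / shape_hat)
print(f"shape~{shape_hat:.2f}, scale~{scale_hat:.0f} h, MTTF~{mttf:.0f} h")
```

A shape parameter above 1 indicates wear-out behavior, below 1 infant mortality; on real field-return data this fit would inform acceptance criteria and sustaining priorities.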