Define the evaluation framework for AI agents, specifically focusing on hallucination detection, reasoning accuracy, and non-deterministic outcome validation in OpenShift environments.
Ensure AI agents used in the SDLC adhere to Red Hat’s enterprise-grade security standards, particularly around secure tool use and data-leakage prevention.
Lead the research and integration of AI/ML technologies to innovate our quality processes, including automated test generation, predictive defect analysis, and intelligent system diagnostics.
Architect "self-healing" test suites where AI agents autonomously identify regressions and rewrite test logic based on spec-driven changes.
Define and oversee the execution of the end-to-end quality strategy for the OpenShift Container Platform, ensuring alignment with product goals and enterprise standards.
Architect, design, and lead the development of scalable, intelligent test automation frameworks and infrastructure, empowering developers to test their features efficiently.
Govern the failure analysis and bug triage process, analyzing trends to identify systemic quality gaps and driving long-term resolutions.
Coordinate with cross-component teams on bug verification and regression testing.
Work with product release teams to own product delivery testing.
Requirements
Bachelor's degree or higher, or an equivalent qualification, in computer science or a related field.
10+ years of experience in software quality engineering, with a demonstrated track record in test architecture, strategy, and leadership roles.
Deep experience with RAG (Retrieval-Augmented Generation) evaluation and benchmarking LLM performance in technical domains (e.g., code generation quality).
Experience applying AI/ML concepts to quality engineering challenges, with familiarity in using AI for test optimization, failure prediction, or data analysis.
Mastery of designing complex, multi-step prompts for AI agents to predict edge cases in distributed microservices.
Expert-level knowledge of designing test strategies for complex, large-scale distributed systems and microservices environments like Kubernetes or Docker.
Expert-level programming skills in Go or Java, with proven experience architecting and building large-scale test automation frameworks from the ground up.
Proven ability to influence cross-functional teams, including development, product management, and support, to drive a unified vision for product quality.
Knowledge of the Linux operating system (any distribution).
Solid written and verbal communication skills in English.
Hands-on experience using AI agents to predict edge cases and maintain high coverage across complex distributed systems.
Experience modernizing QA environments using AI-augmented development and spec-driven methodologies to transform the testing lifecycle.