RadarFirst is transforming how organizations handle incidents and compliance with automated, purpose-built SaaS solutions. They are seeking a Lead Software Development Engineer in Test to architect and evolve their quality engineering strategy, focusing on automated testing frameworks and AI-assisted QA practices.
Responsibilities:
- Own and evolve the organization’s automated testing framework and QA tooling ecosystem
- Define and standardize best practices for AI-assisted test case and test data generation, as well as coverage discovery
- Establish responsible AI guardrails for QA (validation, hallucination mitigation, structured outputs, and review standards)
- Implement intelligent test selection and regression analysis to reduce cycle time without increasing risk
- Ensure AI-augmented tests remain deterministic, maintainable, and production-ready
- Architect scalable automation across unit, integration, API, UI, contract, performance, and security testing
- Champion modern testing principles (test pyramid / risk-based testing) and ensure each layer of the stack is appropriately tested
- Integrate automation into CI/CD with enforceable quality gates and release readiness criteria
- Lead root-cause analysis for systemic quality issues and recurring defects
- Design evaluation frameworks for AI-assisted features (ground-truth testing, deterministic validation, and prompt robustness)
- Establish measurable quality metrics (regression consistency, precision/recall where applicable, drift detection)
- Introduce traceability mechanisms for AI outputs, including logging and version-aware regression suites
- Ensure human-in-the-loop validation is implemented where appropriate
- Mentor engineers in modern automation practices and AI-augmented workflows
- Train teams on effective prompt engineering for QA use cases and validation of AI-generated test assets
- Establish review standards for AI-assisted code contributions
- Define and track KPIs such as defect escape rate, automation coverage by risk, and test stability index
- Provide data-backed release readiness recommendations including Go/No-Go guidance
- Continuously audit automation ROI and retire low-value tests
- Partner with Engineering, Product, Security, and DevOps to embed quality early and drive shift-left testing
- Represent quality in architectural discussions and influence roadmaps
- Ensure accessibility, security, and privacy testing practices are integrated into the SDLC
Requirements:
- 8+ years of experience in software quality engineering, test automation, or SDET roles
- 3+ years leading automation strategy or serving as a senior technical QA authority
- Hands-on experience using AI tools for test generation, refactoring, and automation maintenance
- Experience evaluating LLM-based systems (prompt robustness, structured outputs, drift detection, and versioned regression suites)
- Strong proficiency in at least one programming language (e.g., TypeScript, Java, Python) and modern test frameworks (e.g., Playwright, Cypress, Selenium)
- Demonstrated experience designing, maintaining, and scaling test automation frameworks and CI/CD quality gates
- Strong understanding of distributed systems, HTTP lifecycle, and database querying (SQL/NoSQL)
- Experience mentoring engineers and leading quality initiatives with measurable outcomes
- Working knowledge of secure testing practices (OWASP Top 10)
- Familiarity with compliance, privacy, or regulated-industry SaaS environments
- Experience implementing performance/load testing strategies
- Experience with accessibility testing strategy and tooling