Define clear, concise, and maintainable test cases from requirements, specifications, and user stories.
Use AI tools to assist with test case design and documentation, including generating test scenarios, expanding edge cases, and improving clarity and consistency.
Ensure coverage across positive, negative, boundary, and edge-case scenarios.
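The four coverage categories above can be illustrated with a single case table. The validator function, its name, and its limits are hypothetical examples, not taken from the posting:

```python
# Hypothetical validator used to illustrate coverage categories;
# the function name and the 0-120 range are assumptions for this sketch.
def validate_age(value):
    """Return True if value is an integer in the inclusive range 0-120."""
    return (
        isinstance(value, int)
        and not isinstance(value, bool)  # bool is an int subclass in Python
        and 0 <= value <= 120
    )

# One table spanning positive, negative, boundary, and edge cases.
CASES = [
    (30, True),      # positive: typical valid input
    (-1, False),     # negative: below range
    ("30", False),   # negative: wrong type
    (0, True),       # boundary: lower limit
    (120, True),     # boundary: upper limit
    (121, False),    # boundary: just past the upper limit
    (True, False),   # edge: bool rejected despite being an int subclass
]

for value, expected in CASES:
    assert validate_age(value) is expected, (value, expected)
```

Keeping all four categories in one table makes gaps visible at review time: a missing boundary row is easier to spot than a missing test file.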
Organize and maintain automated test suites to reflect product changes.
Refactor outdated or redundant tests, ensuring accuracy and relevance.
Implement and utilize AI and intelligent agents to accelerate test generation.
Oversee maintenance and continuous improvement of existing frameworks.
Analyze test failures and defect escapes, identify root causes, and implement solutions.
Drive a culture of quality through ongoing improvements in methodologies.
Design and develop reports to provide visibility into test suite performance.
Implement alerting systems that deliver prompt notification of test failures.
Requirements
Strong experience using generative AI tools such as Claude, Gemini, or GitHub Copilot to generate test cases, create automation code, and design test plans and workflows.
Hands-on expertise with LLM prompt engineering, including writing effective prompts, using context windows and system prompts, and iteratively refining prompts for better test coverage and quality.
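Iterative prompt refinement can be sketched as a small loop that folds review feedback back into the next prompt. The system prompt wording, the feature name, and the constraint phrasing are illustrative assumptions; the actual LLM call is left out:

```python
# Sketch of iterative prompt refinement for test-case generation.
# Wording and constraint format are assumptions; the model call is stubbed.
SYSTEM_PROMPT = "You are a QA engineer. Output one test case per line."

def build_prompt(feature, constraints):
    """Compose a user prompt from the feature under test plus constraints."""
    lines = [f"Generate test cases for: {feature}"]
    lines += [f"- Constraint: {c}" for c in constraints]
    return "\n".join(lines)

def refine(prompt, feedback):
    """Fold one piece of review feedback into the next prompt iteration."""
    return prompt + f"\n- Constraint: {feedback}"

# First draft, then one refinement round based on a coverage gap a reviewer found.
prompt = build_prompt("login form", ["cover empty password"])
prompt = refine(prompt, "include boundary lengths for the username field")
print(prompt)
```

Versioning prompts this way (constraints accumulated explicitly rather than rewritten ad hoc) keeps each refinement round reviewable, the same discipline the role applies to code.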
Experience with test automation frameworks (Selenium, Cypress, Playwright), with an emphasis on using AI to accelerate and enhance automation rather than writing everything manually.
Ability to design agent-based workflows, where AI is used to generate, validate, and evolve test scenarios.
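A generate-validate-evolve loop of the kind described above can be sketched as follows. The generator stands in for an LLM agent call, and the validation rule (non-empty, not previously seen) is an assumption chosen for the sketch:

```python
# Minimal generate -> validate -> evolve loop for an agent-based test workflow.
# generate_cases() is a stub for an LLM agent; validation rules are assumptions.
def generate_cases(round_num):
    """Stub agent: pretend the model proposes candidate test titles."""
    proposals = [f"case-{round_num}-{i}" for i in range(3)]
    if round_num == 0:
        proposals.append(proposals[0])  # simulate a duplicate for the validator to reject
    return proposals

def validate(cases, seen):
    """Accept only non-empty, previously unseen cases; track what was seen."""
    accepted = []
    for case in cases:
        if case and case not in seen:
            accepted.append(case)
            seen.add(case)
    return accepted

# Evolve the suite over rounds: each round's accepted output feeds the next.
suite = []
seen = set()
for round_num in range(2):
    suite.extend(validate(generate_cases(round_num), seen))
```

The key design point is that the agent never writes directly into the suite: every proposal passes through a deterministic validation gate, which is what makes the workflow auditable.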
Strong knowledge of JavaScript, TypeScript, and/or Python, with the ability to review and audit AI-generated code and validate its correctness, performance, and maintainability.
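Part of auditing AI-generated code can be mechanized before human review. A minimal sketch using Python's standard `ast` module: confirm the code parses and flag bare `except:` blocks. The sample snippet and the specific checks are illustrative assumptions:

```python
import ast

# Sketch of a mechanical first pass over AI-generated Python:
# parse it, then flag bare `except:` handlers that swallow all errors.
GENERATED = """
def fetch(path):
    try:
        return open(path).read()
    except:
        return None
"""

def audit(source):
    """Return a list of findings; an empty list means the checks passed."""
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"syntax error: {exc}"]
    findings = []
    for node in ast.walk(tree):
        # ExceptHandler.type is None exactly when the handler is a bare `except:`.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"bare except at line {node.lineno}")
    return findings

print(audit(GENERATED))
```

Checks like this catch the cheap problems automatically, so human review time goes to correctness, performance, and maintainability rather than lint-level issues.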
Solid understanding of testing fundamentals, with a mindset focused on how to automate testing at scale.