Gravie focuses on health benefits solutions for small and midsize businesses. They are seeking an AI Test Automation Engineer to design and build AI-driven test automation systems that expand automated regression coverage and shorten delivery timelines.
Responsibilities:
- Design and build AI-driven test automation workflows using tools like Claude Code, where AI agents generate, execute, and iterate on Playwright tests with minimal human intervention. Your primary job is building the system, not writing individual tests
- Architect a Playwright test framework from scratch, establishing patterns, page-object models, fixtures, and conventions that are optimized for AI-assisted test generation. The framework should be structured so that AI tools can reliably produce correct, maintainable tests
- Build tight feedback loops between local test execution, CI pipeline results, and merge request creation, so AI agents can run tests, interpret failures, fix issues, and resubmit with minimal manual intervention
- Audit the existing Geb/Spock/Groovy test suites across three portals (Member, Employer/Broker, Admin) and classify each test into one of three categories: convert to Playwright, convert to an API or unit test, or delete (legacy functionality no longer in use)
- Lead the migration from Geb/Spock to Playwright, using AI agents to accelerate conversion at scale while maintaining coverage continuity
- Integrate Playwright test execution into GitLab CI/CD pipelines with automated quality gates on every merge request and deployment. Optimize for fast feedback through parallelization and caching strategies
- Instrument tests with reporting on pass/fail trends, flakiness rates, and coverage gaps. Use this data to prioritize AI-driven test generation toward the highest-value uncovered areas
- Identify scenarios where end-to-end Playwright tests are not the right tool and implement API-layer automation or contract tests for microservice boundaries instead
- Continuously improve the AI automation workflow by refining prompts, context structures, and validation patterns to increase the percentage of tests that AI generates correctly on the first pass
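As a concrete (and purely illustrative) sketch of the page-object convention the role calls for: selectors live in one typed map per page so an AI agent composing tests references named accessors instead of inventing its own CSS. `Page` below is a minimal stand-in for Playwright's `Page` type so the example is self-contained; the class and selector names are invented, not Gravie's real pages.

```typescript
// Sketch only: `Page` is a local stand-in for Playwright's Page type;
// page and selector names are illustrative.
interface Page {
  locator(selector: string): string; // real Playwright returns a Locator object
}

class MemberLoginPage {
  // One typed selector map per page object: AI-generated tests use these
  // names rather than inventing selectors, which keeps output stable and
  // makes incorrect generations fail at compile time.
  static readonly selectors = {
    email: '[data-testid="login-email"]',
    password: '[data-testid="login-password"]',
    submit: '[data-testid="login-submit"]',
  } as const;

  constructor(private readonly page: Page) {}

  email() {
    return this.page.locator(MemberLoginPage.selectors.email);
  }
}
```

Centralizing `data-testid`-based selectors is one common way to make a framework "optimized for AI-assisted test generation": the model's search space shrinks to a fixed vocabulary of accessors.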
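The three-way audit classification of the legacy Geb/Spock suites could be encoded as a simple triage function. The field names (`featureRetired`, `needsBrowser`) are hypothetical, standing in for whatever metadata the real audit collects:

```typescript
// Hypothetical triage of one legacy Geb/Spock test; the audit schema is
// invented for illustration.
type Disposition =
  | "convert-to-playwright"   // real user flow that needs a browser
  | "convert-to-api-or-unit"  // logic testable below the UI
  | "delete";                 // legacy functionality no longer in use

interface LegacyTestAudit {
  name: string;
  featureRetired: boolean;
  needsBrowser: boolean;
}

function triage(audit: LegacyTestAudit): Disposition {
  if (audit.featureRetired) return "delete";
  return audit.needsBrowser ? "convert-to-playwright" : "convert-to-api-or-unit";
}
```

Making the disposition a closed union type means the migration backlog can be grouped and counted mechanically per portal.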
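One way to derive the flakiness metric mentioned above from raw run history; the input shape is an assumption (real data would come from something like Playwright's JSON reporter or CI artifacts):

```typescript
// Sketch: per-test flakiness from run history. `TestRun` is an assumed
// shape, not a real reporter schema.
interface TestRun {
  testId: string;
  passed: boolean;
}

function flakinessRate(runs: TestRun[]): Map<string, number> {
  const byTest = new Map<string, { passes: number; fails: number }>();
  for (const run of runs) {
    const stats = byTest.get(run.testId) ?? { passes: 0, fails: 0 };
    if (run.passed) stats.passes++; else stats.fails++;
    byTest.set(run.testId, stats);
  }
  const rates = new Map<string, number>();
  for (const [id, { passes, fails }] of byTest) {
    // Fraction of runs that disagree with the majority outcome:
    // 0 means fully stable, approaching 0.5 means a coin flip.
    rates.set(id, Math.min(passes, fails) / (passes + fails));
  }
  return rates;
}
```

Ranking tests by this rate gives the prioritized list of candidates for quarantine or AI-driven repair.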
Requirements:
- 5+ years of experience in test automation engineering, as an SDET, or in software engineering with strong testing practices
- Demonstrated experience using AI coding tools (Claude Code, Cursor, GitHub Copilot, or similar) to generate and maintain code at scale. This is not optional; we need someone who has already built workflows around AI-assisted development
- Hands-on experience with Playwright or a comparable modern end-to-end testing framework. Experience building a framework from scratch is strongly preferred
- Proficiency in TypeScript or JavaScript with the ability to write production-quality code
- Experience with CI/CD pipelines (GitLab preferred), including writing pipeline configuration, test stages, and merge request workflows
- Solid understanding of REST API testing and when to use API-level tests vs. UI-level tests
- Experience migrating legacy test automation to a modern framework, including planning the transition and maintaining coverage during the migration
- Strong communication skills: able to explain technical trade-offs to non-technical stakeholders and write clear documentation
- A builder mindset. You think in terms of systems, feedback loops, and automation leverage, not individual test cases
- Comfort with ambiguity. Our AI-driven approach is new, and the right person will help define what good looks like
- Experience with Groovy, Spock, or Geb (the legacy framework you will be migrating from)
- Experience with contract testing (Pact or equivalent)
- Background in health insurance, benefits administration, or regulated industries
- Experience building or contributing to internal developer tooling or productivity platforms
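To illustrate the contract-testing idea referenced above: a consumer-side check of a provider's response shape, hand-rolled here for clarity (in practice a tool like Pact or a schema validator would do this). `MemberContract` and its fields are invented for the example:

```typescript
// Hand-rolled consumer-side contract check; `MemberContract` is an
// invented shape, not a real Gravie API.
interface MemberContract {
  id: string;
  planId: string;
  active: boolean;
}

// A type guard verifying the consumer's expectations about a provider
// response, without driving a browser through the UI.
function satisfiesMemberContract(body: unknown): body is MemberContract {
  if (typeof body !== "object" || body === null) return false;
  const b = body as Record<string, unknown>;
  return typeof b.id === "string"
    && typeof b.planId === "string"
    && typeof b.active === "boolean";
}
```

Checks like this run in milliseconds at a microservice boundary, which is why they are often a better fit than an end-to-end Playwright test for pure data-shape guarantees.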