Jecona is a Series B startup specializing in AI agents that assist Fortune 500 companies with various tasks. They are seeking a Lead Quality Engineer to enhance quality across their voice AI platform by defining standards, improving testing strategies, and leading a team of QA Engineers.
Responsibilities:
- Serve as the company-wide leader for quality engineering, shaping how quality is built into Jecona’s platform, championing a high bar across teams, and driving the practices and execution that make it real
- Lead, mentor, and support a team of 5 AI-enabled QA Engineers, raising the technical bar and guiding the team toward a more scalable quality engineering practice, with the opportunity to grow into a people management role
- Lead Jecona’s AI-native quality engineering strategy to scale coverage, accelerate validation, and surface failures earlier
- Build AI-driven testing workflows that let a small team deliver broad, scalable validation across the platform
- Own the quality feedback loop from internal dogfooding, customer A/B testing, customer feedback triage, and bug reporting, turning real-world usage into faster fixes, better test coverage, and stronger product decisions
- Partner with Product and Engineering early to improve testability, shape acceptance criteria, and build quality in from the start
- Build and improve automation frameworks, validation tooling, and CI/CD quality gates for reliable, repeatable releases
- Use AI and analytics to expand edge-case coverage, summarize failures, and generate insights that inform roadmap and release decisions
Requirements:
- 8+ years of experience in QA, Quality Engineering, or SDET roles, including ownership of test strategy and release quality in complex software systems
- An AI-native approach to quality engineering — you know how to leverage AI to move faster, increase coverage, reduce manual effort, and build smarter testing and validation workflows
- Curiosity about AI systems, conversational experiences, and the nuances of testing voice-driven product behavior
- Experience leading quality initiatives across teams and influencing engineers, product managers, and stakeholders around a shared quality bar
- Deep knowledge of modern testing practices, including automation, API and integration testing, regression strategy, and CI/CD-based quality workflows
- Hands-on technical ability to build tooling, frameworks, scripts, or systems that make quality scalable
- Demonstrated mastery of test design and risk-based coverage, with sound judgment about what should be automated versus explored manually
- Comfort using data, metrics, and analytics to evaluate quality and communicate trends, risk, and opportunities for improvement
- Excellent collaboration and communication skills, with the ability to work effectively across Engineering, Product, and customer-facing teams
- Hands-on experience in at least one modern language such as TypeScript, JavaScript, or Python
- Experience testing AI and LLM-powered systems, including evaluating non-deterministic behavior, hallucinations, prompt or workflow regressions, and other quality issues unique to generative products
- Experience working in an FDE, delivery, or professional services environment, with a strong understanding of customer-facing implementation workflows and real-world quality challenges