Silverchair is the premier independent platform partner for scholarly and professional publishers, dedicated to expanding the reach of the world’s most valuable knowledge. The Quality Assurance Engineer for ScholarOne is responsible for identifying standard test scenarios and writing and executing routine test scripts to ensure software meets specified standards for the global academic publishing community.
Responsibilities:
- Review user stories to understand acceptance criteria for ScholarOne platform features, working with development teams to clarify any questions or requirements
- Identify test scenarios based on user stories and functional requirements for manuscript processing and editorial workflow features
- Write comprehensive test scripts for both manual and automated testing of ScholarOne Manuscripts and Conferences functionality
- Execute test scripts for both new feature development and regression testing to ensure platform stability and reliability
- Document and track defects discovered during testing, providing detailed information to assist development teams in resolution
- Implement test automation and conduct a wide range of quality control tests, trials, and analyses across ScholarOne systems and applications
- Draft, revise, and approve test plans and scripts to ensure alignment with quality standards and software development strategy
- Participate in agile development processes including sprint planning, daily standups, and retrospectives
- Collaborate with product managers, developers, and other stakeholders to ensure comprehensive test coverage for scholarly publishing workflows
- Verify that software solutions meet the specific needs of publishers, editors, authors, and reviewers in the academic community
- Conduct usability testing to ensure an optimal user experience for diverse academic publishing stakeholders
- Support testing of integrations and API functionality that connects ScholarOne platforms with external systems
- Participate in release testing and deployment validation to ensure successful product launches
- Maintain test documentation and contribute to knowledge sharing within the QA team
- Stay current with testing methodologies and tools relevant to web-based applications and scholarly publishing workflows
- Maintain AI literacy and understanding of ethical AI applications in academic publishing and scholarly communications
- Demonstrate basic prompt engineering skills for effective AI tool utilization
- Exercise judgment in determining when AI assistance is appropriate for a given task and when it is not
- Ensure compliance with Silverchair's AI usage policies and data protection requirements
- Collaborate effectively with AI tools while maintaining human oversight and professional standards
- Utilize AI-powered test generation tools to create comprehensive test cases and scenarios for software validation
- Design and implement automated testing workflows that incorporate AI tools for efficient test execution and reporting
- Detect hallucinations in AI-generated code to ensure the accuracy and reliability of automated outputs
- Establish quality metrics for AI outputs to measure performance, accuracy, and effectiveness of AI-assisted testing processes
- Apply human judgment to evaluate and validate AI-generated test results, ensuring proper interpretation and decision-making
Requirements:
- Bachelor's degree in Computer Science, Engineering, or related technical field
- 2+ years of experience in software quality assurance or testing roles
- Strong understanding of software testing methodologies, test case design, and defect management
- Experience with manual testing techniques and test script development
- Knowledge of software development lifecycle and agile development practices
- Proficiency with defect tracking tools and test management systems
- Strong analytical and problem-solving skills with attention to detail
- Excellent written and verbal communication skills for documenting issues and collaborating with development teams
- Understanding of web-based applications, user interface testing, and cross-browser compatibility
- Ability to work collaboratively in cross-functional development teams
- Basic understanding of test automation concepts and tools
- Experience in scholarly publishing, academic technology, or related knowledge management industries
- Background with workflow management systems, editorial platforms, or manuscript management solutions
- Understanding of academic publishing processes, peer review workflows, and scholarly communication
- Experience testing software applications used by academic institutions, scholarly publishers, or research organizations
- Knowledge of editorial workflows, submission systems, or conference management platforms
- Experience with test automation tools and frameworks for web applications
- Familiarity with API testing and integration testing methodologies
- Understanding of accessibility testing and compliance requirements for academic software
- Experience with performance testing and load testing for high-volume applications
- Background in organizations serving the academic publishing or scholarly communications industry
- Professional certifications in software testing or quality assurance