Tanium is a leader in real-time, cloud-based endpoint management and security solutions. The company is seeking an AI Security Engineer to develop and execute testing strategies for AI-driven solutions, ensuring their robustness and reliability while collaborating with cross-functional teams to integrate safety measures into AI development processes.
Responsibilities:
- Develop and execute AI benchmarking and safety testing strategies for various AI models and applications
- Design and implement testing protocols to assess the performance, accuracy, and safety of AI solutions
- Identify potential risks associated with AI technologies and propose mitigation strategies
- Collaborate with cross-functional teams to integrate safety measures into AI development processes
- Analyze testing results and provide detailed reports on AI model performance and safety
- Stay current with industry standards, regulatory requirements, and best practices for AI safety testing
- Drive continuous improvement in AI safety testing methodologies and tools
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 8+ years of relevant work experience
- Expertise in common security testing methodologies
- Experience with regulatory compliance and industry standards for AI safety
- Familiarity with AI development frameworks and tools
- Strong understanding of AI and machine learning algorithms and the risks associated with them
- Experience with LLMs, AI Agents, and Natural Language Processing (NLP) techniques, with a focus on safety testing and benchmarking
- Strong analytical and problem-solving skills
- Excellent communication and collaboration abilities
- Detail-oriented individual with a deep drive to research the security of bleeding-edge technology