The Hartford is an insurance company committed to making a difference and shaping the future. They are seeking a Principal Security Engineer to lead the security direction for AI/GenAI capabilities, partnering with technology leaders to implement advanced security measures.
Responsibilities:
- Partnering with key stakeholders and technology partners to provide leadership, direction, and support for the company’s continued GenAI priorities, bringing a security perspective that is balanced against business imperatives and delivery timeframes
- Designing and developing architectures, frameworks, and requirements for the secure consumption of AI / GenAI capabilities across various patterns and usages, including internally maintained models, as well as Software as a Service (SaaS) solutions
- Performing threat modeling and risk assessments against GenAI use cases, recommending security requirements, and monitoring adherence to that guidance
- Working with development teams, data scientists, and security professionals to design and implement security measures that protect AI models against threats and vulnerabilities such as prompt injection, inference attacks, data poisoning, and model theft
- Representing the organization in leadership discussions, risk governance councils, and various AI / GenAI working teams
- Leading the cybersecurity team’s efforts to continuously monitor, assess and evaluate emerging security technologies, partnering with the enterprise Innovation team to proactively identify and recommend potential new capabilities
Requirements:
- 5+ years' experience as a security professional with a focus on security architecture responsibilities related to cloud security, threat modeling, identity and access management, authentication, network security, software engineering, cryptography, penetration testing, mobile security, and/or infrastructure services
- AI/ML Security Leadership: Proven expertise in securing Generative AI systems, with successful implementation of AI security frameworks
- Generative AI & LLMs: Hands-on experience leading AI/ML initiatives using large language models (LLMs) and platforms such as GCP Vertex AI, AWS Bedrock, SageMaker, and ChatGPT
- Cross-Platform AI Security: Deep knowledge of securing AI applications and platform products across major cloud providers (AWS, GCP, Microsoft Azure) and AI ecosystems, including Microsoft Copilot and other enterprise-grade LLMs
- Cloud Security Engineering: Experience designing and deploying robust cloud security architectures for AI/ML workloads across AWS and Google Cloud
- Threat Modeling & Risk Mitigation: Subject matter expert in identifying and mitigating AI-specific attack surfaces and threats
- End-to-End AI Security Strategy: Demonstrated ability to lead the development and execution of comprehensive AI/ML security strategies, integrating secure model development, deployment, and monitoring practices
- Candidate must be authorized to work in the US without company sponsorship
- Certified Information Systems Security Professional (CISSP), Certified Information Security Manager (CISM), and/or Cloud and AI-specific certifications are highly desirable