About this role
Role Overview
Develop guidelines for the usage, control, maintenance, and audit-readiness of information and computer resources used in the distributed processing environment.
Analyze and address customer security requirements for all business applications existing on a distributed platform.
Assist in the evaluation, selection, and installation of security software products for distributed platforms.
Document and formally report testing initiatives, along with remediation recommendations and validation.
Conduct research into real-world threat actor tactics, techniques, and procedures (TTPs) and apply that knowledge to red team exercises.
Assess new technologies, software applications, and devices for potential avenues of exploitation.
Develop exploits based on identified vulnerabilities.
Develop scripts, tools, or methodologies to enhance red team processes.
Work with teammates to consistently learn and share advanced skills and foster team excellence.
Drive architectural evolution to enable scalable, secure AI deployment.
Develop and operationalize custom ML/LLM models for cybersecurity use cases.
Implement AI-driven threat hunting, anomaly detection, and automated incident response.
Advance preventative, detective, and responsive security controls through applied AI innovation.
Partner with offensive security teams to strengthen red teaming using AI and adversarial simulations.
Evaluate emerging AI capabilities for cybersecurity and mature predictive, proactive threat response.
Requirements
Bachelor’s degree in computer science or a related field; applicable specialized training; or equivalent work experience
Certified Red Team Operator (CRTO), Certified Red Team Expert (CRTE), GIAC Red Team Professional (GRTP), or other red team certifications desired
Understanding of one or more compliance frameworks: NIST, FFIEC, GLBA, SOX, PCI, etc.
5-7 years of experience conducting penetration testing and red team engagements
Experience in planning and executing advanced attacks that evade network and endpoint security controls to demonstrate the potential adverse impact caused by a threat actor
Experience with implementing red team assessment methods, tools, and techniques
Experience in developing, extending, or modifying exploits and offensive security tools (shellcode, implants, reflective loaders, etc.); operational experience with exploitation, lateral movement, and persistence on Windows and Linux systems; bypassing preventative and detective endpoint and network security controls; and C2 frameworks (e.g., Cobalt Strike and Mythic)
Experience in AI red teaming, including adversarial testing, prompt injection analysis, and vulnerability assessments of large language models
Hands-on experience with agentic AI, LLMs, and custom ML model development
Familiarity with enterprise cloud AI platforms (e.g., Azure AI Foundry, AWS Bedrock, or GCP Vertex AI)
Experience leading large-scale technical initiatives, including security pipelines, data management, model deployment, and automation
Deep understanding of offensive security techniques and AI-enhanced threat modeling
Experience with AI-enhanced SOAR, adversarial ML, and AI security risk assessment
Proficiency with open-source AI frameworks (e.g., PyTorch, TensorFlow) and cloud AI tooling (e.g., SageMaker)
Tech Stack
AWS
Azure
Cloud
Cyber Security
Google Cloud Platform
Linux
PyTorch
TensorFlow
Benefits
Comprehensive health and wellness benefits
Retirement plans
Educational assistance and training programs
Income replacement for qualified employees with disabilities