Net2Source (N2S) is seeking an AI Security Engineer responsible for designing, securing, and deploying agentic AI systems within enterprise environments. This role blends security engineering, threat modelling, and applied AI system development to ensure intelligent agents operate reliably and securely.
Responsibilities:
- Architect security controls (identity, network, runtime isolation, sandboxing, policy enforcement) for agent-based AI systems
- Develop AI-specific threat models addressing agent misbehaviour, adversarial prompts, data leakage, model tampering, and supply chain risks
- Evaluate third-party AI tools, APIs, and agent frameworks for security compliance and risk
- Design and implement autonomous AI agents using LLMs, APIs, orchestration frameworks, and multi-agent systems
- Build agent behaviour logic including tool use, routing, planning, fallbacks, and guardrails
- Prototype and iterate on AI agents in production, refining reliability, safety, and output quality based on real-world usage
- Develop security monitoring pipelines for agent executions and automate detection of anomalous or harmful agent behaviour
- Implement secure MLOps practices, including model lineage tracking, training data protection, and integrity controls
- Perform vulnerability assessments, penetration testing, and red teaming of AI agents and underlying infrastructure
- Work closely with AI research, product, engineering, cloud, and cybersecurity teams to ensure agents are performant, safe, and compliant
- Translate business workflows into agent behaviours through scoping, discovery sessions, and requirements definition
- Establish secure development standards for agentic AI systems and contribute to enterprise AI governance frameworks
- Publish internal best practices for agent security, including prompt security guidelines, LLM threat mitigation, and safe tooling patterns
Requirements:
- 8+ years in cybersecurity engineering, application security, or cloud security
- Hands-on experience with LLMs, AI/ML pipelines, vector databases, and orchestration frameworks (AutoGen, CrewAI, LangGraph, etc.)
- Strong programming background (Python required; Java/C++ a plus)
- Expertise in threat modelling, identity & access management, secure API design, and network segmentation
- Familiarity with adversarial ML, model robustness testing, data poisoning defences, and model evaluation
- Experience deploying secure workloads in AWS/Azure/GCP
- Experience building autonomous agents or multi-agent systems
- Knowledge of AI governance, safety, and responsible AI frameworks
- Background in cryptography, secure CI/CD pipelines, MLOps, and privacy-preserving ML