IS3 Solutions is seeking an AI Forward Deployed Engineer to deploy, integrate, and operate AI solutions in real customer production environments. This role calls for a hands-on, customer-facing engineer who excels at problem-solving in high-stakes settings and acts as a bridge between data science, engineering, and operations teams.
Responsibilities:
- Lead customer-facing deployments of AI solutions from initial integration through production launch and stabilization
- Integrate AI systems into existing customer architectures, including applications, APIs, data pipelines, and infrastructure
- Design and support AI inference architectures optimized for scalability, resiliency, latency, and cost
- Troubleshoot complex production issues across model, application, infrastructure, and data layers
- Build automation and scripts to accelerate deployments, testing, diagnostics, and operational workflows
- Implement and tune observability, including monitoring, logging, tracing, and alerting for AI systems
- Conduct performance tuning and capacity planning for inference workloads
- Support deployments across cloud, on-prem, and hybrid environments, adapting solutions to customer constraints
- Apply security-aware implementation practices, including identity and access management, secrets handling, and data protection
- Act as a technical bridge between data science, engineering, and operations teams, ensuring smooth handoffs and shared understanding
- Serve as a technical escalation point during deployments and early production operations
- Document deployment patterns, operational runbooks, and best practices to improve repeatability and reliability
- Provide feedback from real-world deployments to inform product and platform improvements
Requirements:
- Strong experience deploying and operating production systems, ideally with AI or ML components
- Proven ability to troubleshoot and resolve complex system issues under time pressure
- Experience designing or supporting AI inference pipelines
- Proficiency in automation and scripting (e.g., Python, Bash, or similar)
- Solid understanding of observability and performance tuning in distributed systems
- Experience working across cloud, on-prem, or hybrid infrastructure environments
- Working knowledge of security best practices for production systems
- Strong communication skills and comfort working directly with customers and cross-functional teams
- Experience with LLM-based systems, RAG architectures, or agent-based workflows
- Familiarity with containerization, orchestration, and CI/CD pipelines
- Background in SRE, platform engineering, DevOps, or systems engineering
- Experience operating in fast-moving or ambiguous environments
- Prior customer-facing technical or forward-deployed role experience