EXLdata.ai is EXL's flagship multi-agent orchestration platform that automates the end-to-end lifecycle of enterprise data pipelines. EXL is seeking a Forward Deployment Engineer responsible for the successful deployment and scaling of EXLdata.ai within client environments, working closely with client teams and product engineering.
Responsibilities:
- Deploy EXLdata.ai in client-owned AWS/Azure/GCP environments
- Configure networking, security, CI/CD, Kubernetes, API gateways, and identity integration
- Troubleshoot environment, infrastructure, IAM, and pipeline-related issues
- Lead cloud-level optimizations (scaling, cost, performance tuning)
- Build, customize, and optimize data pipelines using PySpark, SQL, Databricks, Snowflake, or native hyperscaler data services
- Integrate platform agents into client workflows (Data Migration, Data Quality (DQ), DataOps, Annotation)
- Assist client SMEs in onboarding data sources, targets, and transformations
- Serve as the technical anchor for first-of-a-kind deployments at each client
- Ensure clients see measurable value from agent-driven automation (SLA reduction, pipeline acceleration, DQ uplift, migration speed)
- Provide hands-on support across discovery, configuration, runbooks, and UAT
- Work with product engineering on integrating new GenAI agents into client pipelines
- Tailor agent behaviors, triggers, and workflows for domain-specific use cases
- Share field insights that shape the EXLdata.ai agent roadmap
- Act as the “voice of the customer” for the EXLdata.ai product team
- Identify enhancements, feature gaps, and new accelerator ideas
- Participate in internal sprints, tooling improvements, and platform hardening
- Support deployments in EXL-hosted private cloud environments
- Serve as the first line of operational support for premium clients
- Own operational reliability, monitoring, and support SLAs
Requirements:
- 6–12+ years of experience as a Senior Data Engineer, Forward Deployment Engineer, or Platform Engineer
- Strong hands-on experience with at least one hyperscaler (AWS, Azure, or GCP)
- Deep expertise in:
  - PySpark, SQL, Python
  - Databricks / Snowflake (at least one required; both preferred)
  - Cloud data services (Kinesis, Glue, Redshift, Synapse, BigQuery, Dataproc, etc.)
  - Kubernetes, Docker, CI/CD
  - IAM, VPC, private networking, secrets management, API management
- Demonstrated ability to work directly with client engineering teams
- Comfortable running design discussions, debugging sessions, and deployment workshops
- Strong communication skills; able to simplify technical topics for business audiences
- Ability to operate independently with a consulting mindset and ownership mentality
- Exposure to LLMs, agent tooling (LangChain, LangGraph, CrewAI, etc.), or willingness to learn fast
- Strong interest in how AI can automate data engineering and governance
- Can-do attitude; thrives in ambiguity
- Fast learner; bias for action
- Team player who collaborates across product, engineering, and client teams
- Customer-first orientation and passion for delivering measurable outcomes