Lead the hands-on technical implementation and deployment of Viaduct’s software solutions within large-scale enterprise environments.
Make critical data integration decisions and build robust connectors across a variety of data types and systems (SQL, NoSQL, APIs, etc.).
Create and support batch, incremental, and real-time data pipelines to ensure high-quality data ingestion from client systems.
Set up and deploy LLM-based and agentic systems in production settings. You will be responsible for prompt engineering, performance testing, and iterating on agent behavior to ensure reliability and accuracy.
Establish and automate validation and cleaning processes to ensure data quality across various client integrations.
Work directly with customers across a variety of contexts to troubleshoot technical hurdles and ensure successful solution adoption.
Requirements
Proven track record of delivering software implementations at a large enterprise software company (e.g., Salesforce, ServiceNow, Oracle, C3 AI, Palantir) or a leading consulting firm (e.g., Accenture, Deloitte, McKinsey).
Strong proficiency in Python; SQL proficiency is a plus.
Direct experience setting up and deploying LLM-based systems and agentic frameworks in practical, real-world settings. You know how to test and iterate on prompt performance.
4+ years of experience in data engineering or implementations, including experience with workflow schedulers (Airflow, Prefect, Argo, etc.) and distributed file systems (S3, HDFS, etc.).
Experience with incremental or real-time processing (Delta Lake, Apache Hudi, Kafka, Spark Streaming).
You have delivered multiple implementation projects across a variety of relevant contexts and value "getting it done" over theoretical perfection.
A degree in Computer Science or a related field is helpful, but we value a history of successful, scrappy implementation projects over fancy credentials or advanced degrees.