About this role
Role Overview
Architect, build, and manage robust data pipelines and infrastructure that support large-scale AI and machine learning initiatives.
Develop and optimize integrations between core AI services, including AWS Bedrock, LLMs, and internal platforms (Salesforce, Tableau, Slack) for enhanced operational performance.
Collaborate with applied scientists and ML engineers to refine model deployments and feature engineering, and to ensure seamless operationalization of predictive and conversational AI solutions.
Implement advanced data processing and real-time analytics workflows for conversational intelligence, sentiment analysis, and predictive troubleshooting.
Ensure high-quality data management, emphasizing security, compliance, data governance, and maintainability within AI-driven environments.
Drive tooling and infrastructure solutions that facilitate efficient testing, debugging, monitoring, and continuous improvement of AI applications.
Establish comprehensive observability frameworks (logs, metrics, alerts) to maintain system reliability, performance, and operational insights.
Participate actively in technical discussions, code reviews, and strategic planning to align AI infrastructure development with business goals.
Prototype and develop intuitive, user-friendly UI screens and dashboards that enable customer success teams to leverage AI-driven insights effectively.
Requirements
3-7 years of professional experience focused on data engineering, specifically supporting AI, ML, or NLP-driven systems.
Deep expertise in cloud technologies (AWS highly preferred), including hands-on experience with AWS Bedrock, Redshift, MongoDB, and S3.
Proficiency in Java and Spring Boot, and experience with orchestration tools (Airflow, Prefect) for managing complex AI-driven workflows.
Demonstrated experience integrating and operationalizing Large Language Models (LLMs) and machine learning systems within production environments.
Strong understanding of microservices, RESTful API development, and real-time data streaming (Kafka, Kinesis).
Robust experience with observability tools and CI/CD pipelines, ensuring the reliability and continuous deployment of AI services.
Exceptional problem-solving abilities, comfortable navigating ambiguity, and adept at collaborating across technical and business-focused teams.