Collaborate with Data architects, Enterprise architects, Solution consultants, and Product engineering teams to capture customer data integration requirements, conceptualize solutions, and build the required technology stack
Collaborate with enterprise customers' engineering teams to identify data sources, profile and quantify data source quality, develop data preparation tools, and build data pipelines that integrate customer and third-party data sources with Adobe solutions
Develop new features and improve existing integrations with the customer's data ecosystem
Encourage the team to think outside the box and overcome engineering obstacles while incorporating innovative design principles
Collaborate with a Project Manager to forecast and bill time for customer solutions
Develop and improve features that incorporate LLMs, AI agents, and/or multi-agent orchestration for dynamic data integration, workflow automation, and real-time business value
Requirements
Experience as an enterprise Data Engineer from a consulting background
AWS Certified Data Engineer – Associate or AWS Certified Cloud Practitioner
10+ years of experience building, operating, and maintaining fault-tolerant, scalable data processing integrations on AWS
10+ years of experience with the Python programming language, preferably including PySpark
Software development experience with Apache Airflow, MongoDB, and MySQL
4+ years working with AWS AI/ML and agentic services such as SageMaker and Bedrock, and with vector databases (OpenSearch, Pinecone)
Demonstrated experience (or significant exposure) designing, integrating, and scaling agentic AI systems
Strong capacity to manage multiple concurrent projects is a must
Experience using Docker or Kubernetes is a plus
BS/MS degree in Computer Science or equivalent industry experience
Ability to identify and resolve problems in production-grade, large-scale data processing workflows