Drive the execution of multiple business plans and projects by identifying customer and operational needs
Develop and communicate business plans and priorities
Provide supervision and development opportunities for associates by selecting, training, and mentoring them and assigning duties
Evaluate the ongoing effectiveness of current plans, programs, and initiatives
Promote and support company policies, procedures, mission, values, and standards of ethics and integrity
Consult with business partners, managers, co-workers, or other key stakeholders
Solicit, evaluate, and apply suggestions for improving efficiency and cost-effectiveness
Participate in and support community outreach events
Transform and evolve Walmart’s core data platform using Agentic AI
Develop data platforms that scale to billions of events per day
Influence multiple teams and organizations through technical leadership and clear architectural direction without direct authority
Mentor Staff and Senior Engineers across the Walmart Global Tech AI and Data team
Design, build, test, and deploy cutting-edge solutions at scale, impacting a multi-billion-dollar business
Collaborate with the product owner and technical lead on the overall delivery of assigned projects and enhancements
Requirements
Bachelor’s degree in Computer Science or a related discipline with 12+ years of experience
Minimum 10 years of experience in Big Data and distributed computing
Minimum 10 years of programming experience in Java, Scala, and Python, including frameworks and runtimes such as Spring Boot and Node.js
Strong experience with cloud‑native ecosystems (GCP), including BigQuery, Serverless, Pub/Sub, or equivalent
Expertise in batch and streaming technologies (Kafka, Spark Structured Streaming, Flink, Druid, etc.)
Experience working with hybrid architectures that support both real‑time operations and analytical workloads
Strong understanding of semantic modeling, embeddings, knowledge graphs, and vector indexing
Experience supporting RAG, context‑aware AI, and agent orchestration through data platform design
Ability to reason about schema design, latency, storage formats, and their impact on AI behavior and outcomes
Fluency in Python, Java, or Scala
Deep experience with Spark/PySpark and large‑scale SQL optimization
Strong systems thinking, performance tuning, and operational excellence mindset
Demonstrated ability to lead through influence in complex, matrixed organizations
Executive‑level communication skills, with the ability to connect technical strategy to business and customer impact
Proven experience building pipelines on Big Data technologies – Hadoop, Spark, Hive, Presto, Kafka, Airflow, and the GCP suite of data tools
Deep understanding of the Hadoop ecosystem and strong conceptual knowledge of Hadoop architecture components
Strong knowledge of deploying and managing applications in GCP
Strong scripting skills for processing large amounts of data, and high proficiency in SQL