STRATEGIC STAFFING SOLUTIONS HAS AN OPENING!
This is a contract opportunity with our company that must be worked on a W2 only; there is no C2C eligibility for this position. Visa sponsorship is available! The details are below.
Beware of scams. S3 never asks for money during its onboarding process.
Job Title: Lead Cloud Data Platform Engineer (AI/Data Engineering)
Contract Length: 12+ Month contract
Schedule: 3 days in office / 2 days remote
Locations: Irving, TX 75039; Charlotte, NC; Chandler, AZ; Columbus, OH; Des Moines, IA; Minneapolis, MN
Ref# 246165
We are seeking a Lead Cloud Data Platform Engineer to design and build modern, AI-enabled data platforms in a large-scale data environment. This role focuses on developing cloud-native data solutions, enabling advanced analytics, and supporting the evolution of data capabilities within a hybrid cloud ecosystem.
The engineer will play a key role in migrating on-premises systems to cloud platforms while leading the design and hands-on implementation of scalable data processing solutions. This position operates within an agile, collaborative environment and partners closely with engineering and product teams to deliver high-impact data capabilities.
Key Responsibilities
- Implement and operationalize AI-enabled data capabilities on cloud platforms to ingest, transform, and distribute data for large-scale applications
- Leverage AI and agent-based frameworks to automate data management, governance, and data consumption processes (data pipelines, data quality, metadata, and compliance)
- Collaborate with principal engineers, product managers, and data engineers to roadmap, plan, and deliver prioritized data capabilities within a matrix organization
Required Qualifications
- Experience working in large data environments
- Demonstrated recent experience with AI tools and frameworks, including LangChain, LangGraph/ADK, agentic frameworks, RAG, GraphRAG, and MCP, for agent-based data capabilities
- 5+ years of data engineering experience with hands-on cloud data solutions, including Spark-based ingestion and processing
- 3+ years of experience with data lakehouse architecture and design, including hands-on experience with Python, PySpark, Kafka, Airflow, Google Cloud Storage, BigQuery, Dataproc, and Cloud Composer
- Hands-on experience developing data flows using Kafka, Flink, and Spark Streaming
Desired Qualifications
- Experience using AI for code generation, prompt engineering, or context engineering
- Strong background in cloud-based data lakes, data warehouses, and automated data pipelines
- Public cloud certifications (Google Cloud Platform, Azure, or AWS data engineering certifications)
- Experience with web-based UI development using React and Node.js