Design and build scalable, reusable, and governed data pipelines and curated datasets on an AWS-based lakehouse architecture
Help evolve a transactional lakehouse using open table formats (e.g., Apache Iceberg)
Implement engineering standards that keep the platform interoperable across query engines and cloud providers
Work across batch and streaming, contribute to a medallion (raw/silver/gold) pattern, and ensure datasets are discoverable, trustworthy, and platform-agnostic through metadata, lineage, and governance integration.
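To make the medallion (raw/silver/gold) pattern mentioned above concrete, here is a minimal, purely illustrative sketch in plain Python. A real pipeline on this platform would use Spark with Apache Iceberg tables on AWS; the data, function names, and layering rules below are hypothetical and exist only to show the idea of each layer's contract.

```python
# Toy medallion flow: raw (as-ingested) -> silver (validated, typed)
# -> gold (curated aggregate). All names and data are illustrative.

raw = [  # raw layer: events exactly as ingested, possibly malformed
    {"order_id": "1", "amount": "10.50", "region": "EU"},
    {"order_id": "2", "amount": "bad",   "region": "US"},
    {"order_id": "3", "amount": "4.25",  "region": "EU"},
]

def to_silver(records):
    """Silver layer: enforce types; drop rows that fail validation."""
    silver = []
    for r in records:
        try:
            silver.append({
                "order_id": int(r["order_id"]),
                "amount": float(r["amount"]),
                "region": r["region"],
            })
        except ValueError:
            continue  # in practice, route malformed rows to quarantine
    return silver

def to_gold(records):
    """Gold layer: analytics-ready aggregate (revenue per region)."""
    totals = {}
    for r in records:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

silver = to_silver(raw)
gold = to_gold(silver)
print(gold)  # {'EU': 14.75}
```

The key property each layer guarantees (raw is immutable, silver is validated, gold is consumption-ready) is what makes the resulting datasets discoverable and trustworthy downstream, regardless of the engine that produced them.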
Requirements
7 to 10 years of experience in Data Engineering
Strong communication and stakeholder collaboration skills
B.Tech/M.Tech in any specialization
Tech Stack
Apache
AWS
Cloud
Benefits
Enjoy your best years with our retirement savings plan
Have peace of mind and body with our health insurance
Make yourself a priority with flexible schedules, parental leave and our holiday purchase scheme
Drive forward your career through professional development opportunities
Achieve your personal goals with our Employee Assistance Programme.