Design, develop, and maintain data pipelines, integrations, and data processing solutions within enterprise data platforms.
Contribute to end‑to‑end delivery of data engineering initiatives, including requirements understanding, development, testing, deployment, and production support.
Assist in the implementation and adoption of Data Virtualization, Data Quality frameworks, Metadata Management, and Data Lineage capabilities.
Develop scalable and reusable data integration patterns to support batch and near real‑time data processing use cases.
Support modernization of legacy data processes by reducing manual interventions and improving automation and monitoring.
Follow and help enforce enterprise data standards, best practices, and governance guidelines to ensure data quality, security, and compliance.
Collaborate with architects, platform teams, governance, security, and business stakeholders to deliver aligned data solutions.
Troubleshoot and resolve performance, reliability, and scalability issues in production data platforms.
Participate in technical discussions, design reviews, and continuous improvement initiatives across the data engineering team.
Requirements
Bachelor's or postgraduate degree in any discipline.
6 to 8 years of experience.
Solid understanding of data engineering fundamentals, including data ingestion, transformation, integration, orchestration, and data lifecycle management.
Strong proficiency in SQL and good programming skills in Python for building and maintaining data pipelines.
Hands-on experience with data integration and ETL/ELT tools such as Informatica, supporting batch and near real-time processing.
Working knowledge of data virtualization concepts, with exposure to platforms such as Starburst (Trino/Presto) preferred.
Experience with relational databases such as Oracle and familiarity with data warehouses and data lake architectures.
Understanding of data modeling concepts, including conceptual, logical, and physical models (dimensional and normalized).
Exposure to distributed data systems, APIs, and messaging or event-driven integration patterns.
Basic to intermediate understanding of Data Quality, Metadata, and Data Lineage principles.
Familiarity with at least one cloud platform (Azure, AWS, or GCP) and cloud-based data engineering practices.
Working knowledge of DevOps practices, CI/CD pipelines, version control systems, and automation for data solutions.
Awareness of data governance, security, privacy, and compliance requirements in enterprise environments.
Strong collaboration and communication skills, with the ability to work effectively in cross-functional teams.
Tech Stack
AWS
Azure
Cloud
ETL
Google Cloud Platform
Informatica
Oracle
Python
SQL
Benefits
Competitive benefits to support physical, emotional, and financial well-being.