Miratech is a global IT services and consulting company that supports digital transformation for large enterprises. The Senior Data Engineer leads the design and maintenance of scalable data architecture and pipelines, ensuring data integrity and guiding the data engineering team.
Responsibilities:
- Design, build, and scale robust ETL pipelines to support complex data workflows while ensuring high performance, reliability, and adaptability to evolving business needs
- Automate and manage data ingestion from diverse sources (databases, APIs, cloud platforms), ensuring system resilience, fault tolerance, and failover readiness
- Optimize data storage, processing, and retrieval layers to balance performance, scalability, and cost efficiency across the data platform
- Modernize and enhance legacy data systems by identifying gaps, implementing architectural improvements, and aligning solutions with future business requirements
- Lead technical excellence within the data engineering function through mentorship, code reviews, best-practice enforcement, and adoption of advanced tools and frameworks
- Ensure end-to-end data quality, integrity, and governance by implementing validation, monitoring, testing, and compliance-focused data controls
- Collaborate cross-functionally with analytics, product, engineering, DevOps, and business stakeholders to translate requirements into scalable data models and transformations
- Drive a data-driven culture and long-term data strategy by enabling self-service analytics, maintaining clear documentation, leading training initiatives, and contributing to architecture roadmaps and governance policies
Requirements:
- 7+ years of experience in IT, including 5+ years of hands-on Data Engineering experience
- Bachelor's degree in Data Engineering, Computer Science, Data Analytics, or a related field required; Master's degree preferred
- Advanced proficiency in Python and SQL, with proven experience in ETL pipeline development
- Experience with cloud data platforms such as AWS, GCP, or Azure, including cloud-native data engineering tools and services
- Strong understanding of modern data architecture patterns, including batch processing, streaming, and event-driven systems, along with industry best practices
- Demonstrated ability to optimize data workflows, troubleshoot complex data issues, and ensure high performance, scalability, and reliability of data systems
- Strong project management skills, including the ability to work independently, manage competing priorities, and deliver high-quality outcomes in a fast-paced environment
- Hands-on experience with modern data platforms and tools, such as data warehouses/lakehouses (Snowflake, Redshift, BigQuery), big data and streaming frameworks (Spark, Kafka), and data orchestration tools (Airflow or equivalents)
- Previous experience working with large-scale healthcare or insurance organizations