Mogi I/O, a company focused on OTT, podcast, and short-video applications, is seeking a highly experienced Data Engineer to design and optimize scalable cloud-based data platforms on AWS. The role emphasizes real-time and batch data processing to support enterprise analytics and reporting, enabling data-driven decision-making across the organization.
Responsibilities:
- Design, develop, and maintain scalable data pipelines using AWS services
- Build and manage S3-based data lakes for batch and streaming workloads
- Implement and optimize Snowflake data models for analytics and reporting
- Develop real-time data streaming pipelines using Apache Kafka
- Integrate enterprise systems using MuleSoft APIs and connectors
- Deploy and manage containerized data workloads on Amazon EKS
- Write, optimize, and maintain complex SQL queries using Amazon Athena
- Ensure data quality, reliability, and performance across pipelines
- Support reporting and visualization use cases using Snowflake and Tableau
- Collaborate with architects, analytics teams, and business stakeholders
Requirements:
- 12+ years of experience in data engineering or related roles
- Strong hands-on experience with AWS (S3, Athena, EKS, IAM, monitoring)
- Proven expertise in Snowflake data modeling, tuning, and reporting
- Experience building real-time pipelines using Apache Kafka
- Hands-on experience with MuleSoft for API-based enterprise integrations
- Experience deploying and managing workloads on Amazon EKS
- Strong SQL skills and query optimization using Athena
- Experience designing and operating S3-based data lake architectures
- Exposure to analytics and visualization tools such as Tableau
- Experience with Python- or SQL-based ETL development
- Experience building CI/CD pipelines for data engineering workloads
- Experience with Infrastructure as Code using Terraform or CloudFormation
- Experience working with large-scale, high-volume data platforms
- Familiarity with monitoring, logging, and data reliability practices