JMD Technologies Inc. is seeking a Data Engineer to design and optimize scalable data pipelines and support large-scale data processing initiatives. This role focuses on building efficient ETL workflows and leveraging distributed systems to drive data reliability and performance.
Responsibilities:
- Develop and maintain ETL pipelines for large-scale data processing
- Build solutions using Python and Apache Spark
- Implement and optimize data models and distributed computing frameworks
- Apply best practices in data structures, algorithms, and design patterns
- Collaborate with cross-functional teams to support data-driven initiatives
Requirements:
- 5–10 years of experience in data engineering or a related field
- Strong expertise in Python, Apache Spark, and R
- Hands-on experience with ETL processes, data modeling, and distributed systems
- Solid understanding of software design patterns and algorithms
- Bachelor's degree in Computer Science, Computer Engineering, or a related field