Tags: Cloud, ETL, ELT, Rust, SQL, C++, C, Data Engineering, S3, Git, Version Control
About this role
Restructure and maintain existing data pipelines to increase operational flexibility, so that non-standard data formats and incoming project requests can be processed efficiently.
Provide dedicated support and maintenance for current data tooling to eliminate workflow bottlenecks.
Lead the next phase of data infrastructure by designing and building advanced tools for granular data visualization and manipulation.
Implement and manage robust metadata frameworks to provide a comprehensive understanding of existing data assets.
Monitor, troubleshoot, and resolve issues within the data architecture to ensure reliability and performance.
Requirements
Bachelor’s degree in Computer Science, Software Engineering, Data Engineering, or a related quantitative field, or equivalent practical experience.
Proven track record in data engineering, with specific experience in building, maintaining, and restructuring complex data pipelines (ETL/ELT).
Strong programming skills in C++ and advanced proficiency in SQL.
Experience with Rust is advantageous; candidates without prior Rust experience will be considered provided they demonstrate strong technical aptitude and a willingness to learn on the job.
Hands-on experience with cloud object storage, specifically Amazon S3, and with version control systems (Git) for managing codebase changes.
Familiarity with modern data processing frameworks and database management systems.
Familiarity with metadata management practices and data visualization concepts.
Strong analytical skills with a proven ability to identify bottlenecks and optimize systems for speed, quality, and scale.