Responsibilities
Assist in developing and maintaining data pipelines and integrations that support business processes and analytics use cases.
Support the design and implementation of data workflows, partnering with engineers and analysts to understand requirements and translate them into technical solutions.
Write and maintain clean, modular code in Python and SQL to support data ingestion, transformation, and integration tasks.
Assist in testing data pipelines, including unit testing and validation, to ensure data quality and reliability.
Support CI/CD processes by helping validate and deploy data solutions in development and staging environments.
Monitor data pipelines and assist in troubleshooting issues, identifying root causes, and implementing fixes.
Participate in agile ceremonies including sprint planning, standups, and reviews to support team collaboration and learning.
Document data processes, workflows, and technical solutions to support team knowledge sharing and maintainability.
Requirements
Currently pursuing a Bachelor’s degree in Computer Science, Data Engineering, Information Technology, or a related field;
Basic programming skills, preferably in Python;
Familiarity with SQL and working with data in relational databases;
Foundational knowledge of data structures, data processing, or ETL concepts;
Exposure to cloud platforms such as AWS, Azure, or GCP through coursework or projects is preferred;
Strong problem-solving skills and willingness to learn new technologies;
Ability to work in a collaborative, team-oriented environment;
Strong verbal and written communication skills;
Proficiency in Microsoft Office Suite (Excel, Word, PowerPoint);
Ability to work up to 40 hours per week for the duration of the internship.