Per Scholas is a nonprofit organization that provides technology training to individuals and communities. It is seeking a Data Engineering Instructor to deliver high-quality Data Engineering instruction, manage student performance, and facilitate hands-on labs while integrating AI tools into the curriculum.
Responsibilities:
- Deliver High-Quality Instruction: Lead daily live technical training in person, covering the full Data Engineering lifecycle, including Linux, Python, SQL, Big Data frameworks, and Cloud technologies, and be prepared to pivot seamlessly to Zoom-based remote instruction if business contingencies require it
- Manage Student Performance: Monitor individual progress, grade knowledge-based assessments (KBAs) via Canvas and skills-based assessments (SBAs) via HackerRank, manage attendance, and provide one-on-one tutoring for learners who need additional support
- Facilitate Hands-On Labs: Guide students through practical exercises, troubleshoot code issues in real time, and oversee the final Capstone Project
- Leverage AI Tools: Integrate Google Gemini (primary) and other tools (ChatGPT, GitHub Copilot) into the classroom to demonstrate code generation and debugging and to explain complex technical concepts
- Curriculum & Platform Support: Maintain lesson plans and manage external learning platforms, including enrolling learners in Coursera and tracking their progress through the Google Data Analytics Professional Certificate modules (specifically Tableau)
- Administrative Duties: Update the LMS (Canvas), Salesforce records, and communication channels (Slack) with attendance, grades, and progress notes daily
Requirements:
- Bachelor's Degree in Computer Science, Data Science, or a related field
- OR Graduate of a rigorous Technical Bootcamp (Per Scholas alumni preferred)
- OR Equivalent Industry Experience in Data Engineering, Data Analytics, or Software Development
- Minimum 1 year of teaching, training, or mentoring experience in a technical environment
- Proficiency in using Google Gemini to assist in coding, debugging, and instructional workflows
- Proficiency in Unix/Linux command-line navigation and basic shell scripting
- Experience with Git and GitHub for version control and collaboration
- Advanced proficiency in Python, including data manipulation and analysis using NumPy and Pandas
- In-depth knowledge of Relational Databases, SQL programming, and database design principles
- Practical experience with Apache Spark, Spark SQL, and ETL/ELT data warehousing processes
- Strong proficiency in Tableau (required for specific curriculum modules) and Python visualization libraries such as Matplotlib and Seaborn
- Experience with AWS services relevant to data analytics (e.g., Redshift, Athena, QuickSight, AWS Blueprint)
Preferred Qualifications:
- 3+ years of experience in adult education or bootcamp-style training
- Familiarity with Docker and Kubernetes
- Knowledge of CI/CD tools and processes such as Jenkins
- Familiarity with Hadoop frameworks
- AWS Certified Data Analytics – Specialty, Google Data Analytics Professional Certificate, or Databricks Certified Associate