Manage an assigned team through day-to-day support tasks
Oversee development plans for the team and provide mentorship
Provide guidance and peer review
Support pipelines end to end
Build and deploy enhancements, new developments and new data pipelines
Identify and drive optimisation opportunities across the environment
Manage the handover of new applications, ensuring that required standards and practices are met
Improve recovery time in the event of production failures
Test prototypes and oversee handover to the Data Operations teams
Attend and contribute to regular team and user meetings
Perform hands-on coding of Hadoop applications
Support high-speed querying of data
Requirements
3+ years’ experience working in a big data environment, building and optimising big data pipelines, architectures and data sets with technologies such as Java, Scala, Python, Hadoop, Apache Spark and Kafka
Minimum of one year’s experience with the Scala programming language
Minimum of one year’s experience managing a team
Cross-domain knowledge
Familiarity with the Hadoop ecosystem, its components and its underlying concepts
Solid working experience in big data development using SQL or Python
Experience in Big Data development using Spark
Experience in Hadoop, HDFS and MapReduce
Experience in database design, development and data modelling
Good knowledge of back-end programming, specifically Java
Experience developing in a Linux environment and familiarity with its basic commands
Understanding of Cloud technologies and migration techniques
Understanding of data streaming and the intersection of batch and real time data
Ability to write reliable, manageable, and high-performance code
Basic knowledge of SQL, database structures, principles and theories