Work as part of an agile team to provide application support for specified tech products & services
Provide first-line support & basic optimisation, administration, configuration, maintenance & troubleshooting
Manage dataset changes and optimisation activities across all applications
Support the applications on the Hadoop ecosystem
Identify optimisation opportunities
Build enhancements and develop new functionality
Requirements
2+ years’ experience working in a Big Data environment, optimising and building big data pipelines, architectures and data sets using e.g. Java, Scala, Python, Hadoop, Apache Spark and Kafka
Familiarity with the Hadoop ecosystem and its components, with good knowledge of core Hadoop concepts
Solid hands-on experience in Big Data development utilising SQL or Python
Experience in Big Data development using Spark
Experience in Hadoop, HDFS and MapReduce
Experience in database design, development and data modelling
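For context, the MapReduce model named in the requirements above can be sketched in plain Python as a map phase that emits key-value pairs and a reduce phase that aggregates them; the sample data and function names here are illustrative only, not part of any specific codebase:

```python
from collections import defaultdict

# Map phase: emit a (word, 1) pair for every word in the input lines.
def map_phase(lines):
    for line in lines:
        for word in line.lower().split():
            yield word, 1

# Reduce phase: sum the counts emitted for each distinct key.
def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

lines = ["big data pipelines", "big data architectures"]
result = reduce_phase(map_phase(lines))
```

In Hadoop, the same two phases run distributed across a cluster over HDFS blocks, but the programming model is exactly this map-then-reduce shape.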
Tech Stack
Hadoop
HDFS
Java
Kafka
MapReduce
Python
Scala
Spark
SQL
Advantageous
Good knowledge of back-end programming, specifically Java
Experience with development in a Linux environment and familiarity with its basic commands
Ability to write reliable, manageable, and high-performance code
Basic knowledge of SQL, database structures, principles and theories
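As an illustration of the basic SQL and database-structure knowledge listed above, a minimal sketch using Python's built-in sqlite3 module; the table name, columns and sample values are hypothetical:

```python
import sqlite3

# In-memory database; schema and data are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, source TEXT, value REAL)"
)
conn.executemany(
    "INSERT INTO events (source, value) VALUES (?, ?)",
    [("sensor_a", 1.5), ("sensor_a", 2.5), ("sensor_b", 4.0)],
)
# Aggregate per source: the kind of GROUP BY query behind basic reporting.
rows = conn.execute(
    "SELECT source, SUM(value) FROM events GROUP BY source ORDER BY source"
).fetchall()
conn.close()
```

The same SELECT/GROUP BY pattern carries over directly to SQL-on-Hadoop engines such as Spark SQL, only the execution layer differs.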