Work across the full stack depending on where you can drive the highest impact: from ETL pipelines to data warehousing to visualisations, as well as testing and cloud infrastructure.
Work with your team to design and implement features and services for the data analytics solution, and keep the design choices well-documented and explained
Train and mentor other data engineers on data engineering best practices
Use your experience to create and improve predictive data quality checks
Champion the DataOps culture and support data systems in production, including participating in our out-of-hours on-call rota
Communicate with customers to discuss new use cases or help them identify and fix data issues, and manage the end-to-end setup for them: from raw data analysis, to defining and implementing data ingestion/enrichment pipelines, to creating data visualisations
Requirements
Good understanding of data pipeline design and implementation using Databricks and Python (including Python-based frameworks such as PySpark)
Good visualisation skills using Sisense and/or similar tools
Good experience with SQL
Good understanding of data management and/or data governance (e.g. verifying that data arrives with the expected volume, schema, etc.)
Nice to have:
Good understanding of microservices architecture principles
Experience with Kedro on Databricks
Exposure to AWS or another cloud provider
Exposure to Airbyte for reading from multiple different data sources