Utilize advanced analytical techniques to extract insights from large datasets
Leverage skills in data manipulation, visualization, and statistical modelling to support clients in solving complex business problems
Develop a deeper understanding of the business context and how it is changing
Interpret data to inform insights and recommendations
Work as part of a team of problem solvers, helping to solve complex business issues from strategy to execution
Requirements
4+ years of experience in data engineering or architecture, with an emphasis on data analytics and reporting
Experience using Azure, AWS, or Google Cloud services to analyze data, build data models, and generate visualizations
4+ years of experience in the design and development of large-scale data solutions using data modelling, data munging, and data management toolsets such as SQL Server, Oracle DB, Azure Synapse, Redshift, Alteryx, Databricks, Azure Data Factory, AWS Glue, Informatica, Collibra, and Denodo
Proficiency in big data technologies and tools such as Spark/PySpark, Hadoop, Hive, and Kafka
Experience building data processing pipelines to integrate large datasets from multiple sources and formats
Experience designing and developing data pipelines from ingestion to consumption within a big data architecture, using Python, Java, SQL, etc.
Experience building data pipelines in both streaming and batch modes
Experience with database management, migration, and transformation to cloud-native databases such as Cloud SQL and Spanner
Experience developing with message queues (e.g., Cloud Pub/Sub), stream processing, and highly available, fault-tolerant techniques
Experience with Git and CI/CD pipelines
Tech Stack
Amazon Redshift
AWS
Azure
Cloud
Hadoop
Informatica
Java
Kafka
Oracle
PySpark
Python
Spark
SQL
Benefits
Competitive compensation package
Inclusive benefits
Flexibility programs that will help you thrive in work and life
Opportunities to participate in variable incentive pay programs