About this role
Role Overview
Lead and deliver data-driven decision-making initiatives
Utilize advanced analytical techniques to extract insights from large datasets
Motivate, develop, and inspire team members to deliver quality work
Manage performance and coaching responsibilities for team members
Engage in difficult conversations with clients and stakeholders
Uphold professional and technical standards as per firm’s code of conduct
Requirements
6+ years of experience in data engineering or architecture, with an emphasis on data analytics and reporting
Proficiency in developing data pipelines on cloud platforms such as Azure, AWS, or Google Cloud
6+ years of experience in the design and development of large-scale data solutions using data modelling, data munging, and data management toolsets such as SQL Server, Oracle Database, Azure Synapse, Redshift, Alteryx, Databricks, Azure Data Factory, AWS Glue, Informatica, Collibra, Denodo, etc.
Experience building data processing pipelines to integrate large datasets from multiple sources and formats
Experience designing and developing data pipelines, from ingestion to consumption, within a big data architecture using Python, Java, SQL, etc.
Proficiency in big data technologies and tools such as Spark/PySpark, Hadoop, Hive, Kafka
Proficiency with common machine learning libraries in Python and Spark, and with deep learning/AI frameworks, is considered an asset
Proficiency in any of the following programming languages, frameworks, and technologies is considered an asset: JavaScript, React, Node.js, Python, R, D3.js, and open-source databases such as PostgreSQL, MySQL, and MongoDB