Publicis Groupe ANZ is a leader in marketing analytics, helping organizations prove how marketing drives revenue and enterprise value. The team is seeking an Enterprise Engineer II to build and apply mathematical models, optimize data pipelines, and enhance data processes for decision-making and client delivery.
Responsibilities:
- Formulate and apply mathematical models and other optimization methods to develop and interpret information that assists management with decision-making, policy formulation, or other managerial functions
- Partner with Consultants and Modeling Experts to understand data needs for modeling and client delivery
- Design, operationalize, and optimize data pipelines that feed modeling engines and dashboards
- Automate repeatable workflows and ensure high data integrity throughout
- Architect and deploy ETL/ELT pipelines in cloud environments
- Work with structured and semi-structured data formats
- Manage and refine database schemas; optimize SQL queries for performance and clarity
- Build and maintain internal tools and data services using C#, SQL, and modern scripting languages
- Contribute to the development and scaling of modeling platforms and analytics accelerators
- Implement version control, deployment pipelines, and infrastructure-as-code practices
- Monitor, troubleshoot, and improve data processes across cloud-based environments
- Create and maintain documentation and technical specifications for engineering workflows
- Continuously improve development practices to enhance robustness, maintainability, and security
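As an illustrative sketch only (not the team's actual code), the pipeline duties above amount to extract/transform/load steps with data-integrity checks along the way. A minimal version in Python, using an in-memory SQLite table as a stand-in for the warehouse and a hypothetical `spend` extract, might look like:

```python
import csv
import io
import sqlite3

# Hypothetical raw extract; in practice this would come from a source system
RAW_CSV = """client,channel,spend
Acme,search,1200.50
Acme,social,
Globex,search,980.00
"""

def extract(text: str) -> list[dict]:
    """Extract: parse the raw CSV into row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: drop rows with missing spend and cast types."""
    return [
        (r["client"], r["channel"], float(r["spend"]))
        for r in rows
        if r["spend"]  # data integrity: skip incomplete records
    ]

def load(rows: list[tuple]) -> sqlite3.Connection:
    """Load: write cleaned rows into a (stand-in) warehouse table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE spend (client TEXT, channel TEXT, amount REAL)")
    conn.executemany("INSERT INTO spend VALUES (?, ?, ?)", rows)
    conn.commit()
    return conn

conn = load(transform(extract(RAW_CSV)))
total = conn.execute("SELECT SUM(amount) FROM spend").fetchone()[0]
```

In a real pipeline each stage would be a separately monitored, version-controlled step, but the shape — validate at the transform boundary, keep load idempotent — carries over.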
Requirements:
- Bachelor's degree in Computer Science, Information Systems, or a related field, followed by three years of experience in the job offered or in a related occupation in which the required skills were gained
- Design, optimize, and execute complex queries on relational databases such as MySQL and SQL Server to extract, transform, and load data for analysis and reporting
- Utilize Azure Data Lake Gen2 to manage and store large-scale structured and unstructured data, ensuring efficient data retrieval
- Leverage AWS S3 to store and retrieve large datasets, providing reliable, secure, and scalable cloud storage solutions that support data processing and analytics
- Develop and maintain backend systems and internal tools using C# and .NET to automate data ingestion, streamline data transfers, and integrate data across different platforms
- Build interactive desktop applications with WinForms to visualize data and generate reports
- Automate data cleaning, transformation, and workflow orchestration using Python and Bash scripts
- Set up Python environments and build libraries dedicated to data manipulation and analysis
- Use DuckDB for efficient, in-memory analytical queries on large datasets, enabling fast processing and exploration of data in local environments
- Utilize Snowflake to unload data into Parquet files and transfer these files to Azure Data Lake Gen2, enabling seamless data storage and processing
- Leverage Azure Managed Identity for secure, automated authentication and authorization to Azure resources, allowing for seamless integration and access control
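To make the Python scripting requirements above concrete — again as a hypothetical sketch, with invented column names and cleaning rules — automated data cleaning typically normalizes headers, trims whitespace, and coerces numeric fields before anything downstream runs:

```python
import csv
import io

# Hypothetical messy export; headers and rules are illustrative only
DIRTY = ' Client , Revenue \nacme corp,"1,200"\nGLOBEX,980\n'

def clean_rows(text: str) -> list[dict]:
    """Normalize headers, trim whitespace, and parse numeric fields."""
    reader = csv.reader(io.StringIO(text))
    headers = [h.strip().lower() for h in next(reader)]
    cleaned = []
    for row in reader:
        record = dict(zip(headers, (cell.strip() for cell in row)))
        record["client"] = record["client"].title()   # consistent casing
        record["revenue"] = float(record["revenue"].replace(",", ""))
        cleaned.append(record)
    return cleaned

rows = clean_rows(DIRTY)
```

A script like this would usually be parameterized and scheduled by an orchestrator rather than run by hand, which is where the workflow-orchestration and Bash pieces of the role come in.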