Brown and Caldwell is a full-service environmental engineering and construction services firm seeking a Data & AI Platform Engineer to enhance its cloud platform and AI/data infrastructure capabilities. The role focuses on developing and maintaining data pipelines, collaborating with teams across the firm, and ensuring data quality and compliance with industry standards.
Responsibilities:
- Design, create, and maintain data pipelines to collect, clean, transform, and load data from various sources, including sensor data, historical records, and geospatial information, to support data warehousing (a brief illustrative sketch follows this list)
- Collaborate with interdisciplinary teams of environmental engineers, data scientists, and software developers to understand data requirements and develop scalable data solutions
- Participate in the design, creation, and management of data warehouses, data lakes, and databases to ensure efficient data storage and retrieval
- Develop, deploy, execute, and monitor ETL (Extract, Transform, Load) processes to support data analysis, visualization, and machine learning model training
- Develop and maintain data models, and manage and query SQL databases to handle stored data efficiently
- Design and execute testing plans for data pipeline and data warehouse implementations
- Implement processes to improve data quality and manage data governance, enhancing data reliability and accessibility
- Collaborate with IT infrastructure and cybersecurity teams to implement and operate data pipelines within approved data infrastructure, performance, and security guidelines
- Design and execute processing tasks using Python and maintain an up-to-date understanding of big data processing frameworks
- Perform regular data audits and updates to ensure a high level of data accuracy and integrity
- Adapt to and execute additional assignments as business needs evolve
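
As an illustration of the pipeline work described above, here is a minimal, hypothetical ETL sketch in Python. The file, table, and column names (sensor_export.csv, sensor_readings, site_id, reading) are assumptions for illustration only, not part of BC's actual stack.

```python
# Minimal, hypothetical ETL sketch: extract sensor readings from a CSV,
# clean and transform them, and load them into a SQL warehouse table.
# All file, table, and column names are illustrative assumptions.
import pandas as pd
from sqlalchemy import create_engine

def run_etl(csv_path: str, db_url: str = "sqlite:///warehouse.db") -> int:
    # Extract: read raw sensor records, parsing timestamps on the way in.
    raw = pd.read_csv(csv_path, parse_dates=["timestamp"])

    # Transform: drop rows missing key fields, coerce readings to numeric,
    # and deduplicate on (site_id, timestamp).
    clean = (
        raw.dropna(subset=["site_id", "timestamp", "reading"])
           .assign(reading=lambda df: df["reading"].astype(float))
           .drop_duplicates(subset=["site_id", "timestamp"])
    )

    # Load: append into the warehouse table.
    engine = create_engine(db_url)
    clean.to_sql("sensor_readings", engine, if_exists="append", index=False)
    return len(clean)

if __name__ == "__main__":
    rows = run_etl("sensor_export.csv")
    print(f"Loaded {rows} rows")
```

In practice, a pipeline like this would run under a workflow orchestration tool with logging, retries, and data-quality checks, per the responsibilities above.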
Requirements:
- Depth of experience in platform engineering, DevOps/MLOps, and cloud infrastructure
- Ability to operationalize AI systems with rigor
- Experience in designing, developing, and maintaining data pipelines and architectures
- Experience in data modeling, ETL development, and performance tuning of data systems
- Ability to troubleshoot and resolve data-related issues with a focus on data quality and accessibility
- Experience in architecting pipelines in compliance with data security and industry standards
- Understanding of how to build and optimize data pipelines, architectures, and data sets
- Strong working knowledge of SQL and skill in implementing and managing relational databases
- Proficient in creating and managing ETL processes and in data cleaning and validation techniques
- Proficient in Python and other scripting languages applicable for data engineering
- Proficient in writing clean, maintainable, and scalable code, applying software engineering best practices including use of version control systems (e.g., Git)
- Demonstrated ability with data warehouse solutions, data lake solutions, and cloud platforms
- Typically, a minimum of 5 years of data engineering or related experience
- Typically certified in BC's SMS Framework and progressing through the SMS competencies
- A degree in data engineering, computer science, information technology, or a related field, or equivalent experience, is required
- Hands-on experience supporting production LLM- or RAG-based systems in a platform, data, or MLOps capacity
- Familiarity with LLMOps practices and operational tooling
- Exposure to analytics platforms and integration-heavy systems, including APIs and workflow orchestration tools
- Experience deploying and operating AI-enabled or analytics-heavy services in Docker-based containerized runtimes on managed cloud platforms
- Familiarity with geospatial data and analysis, such as ESRI ArcGIS, PostGIS, or geopandas (see the brief sketch after this list)
- Interest or experience in environmental, water resources, or scientific computing domains
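
For context on the geospatial item above, here is a minimal sketch using geopandas. The file name, coordinates, CRS codes, and buffer distance are illustrative assumptions, not a prescribed workflow.

```python
# Minimal, hypothetical geospatial sketch: load monitoring sites from a
# GeoJSON file, reproject to a metric CRS, and count sites near a point.
# File name, coordinates, and CRS codes are illustrative assumptions.
import geopandas as gpd
from shapely.geometry import Point

# Read site locations (WGS84 lat/lon) and project to a UTM zone
# appropriate to the area, so buffer distances are in meters.
sites = gpd.read_file("monitoring_sites.geojson").to_crs(epsg=32610)

# Build a 5 km buffer around a hypothetical facility location.
facility = gpd.GeoSeries([Point(-122.33, 47.60)], crs="EPSG:4326").to_crs(epsg=32610)
buffer_5km = facility.buffer(5_000).iloc[0]

# Select the sites that fall inside the buffer.
nearby = sites[sites.within(buffer_5km)]
print(f"{len(nearby)} sites within 5 km")
```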