Kiewit Corporation, a leading construction and engineering company, is seeking a Data Engineer to support enterprise-scale project data platforms. The role involves designing, building, and optimizing modern data pipelines and cloud-based data architectures while collaborating with teams across the business to deliver reliable, scalable data solutions.
Responsibilities:
- Design, build, and maintain scalable ETL/ELT pipelines using Azure Databricks (PySpark, Spark SQL) as the primary platform
- Develop and optimize batch, streaming, and micro-batch processing architectures
- Implement data ingestion workflows using Azure Data Factory, Azure Data Lake, SQL Server, and Snowflake
- Design data models that support analytics, reporting, and database-driven application integration
- Optimize performance of large-scale data systems, including query tuning and workload optimization
- Support day-to-day operations of enterprise data platforms
- Troubleshoot end-to-end data pipeline issues across ingestion, transformation, storage, and consumption layers
- Implement data quality checks, validation rules, and monitoring processes
- Contribute to architectural standards, governance frameworks, and documentation
- Partner with project teams to understand complex business rules and translate them into scalable data solutions
- Support visualization and analytics teams by delivering clean, structured, high-performance datasets
- Contribute to continuous improvement of data processes, standards, and best practices
- Stay current with Kiewit's evolving data strategies and emerging industry technologies
Requirements:
- 6+ years of experience in Data Engineering, Data Analytics, Data Warehousing, or related roles
- Experience in Azure-based architectures
- Strong proficiency in advanced SQL development and performance optimization
- Hands-on experience building ETL/ELT pipelines in a cloud environment
- Familiarity with EPC (engineering, procurement, and construction) industry workflows is highly preferred
- Experience with Databricks or Spark-based distributed data processing
- Experience troubleshooting complex data workflows across multiple systems
- Strong understanding of data modeling principles (relational and dimensional)
- Experience documenting functional requirements, technical designs, and data workflows
- Excellent written and verbal communication skills
- Familiarity with CI/CD and version control practices (Git)
- Experience implementing data quality frameworks and governance processes
- Experience supporting enterprise-scale analytics environments