Kiewit is building a modern data platform to support analytics and expand its use of AI and machine learning. The company is hiring multiple Data Engineers to help build reliable data pipelines and curated datasets that power reporting, analytics, and data products.
Responsibilities:
- Build and support data pipelines that ingest, transform, and publish data for analytics and data products
- Develop well-structured datasets using modern modeling and testing practices (e.g., dbt)
- Orchestrate and monitor workflows using tools such as Dagster
- Improve reliability through automation, testing, documentation, monitoring, and operational best practices
- Partner with engineers and stakeholders to understand needs and deliver high-quality data solutions
Requirements:
- 4+ years of experience in data engineering, analytics engineering, software engineering, or a related role (or equivalent practical experience)
- Experience with Python and SQL in production environments
- Experience with or strong interest in learning Azure and Databricks
- Experience with or strong interest in learning Dagster and dbt
- Understanding of lakehouse/data lake architecture concepts and how data is curated for analytics use cases
- Strong communication skills and ability to work effectively with cross-functional teams
- Experience with AI-assisted software engineering
- Experience with Spark-based processing, lakehouse table formats, and performance optimization
- Familiarity with CI/CD, Git-based workflows, and production support/on-call practices