Kroll, a global leader in risk and financial advisory solutions, is seeking a high-performing Senior Data Engineer to build and scale its data infrastructure. The role involves designing and implementing production-grade data pipelines and collaborating with senior engineers and data scientists to enhance analytics and AI capabilities across the firm.
Responsibilities:
- Design and build scalable organizational data infrastructure, including a Medallion architecture within a Lakehouse environment
- Develop robust, fault-tolerant ETL/ELT applications for seamless data ingestion, transformation, and distribution to enable analytics, reporting, and AI workloads
- Work with stakeholders and teams across the firm on data-related technical solutions and support their data infrastructure needs
- Explore and experiment with new use cases, frameworks, and tools to enhance AI capabilities, ensuring data integrity, quality, and reliability
- Identify and implement infrastructure re-designs to improve scalability, optimize data delivery, and automate manual workflows
- Choose the best tools, services, and resources to build robust data pipelines
- Collaborate with cross-functional teams to understand data requirements, create robust data models, and deliver actionable insights
- Monitor, troubleshoot, and optimize jobs for performance, addressing data pipeline bottlenecks and ensuring cost efficiency
- Continuously improve engineering processes, balancing speed, quality, and business impact
- Coach, mentor, and provide technical guidance to junior engineers, fostering a culture of continuous learning and development
- Stay updated on emerging technologies and trends in data engineering, recommending and implementing innovative solutions
Requirements:
- Bachelor's or master's degree in computer science, engineering, or a related field
- 5+ years of experience in data engineering, with a proven track record of delivering business-critical software solutions for large enterprises
- Experience writing ETL/ELT jobs
- Experience with Azure and the Databricks platform
- Experience with Python, SQL, and REST APIs
- Excellent communication skills and the ability to reason about trade-offs
- Ability to work with an international team
- Solid understanding of cloud architecture principles: compute, storage, networking, security, and cost
- Proficiency with open-source tools and frameworks such as FastAPI, Pydantic, Polars, Pandas, Delta Lake, Docker, and Kubernetes
- Knowledge of CI/CD, Git, and infrastructure-as-code concepts
- Strong project management skills, with the ability to prioritize tasks and manage multiple projects simultaneously in an Agile environment
- Understanding of how data engineering feeds into Business Intelligence and reporting tools (e.g., Power BI, Tableau)
- Strong problem-solving and analytical skills
- Strategic thinking and a strong execution orientation
- Ability to work in cross-functional teams
- Attention to detail and data quality