EverCommerce, a leading service commerce platform focused on digitally transforming the service economy, is seeking a Data Engineer II to design and scale its modern data platform in support of analytics and AI-powered insights.
Responsibilities:
- Design, build, and operate scalable batch and streaming data pipelines
- Develop and orchestrate workflows using Apache Airflow
- Implement transformations and analytics-ready datasets using dbt
- Build and maintain real-time pipelines using Kafka
- Leverage Databricks for data processing, analytics, and AI enablement
- Support AI and GenAI use cases, including enabling high-quality data access for tools like Databricks Genie
- Design and optimize data storage using Apache Iceberg and a Lakehouse architecture
- Ingest and manage data from diverse internal and external sources using Fivetran
- Handle a wide variety of data structures (structured, semi-structured, and event-based data)
- Build and maintain a semantic layer that enables trusted reporting and self-service analytics
- Implement data quality frameworks, monitoring, and unit test automation to ensure reliability at scale
- Partner with BI, product, and engineering teams to deliver data that is intuitive, trusted, and actionable
- Optimize performance, scalability, and cost across AWS services such as Athena, EC2, and related tooling
- Contribute to data platform standards, documentation, and best practices
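To make the data quality and testing responsibilities above concrete, here is a minimal Python sketch of the kind of check a quality framework might run on a batch of records. All field names and thresholds are illustrative assumptions, not part of EverCommerce's actual stack:

```python
# Minimal data-quality check sketch: validate a batch of records against
# simple rules (null rate per required field under a threshold).
# Field names and thresholds are hypothetical, for illustration only.

def null_rate(records, field):
    """Fraction of records where `field` is missing or None."""
    if not records:
        return 0.0
    missing = sum(1 for r in records if r.get(field) is None)
    return missing / len(records)

def run_quality_checks(records, required_fields, max_null_rate=0.05):
    """Return a list of (check_name, passed) results for a batch."""
    results = []
    for field in required_fields:
        rate = null_rate(records, field)
        results.append((f"null_rate:{field}", rate <= max_null_rate))
    return results

if __name__ == "__main__":
    batch = [
        {"order_id": 1, "amount": 9.99},
        {"order_id": 2, "amount": None},
    ]
    for name, passed in run_quality_checks(batch, ["order_id", "amount"],
                                           max_null_rate=0.5):
        print(name, "PASS" if passed else "FAIL")
```

In practice checks like these would run inside an orchestrated pipeline (e.g., as an Airflow task or dbt test) and feed monitoring, rather than as a standalone script.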
Requirements:
- 5+ years of experience in a Data Engineering role
- Strong experience with Python and SQL
- Hands-on experience with Apache Airflow
- Experience working with Databricks
- Expertise using dbt for transformations and analytics modeling
- Experience building streaming data pipelines with Kafka
- Experience with data ingestion tools such as Fivetran
- Working knowledge of Apache Iceberg and modern Lakehouse architectures
- Experience implementing data quality checks, testing frameworks, and pipeline observability
- Familiarity with AWS services including Athena, EC2, and cloud-based data platforms
- Strong understanding of data modeling, analytics, and semantic layer design
- Experience enabling AI or GenAI use cases on top of analytics platforms (e.g., Databricks Genie)
- Experience delivering self-service BI solutions (e.g., ThoughtSpot)
- Knowledge of data governance, metadata management, and data catalogs
- Experience supporting SaaS or multi-product platforms
- Familiarity with privacy, compliance, and secure data access patterns
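The semantic layer design mentioned in the requirements can be sketched as a central registry of metric definitions that BI tools resolve to SQL, so every report computes a metric the same way. A minimal Python illustration, with hypothetical metric and table names:

```python
# Minimal semantic-layer sketch: a registry of metric definitions that
# downstream tools resolve to SQL, keeping metric logic in one place.
# Metric, column, and table names are illustrative assumptions only.

METRICS = {
    "total_revenue": {"table": "fct_orders", "expr": "SUM(amount)"},
    "order_count":   {"table": "fct_orders", "expr": "COUNT(*)"},
}

def metric_sql(name, group_by=None):
    """Render a SQL query for a registered metric, optionally grouped."""
    m = METRICS[name]
    cols = f"{m['expr']} AS {name}"
    if group_by:
        return (f"SELECT {group_by}, {cols} FROM {m['table']} "
                f"GROUP BY {group_by}")
    return f"SELECT {cols} FROM {m['table']}"

if __name__ == "__main__":
    print(metric_sql("total_revenue", group_by="region"))
```

A real semantic layer (e.g., dbt's metrics layer or a tool like ThoughtSpot's modeling layer) adds governance, access control, and caching on top of this basic idea.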