Own and evolve the core integration and data backbone that underpins GTM analytics, enterprise-grade AI tooling, and business-critical data flows across Semrush.
Design and operate secure, scalable cloud services and data integrations used across regions and teams.
Play a key role in supporting post-acquisition integration work with Adobe.
Design, build, and operate production APIs and cloud services (REST and/or gRPC) on Google Cloud Platform.
Own cloud-deployed solutions and scripts (Cloud Run, Docker, etc.) that power integration, provisioning, and internal enablement platforms.
Build and maintain secure, event-driven integration patterns (Pub/Sub, async workflows).
Own business-critical data integrations, including global compensation and finance-adjacent feeds with SOX relevance.
Design and enforce security and access boundaries (IAM, secrets, service-to-service auth, cloud ↔ on-prem connectivity).
Drive cost-efficient cloud execution, including batching, async processing, and pricing-aware architecture decisions.
Build and operate enterprise-grade AI services, with clear cost, latency, and quality trade-offs.
Design reliable data ingestion patterns that support analytics, executive reporting, and downstream consumers.
Act as primary owner for integration infrastructure and enterprise AI tooling for the RevOps team.
Act as secondary owner for analytics pipelines to ensure coverage and eliminate single points of failure.
Lead operational ownership: monitoring, alerting, incident response, root-cause analysis, and audit readiness.
Partner with Analytics Engineering and Analytics teams to ensure stable, well-governed data flows.
Enable Go-To-Market Engineering teams through reliable APIs, integrations, and automation primitives.
Work with Security, Finance, and Legal stakeholders on access control, auditability, and compliance-relevant systems.
Support global teams across regions and time zones.
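To make the event-driven integration pattern above concrete, here is a minimal, illustrative sketch (not part of the role description) of decoding a Pub/Sub push delivery as a Cloud Run service would receive it. The envelope shape follows Pub/Sub's documented push format; the event payload and subscription names are hypothetical.

```python
import base64
import json

def parse_pubsub_push(envelope: dict) -> dict:
    """Decode the payload of a Pub/Sub push delivery.

    Push subscriptions wrap the published message in a JSON envelope:
    {"message": {"data": "<base64>", "attributes": {...}}, "subscription": "..."}.
    """
    message = envelope["message"]
    raw = base64.b64decode(message["data"]) if message.get("data") else b"{}"
    return {
        "payload": json.loads(raw),
        "attributes": message.get("attributes", {}),
        "message_id": message.get("messageId"),
    }

# Hypothetical envelope, as Pub/Sub would POST it to a Cloud Run endpoint
envelope = {
    "message": {
        "data": base64.b64encode(
            json.dumps({"event": "account.updated"}).encode()
        ).decode(),
        "attributes": {"source": "crm"},
        "messageId": "123",
    },
    "subscription": "projects/demo/subscriptions/crm-events",
}
print(parse_pubsub_push(envelope)["payload"])  # {'event': 'account.updated'}
```

In practice the handler would also acknowledge the delivery by returning a 2xx response and be written to tolerate Pub/Sub's at-least-once redelivery (i.e., idempotent processing).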
Requirements
Core engineering experience
Senior backend/platform engineer with strong systems thinking
Proven experience owning production, business-critical systems end-to-end
Strong Python, Go, or JavaScript engineering background
Hands-on experience building and operating production data warehouse tables (BigQuery or similar), with strong SQL and a focus on reliability, performance, and AI-ready data design
API design experience with attention to contracts, versioning, and backward compatibility
Experience with event-driven and asynchronous architectures
Strong grounding in data engineering principles
Experience operating data pipelines that support executive, financial, or compensation reporting
Comfort working in environments with auditability, controls, and change discipline (SOX familiarity is a plus)
Hands-on experience with GCP services (Cloud Run, Cloud Functions, Pub/Sub) and container tooling (GKE, Artifact Registry, Docker)
Experience with IAM, least-privilege access, and secrets management
Infrastructure-as-code (Terraform or equivalent)
Experience with orchestration tools such as Apache Airflow is a plus
Native, daily use of AI coding tools such as Claude Code, Codex, Cursor, or equivalent
Experience applying AI tools to production code development and refactoring, debugging and incident analysis, and architectural trade-off evaluation
Experience assessing LLM cost economics
Demonstrated ability to reason about business impact, not just technical correctness
Experience working with or adjacent to Sales, RevOps, or Finance teams where data quality or availability affected revenue, incentives, or compensation
Strong judgment balancing speed, cost, risk, and correctness
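To make the LLM cost-economics requirement concrete, a back-of-envelope per-request cost estimate is a typical starting point. This is a hedged sketch; the prices below are placeholders, not any vendor's actual rates.

```python
def llm_request_cost(prompt_tokens: int, completion_tokens: int,
                     price_in_per_1m: float, price_out_per_1m: float) -> float:
    """Estimate the USD cost of a single LLM call from token counts.

    Prices are expressed per 1M tokens, the convention most providers use.
    """
    return (prompt_tokens * price_in_per_1m
            + completion_tokens * price_out_per_1m) / 1_000_000

# Hypothetical pricing: $3 / 1M input tokens, $15 / 1M output tokens
cost = llm_request_cost(2_000, 500, 3.0, 15.0)
print(round(cost, 4))  # 0.0135
```

Multiplying a per-request estimate like this by expected request volume (and comparing against batching or smaller-model alternatives) is how the cost/latency/quality trade-offs mentioned above usually get quantified.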