Develop and maintain our Python-based web scraping pipeline, which automates monitoring and enforcement across online channels.
Own and improve systems built on Google Cloud Platform, using Pub/Sub, Kubernetes, Docker, Compute Engine instances, and Bash scripting to manage distributed workloads at scale.
Collaborate closely with internal stakeholders, product owners, and backend engineers to translate real-world infringement problems into efficient automated workflows.
Set the standard for code quality, reviews, and testing within the automation/web scraping team.
Integrate with internal APIs (primarily Java-based), accessed through Python services.
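To give candidates a feel for the day-to-day work, here is a minimal sketch of the message-driven scraping flow described above. It is illustrative only: a stdlib queue stands in for a Pub/Sub subscription so the example is self-contained, and the handler and payload shape are hypothetical, not our actual schema.

```python
import json
import queue

def handle_message(payload: dict) -> dict:
    """Hypothetical scrape-task handler: takes a target URL from the
    message payload and returns a result record for downstream review."""
    return {"url": payload["url"], "status": "scraped"}

def run_worker(task_queue: "queue.Queue[str]") -> list:
    """Drain queued tasks, processing each one.

    In production this loop would be a Pub/Sub subscriber callback;
    get() stands in for receiving message.data, and task_done() for
    message.ack().
    """
    results = []
    while not task_queue.empty():
        raw = task_queue.get()
        results.append(handle_message(json.loads(raw)))
        task_queue.task_done()
    return results

q = queue.Queue()
q.put(json.dumps({"url": "https://example.com/listing/1"}))
results = run_worker(q)
```

In the real pipeline, the worker runs in a container on Kubernetes and acknowledges messages only after a successful scrape, so failed tasks are redelivered.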
Requirements
5+ years of experience and a Bachelor's degree in Computer Science or a related field
Familiarity with GCP, Kubernetes, Bash, and message-based architectures like Pub/Sub
Experience with Python, particularly for web scraping and automation pipelines
Solid habits around testing, code review, and code maintainability
Confidence and experience to set a technical example for other developers
Nice to have:
Experience with browser automation tools (e.g., Playwright, Selenium)
Familiarity with CI/CD pipelines, logging, and monitoring
Background in domains such as brand protection, IP enforcement, or digital investigation (welcome but not required)