Design and develop scalable, resilient offline indexing pipelines for Slack's search infrastructure
Collaborate with product engineering to conceptualize and build new features
Ensure high availability and reliability of a distributed search system
Produce high-quality results while contributing heavily to team projects
Actively own team features or systems and safeguard their long-term health
Deploy, configure, test, troubleshoot, maintain, and upgrade Solr clusters and environments
Plan resources and continuously optimize infrastructure and configuration of Solr clusters
Assist the support team in triaging and resolving production issues
Collaborate with engineers and conduct code reviews
Improve engineering standards, tooling, and processes
Requirements
3+ years of software engineering experience
Extensive experience with distributed data processing frameworks such as Spark, Hadoop, MapReduce, or EMR
Experience with Solr, Elasticsearch, Lucene, or other search technologies
Exposure to infrastructure/ops tools and frameworks such as Terraform, Chef, and Kubernetes
Proficiency with functional or imperative programming languages such as PHP, Python, Ruby, Go, C, or Java
Track record of developing high-availability, performant, and scalable systems accessing large volumes of data over multiple clusters across different geographical regions
Experience planning, implementing, and deploying software migrations and upgrades in a production environment
Excellent communication skills to explain complex technical concepts
Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent training, fellowship, or work experience