Atlas Connections helps security teams operate efficiently to prevent breaches. This role involves building and optimizing high-performance data platforms that support data ingestion and querying at scale, while collaborating with cross-functional teams to solve data challenges.
Responsibilities:
- Design and implement scalable data ingestion pipelines, ensuring efficient and reliable data flow from diverse sources
- Build out core platform tooling, monitoring, and engineering solutions that teams across the company can leverage to build their own performant applications
- Collaborate with engineering teams to integrate data solutions into the broader platform, ensuring system-wide performance optimization
- Troubleshoot and enhance our data pipelines and ingestion systems, ensuring minimal downtime and high reliability
- Optimize query and ingestion performance on large-scale datasets by applying best practices in indexing, caching, and partitioning (see the sketch after this list)
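
To make the ingestion and partitioning responsibilities concrete, here is a minimal sketch of a batching Kafka consumer that lands events in date-partitioned files. It is an illustration only, not part of the role description: the kafka-python client, the `events` topic, the broker address, and the local output directory are all assumptions.

```python
# Ingestion sketch: consume JSON events from Kafka and write them to
# date-partitioned files. Topic, broker, and paths are hypothetical.
import json
from datetime import datetime, timezone
from pathlib import Path

from kafka import KafkaConsumer  # pip install kafka-python

BATCH_SIZE = 1000
OUT_DIR = Path("./ingested")  # hypothetical landing zone

consumer = KafkaConsumer(
    "events",                          # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=False,          # commit only after a batch is durable
)

def flush(batch: list[dict]) -> None:
    """Write a batch under a dt=YYYY-MM-DD partition so downstream
    query engines can prune scans by date."""
    now = datetime.now(timezone.utc)
    part_dir = OUT_DIR / f"dt={now:%Y-%m-%d}"
    part_dir.mkdir(parents=True, exist_ok=True)
    path = part_dir / f"batch-{now.timestamp():.0f}.jsonl"
    with path.open("w") as f:
        for record in batch:
            f.write(json.dumps(record) + "\n")

batch: list[dict] = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= BATCH_SIZE:
        flush(batch)
        consumer.commit()  # at-least-once: commit offsets after the write
        batch.clear()
```

Committing offsets only after the batch is written gives at-least-once delivery; a real pipeline would pair this with downstream deduplication to absorb replays.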
Requirements:
- Solid programming skills in languages such as Python, Go, or Node.js, with experience building scalable, high-performance data processing solutions
- Familiarity with a major cloud platform (AWS, GCP, or Azure) and its managed data services (e.g., Redshift, BigQuery)
- Experience building data pipelines using tools such as Apache Kafka, AWS Kinesis, or similar technologies
- Knowledge of distributed systems and techniques for building fault-tolerant, high-availability platforms
- Strong understanding of database and search technologies (e.g., SQL databases, Elasticsearch, MongoDB), including performance tuning for large datasets with distributed query engines and warehouses (e.g., Presto, Snowflake)
- Proven experience monitoring and troubleshooting data platforms using tools like Prometheus, Grafana, or Datadog (a brief instrumentation sketch follows this list)
- Hands-on experience with containerization and orchestration (Docker, Kubernetes) for deploying scalable data services
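
As a rough illustration of the monitoring expectation, the sketch below instruments a pipeline worker with the prometheus_client library so throughput and latency can be scraped by Prometheus and graphed in Grafana. The metric names and the `process_record` stand-in are hypothetical.

```python
# Instrumentation sketch: expose throughput and latency metrics from a
# pipeline worker. Metric names and the workload are hypothetical.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

RECORDS_TOTAL = Counter(
    "pipeline_records_total", "Records processed, labeled by outcome", ["outcome"]
)
PROCESS_SECONDS = Histogram(
    "pipeline_process_seconds", "Time spent processing one record"
)

def process_record(record: dict) -> None:
    """Stand-in for real pipeline work; occasionally fails."""
    time.sleep(random.uniform(0.001, 0.01))
    if random.random() < 0.01:
        raise ValueError("transient failure")

def run() -> None:
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        with PROCESS_SECONDS.time():  # records latency even on failure
            try:
                process_record({"id": 1})
                RECORDS_TOTAL.labels(outcome="ok").inc()
            except ValueError:
                RECORDS_TOTAL.labels(outcome="error").inc()

if __name__ == "__main__":
    run()
```

Labeling the counter by outcome keeps success and error rates on one metric, which makes alerting on error ratios straightforward in Prometheus.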