Build components of a large-scale data platform for real-time and batch processing, and own big data application features to meet evolving business needs
Build next-generation, cloud-based big data infrastructure for batch and streaming applications, and continuously improve its performance, scalability, and availability
Contribute to engineering best practices, including the use of design patterns, CI/CD, code review, and automated testing
As a key member of the team, contribute to all aspects of the software lifecycle: design, experimentation, implementation, and testing
Collaborate with the team to develop software architectures that are robust, fault-tolerant, and cloud-native
Requirements
Hold a Bachelor’s degree in Computer Science, Computer Information Systems, Engineering, or another technical field
5+ years of experience developing big data applications in a large enterprise environment using Java, Scala, or Python, including familiarity with modern frameworks and platforms (e.g., Spark, Flink, SingleStore, Kafka, NiFi, and AWS big data technologies)
Demonstrated experience with cloud infrastructure technologies, including Terraform, Kubernetes, Spinnaker, IAM, ALB, etc.
A strong interest in open source technology
Experience building highly available and scalable services
Good communication skills and a genuine interest in helping others succeed as a team
Tech Stack
AWS
Cloud
Java
Kafka
Kubernetes
Open Source
Python
Scala
Spark
Spinnaker
Terraform
Benefits
A bonus and/or long-term incentive units may be provided as part of the compensation package
A full range of medical, financial, and/or other benefits