Moonlite delivers high-performance AI infrastructure for organizations running intensive computational research, large-scale model training, and demanding data processing workloads. The Senior Software Engineer will be instrumental in building out a GPU-accelerated compute platform, managing GPU clusters, and enabling researchers to access high-performance compute resources with cloud-like simplicity.
Responsibilities:
- Design and build scalable compute orchestration platforms that manage GPU clusters, bare-metal server provisioning, and resource allocation across co-located infrastructure environments
- Implement intelligent workload scheduling, resource allocation, and optimization algorithms that maximize GPU utilization while maintaining performance guarantees for research and training workloads
- Design and implement systems for provisioning and managing research computing environments including Kubernetes and SLURM clusters, enabling automated deployment, resource scheduling, and workload orchestration for distributed AI training and HPC workloads
- Develop platform capabilities for managing latest-generation NVIDIA GPU configurations (H100, H200, B200, B300), including GPU resource management, multi-tenant isolation, and integration with compute orchestration systems
- Build automation and tooling for complete bare-metal server lifecycle management – from initial provisioning and configuration through ongoing operations, updates, and resource reallocation
- Optimize compute platform components for high-throughput and low-latency performance, ensuring research workloads achieve near-bare-metal efficiency in virtualized or containerized environments
- Develop robust APIs and SDKs that enable researchers to programmatically provision and manage compute resources, integrating seamlessly with existing workflows and research infrastructure
- Implement comprehensive monitoring and telemetry systems for compute resources, providing visibility into GPU utilization, workload performance, and infrastructure health
- Build enterprise-grade multi-tenant compute isolation, security boundaries, and resource quotas that enable safe sharing of GPU infrastructure across teams and organizations
Requirements:
- 5+ years in software engineering with proven experience building compute platforms, container orchestration systems, or distributed compute infrastructure for production environments
- Strong background in building compute orchestration, resource scheduling, or workload management systems at scale
- Strong familiarity with Kubernetes architecture and container orchestration concepts, plus hands-on experience deploying workloads in Kubernetes environments; understanding of pods, deployments, services, and basic Kubernetes operations
- Experience with Go, C/C++, Python, or Rust for performance-critical components is highly valued
- Strong experience with Linux in production environments, including systems programming, performance optimization, and low-level resource management
- Deep knowledge of virtualization technologies (KVM, Xen), container runtimes, and orchestration platforms
- Understanding of GPU architectures, CUDA programming (where/when needed), and GPU resource management – or a strong ability to learn quickly
- Experience with bare-metal provisioning, out-of-band management systems, and hardware abstraction layers
- Demonstrated ability to solve complex performance and scalability challenges while balancing pragmatic shipping with good long-term architecture
- Comfortable navigating ambiguity, defining requirements collaboratively, and communicating technical decisions through clear documentation
- Growth mindset with continuous focus on learning and professional development
- Background in provisioning or managing research computing environments (Kubernetes, SLURM, or HPC clusters)
- Experience with GPU virtualization technologies (SR-IOV, NVIDIA vGPU) and multi-tenant GPU sharing
- Background in container orchestration platforms with custom scheduling or resource management
- Knowledge of high-performance networking for GPU communication (InfiniBand, RDMA, NVLink, NVSwitch)
- Familiarity with AI/ML training frameworks (PyTorch, TensorFlow) and their infrastructure requirements
- Understanding of distributed training patterns and multi-node GPU coordination
- Experience building infrastructure for research institutions, labs, or technical computing environments
- Background in financial services or other regulated industry infrastructure is a plus