Design, train, and refine large-scale 3D generative models, covering pre-training, post-training, and emerging paradigms in diffusion, flow matching, and multi-modal learning.
Bridge the gap between cutting-edge research and product: deploy models in real products used by millions of creators, and refine them using human feedback and creative evaluation.
Create novel model architectures to make 3D generation faster, higher-quality, and more controllable.
Collaborate with infrastructure and systems teams to build scalable training and data pipelines across GPU clusters and cloud environments.
Bring engineering discipline into a fast-paced research environment: elegant code, reproducible experiments, and building software as a team.
Share insights and breakthroughs through internal demos, open-source contributions, or technical reports that advance the field of 3D generative AI.
Requirements
Strong engineering skills in Python and deep learning frameworks (preferably PyTorch); comfortable moving between research prototypes and production systems.
Familiarity with Transformers and modern generative AI models (diffusion, flow matching, VAEs, etc.).
Curiosity and passion for multi-modal AI, with an intuitive understanding of how models perceive, represent, and generate 3D worlds.
Familiarity with high-performance training on large-scale infrastructure (e.g., SLURM, Ray, Kubernetes) is a plus.
Contributions to popular open-source machine learning projects, or publications in top-tier CV / ML conferences, are a plus.
Tech Stack
Cloud
Kubernetes
Python
PyTorch
Ray
Benefits
We value intelligence and the pursuit of knowledge.
We care deeply about our work, our users, and each other.
We trust our instincts and are not afraid to take bold risks.