Explore new AI model architectures that run optimally on edge compute devices with memory constraints.
Drive breakthroughs by converting ANN models (such as CNNs and Transformers) into models that operate efficiently with reduced memory, using paradigms such as SNNs, SSMs, sparsity, or RNNs.
Support proof-of-concept work advancing model retraining, optimization, and compression.
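For context on the SNN paradigm named above: the basic spiking unit that libraries such as snnTorch provide is the leaky integrate-and-fire (LIF) neuron. A minimal, dependency-free sketch in plain Python (the function name, parameters, and input values are illustrative, not part of the role description):

```python
def lif_step(mem, x, beta=0.9, threshold=1.0):
    """One timestep of a leaky integrate-and-fire neuron.

    mem:       membrane potential carried over from the previous step
    x:         weighted input current at this step
    beta:      leak factor (fraction of potential retained per step)
    threshold: firing threshold
    """
    mem = beta * mem + x           # leaky integration
    spike = int(mem >= threshold)  # fire when the threshold is crossed
    mem -= spike * threshold       # soft reset after a spike
    return mem, spike

# Drive the neuron with a constant input and record its spike train.
mem, spikes = 0.0, []
for _ in range(10):
    mem, s = lif_step(mem, 0.4)
    spikes.append(s)
print(spikes)  # [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Because neurons communicate through sparse binary spikes rather than dense activations, SNNs can trade accuracy for substantially lower memory and power, which is the appeal on constrained edge hardware.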
Requirements
PhD intern in electrical engineering, computer engineering, computer science, or a related field.
Deep understanding of AI model architectures, including ANNs, SNNs, SSMs, and sparsity.
Extensive knowledge of edge AI model compression, optimization, and compilation.
Proficient with multiple frameworks and languages, including snnTorch, PyTorch, TVM, TensorFlow, CUDA, C/C++, and Python.
Familiar with analog compute, in-memory compute, sparse compute, and neuromorphic systems.
Strong analytical mindset with the ability to balance performance, power, and memory tradeoffs.
Experience with physical-world sensing modalities, including vision, 3D point clouds, tactile, and audio – an advantage.
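The model-compression requirement above often comes down in practice to post-training quantization. A minimal, framework-free sketch of symmetric 8-bit weight quantization (function names and the toy weight values are illustrative):

```python
def quantize_int8(weights):
    """Symmetric post-training quantization of a float weight list to int8.

    Maps floats in [-max_abs, max_abs] onto integers in [-127, 127],
    shrinking 32-bit weights to 8 bits (a 4x memory reduction).
    """
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [qi * scale for qi in q]

w = [0.52, -1.27, 0.03, 0.98]
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Each recovered weight is within half a quantization step of the original.
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
```

Real edge deployments layer per-channel scales, activation quantization, and calibration on top of this, but the core memory/accuracy tradeoff is the one shown here.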
Tech Stack
Python
PyTorch
TensorFlow
Benefits
Various lunch-and-learn topics
Social events with other interns and full-time employees