Armada is a full-stack edge infrastructure company delivering compute, connectivity, and sovereign AI/ML to some of the world’s most remote places. They are seeking a visionary VP of Customer Engineering to lead a globally distributed team focused on AI infrastructure and edge computing, owning rigorous management of the pre-sales technical lifecycle and driving deep customer engagement.
Responsibilities:
- Build & Scale a Global Customer Engineering Organization
  - Lead, coach, and develop a globally distributed team of Customer Engineers spanning North America, EMEA, and emerging markets
  - Define and execute a global hiring strategy: build CE presence in new regions, establish operating rhythms, onboard early hires, and set standards for technical excellence worldwide
  - Create talent development pathways that grow CEs into senior AI infrastructure architects and future leaders
  - Build a culture of continuous learning around AI infrastructure, edge computing, and real-world deployment at scale
- Drive AI-Focused Technical Discovery & Solution Architecture
  - Champion a rigorous, AI-first discovery methodology that guides CEs to uncover customer mission goals, AI workload requirements, data sovereignty constraints, and connectivity realities across diverse global environments
  - Ensure the team consistently translates complex, distributed AI environments into validated edge architectures built around Armada's Galleon modular data centers, Atlas platform, and GPU-accelerated edge AI stack
  - Define and govern solution design standards for AI inference, real-time analytics, and edge ML pipelines in bandwidth-constrained and disconnected environments
- Elevate Global Pre-Sales Technical Quality
  - Set and raise the bar on discovery outputs, AI architecture designs, technical narratives, demo environments, and proof-of-value success criteria worldwide
  - Standardize technical qualification frameworks so that AI infrastructure opportunities are well scoped, feasible, and commercially validated before deep engineering engagement
  - Develop a global review cadence and peer architecture review process to maintain consistency and quality across all regions
- Partner Cross-Functionally to Accelerate Global Revenue
  - Collaborate closely with regional Sales leaders, Product, Engineering, and Global Deployment teams to align on AI infrastructure positioning, competitive differentiation, and customer roadmaps
  - Bridge technical architectures to measurable customer outcomes, articulating ROI, operational efficiency, and AI-driven value creation across the energy, defense, telecommunications, and industrial verticals
  - Synthesize global customer insights to inform Armada's AI product roadmap, hardware evolution, and platform strategy
- Build Scalable AI Infrastructure Methodologies & Playbooks
  - Develop globally consistent reference architectures for AI inference at the edge, GPU cluster deployments, satellite-connected operations, and hybrid cloud-edge patterns
  - Create repeatable frameworks for AI proof-of-value pilots, technical discovery, and competitive positioning across Armada's key verticals
  - Enable regional CE teams with localized deployment guides, regulatory considerations, and partner ecosystem alignment, ensuring global consistency while preserving local agility
Requirements:
- Bachelor's degree in Computer Science, Electrical Engineering, Systems Engineering, or equivalent technical field
- 7+ years leading Customer Engineering or Solutions Architecture teams in pre-sales; demonstrated success hiring and scaling globally
- 7–10+ years of hands-on pre-sales or solutions engineering experience in AI infrastructure, edge computing, datacenter, or distributed systems
- Deep expertise in GPU and AI accelerator infrastructure: NVIDIA GPU architectures, AI inference frameworks (TensorRT, ONNX, vLLM), and edge AI platforms
- Strong grounding in datacenter and edge infrastructure: compute (GPU, bare metal, virtualization), storage (SAN/NAS/Object/NVMe), networking (LAN/WAN/SD-WAN/SATCOM), and facility systems (power, cooling)
- Hands-on experience with container orchestration (Kubernetes), virtualization (VMware, KVM, Hyper-V), and cloud service models (IaaS, PaaS, hybrid)
- Proven ability to engage and influence C-level technical and operational leaders across global enterprise and government customers
- Willingness to travel internationally, including to remote and operationally austere field sites
- Experience deploying or architecting AI solutions in oil & gas, defense & intelligence, utilities, telecommunications, or mining verticals
- Hands-on exposure to modular, containerized, or mobile data center deployments — including skid-based and rapid-deploy form factors
- Familiarity with edge AI inference optimization, model quantization, and deployment frameworks for bandwidth-constrained environments
- Background integrating OT/IT convergence — connecting sensors, IIoT devices, and SCADA systems to AI-enabled edge platforms
- Experience with satellite and hybrid connectivity architectures (Starlink, LEO, VSAT) for remote AI deployments
- International experience building CE teams or managing customer engagements in EMEA, APAC, or Middle East markets
- Certifications in AI/ML (e.g., NVIDIA DLI), cloud infrastructure (AWS, Azure, GCP), or datacenter design (CDCP, DCDC, RCDD)
- Experience collaborating with construction, facilities, and deployment partners on large-scale infrastructure projects