Responsibilities
Define and own the end-to-end technical architecture for a dual-mode agentic AI system across offline (edge) and cloud-connected (AWS Bedrock) environments
Drive architectural decision-making across agentic orchestration, data synchronization, voice and chat interfaces, and cloud infrastructure; author and maintain Architecture Decision Records (ADRs)
Act as the primary technical authority within the Engineering POD and the key technical interface with client stakeholders
Bring an AI-forward development practice to every phase of the engagement — using tools like Claude, Cursor, and agentic coding assistants as force multipliers across implementation, code review, and technical documentation
Guide and align the engineering team across AI/ML, full stack, and integration workstreams
Take direct implementation ownership across cloud infrastructure provisioning, the local SQLite data layer, the cloud data mirror, STT/TTS pipeline integration, and the REST API surface
Build and configure the AWS environment end-to-end — including Bedrock, Guardrails, storage, and connectivity services
Implement the edge-to-cloud data synchronization strategy between offline local storage and AWS-hosted data stores
Contribute to agentic orchestration implementation, integration testing, and iterative deployment; support incoming team members with onboarding and context transfer
Design the agentic orchestration layer with tool-use, contextual memory, and conversational interaction patterns for both offline and connected modes
Select an edge LLM informed by prior experience with comparable hardware profiles; configure AWS Bedrock with Guardrails for connected-mode reasoning and content safety
Collaborate with the AI/ML Engineer on prompt engineering, model evaluation, and proactive recommendation logic
Ensure architectural consistency and integration quality across voice (STT/TTS) and chat modalities on a shared backend
Provide technical quality assurance and architectural governance across all POD workstreams
Lead cross-functional engineering coordination across AI/ML, full stack, product, and UX tracks
Drive an iterative delivery cadence, shipping testable software at each phase-review checkpoint and supporting investor-facing demonstrations and POC readouts
Requirements
6+ years of professional software engineering experience, with 2+ years in architectural leadership roles
Proven, hands-on experience with agentic AI systems — including orchestration frameworks, tool-use patterns, and conversational memory architectures
Direct experience with AWS and AWS Bedrock, including environment provisioning, Guardrails configuration, and integration of cloud-native services with conversational AI capabilities (STT/TTS)
Experience designing and implementing offline-capable or edge AI systems — including local model inference, constrained runtime environments, and embedded data layers such as SQLite
Proficiency in at least one modern backend language commonly used in AI and cloud-native stacks (e.g., Python, TypeScript/Node.js)
Experience building and shipping proof-of-concept or innovation-driven technical programs under tight, real-world time constraints
Demonstrated use of AI-assisted development tools such as Claude and Cursor