Panasonic Automotive North America is a leading global supplier to automotive original equipment manufacturers (OEMs), specializing in infotainment systems and advanced connected-car solutions. The company is seeking a Senior AI Full Stack Engineer to design and build production-grade AI-powered applications, collaborating with cross-functional teams to deliver AI-driven features across multiple platforms.
Responsibilities:
- Design and build end-to-end AI-powered product features — owning the full stack from React/Next.js UI through FastAPI/Node.js backend services to cloud infrastructure and LLM integrations
- Architect and implement LLM integration layers: connecting to OpenAI, Anthropic Claude, Google Gemini, Meta Llama, or other foundation models via APIs, fine-tuned endpoints, or on-device inference
- Build production-grade RAG (Retrieval-Augmented Generation) pipelines: document ingestion, chunking strategies, embedding generation, vector store management, and orchestrated retrieval for accurate, low-hallucination AI responses
- Develop multi-agent and agentic workflow systems using frameworks such as LangChain, LangGraph, CrewAI, or AutoGen — designing agent memory, tool use, planning loops, and goal decomposition
- Develop prompt engineering strategies, guardrails, and context-management systems that optimize LLM output for latency, cost, and quality at scale
- Build and maintain scalable microservices and event-driven backend architectures (Kafka, Redis, async queues) to handle high-throughput AI workloads and long-running agent tasks
- Design responsive, performant front-end experiences that elegantly surface AI capabilities — including real-time streaming responses (WebSocket/SSE), conversational UIs, AI-assisted dashboards, and multi-modal interfaces
- Establish observability and monitoring frameworks for AI production systems: model performance tracking, hallucination detection, token cost monitoring, latency profiling, and bias alerting
- Implement responsible AI controls at the application layer: input/output guardrails, content filtering, PII redaction, rate limiting, and audit logging for regulatory compliance
- Integrate AI features into automotive-domain applications including connected vehicle dashboards, IVI systems, manufacturing quality intelligence platforms, and supply chain optimization tools
- Collaborate with AI Architects to translate architecture blueprints into production code; provide engineering feedback that improves architectural decisions
- Champion engineering excellence: code reviews, automated testing (unit, integration, AI evaluation), CI/CD pipelines, and documentation for AI-enabled features
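The RAG responsibilities above (document chunking, embedding generation, and orchestrated retrieval) can be sketched in miniature. This is an illustrative sketch only, not Panasonic's implementation: the hash-based bag-of-words embedding is a stand-in for a real embedding model, and the function names `chunk_text`, `embed`, and `retrieve` are hypothetical.

```python
import hashlib
import math

def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character chunks."""
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append(chunk)
        if start + size >= len(text):
            break
    return chunks

def embed(text: str, dims: int = 64) -> list[float]:
    """Toy embedding: hash each word into a fixed-size bag-of-words vector.
    A production pipeline would call a real embedding model here."""
    vec = [0.0] * dims
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Rank chunks by cosine similarity to the query embedding."""
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, embed(c))), c) for c in chunks]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [c for _, c in scored[:top_k]]
```

In a real deployment the ranked chunks would be injected into the LLM prompt as grounding context; the vector store, embedding model, and reranking stage named in the responsibilities replace the toy pieces here.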
Requirements:
- Bachelor's degree in Computer Science, Software Engineering, or related technical field; Master's degree a plus
- 7+ years of professional full stack engineering experience, including 2+ years building and shipping production AI/LLM-integrated features
- Proven track record delivering AI-powered products to real users at scale — prototypes do not count
- Expert-level proficiency in React and Next.js (App Router, SSR, SSG, streaming); TypeScript required
- Experience building real-time AI interfaces: streaming LLM responses via WebSocket or Server-Sent Events (SSE), conversational chat UIs, and multi-modal content displays
- Strong command of modern CSS, state management (Zustand, Redux Toolkit, or Jotai), and UI component libraries
- Strong Python backend development using FastAPI (preferred) or equivalent; experience building async, high-throughput REST and streaming APIs
- Solid understanding of microservices design patterns: event-driven architecture, message queues (Kafka, Redis Pub/Sub, Celery/Taskiq), and fault-tolerant distributed systems
- Database proficiency: PostgreSQL, MongoDB, and Redis for caching and session management
- Hands-on production experience integrating LLM APIs: OpenAI GPT-4o, Anthropic Claude, Google Gemini, Meta Llama, or Mistral
- Deep expertise in RAG architecture: document processing, embedding models, chunking strategies, semantic search, vector databases (Pinecone, Weaviate, Chroma, pgvector, Qdrant)
- Experience with agentic AI frameworks: LangChain, LangGraph, LlamaIndex, CrewAI, AutoGen, or OpenAI Agents SDK
- Strong prompt engineering and context engineering skills; experience designing multi-turn conversations, tool-calling workflows, and structured LLM output parsing
- Experience implementing LLM guardrails, hallucination mitigation, and output validation for production systems
- Strong experience with at least one major cloud platform: AWS, Azure, or GCP; familiarity with managed AI/ML services (AWS Bedrock, Azure OpenAI Service, Vertex AI)
- Containerization and orchestration: Docker and Kubernetes; experience with Helm charts and cloud-native deployments
- CI/CD pipelines for AI-enabled products: automated testing, model evaluation gates, and zero-downtime deployments
- AI observability tooling: LangSmith, Weights & Biases, Helicone, or Arize for LLM tracing, cost tracking, and quality monitoring
- General observability: OpenTelemetry, Prometheus, Grafana, or Datadog for distributed tracing, metrics, and alerting
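The guardrail requirements above (input/output filtering, PII redaction, audit logging) can be illustrated with a minimal sketch. This is an assumption-laden example, not a production design: a real system would combine a dedicated PII library or NER model with these regexes, and `redact_pii` and `PII_PATTERNS` are hypothetical names.

```python
import re

# Illustrative patterns for common PII types; regexes alone are not
# sufficient for production-grade redaction.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Replace PII spans with typed placeholders and report which types fired,
    so the redaction event can be written to an audit log."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
        text = pattern.sub(f"[{label}]", text)
    return text, found
```

A guardrail like this would typically run on both the user input before it reaches the LLM and the model output before it reaches the client, with the returned type list feeding the audit trail required for regulatory compliance.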