Finvi is a fast-paced company in the midst of a technology transformation, seeking a Software Engineer to contribute to its next-generation unified AI platform. The role focuses on hands-on greenfield development and deep systems integration, building AI agents for complex enterprise workflows.
Responsibilities:
- Build components of Finvi’s unified AI platform that serve multiple products and business functions
- Contribute to greenfield development of AI-driven systems within architectural direction set by senior engineers
- Implement LLM- and MCP-based integrations for consistent context handling and tool invocation
- Partner with platform, backend, and product teams to integrate AI capabilities into Finvi’s existing systems
- Build LLM-powered features that operate across enterprise workflows, data sources, and tools
- Build and evolve Retrieval-Augmented Generation (RAG) pipelines, including embedding strategies, vector search, and indexing across a range of internal and external data sources
- Optimize prompt engineering, orchestration logic, and agent memory/context strategies for accuracy and reliability
- Evaluate and integrate LLMs and AI tooling with a focus on performance, cost, and enterprise suitability
- Build systems that meet high standards of security, robustness, and reliability
- Build AI solutions that are traceable, well-logged, resilient to failure and edge cases, and safe for use in a regulated debt-collection environment
- Collaborate with security and compliance teams to ensure adherence to applicable regulations and data-handling requirements
- Contribute to best practices in AI engineering, testing, monitoring, and deployment
- Partner closely with product managers and stakeholders to translate business problems into AI-driven solutions
- Contribute to the AI team’s roadmap and platform direction as priorities evolve
Requirements:
- 3–6 years of professional software engineering experience, with a backend or platform focus
- Strong experience building production systems in Java (core language of the platform)
- Hands-on experience working with LLMs, including prompt engineering, orchestration, and evaluation
- Experience building systems that integrate with multiple data sources and search targets
- Solid understanding of distributed systems, APIs, data pipelines, and system reliability
- Strong engineering fundamentals: testing, observability, performance tuning, and secure coding
- Clear, effective communication with technical and non-technical stakeholders, including platform, product, and security partners
- A practical, execution-oriented learning style, quickly applying new concepts to real production use cases
- The ability to adapt based on feedback, failures, and evolving priorities while maintaining momentum
- A high bar for engineering quality, especially given the business-critical and regulated nature of the platform
- Experience designing or implementing RAG architectures or pipelines
- Experience building or integrating AI agents or autonomous/semi-autonomous systems
- Familiarity with Model Context Protocol (MCP) or similar agent/tool interface standards
- Experience with vector databases, embeddings, and large-scale search/indexing systems
- Exposure to regulated or compliance-heavy domains (FinTech, healthcare, legal, debt collection, etc.)
- Experience deploying AI systems in enterprise SaaS environments
- Hands-on experience with AWS, Azure, GCP, or OCI