Intermediate Software Engineer – Artificial Intelligence, AI at Tucows Domains | JobVerse
Intermediate Software Engineer – Artificial Intelligence, AI
Tucows Domains
Canada
Full Time
1 hour ago
$100,350 - $111,500 CAD
No H1B
Key skills
AWS
Cloud
Kubernetes
Postgres
Python
SQL
Go
Golang
AI
Machine Learning
LLM
Llama
Ollama
RAG
LangChain
Hugging Face
LangGraph
Pinecone
Weaviate
EKS
PostgreSQL
RESTful
Communication
Collaboration
About this role
Role Overview
Design and build AI-driven features for our domain services platform using Python and Golang.
Integrate and fine-tune open-source models such as LLaMA 3.2 and similar cutting-edge architectures via tools like Ollama.
Research, evaluate, and implement emerging AI technologies that align with our vision for smarter, more intuitive products and services.
Collaborate with internal stakeholders and fellow engineers to rapidly prototype and iterate on machine learning and LLM-based features.
Contribute to a modern AI development stack, ensuring scalability, performance, and ethical usage of models.
Actively participate in the open-source ecosystem and bring relevant tools and techniques back to the team.
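To illustrate the kind of integration work described above, here is a minimal sketch of building a request body for Ollama's `/api/generate` endpoint in Python. The model tag `llama3.2`, the endpoint path, and the default port `11434` reflect standard Ollama conventions; the bakery prompt is purely illustrative.

```python
import json


def build_generate_request(model: str, prompt: str) -> str:
    """Build a JSON body for Ollama's /api/generate endpoint.

    With "stream": False, Ollama returns one complete JSON response
    instead of a stream of partial tokens.
    """
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload)


# POST this body to http://localhost:11434/api/generate on a host
# where Ollama is running and the model has been pulled.
body = build_generate_request("llama3.2", "Suggest three domain names for a bakery.")
print(body)
```

In practice this payload would be sent with any HTTP client; keeping request construction separate from transport makes it easy to unit-test prompts without a live model server.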
Requirements
Bachelor’s degree in Software Engineering, Computer Science, or a related field
3+ years of professional software engineering experience in production environments
Strong proficiency in Python and Golang
Solid foundation in software design principles, patterns, and service-oriented architecture
Experience contributing to scalable systems and component-level architecture
Ability to design and build RESTful APIs for model serving and AI-enabled workflows
Working knowledge of relational/SQL databases (preferably PostgreSQL) and data modeling for AI use cases
Strong understanding of modern LLM concepts, including transformer architectures and attention mechanisms
Hands-on experience adapting and deploying open-source models (e.g., LLaMA, Mistral, Mixtral) using tools like Ollama or Hugging Face Transformers
Experience with fine-tuning techniques (e.g., LoRA, QLoRA, PEFT) for domain-specific adaptation
Proficiency in prompt engineering (few-shot, chain-of-thought, structured outputs)
Familiarity with model serving patterns for efficient, scalable inference
Experience designing and implementing Retrieval-Augmented Generation (RAG) pipelines end-to-end
Hands-on experience with vector databases (e.g., pgvector, Pinecone, Weaviate)
Familiarity with embedding models, chunking strategies, and semantic search patterns
Understanding of data pipelines for ingestion, transformation, and inference result storage
Familiarity with Model Context Protocol (MCP) server design patterns
Experience with agent orchestration frameworks (e.g., LangChain, LangGraph)
Understanding of tool use, function calling, and multi-step reasoning in LLM workflows
Experience with LLM evaluation frameworks (e.g., RAGAS, promptfoo, or custom pipelines)
Familiarity with observability and tracing tools (e.g., LangSmith, Helicone)
Comfort with structured logging, metrics, and alerting for AI workloads
Experience with containerization and cloud-native deployment (preferably AWS)
Familiarity with Kubernetes or EKS for scaling model-serving workloads
Understanding of GPU considerations for inference (quantization, batching, memory trade-offs)
Active interest in the open-source AI ecosystem
Strong collaboration and communication skills across technical and business teams
Enthusiasm for emerging AI technologies with a practical, delivery-focused mindset
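The chunking strategies mentioned in the requirements can be sketched in a few lines. This is a hedged, minimal example of fixed-size character chunking with overlap, a common first step in a RAG ingestion pipeline; real pipelines typically chunk by tokens or sentences, and the sizes here are arbitrary.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap.

    Overlapping windows reduce the chance that a relevant sentence is
    cut in half at a chunk boundary before embedding, at the cost of
    storing some duplicated text in the vector store.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i : i + chunk_size] for i in range(0, len(text), step)]


# Each chunk shares its last `overlap` characters with the start of the next.
doc = "".join(chr(97 + i % 26) for i in range(500))
chunks = chunk_text(doc, chunk_size=200, overlap=50)
```

Each chunk would then be embedded and stored in a vector database such as pgvector, Pinecone, or Weaviate for semantic retrieval.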
Tech Stack
AWS
Cloud
Kubernetes
Postgres
Python
SQL
Go
Benefits
Generous benefits
Fair compensation