Axle is a bioscience and information technology company advancing translational research, biomedical informatics, and data science applications. They are seeking a Junior Software Engineer with a solid foundation in C++ to build scalable software systems for compute-intensive research environments and to leverage LLM-based tools in their development workflows.
Responsibilities:
- Develop and maintain C++ software components for scientific and computational workloads, with a focus on correctness and maintainability
- Integrate and leverage LLM-based tools (e.g., GitHub Copilot, Claude, GPT-4, or similar) throughout the development lifecycle — from code generation and review to documentation and debugging
- Build and experiment with LLM-assisted pipelines for automating repetitive engineering tasks, code scaffolding, and developer productivity tooling
- Collaborate with senior engineers and researchers to prototype and evaluate LLM-integrated solutions to complex software problems
- Contribute to testing, CI/CD workflows, and code documentation — using AI tools to maintain quality and delivery speed
- Stay current with the fast-moving LLM tooling ecosystem and proactively share learnings with the team
Requirements:
- 1–3 years of professional experience (or equivalent academic/research experience) in software development
- Working knowledge of C++, including comfort reading, debugging, and contributing to C++ codebases
- Demonstrated experience using LLM-based development tools (e.g., Copilot, Cursor, Claude, ChatGPT, or similar) as part of a regular development workflow
- Familiarity with prompt engineering concepts and an understanding of how to effectively direct LLMs for code generation, review, or explanation tasks
- Working knowledge of Linux development environments
- Basic experience with Python for scripting, tooling, or working with LLM APIs (e.g., OpenAI, Anthropic)
- Strong problem-solving skills and eagerness to learn within a fast-evolving technical landscape
- Experience building or integrating LLM-powered features into applications or developer tooling (e.g., RAG pipelines, tool-calling agents, code-aware assistants)
- Familiarity with LLM APIs (OpenAI, Anthropic, Hugging Face) or open-source models (LLaMA, Mistral, etc.)
- Understanding of modern C++ (C++17/C++20) idioms and experience with performance-oriented development
- Exposure to scientific computing, numerical methods, or high-performance computing concepts
- Experience with parallel computing concepts (multithreading, vectorization, OpenMP)
- Familiarity with modern build systems (CMake, Bazel) and container technologies (Docker)
- Interest in evaluating and benchmarking LLM output quality for engineering tasks