Aquila Technology is seeking a DevOps Engineer to support the development and operation of enterprise collaboration platforms and cutting-edge Generative AI systems. The role involves managing secure AI infrastructure and delivering web applications and collaboration tools essential for technical missions and daily operations.
Responsibilities:
- Troubleshoot and resolve issues across pipelines, applications, and infrastructure layers
- Manage code with version control systems (Git)
- Deploy and maintain application infrastructure (Generative AI technologies a plus)
- Develop automation in programming/scripting languages (Python, Bash, Perl, Go)
- Apply excellent problem-solving skills
- Communicate effectively within the team
- Work within the Software Development Life Cycle (SDLC)
- Operate containerization/orchestration technologies (e.g., Docker, Docker Swarm, Kubernetes)
- Use monitoring, logging, and observability tools (Prometheus, Grafana, and Splunk)
- Work within Agile methodologies (Jira, Scrum/Kanban)
- Work with cloud computing platforms (e.g., AWS, Azure, GCP)
- Perform hands-on Linux administration (RHEL/Ubuntu)
Requirements:
- 5 years troubleshooting and resolving issues across pipelines, applications, and infrastructure layers
- 5 years with version control systems (Git)
- 3 years deploying and maintaining application infrastructure (Generative AI technologies a plus)
- 5 years with programming/scripting languages (Python, Bash, Perl, Go)
- Excellent problem-solving skills
- Strong team communication
- 5 years with the Software Development Life Cycle (SDLC)
- 5 years with containerization/orchestration technologies (e.g., Docker, Docker Swarm, Kubernetes)
- 5 years with monitoring, logging, and observability tools (Prometheus, Grafana, and Splunk)
- 5 years with Agile methodologies (Jira, Scrum/Kanban)
- 3 years with cloud computing platforms (e.g., AWS, Azure, GCP)
- 5 years of hands-on experience with Linux administration (RHEL/Ubuntu)
- Proven experience in deploying and maintaining application infrastructure, with a particular focus on Generative AI technologies
- Proven hands-on experience with Linux administration, particularly in Red Hat Enterprise Linux (RHEL) distributions
- Proficiency in programming or scripting languages such as Python and Bash
- Strong understanding of cloud computing platforms (e.g., AWS, Azure, GCP)
- Experience with containerization/orchestration technologies (e.g., Docker, Docker Swarm, Kubernetes)
- Strong understanding of Git version control for code and Docker image management
- Ability to troubleshoot and resolve issues across pipelines, applications, and infrastructure layers to ensure operational stability
- Familiarity with monitoring, logging, and observability tools to enhance service insight and alerting, including tools such as Prometheus, Grafana, and Splunk
- Understanding of the Software Development Life Cycle (SDLC) and Agile methodologies (e.g., Scrum)
- Strong communication skills and ability to collaborate effectively with cross-functional teams in a hybrid environment
- Excellent problem-solving and critical-thinking skills
- Adaptability to change
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field with 5+ years of experience
- Experience with designing, building, and maintaining robust CI/CD pipelines using tools such as GitHub Actions, Jenkins, and Ansible
- Strong understanding of security and compliance principles, with previous experience implementing DevSecOps practices
- Experience with AI Services such as AWS Bedrock or Azure AI Foundry
- Experience with Agentic frameworks or libraries (e.g., Model Context Protocol)