Proactively explore and implement AI tools, LLM integrations, and MCP (Model Context Protocol) to reduce routine database toil, optimize query performance, and accelerate incident resolution.
Support our data warehouse ecosystem by optimizing Snowflake performance and by packaging and testing Snowflake applications.
Own deep optimization of MSSQL (crucial for on-call stability) and PostgreSQL at the server, database, and query levels.
Forecast resource utilization across platforms. Identify cost-saving opportunities, optimize Snowflake credit usage, and right-size AWS infrastructure.
Automate all data infrastructure using Terraform, AWS, Docker, and Kubernetes, managing containerized data services and stateful workloads.
Manage and optimize deployment pipelines using GitLab and Octopus Deploy, ensuring safe, repeatable database schema changes.
Create technical documentation, including runbooks, "how-to" guides for developer self-service, and clear architectural diagrams.
Serve as the subject matter expert for SQL Server, Postgres, and Snowflake in a 24/7/365 on-call rotation.
Requirements
8+ years of experience maintaining and optimizing high-traffic production databases, specifically Microsoft SQL Server and PostgreSQL
Expert knowledge of Snowflake
Professional experience with Terraform, AWS, GitLab, and Octopus Deploy
Expert-level knowledge of Docker and production experience running stateful applications within Kubernetes (K8s)
Knowledge of MCP (Model Context Protocol) and experience using LLMs to assist in coding or system automation
Expert-level Python and PowerShell skills. Familiarity with C# and object-oriented programming (OOP).
Expertise in tuning queries generated by ORMs (Entity Framework, Dapper, etc.)
Hands-on experience with Snowflake application packaging and testing; familiarity with OpenSearch, Redis, and DynamoDB
Education: B.S./M.S. in Computer Science, Information Systems, or equivalent direct work experience.