ICF, a global advisory and technology services provider, is seeking a Senior Software Engineer to join its team. The role involves developing and maintaining backend APIs for a major program with the Centers for Medicare & Medicaid Services (CMS) and requires software engineering expertise in a consulting environment.
Responsibilities:
- Backend API Development and Maintenance
- Data Storage and ETL Engineering
- Working in an AWS cloud-based environment
- Unit Test Writing
- Working with a Scrum Team using Agile
- Writing documentation in Confluence and using JIRA for User Stories
- Performance Testing
- Participating in all team meetings
Requirements:
- Bachelor's Degree
- 5+ years of professional software development experience
- Candidate must be able to obtain and maintain a Federal Public Trust clearance
- Candidate must reside in the U.S., be authorized to work in the U.S., and all work must be performed in the U.S.
- Candidate must have lived in the U.S. for three (3) full years out of the last five (5) years
- TypeScript / JavaScript — backend services, async patterns, Node.js runtime
- Python 3 — data engineering, ETL pipelines, type hints, abstract base classes
- SQL — analytical queries, schema design, query optimization across PostgreSQL and MySQL
- NestJS Framework — modules, controllers, services, dependency injection, guards, middleware, decorators
- RESTful API design — resource modeling, HTTP semantics, versioning
- ETL pipeline design — extract → transform → validate → publish lifecycle; idempotency patterns; runtime business rule validation
- S3 — file storage, S3A filesystem integration with Spark, lifecycle conventions
- Docker — multi-stage Dockerfiles, docker-compose for local dev clusters, environment parity with production runtimes
- ORM proficiency — TypeORM (entity modeling, migrations, query builder, transactions)
- Authentication & authorization — JWT/Bearer tokens and policy-based authz with role/claim evaluation
- Apache Spark (PySpark) — distributed compute, DataFrame I/O, Spark SQL, EMR Serverless job configuration and submission
- Pandas / NumPy — in-process data transformation, vectorized operations, statistical aggregations
- Vitest and Jest — unit and integration testing, high coverage discipline (95%+ thresholds)
- pytest — Python unit and integration testing; mocking AWS services
- Structured logging — contextual request/job logging
- APM tooling — Datadog familiarity a plus
- Dependency security — Snyk, CVE remediation, automated dependency updates (Dependabot)
- PostgreSQL — schema design, JDBC integration, query optimization
- MySQL / Aurora MySQL — schema design, indexing, migrations
- Amazon Redshift — analytical SQL, serverless cluster connectivity, credential management
- AWS CodeBuild — CI/CD pipeline authoring, multi-step buildspecs, secret injection
- EMR Serverless — PySpark job submission, monitoring, custom Python
- SSM Parameter Store — runtime secret and config injection
- TypeScript linting — ESLint 9, TypeScript ESLint, Prettier, Husky + lint-staged pre-commit hooks
- Python linting — Ruff (lint + format), isort, pip-compile for deterministic dependency pinning
- Strong leadership and teamwork skills
- Highly effective analytical, problem-solving, and decision-making capabilities
- Excellent communication and interpersonal skills to interface effectively at all levels of the business
- Organized, detail-oriented, and able to self-organize, prioritize, and multi-task across multiple projects under tight deadlines in a fast-paced environment
- Federal Government contracting work experience
- Prior experience in consulting or healthcare highly preferred