Peraton is a next-generation national security company that drives missions of consequence. Peraton is seeking a skilled Senior Databricks Data Engineer / Architect to support modernization, data, and cloud initiatives by designing and building scalable Databricks Lakehouse solutions and optimizing data pipelines.
Responsibilities:
- Architect and design Development, Test, Pre-production, Production, and Disaster Recovery (DR) environments in AWS using Databricks
- Install, configure, and troubleshoot Databricks PVC deployments across all environments, and provide documentation for future installations
- Develop and optimize ETL/ELT pipelines using SQL, Python, and PySpark
- Ingest, transform, and integrate data from diverse enterprise sources
- Define data models, data quality rules, and observability standards
- Lead performance tuning, cost optimization, and reliability enhancements for Databricks workloads
- Collaborate with architects, product owners, and business stakeholders to deliver data solutions
- Provide guidance on Databricks and cloud best practices
- Deliver peer training and mentorship as needed
- Contribute to defining and implementing the target-state Databricks architecture
- Support the development of the Data Governance framework and operating model for the target state
- Assist Cybersecurity with security assessments, ATO readiness, and related approvals
- Perform advanced Databricks performance optimization (e.g., Z-Ordering, partitioning, bucketing) and monitor system performance
- Support self-service analytics and BI platforms
- Apply CI/CD and DataOps best practices to data engineering workflows
- Mentor engineers, conduct code reviews, and promote engineering excellence
- Build enterprise-scale data systems enabling analytics and reporting initiatives
- Document data pipelines, system architecture, and technical processes to ensure maintainability and knowledge transfer
Requirements:
- BS/BA with a minimum of 8 years of relevant experience; MS/MA with a minimum of 6 years; or PhD with a minimum of 3 years
- Experience in data engineering or a related field
- Hands-on experience with Databricks
- Cloud experience (AWS or Azure)
- Familiarity with CI/CD pipelines for data projects and automated deployment strategies
- Strong proficiency in SQL and Python for data processing and ETL development
- Familiarity with Agile/Scrum methodologies and Jira for project tracking
- Solid understanding of data modeling, warehousing, and performance optimization
- Must be a U.S. Citizen or Lawful Permanent Resident (LPR) with at least three consecutive years of U.S. residency from the date of legal entry as an LPR
- Must be able to obtain and maintain the required MBI clearance
Preferred Qualifications:
- Enterprise data architecture experience
- Exposure to machine learning (ML) workflows
- Databricks or cloud certifications
- Knowledge of data governance, compliance, and security best practices
- Active MBI clearance