Responsibilities
Develop and maintain data ingestion, transformation (ETL/ELT), and data processing pipelines.
Implement data workflows supporting structured and unstructured data across enterprise environments.
Support integration of data pipelines with enterprise data platforms, APIs, and downstream analytics applications.
Assist in the development of data models, schemas, and metadata structures aligned with program standards.
Perform data validation and quality checks, and implement basic data governance controls.
Support integration of data pipelines into DevSecOps CI/CD workflows.
Troubleshoot data pipeline issues and assist with performance tuning and optimization.
Collaborate with data architects, data scientists, and software engineers to support end-to-end data solutions.
Participate in SAFe ceremonies including sprint planning, backlog refinement, sprint reviews, and retrospectives.
Document data processes, pipelines, and technical implementations.
Requirements
Active Top Secret (TS) clearance with SCI eligibility.
Bachelor’s degree in Computer Science, Data Science, Engineering, Information Systems, or related technical discipline and 4–8 years of relevant experience OR Master’s degree in a related field and 2–6 years of relevant experience.
Experience developing and maintaining data pipelines using modern programming languages (e.g., Python, Java, or similar).
Experience working with relational and non-relational databases.
Experience performing data ingestion, transformation, and processing tasks.
Experience working with data tools and platforms (e.g., Spark, Kafka, Airflow, or similar).
Experience operating within Agile or SAFe environments supporting enterprise systems.