Leidos, a company focused on digital modernization, is seeking a highly motivated Data Integration Engineer to support the design, integration, and operationalization of enterprise data products. The role involves collaborating with data owners and technical staff to create scalable data solutions and modernize data structures within cloud-native platforms.
Responsibilities:
- Design and implement data integration solutions across enterprise systems and cloud data platforms (e.g., Oracle, Snowflake, AWS, Azure)
- Extend existing physical data models into logical and semantic data models that support analytics and AI use cases
- Partner with data owners and CDAO technical staff to define, design, and refine enterprise data products, including domains, schemas, interfaces, SLAs, and consumption patterns
- Collaborate with the team to translate enterprise architecture standards and data governance guidelines into implementable models (logical, physical, domain) and integration patterns
- Work closely with Data Engineers to ensure pipelines are aligned to target logical, physical, domain, and semantic models
- Develop, implement, and maintain dimensional, relational, and domain-driven data product models and databases using the IDERA/Embarcadero suite of products, ensuring assets are optimized for performance, scalability, and AI-readiness within cloud-native data platforms
- Collaborate with data owners and CDAO technical staff to develop and maintain Leidos data protection and data privacy policies governing data use
- Collaborate with CDAO technical staff to develop and maintain the IDERA/Embarcadero repository and portal data objects
- Support metadata registration and governance alignment within Collibra
- Implement data integration patterns including batch, streaming, API-based, and event-driven architectures
- Participate in data quality and validation processes to ensure trusted, production-ready data products
- Contribute to documentation, standards, and modeling best practices
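As a minimal illustration of the batch integration pattern named above (extract, validate as a data-quality gate, then load), the sketch below uses hypothetical record shapes and an in-memory list as a stand-in for a warehouse table; all names are illustrative, not part of any Leidos system:

```python
from dataclasses import dataclass


@dataclass
class Record:
    """Hypothetical typed record produced by the extract step."""
    id: int
    amount: float


def extract(raw_rows):
    """Parse raw source rows (CSV-like string tuples) into typed records."""
    return [Record(id=int(r[0]), amount=float(r[1])) for r in raw_rows]


def validate(records):
    """Simple data-quality gate: drop records with non-positive amounts."""
    return [r for r in records if r.amount > 0]


def load(records, target):
    """Batch-load validated records into the target (here, a plain list)."""
    target.extend(records)
    return len(records)


raw = [("1", "10.5"), ("2", "-3.0"), ("3", "7.25")]
table = []
loaded = load(validate(extract(raw)), table)  # record "2" fails validation
```

The same extract/validate/load stages apply whether the trigger is a batch schedule, a stream consumer, or an event-driven callback; only the orchestration around them changes.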
Requirements:
- Bachelor's degree in Computer Science, Information Systems, or related field and 8+ years of relevant experience
- Strong experience with data modeling (conceptual, logical, semantic, and physical) using the IDERA/Embarcadero suite or similar products
- Hands-on experience assembling and implementing DDL for tables, views, SQL frameworks, and security policies within relational database systems
- Understanding of data replication products, preferably Oracle GoldenGate
- Hands-on experience with cloud data platforms such as Snowflake, AWS, Azure, or GCP
- Hands-on experience working with ETL/ELT/API developers on the design and implementation of data integration pipelines, preferably with Informatica
- Proficiency in SQL and understanding of performance optimization techniques
- Experience working with Data Architects and cross-functional technical teams
- Strong analytical and problem-solving skills
- US Citizenship is required
Preferred Qualifications:
- Experience designing semantic data models to support AI, ML, and advanced analytics use cases
- Experience contributing to data product design within a modern data platform architecture
- Familiarity with medallion architecture, data mesh, or domain-oriented data product strategies
- Experience working with Snowflake-native capabilities (e.g., streams, tasks, Snowpark, dbt)
- Familiarity with metadata management and governance tools such as Collibra
- Exposure to RAG pipelines or AI-driven data consumption patterns
- Experience working in regulated or government environments
- Knowledge of Python or Spark for data transformation
- Experience implementing CI/CD practices for data pipelines with Terraform and GitLab
- Working knowledge of IDERA/Embarcadero and Collibra products and their integration
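To make the DDL-assembly requirement above concrete, here is a minimal sketch of forward-engineering a physical table from a logical column spec, in the spirit of what modeling tools like IDERA/Embarcadero automate; the function and column names are hypothetical examples, not any vendor's API:

```python
def build_create_table(table_name, columns, primary_key=None):
    """Assemble a CREATE TABLE statement from a logical column spec.

    columns: list of (name, sql_type, nullable) tuples describing the
    logical model; primary_key: optional list of key column names.
    """
    lines = []
    for name, sql_type, nullable in columns:
        null_clause = "" if nullable else " NOT NULL"
        lines.append(f"    {name} {sql_type}{null_clause}")
    if primary_key:
        lines.append(f"    PRIMARY KEY ({', '.join(primary_key)})")
    body = ",\n".join(lines)
    return f"CREATE TABLE {table_name} (\n{body}\n)"


# Example: a small dimensional table from a logical spec.
ddl = build_create_table(
    "customer_dim",
    [
        ("customer_id", "NUMBER", False),
        ("customer_name", "VARCHAR2(200)", True),
    ],
    primary_key=["customer_id"],
)
```

In practice the modeling tool holds the logical model and generates dialect-specific DDL (Oracle, Snowflake, etc.); the point of the sketch is only the mapping from model metadata to physical DDL.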