Design and develop ETL pipelines using Oracle Integration Cloud (OIC), Informatica, Talend, or equivalent middleware tools to transform extracted legacy HR data into Oracle Fusion HCM Data Loader (HDL) and HCM Spreadsheet Data Loader (HSDL) file formats (see the HDL output sketch following this list)
Build data transformation rules that map legacy data values (codes, descriptions, hierarchies) to Oracle Fusion HCM reference data, value sets, and lookup values for each agency migration (see the crosswalk sketch following this list)
Develop data cleansing pipelines that identify and remediate data quality issues, including missing required fields, invalid date formats, duplicate records, and referential integrity violations, before loading into the target system (see the validation sketch following this list)
Design and implement batch processing frameworks capable of handling agency migrations across small to very large employee populations, with configurable parallelism and restart/recovery capabilities (see the batch framework sketch following this list, which also covers the retry logic in the next item)
Build comprehensive error handling and exception management within ETL pipelines including error categorization, automated retry logic, and error reporting dashboards for migration teams
Develop data reconciliation processes that compare source record counts, key data element values, and business rule validations across legacy extracts, transformation outputs, and records loaded into the target system (see the reconciliation sketch following this list)
Create and maintain data quality dashboards providing real-time visibility into migration pipeline status, error rates, data completeness metrics, and reconciliation results for each agency migration wave
Design reusable transformation templates and parameterized mapping configurations that accelerate successive agency migrations by leveraging patterns from completed migrations
Collaborate with legacy system export developers (PeopleSoft/EBS team) to define extraction file formats, handoff procedures, and data interface specifications
Support iterative mock migration cycles (typically 3-4 per agency) and final cutover data loads, optimizing pipeline performance and resolving data quality issues identified in each cycle
Document ETL design specifications, transformation rules, data flow diagrams, and operational runbooks for each agency migration in accordance with SAFe Agile artifact standards
Apply established methods, standards, and practices to independently resolve functional and technical issues of moderate scope, contributing to team knowledge bases and consulting with senior staff on complex or unfamiliar problems as they arise
Communicate effectively with project team members and direct stakeholders to report progress, explain technical approaches, and support collaborative problem-solving within assigned workstreams and Agile Release Train ceremonies
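The sketches below illustrate several of the techniques named in these responsibilities; all field names, file names, and helper functions in them are illustrative assumptions, not project specifics. First, the HDL output stage: HCM Data Loader consumes pipe-delimited .dat files in which a METADATA line declares the attribute order for a business object and MERGE lines carry the data. A minimal sketch, assuming a small subset of Worker attributes:

```python
from io import StringIO

# Illustrative subset of Worker.dat attributes; the authoritative list comes
# from the HCM Data Loader business-object template for the target release.
WORKER_ATTRS = ["SourceSystemOwner", "SourceSystemId", "PersonNumber",
                "DateOfBirth", "StartDate"]

def to_hdl_worker(records):
    """Render transformed records as HDL-style METADATA/MERGE lines."""
    out = StringIO()
    out.write("METADATA|Worker|" + "|".join(WORKER_ATTRS) + "\n")
    for rec in records:
        out.write("MERGE|Worker|" +
                  "|".join(str(rec.get(a, "")) for a in WORKER_ATTRS) + "\n")
    return out.getvalue()

print(to_hdl_worker([{"SourceSystemOwner": "LEGACY_HR",
                      "SourceSystemId": "EMP-1001", "PersonNumber": "1001",
                      "DateOfBirth": "1985-03-14", "StartDate": "2020-07-01"}]),
      end="")
```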
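For the reference-data mapping responsibility, a crosswalk can be as simple as a per-field dictionary consulted during transformation, with unmapped codes routed to an error report rather than aborting the run. The legacy and target codes below are hypothetical:

```python
# Hypothetical crosswalk for one field: numeric legacy marital-status codes
# mapped to letter-style target lookup codes. Real crosswalks would be driven
# by per-agency reference-data workbooks, not hard-coded dictionaries.
MARITAL_STATUS_XWALK = {"1": "S", "2": "M", "3": "D", "4": "W"}

def map_lookup(xwalk, legacy_code, field_name, errors):
    """Translate one legacy code; record unmapped values instead of failing."""
    if legacy_code not in xwalk:
        errors.append(f"{field_name}: unmapped legacy code {legacy_code!r}")
        return None
    return xwalk[legacy_code]

errors = []
print(map_lookup(MARITAL_STATUS_XWALK, "2", "MaritalStatus", errors))  # -> M
map_lookup(MARITAL_STATUS_XWALK, "9", "MaritalStatus", errors)
print(errors)  # unmapped code captured for remediation reporting
```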
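For the cleansing responsibility, a minimal validation pass that catches missing required fields, invalid date formats, and duplicate keys might look like the following; the field names and the expected ISO date format are assumptions:

```python
from datetime import datetime

REQUIRED = ("employee_id", "last_name", "hire_date")  # illustrative fields

def cleanse(records):
    """Split records into clean rows and rows with categorized DQ issues."""
    clean, rejects, seen_ids = [], [], set()
    for rec in records:
        problems = [f"missing required field {f}" for f in REQUIRED
                    if not rec.get(f)]
        hire = rec.get("hire_date")
        if hire:
            try:
                datetime.strptime(hire, "%Y-%m-%d")
            except ValueError:
                problems.append(f"invalid date format {hire!r}")
        emp_id = rec.get("employee_id")
        if emp_id in seen_ids:
            problems.append(f"duplicate employee_id {emp_id!r}")
        seen_ids.add(emp_id)
        if problems:
            rejects.append({"record": rec, "problems": problems})
        else:
            clean.append(rec)
    return clean, rejects

good, bad = cleanse([
    {"employee_id": "1", "last_name": "Smith", "hire_date": "2021-04-01"},
    {"employee_id": "1", "last_name": "Smith", "hire_date": "04/01/2021"},
])
print(len(good), bad[0]["problems"])
```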
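For the batch framework and error-handling responsibilities, one common pattern combines a persisted checkpoint (restart/recovery), a worker pool (configurable parallelism), and bounded retries with backoff. The checkpoint.json state file and process_batch unit of work below are placeholders:

```python
import json
import time
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

CHECKPOINT = Path("checkpoint.json")  # hypothetical restart-state file

def completed():
    """Load the set of batch ids already finished in a prior run."""
    return set(json.loads(CHECKPOINT.read_text())) if CHECKPOINT.exists() else set()

def process_batch(batch_id):
    """Placeholder for one unit of load work, e.g. one HDL file."""
    time.sleep(0.01)

def run(batch_ids, workers=4, max_attempts=3):
    done = completed()                      # restart: skip finished batches
    pending = [b for b in batch_ids if b not in done]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for attempt in range(1, max_attempts + 1):
            futures = {pool.submit(process_batch, b): b for b in pending}
            failed = []
            for fut, batch in futures.items():
                try:
                    fut.result()
                    done.add(batch)
                except Exception:           # categorize transient vs. fatal here
                    failed.append(batch)
            CHECKPOINT.write_text(json.dumps(sorted(done)))
            if not failed:
                return
            pending = failed                # retry only the failures
            time.sleep(2 ** attempt)        # simple backoff between passes

run([f"batch-{i:03d}" for i in range(10)])
```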
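For the reconciliation responsibility, the core comparison is record counts, key coverage, and selected field values between the source extract and the records confirmed loaded. A minimal sketch, assuming dictionary rows keyed by a hypothetical employee_id:

```python
def reconcile(source_rows, loaded_rows, key="employee_id", fields=("salary",)):
    """Compare counts, keys, and selected values between extract and load."""
    src = {r[key]: r for r in source_rows}
    tgt = {r[key]: r for r in loaded_rows}
    return {
        "source_count": len(src),
        "loaded_count": len(tgt),
        "missing_in_target": sorted(src.keys() - tgt.keys()),
        "unexpected_in_target": sorted(tgt.keys() - src.keys()),
        "value_mismatches": [
            (k, f, src[k].get(f), tgt[k].get(f))
            for k in sorted(src.keys() & tgt.keys())
            for f in fields if src[k].get(f) != tgt[k].get(f)
        ],
    }

print(reconcile(
    [{"employee_id": "1001", "salary": "55000"},
     {"employee_id": "1002", "salary": "61000"}],
    [{"employee_id": "1001", "salary": "55000"}],
))
```

The same comparisons translate directly to SQL MINUS/EXCEPT queries when both sides sit in staging tables.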
Requirements
Bachelor's degree in Computer Science, Information Technology, Software Engineering, or related field
4+ years of experience in ETL development, middleware development, or data integration for enterprise system implementations
3+ years of hands-on experience with ETL/middleware tools such as Oracle Integration Cloud (OIC), Informatica PowerCenter/Cloud, Talend, or equivalent data integration platforms
Strong SQL proficiency with experience writing complex transformation queries, data validation scripts, and reconciliation procedures
Experience with batch data processing for large-volume datasets (100,000+ records) including performance optimization, error handling, and restart/recovery design patterns
Knowledge of data quality management principles including data profiling, cleansing, standardization, and validation
Experience with file-based data integration formats (CSV, XML, JSON) and enterprise data loader specifications
Familiarity with Agile/SAFe development methodologies and iterative delivery practices
Must be able to obtain and maintain a Public Trust clearance (US Citizenship required)