L3Harris Technologies is dedicated to recruiting and developing high-performing talent in the defense tech sector. The Data Engineer will manage enterprise-level data life cycle processes, oversee data pipelines, and ensure data quality to support decision-making across the organization.
Responsibilities:
- Design, build, and maintain robust data pipelines to ensure reliable data flow across the enterprise
- Maintain data pipeline schedules, orchestrate workflows, and monitor the overall health of data pipelines to ensure continuous data availability
- Create, update, and optimize data connections, datasets, and transformations to align with business needs
- Troubleshoot and resolve data sync issues, ensuring consistent and correct data flow from source systems
- Collaborate with cross-functional teams to uphold data quality standards and ensure accurate data is available for use
- Utilize Palantir Foundry to establish data connections to source applications, extract and load data, and design complex logical data models that meet functional and technical specifications
- Develop and manage data cleansing, consolidation, and integration mechanisms to support big data analytics at scale
- Build visualizations using Palantir Foundry tools and assist business users with testing, troubleshooting, and documentation creation, including data maintenance guides
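The cleansing, consolidation, and integration work described above typically means normalizing keys, deduplicating records, and joining extracts from multiple source systems. A minimal Pandas sketch of that pattern follows; the column names (`part_id`, `cost`, `owner`) are illustrative assumptions, not details of this role:

```python
import pandas as pd

def cleanse_and_consolidate(erp_df: pd.DataFrame, crm_df: pd.DataFrame) -> pd.DataFrame:
    """Normalize the join key, drop duplicate records, and merge two source extracts.

    Hypothetical example: "part_id", "cost", and "owner" are made-up column
    names standing in for whatever the real source systems provide.
    """
    erp = erp_df.copy()
    # Normalize the key so "a1 " and "A1" consolidate to one record
    erp["part_id"] = erp["part_id"].str.strip().str.upper()
    erp = erp.drop_duplicates(subset="part_id", keep="last")

    crm = crm_df.copy()
    crm["part_id"] = crm["part_id"].str.strip().str.upper()

    # Left join keeps every ERP record even when the CRM has no match
    return erp.merge(crm, on="part_id", how="left")
```

In practice the same key-normalization, deduplication, and join logic would live in a Foundry transform (e.g. PySpark) rather than in Pandas, but the shape of the work is the same.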
Requirements:
- Bachelor's Degree and a minimum of 9 years of prior Palantir experience, or Graduate Degree and a minimum of 7 years of prior Palantir experience. In lieu of a degree, a minimum of 13 years of prior Palantir experience
- Minimum of 4 years of experience with Data Pipeline development or ETL tools such as Palantir Foundry, Azure Data Factory, SSIS, or Python
- Minimum of 4 years of experience in Data Integration
- Experience with designing and developing data pipelines in PySpark, Spark SQL, SQL, or Code Build
- Experience in building and deploying data synchronization schedules and maintaining data pipelines using Palantir Foundry
- Strong understanding of Business Intelligence (BI) and Data Warehouse (DW) development methodologies
- Hands-on experience with the Snowflake Cloud Data Platform, including data architecture, query optimization, and performance tuning
- Proficiency in Python, PySpark, Pandas, Databricks, JavaScript, or other scripting languages for data processing and automation
- Experience with other ETL tools such as Azure Data Factory (ADF), SSIS, Informatica, or Talend is highly desirable
- Familiarity with connecting and extracting data from various ERP applications, including Oracle EBS, SAP ECC/S4, Deltek Costpoint, and more
- Experience with AI tools such as OpenAI, Palantir AIP, Snowflake Cortex, or similar
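The synchronization and troubleshooting experience called for above often comes down to making scheduled source-system pulls resilient to transient failures. A minimal retry sketch in plain Python follows; the function and parameter names (`run_with_retries`, `sync_fn`, `max_attempts`, `backoff_seconds`) are hypothetical, not part of any specific tool named in this posting:

```python
import time

def run_with_retries(sync_fn, max_attempts=3, backoff_seconds=0.01):
    """Run a data-sync callable, retrying transient failures with linear backoff.

    Hypothetical sketch: sync_fn stands in for any extract-and-load step
    (e.g. a scheduled pull from an ERP source system).
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return sync_fn()
        except Exception:
            if attempt == max_attempts:
                # Out of attempts: surface the failure to the scheduler/monitor
                raise
            time.sleep(backoff_seconds * attempt)
```

Orchestration platforms such as Palantir Foundry or Azure Data Factory provide built-in retry and health-monitoring settings for exactly this purpose; the sketch only illustrates the underlying idea.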