Design, build, and maintain robust data pipelines to ensure reliable data flow across the enterprise.
Maintain data pipeline schedules, orchestrate workflows, and monitor overall pipeline health to ensure continuous data availability.
Create, update, and optimize data connections, datasets, and transformations to align with business needs.
Troubleshoot and resolve data sync issues, ensuring consistent and correct data flow from source systems.
Collaborate with cross-functional teams to uphold data quality standards and ensure accurate data is available for use.
Utilize Palantir Foundry to establish data connections to source applications, extract and load data, and design complex logical data models that meet functional and technical specifications.
Develop and manage data cleansing, consolidation, and integration mechanisms to support big data analytics at scale.
Build visualizations using Palantir Foundry tools and assist business users with testing, troubleshooting, and documentation creation, including data maintenance guides.
Requirements
Bachelor’s degree and a minimum of 6 years of prior Palantir experience, or a graduate degree and a minimum of 4 years of prior Palantir experience.
In lieu of a degree, a minimum of 10 years of prior Palantir experience.
4+ years of experience with data pipeline development using ETL tools or languages such as Palantir Foundry, Azure Data Factory, SSIS, or Python.
4+ years of experience in Data Integration.
4+ years of experience designing and developing data pipelines in Palantir Foundry (Pipeline Builder or Code Repositories) using PySpark and Spark SQL, including deployment of data build/sync schedules in Palantir.
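The cleansing and integration work described above typically centers on transformations such as deduplication and field normalization. As a rough illustration only, here is a minimal cleansing step sketched in plain Python; a real Foundry pipeline would express this as a PySpark transform, and the `cleanse` function and field names (`id`, `email`) are hypothetical:

```python
def cleanse(records):
    """Normalize email casing, trim whitespace, and drop duplicate ids,
    keeping the first occurrence of each id."""
    seen = set()
    cleaned = []
    for rec in records:
        rec_id = rec.get("id")
        # Skip rows with no id or an id we have already kept.
        if rec_id is None or rec_id in seen:
            continue
        seen.add(rec_id)
        cleaned.append({
            "id": rec_id,
            # Missing emails become empty strings rather than None.
            "email": (rec.get("email") or "").strip().lower(),
        })
    return cleaned

raw = [
    {"id": 1, "email": "  Alice@Example.com "},
    {"id": 1, "email": "alice@example.com"},  # duplicate id, dropped
    {"id": 2, "email": None},                 # missing email normalized to ""
]
print(cleanse(raw))
# → [{'id': 1, 'email': 'alice@example.com'}, {'id': 2, 'email': ''}]
```

In PySpark the equivalent logic would usually be a `dropDuplicates` on the key column plus column expressions for trimming and lowercasing, scheduled as a Foundry build as described above.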