Lockheed Martin is partnering with PG&E, Salesforce, and Wells Fargo to deliver EMBERPOINT™, an initiative designed to transform wildfire prevention, detection, and response across the United States. They are seeking an experienced Data Engineer to drive the orchestration of data ingestion, storage, and movement for a scalable cloud-based data platform that enables rapid analysis of operational and sensor data.
Responsibilities:
- Design, build, and operate the data pipelines that ingest, transform, catalog, and serve data for the EMBERPOINT data platform and its AI/ML, Unified HMI, and command‑and‑control (C2) applications
- Work closely with the AWS Infrastructure Architect, AI/ML Engineers, Software Factory, MBSE team (Cameo → DOORS NEXT), and the Advisory Board to ensure data quality, security, and performance across the end‑to‑end mission workflow (Detection → Prediction → Response → Recovery)
- Integrate databases with streaming ingestion pipelines and data lake services to enable real-time analytics, operational applications, and predictive wildfire intelligence
- Work closely with data engineering, platform engineering, and application development teams to ensure the platform delivers high availability, strong performance, and secure data management
Requirements:
- 3+ years of professional data‑engineering experience
- SQL and Oracle experience
- 1+ years of experience designing and operating AWS‑native data lakes for mission‑critical or high‑volume workloads
- B.S. in Computer Science, Data Science, Information Systems, or a related field (M.S. preferred)
- Working knowledge of the AWS ecosystem
- Proven experience in SAFe/Agile environments, working with cross‑functional teams (AI/ML, UX, Systems Engineering)
- Deep knowledge of S3, Glue, Lake Formation, Kinesis, MSK, Redshift, Athena, QuickSight, Lambda, Step Functions, IAM, KMS
- Hands‑on with AWS Glue, Spark, dbt, and Airflow (or Amazon Managed Workflows for Apache Airflow)
- Proficient in Python (PySpark, Boto3), Scala, SQL, and Shell/Bash
- Experience integrating MBSE data from Cameo → DOORS NEXT into data‑lake pipelines
- Knowledge of AI/ML data pipelines (SageMaker Feature Store, model‑artifact versioning)
- Experience with data ingestion, object storage and data paths, ingest orchestration, ETL, and relational and non‑relational databases, including Oracle Autonomous Database