Northrop Grumman, a leading aerospace and defense technology company, is seeking a Sr. Principal Data Scientist - Machine Learning Engineer to join its Insights & Intelligence organization. The role involves building production-grade analytics solutions and collaborating with stakeholders to develop applications that provide data insights for better decision-making.
Responsibilities:
- Work directly with stakeholders (engineers, program managers, subject matter experts) to scope problems, identify constraints, and iterate on technical solutions
- Bridge analytics and infrastructure by understanding both the business problem and the technical approach, then building the systems that deliver insights
- Build user-friendly, production-grade ML/AI applications (e.g., Streamlit, Gradio) that provide data insights to teams across the enterprise and enable better decision-making
- Develop and maintain cloud-based infrastructure (AWS, Databricks) and tooling to support scalable, reliable data analytics workflows
- Design and implement CI/CD pipelines, infrastructure-as-code (Terraform, AWS CloudFormation), and MLOps practices that enhance team productivity
- Optimize existing workflows and advocate for software engineering best practices (version control, modular design, testing) to drive team efficiency and code quality
- Stay current on cloud technologies, MLOps trends, and application frameworks to identify opportunities for improvement
Requirements:
- Must have a PhD with 4 years of relevant professional experience, a Master's degree with 6 years of relevant professional experience, or a Bachelor's degree with 8 years of relevant professional experience
- Must have strong proficiency with Python, SQL, and Git
- Must have experience with frameworks for rapid application development (e.g., Streamlit, Gradio, Starlette, Next.js)
- Must have knowledge of DevOps or MLOps concepts and their application in data science workflows
- Must have strong understanding of containerization (e.g., Docker, Podman)
- Must have the ability to work collaboratively with data teams (data scientists, analysts) to support analytics workflows and insights
- Must have demonstrated problem-solving and critical-thinking skills with an ability to handle complex technical challenges
- Must have excellent communication skills and comfort engaging with non-technical stakeholders
Preferred Qualifications:
- Proven track record of deploying and monitoring production-grade software systems on AWS
- Experience with Databricks and PySpark for data transformation and analytics
- Proven experience building and deploying web-based visualization or decision-support tools for business use cases
- Exposure to workflow orchestration tools (e.g., AWS Step Functions, Apache Airflow) and infrastructure-as-code tools (e.g., Terraform, AWS CloudFormation)
- Familiarity with scalable data architectures and machine learning deployment frameworks
- Domain experience in program management, business management, operations research, earned value management, or financial forecasting
- Background in consulting, forward-deployed engineering, or other client-facing technical roles where you translated ambiguous business problems into technical solutions