1Password is a rapidly growing company focused on building a safe and productive digital future. They are seeking a Principal Engineer to define the technical direction for AI-powered security capabilities, translating complex problems into production systems while ensuring privacy and reliability.
Responsibilities:
- You will set the rules for where model-based reasoning is appropriate, where deterministic systems must take over, and how those layers interact safely in production
- You will define the standards for signal collection, aggregation, retention, and use. That includes making clear decisions about what happens locally, what happens remotely, and what should not be collected at all
- You will design the control model for agentic and tool-using systems in product contexts, including permission boundaries, approval paths, audit logs, rollback behavior, and safe fallback under uncertainty
- You will establish the platform primitives that let teams build AI features consistently and safely, including interfaces for model invocation, policy evaluation, tool execution, experimentation, telemetry, and evaluation
- You will make key architecture calls on where 1Password should build differentiated capability, where it should integrate with adjacent systems, and where it should stay opinionated about scope
- You will work with Security, Privacy, Legal, and Compliance early in the design process so that trust, regulatory, and data-handling requirements are built into the architecture from the start
- You will raise the operational bar for AI in production by defining service expectations, evaluation frameworks, rollout controls, rollback paths, abuse testing, and incident playbooks
- You will lead through technical judgment and influence across teams. This role is expected to shape direction well beyond any single codebase
Requirements:
- You have built and operated production systems where trust, correctness, and failure handling matter
- You have deep experience with AI or ML systems in production, ideally including LLM-integrated systems with explicit guardrails, evaluation pipelines, auditable tool use, and clear limits on autonomous behavior
- You have strong instincts for privacy-preserving design, including client-side or edge inference, constrained telemetry, explainable system behavior, and predictable user-facing outcomes
- You have meaningful depth in security-sensitive domains such as identity, authentication, authorization, secrets management, enterprise controls, endpoint trust, or adversarial detection
- You have a track record of setting technical direction across multiple teams, making durable architecture decisions, and driving delivery through complex dependencies
- You communicate clearly with engineers, executives, and cross-functional partners. You can explain hard tradeoffs plainly and make your reasoning legible to others
Nice to have:
- Experience with endpoint or device posture signals, browser or extension security, phishing detection, or enterprise device trust
- Experience with SaaS administration, identity lifecycle workflows, or enterprise policy automation
- Experience building evaluation, red-team, or abuse-testing frameworks for AI-assisted systems