NetDocuments is the world’s #1 trusted cloud-based content management and productivity platform that helps legal professionals do their best work. The Staff Database Engineer will provide enterprise-level technical leadership across the global data platform, define technical strategy, and drive initiatives for data systems serving a global customer base.
Responsibilities:
- Own the technical roadmap for data infrastructure across AWS and Azure multi-cloud environments
- Drive architectural decisions for multi-region, cross-account data platforms supporting global operations
- Establish engineering standards and best practices for data pipelines, database design, reliability, and observability
- Lead incident post-mortems and drive systemic improvements that prevent entire classes of failures
- Influence product and engineering roadmaps by identifying technical risks, constraints, and opportunities
- Define capacity planning and cost optimization strategies to ensure performance, reliability, and financial efficiency at scale
- Architect greenfield systems for large-scale data migrations (e.g., Couchbase to DocumentDB or DynamoDB at TB+ scale)
- Design fault-tolerant, self-healing data pipelines using AWS Glue, Step Functions, Lambda, ECS, and Azure Data Factory
- Define data modeling strategies across Snowflake, PostgreSQL, DynamoDB, and DocumentDB in multi-region environments
- Design observability into all systems, including monitoring, alerting, tracing, and debugging capabilities
- Own backup and disaster recovery strategies, including RTO/RPO planning, PITR, and cross-region replication
- Drive indexing, query optimization, and performance tuning for databases handling billions of documents
- Architect security patterns covering IAM, secrets management, encryption at rest and in transit, and access controls
- Partner with the Director of Data Engineering on strategic planning, prioritization, and roadmap execution
- Collaborate with senior leaders across Engineering, Product, and Analytics on enterprise technical strategy
- Work across organizational boundaries to deliver initiatives requiring coordination across multiple teams and time zones
- Translate business requirements into technical designs and clearly communicate trade-offs to non-technical stakeholders
- Represent the data organization in architecture reviews and cross-functional technical planning forums
- Lead by example through hands-on development of critical systems, migrations, and infrastructure
- Own and resolve complex production incidents affecting large-scale, distributed systems
- Drive cloud-to-cloud, legacy-to-modern, and cross-region data migrations
- Implement streaming and CDC solutions using Kafka, Kinesis, AWS DMS, and related technologies
- Build automated data quality, validation, and testing frameworks
- Optimize performance, reliability, and cost across critical workloads
- Automate operational workflows using Python, Infrastructure as Code, and cloud-native tooling
- Mentor Senior and mid-level Data Engineers through code reviews, design reviews, and technical guidance
- Raise the technical bar through documentation, standards, pairing, and knowledge sharing
- Help engineers grow from mid-level to senior and senior to staff through intentional coaching
- Promote a culture of operational excellence, including incident response, runbooks, and blameless post-mortems
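To give candidates a concrete sense of the "fault-tolerant, self-healing pipelines" work described above, here is a minimal, hypothetical sketch of a retry-with-exponential-backoff wrapper for a pipeline step. It uses only the Python standard library; all names (`with_backoff`, `flaky_load`) are illustrative, not NetDocuments code, and a production version would typically delegate retries to the orchestrator (e.g., Step Functions retry policies) rather than hand-roll them.

```python
import random
import time
from functools import wraps

def with_backoff(max_attempts=5, base_delay=0.5, retriable=(Exception,)):
    """Retry a flaky pipeline step with exponential backoff and jitter."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except retriable:
                    if attempt == max_attempts:
                        raise  # exhausted: surface the failure to the caller
                    # Sleep base * 2^(attempt-1), jittered to avoid thundering herds.
                    time.sleep(base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5))
        return wrapper
    return decorator

# Demo: a step that fails twice with a transient error before succeeding.
calls = {"n": 0}

@with_backoff(max_attempts=5, base_delay=0.0)  # zero delay keeps the demo fast
def flaky_load():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient failure")
    return "loaded"
```

The key design point the role calls for is making each step idempotent so retries like this are safe to apply blindly.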
Requirements:
- 7–10+ years of experience as a Data Engineer, Database Engineer, Platform Engineer, or similar role
- Proven track record designing and owning large-scale, production data systems (TB+ scale)
- Deep expertise in Python and SQL for data operations and workflow orchestration
- Expert-level experience with AWS services including Glue, Lambda, Step Functions, ECS, EC2, DynamoDB, and DocumentDB
- Production experience with Azure data services such as Data Factory, Synapse, and Functions
- Strong database expertise across relational, NoSQL, and analytical systems (Snowflake, PostgreSQL, DynamoDB, DocumentDB)
- Demonstrated experience leading major data migrations across clouds, regions, or legacy systems
- Deep understanding of distributed systems concepts including replication, consistency, and partitioning
- Advanced observability and monitoring experience using tools such as CloudWatch, Datadog, or Azure Monitor
- Strong incident response experience, including leading production outage response through resolution and post-mortem
- Proven ability to drive technical initiatives across multiple teams through influence rather than authority
- Excellent communication skills with the ability to explain complex technical concepts to engineers and executives alike
Preferred Qualifications:
- Experience with Couchbase or similar distributed document databases
- Experience operating large-scale, multi-tenant systems serving hundreds of customers
- Background in document management, content platforms, or hierarchical storage systems
- Experience with Infrastructure as Code (Terraform, CloudFormation, Bicep)
- CI/CD expertise for data engineering workflows
- Experience with DBT, Coalesce, or similar data transformation frameworks
- Familiarity with AI/LLM integration patterns, RAG architectures, or vector databases
- Proven cloud cost optimization impact with measurable results
- Technical writing, conference speaking, or open-source contributions
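As an illustration of the migration-validation experience listed above, here is a minimal sketch (standard library only, all names hypothetical) of fingerprint-based source/target reconciliation, the sort of check one might run while validating a Couchbase-to-DocumentDB copy. Real validation would stream and sample rather than hold both datasets in memory.

```python
import hashlib

def record_fingerprint(record: dict) -> str:
    """Stable hash of a record, independent of key order."""
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(canonical.encode()).hexdigest()

def diff_datasets(source, target, key="id"):
    """Compare two datasets by primary key; return (missing, mismatched) keys."""
    src = {r[key]: record_fingerprint(r) for r in source}
    tgt = {r[key]: record_fingerprint(r) for r in target}
    missing = sorted(set(src) - set(tgt))            # in source, absent from target
    mismatched = sorted(k for k in src.keys() & tgt.keys() if src[k] != tgt[k])
    return missing, mismatched
```

Counting and hashing by key, rather than diffing full payloads, keeps the check cheap enough to run continuously during a long cutover.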