Understand the data needs of different engineering teams at NextSense, including HWE, Clinical, and Research.
Understand end-to-end data interactions and dependencies across our existing data pipelines and transformations, and how they impact business decisions.
Design best practices for big-data processing, data modeling, and warehouse development at NextSense.
Build integrations and automation across different systems to scale data operations and increase efficiency.
Requirements
3+ years building data warehouses and data pipelines, or 5+ years in data-intensive engineering roles.
Solid understanding of databases (relational, key/value, document, columnar, OLAP, graph)
Strong software engineering (SWE) principles and object-oriented (OO) design experience
Experience with GCP, Kubernetes
Experience building and running large-scale distributed systems in production (24x7 environments)
Experience working with cloud distributed storage/databases and/or data technologies that power analytics (e.g., Pinot, Druid, Redshift, Hadoop, Spark, Presto, Kafka, Flink, or similar)
Experience building microservices and cloud platforms on AWS, Azure, etc.
Experience with open source project management and governance
Hands-on experience developing distributed systems, databases, or other large-scale data systems.
Tech Stack
Amazon Redshift
AWS
Azure
Cloud
Distributed Systems
Google Cloud Platform
Hadoop
Kafka
Kubernetes
Open Source
Spark
Benefits
Flexible/hybrid work schedule (built-in work from home days)
Equity
Retirement savings (no employer matching at this stage)