Kandu, Inc. is pioneering an integrated approach to stroke recovery by combining FDA-cleared brain-computer interface technology with personalized telehealth services. The Lead Data & Analytics Engineer will be responsible for building Kandu’s unified digital operating system, architecting data infrastructure, and enabling data-driven workflows across the enterprise.
Responsibilities:
- Build and maintain data pipelines from key business systems, including EHR, CRM, ERP, claims/finance, and manufacturing/ops
- Design ELT workflows to ingest, normalize, and transform data using modern tooling (e.g., Fivetran/Airbyte, Prefect/Airflow, dbt, Python)
- Ensure reliability, observability, version control, and documentation of all pipelines
- Handle PHI responsibly and ensure that all pipelines, transformations, and data storage comply with HIPAA, GDPR, and SOC 2 requirements
- Architect and implement a centralized data warehouse (e.g., in Snowflake, BigQuery, or Redshift), including schemas, fact/dimension models, and semantic layers that reflect real-world workflows
- Develop and maintain clean data models (dimensional, star/snowflake schemas) that keep patient, clinical, sales, and claims datasets consistent and support analytics, dashboards, reporting, and predictive models
- Ensure data quality, lineage, governance, and consistency across domains
- Develop dashboards, operational monitors, and recurring analytics that provide real-time visibility into clinical, commercial, market access, revenue cycle, finance, and manufacturing activities
- Automate recurring reporting workflows, alerts, and ETL orchestration (scheduling, monitoring, error handling)
- Enable self-service analytics by ensuring data models are consistent, intuitive, and well-documented
- Translate problems uncovered during discovery into actionable, well-structured analytics and visualizations
- Support rapid ad hoc analysis during the early discovery phase of ESI
- Develop early predictive and prescriptive models (e.g., prior-auth approval likelihood, therapy adherence patterns, claims-denial risk, operational forecasting)
- Support feature engineering, exploratory data analysis (EDA), and model evaluation under the direction of the VP of ESI
- Help lay the foundation for future AI-driven decision support and for embedding intelligence into operational systems
- Work closely with business stakeholders to understand requirements, pain points, and data needs
- Communicate clearly with non-technical teams and translate technical concepts into business language
- Support the ESI team’s mission to strengthen data-driven decision-making across the company
Requirements:
- 5–8 years of experience in data engineering, analytics engineering, or a full-stack data role spanning pipelines, modeling, and dashboards
- Strong SQL and Python skills (must be confident building production-grade pipelines)
- Experience with modern data stack tools (Airflow/Prefect, dbt, Fivetran/Airbyte, Snowflake/BigQuery/Redshift)
- Experience designing star schemas, dimensional models, or semantic layers
- Strong dashboarding experience (Looker, Tableau, Power BI, Mode, Sigma, etc.)
- Ability to work across messy, unstructured, and inconsistent SaaS data sources
- Comfort integrating data from multiple SaaS systems via APIs, webhooks, and vendor interfaces
- Excellent communication skills and comfort partnering with cross-functional teams
- Ability to thrive in ambiguity and build v1 systems with limited infrastructure
- Eligibility to work in the United States is required; the company does not sponsor employment visas for this position at this time
- Experience in healthcare (EHR data, claims, RCM, prior auth, ERP, medical device workflows)
- Familiarity with HIPAA/security considerations for pipelines and stored data
- Experience contributing to early-stage predictive modeling (scikit-learn, XGBoost, time-series methods)
- Experience improving operational workflows or enabling digital transformation
- Knowledge of the Salesforce data model, clinical systems, and ERP/RCM systems is a plus