Company Context
You’re the Product Manager for CarePath, a Series D digital health company that sells a clinical decision support (CDS) platform to 120 US health systems. CarePath’s core product is embedded in Epic and Cerner workflows and is used by ~85,000 clinicians weekly. The business model is annual SaaS contracts ($250K–$2M per system) with renewals heavily dependent on demonstrating measurable clinical impact.
CarePath is considering launching a new feature: “Sepsis Watch”, a risk stratification and alerting module intended to identify patients at risk of sepsis earlier and prompt timely interventions. The company’s leadership wants evidence fast, but prospective randomized trials are expensive and slow. Your VP of Product asks you to design a retrospective study using existing clinical data to decide whether to greenlight an MVP build and to support early commercial conversations.
User / Market Scenario
Primary users
- Hospitalists and ED physicians: want early warning without alert fatigue; skeptical of “black box” scores.
- Nurse managers: want clear escalation protocols and workload predictability.
- Quality & Safety teams: measured on SEP-1 compliance, mortality, ICU transfers, and length of stay.
- CMIO / Informatics: care about workflow fit, governance, and liability.
Competitive landscape
- Epic Sepsis Model (ESM): widely deployed, but some systems report poor local calibration and trust issues.
- Bayesian Health / Dascena-style vendors: strong marketing claims; variable integration depth.
- Many hospitals have “homegrown” rules-based alerts that are noisy.
CarePath’s differentiation is deep workflow integration and a reputation for being conservative on safety claims.
Problem / Opportunity
CarePath has access (via customer data use agreements) to de-identified EHR extracts from 18 partner hospitals covering 3.2M inpatient encounters over 4 years. Internal analysis suggests:
- Suspected sepsis cases (proxy definition) occur in ~2.1% of encounters.
- Median time from first abnormal vitals/labs to antibiotics is 3.4 hours.
- Hospitals with faster antibiotic administration show lower ICU transfer rates and lower mortality, but the association may be confounded rather than causal.
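If it helps to make the internal analysis concrete in the interview, the time-to-antibiotics metric and the proxy cohort could be sketched roughly as below. The column names, the helper name, and the proxy rule (first abnormal vital/lab followed by an antibiotic) are illustrative assumptions, not CarePath's actual definitions.

```python
import pandas as pd

def hours_to_antibiotics(events: pd.DataFrame) -> pd.Series:
    """Per-encounter hours from first abnormal vital/lab to first antibiotic.

    `events` columns: encounter_id, timestamp, event_type,
    where event_type is 'abnormal_obs' or 'antibiotic'.
    """
    # Earliest timestamp of each event type per encounter.
    first = (events.sort_values("timestamp")
                   .groupby(["encounter_id", "event_type"])["timestamp"]
                   .first()
                   .unstack())
    # Under this proxy, an encounter counts as suspected sepsis only if an
    # antibiotic was given at or after the first abnormal observation.
    mask = first["antibiotic"] >= first["abnormal_obs"]
    delta = first.loc[mask, "antibiotic"] - first.loc[mask, "abnormal_obs"]
    return delta.dt.total_seconds() / 3600.0
```

The median of this series per site is the kind of number behind the "3.4 hours" figure above; encounters with antibiotics but no preceding abnormal observation fall out of the proxy automatically.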
Leadership’s question: Can we use retrospective data to credibly estimate whether Sepsis Watch would improve clinically meaningful outcomes and be safe enough to deploy?
What you need to deliver (in the interview)
- Define the study objective and “job to be done” for Sepsis Watch (who it serves and what decision it enables).
- Propose a retrospective study design using existing EHR data that could support a go/no-go decision for an MVP.
- Specify inclusion/exclusion criteria, exposure definition, and outcomes (clinical + workflow).
- Identify key threats to validity (confounding, selection bias, missingness, label leakage, site heterogeneity) and how you would mitigate them.
- Recommend what product requirements and guardrails should be set based on the study (e.g., alert thresholds, explainability, opt-out, monitoring).
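One of the validity threats above, label leakage, has a simple concrete mitigation worth naming in the interview: when assembling retrospective features, drop any observation recorded at or after the encounter's prediction time, so the model never "sees" information that only exists downstream of the outcome. A minimal sketch, with all column names as illustrative assumptions:

```python
import pandas as pd

def leakage_safe(obs: pd.DataFrame, prediction_time: pd.Series) -> pd.DataFrame:
    """Keep only observations strictly before each encounter's prediction time.

    `obs` columns: encounter_id, timestamp, feature, value.
    `prediction_time` is indexed by encounter_id.
    """
    # Look up each row's cutoff from its encounter, then filter.
    cutoff = obs["encounter_id"].map(prediction_time)
    return obs[obs["timestamp"] < cutoff]
```

The same cutoff discipline also guards against immortal-time bias when the exposure definition depends on events later in the encounter.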
Constraints
- Timeline: 10 weeks to produce an executive-ready readout; MVP decision immediately after.
- Team: 2 data scientists, 1 data engineer, 0.5 FTE clinical informaticist.
- Data limitations:
  - EHR extracts vary by site; some hospitals have incomplete medication administration timestamps.
  - Lab panels differ across sites, and vitals frequency varies by unit.
  - You cannot access free-text notes for 12 of 18 sites (structured data only).
- Regulatory / legal:
  - Must stay within HIPAA de-identification and existing DUAs; no patient re-contact.
  - Marketing claims must be supportable; avoid implying a proven mortality reduction without strong evidence.
- Product constraints:
  - Alerting must not increase overall interruptive alerts per clinician shift by more than 5%.
  - Any model must be monitorable for drift and bias across hospitals.
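The 5% interruptive-alert budget can itself be checked retrospectively: replay candidate risk-score thresholds against historical encounters and keep the most sensitive threshold whose added alerts still fit the budget. A hedged sketch; the function name and inputs are illustrative assumptions:

```python
def max_sensitivity_threshold(scores, thresholds,
                              baseline_alerts_per_shift, shifts,
                              budget=0.05):
    """Lowest candidate threshold whose replayed alert count fits the budget.

    scores: model risk scores replayed over a historical window
    shifts: total clinician shifts covered by that window
    """
    allowed = budget * baseline_alerts_per_shift * shifts
    for t in sorted(thresholds):  # lower thresholds fire more alerts
        fired = sum(s >= t for s in scores)
        if fired <= allowed:
            return t
    return None  # no candidate threshold fits the alert budget
```

Iterating thresholds in ascending order returns the lowest compliant threshold, i.e. the one that alerts on the most patients while staying within the +5% budget; a `None` result is itself a useful finding for the go/no-go readout.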
Your interviewer will push you to make trade-offs: speed vs rigor, generalizability vs site-specific performance, and clinical impact vs alert fatigue.