Streamly, a subscription video platform, uses a binary classifier to predict which paid users are likely to churn in the next 30 days so the growth team can send retention offers. The model scores all active subscribers weekly, but leadership is unsure whether the model is actually useful because campaign costs have risen while retained revenue has not improved as expected.
| Metric | Validation Set | Last 8-Week Production Campaign |
|---|---|---|
| AUC-ROC | 0.84 | 0.81 |
| Precision @ top 10% scored users | 0.41 | 0.36 |
| Recall @ top 10% scored users | 0.27 | 0.24 |
| F1 @ current threshold | 0.31 | 0.29 |
| Lift @ top decile | 3.4x | 3.0x |
| Brier score | 0.118 | 0.146 |
| Avg predicted churn rate | 18.5% | 19.2% |
| Actual churn rate | 12.1% | 13.8% |
| Weekly users targeted | 120,000 | 120,000 |
| Offer acceptance rate | n/a | 14.0% |
| Incremental retained users vs control | n/a | 3,100 / week |
| Offer cost per targeted user | n/a | $2.40 |
| Avg monthly gross margin per retained user | n/a | $18 |
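The campaign economics can be worked out directly from the last four rows of the table. A minimal sketch, using only figures from the table and assuming each incrementally retained user contributes the full $18 monthly gross margin for as long as they stay retained:

```python
# Break-even analysis for the weekly retention campaign.
# All inputs come from the production-campaign column above; the
# assumption that margin accrues at $18/month per retained user for
# the full retention period is a simplification, not a table fact.

targeted_per_week = 120_000
offer_cost_per_user = 2.40
incremental_retained_per_week = 3_100
monthly_margin_per_retained = 18.0

# Total offer spend for one weekly campaign.
weekly_cost = targeted_per_week * offer_cost_per_user

# Gross margin generated each month by one week's incremental cohort.
monthly_margin_per_cohort = incremental_retained_per_week * monthly_margin_per_retained

# Average months of incremental retention needed for the cohort's
# margin to cover that week's offer spend.
break_even_months = weekly_cost / monthly_margin_per_cohort

print(f"Weekly offer spend:           ${weekly_cost:,.0f}")
print(f"Monthly margin per cohort:    ${monthly_margin_per_cohort:,.0f}")
print(f"Break-even retention, months: {break_even_months:.1f}")
```

At roughly $288,000 of weekly spend against about $55,800 of monthly margin per retained cohort, the campaign only pays back if incrementally retained users stay, on average, more than about five months longer than they otherwise would have.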
The growth team wants an assessment on three points: whether the model is good enough to justify the current retention spend, whether the targeting threshold (top 10% of scores) is set appropriately, and whether the model's probability outputs are well calibrated enough to use for decision-making.
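The calibration question can be probed directly from the two rate rows: in production the model forecasts an average churn probability of 19.2% against an observed rate of 13.8%, consistent with the worsening Brier score. A minimal sketch of a global logit-shift correction, assuming per-user probabilities are available; this is a first-order fix (exact only at the mean score), not a substitute for proper recalibration such as isotonic or Platt scaling on held-out data:

```python
import math

def logit(p: float) -> float:
    """Log-odds of a probability."""
    return math.log(p / (1 - p))

def sigmoid(z: float) -> float:
    """Inverse of logit."""
    return 1 / (1 + math.exp(-z))

# Production figures from the table above.
avg_predicted = 0.192
actual_rate = 0.138

# Constant added to every score's log-odds so that the mean
# prediction lands on the observed churn rate.
shift = logit(actual_rate) - logit(avg_predicted)

def recalibrate(p: float) -> float:
    """Apply the global logit shift to a single predicted probability."""
    return sigmoid(logit(p) + shift)

print(f"Logit shift: {shift:+.3f}")
print(f"Mean score after shift: {recalibrate(avg_predicted):.3f}")
```

A shift of about -0.4 on the log-odds scale is a substantial systematic over-forecast; if the threshold was tuned on the validation set's probability scale, it is effectively mis-set in production, which bears directly on the growth team's threshold question.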