RideNow uses a gradient-boosted regression model to predict trip-level dynamic price multipliers and a companion calibration layer to bucket rider risk into 1-5 stars for internal quality review. The model went live 8 weeks ago across 12 U.S. cities. Since launch, finance has reported margin compression in some cities, while operations has seen rider complaints increase in others.
| Metric | Validation Before Launch | Last 14 Days in Production | Change |
|---|---|---|---|
| RMSE on realized trip revenue | 0.18 | 0.31 | +72.2% |
| MAE on realized trip revenue | 0.11 | 0.19 | +72.7% |
| Calibration error (predicted vs actual multiplier) | 0.03 | 0.09 | +0.06 |
| % trips underpriced by >10% | 6.4% | 14.8% | +8.4 pts |
| % trips overpriced by >10% | 5.9% | 11.2% | +5.3 pts |
| Avg gross margin per trip | $3.42 | $2.91 | -14.9% |
| Rider conversion rate | 78.6% | 74.1% | -4.5 pts |
| Driver acceptance rate | 84.3% | 80.5% | -3.8 pts |
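To make the table's accuracy and calibration metrics concrete, here is a minimal sketch of how they could be computed from trip logs. The function name and the assumption that predictions and realized values are trip-level price multipliers are illustrative, not RideNow's actual schema:

```python
import numpy as np

def pricing_metrics(predicted, realized):
    """Compute table-style monitoring metrics from arrays of
    trip-level predicted and realized price multipliers
    (hypothetical log fields; names are assumptions)."""
    predicted = np.asarray(predicted, dtype=float)
    realized = np.asarray(realized, dtype=float)
    err = predicted - realized
    rel = err / realized  # relative pricing error per trip
    return {
        "rmse": float(np.sqrt(np.mean(err ** 2))),
        "mae": float(np.mean(np.abs(err))),
        # Calibration error here is the mean signed gap between
        # predicted and actual multiplier (one common choice).
        "calibration_error": float(np.mean(err)),
        # Underpriced: predicted more than 10% below realized.
        "pct_underpriced": float(np.mean(rel < -0.10)),
        # Overpriced: predicted more than 10% above realized.
        "pct_overpriced": float(np.mean(rel > 0.10)),
    }

# Example: one well-priced trip, one 20% overpriced trip.
m = pricing_metrics([1.0, 1.2], [1.0, 1.0])
```

Computing these per segment (airport trips, commute hours, bad weather) rather than only in aggregate is what makes the segment-level failure modes visible.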
Leadership wants a post-launch monitoring plan that can detect whether the model is drifting, miscalibrated, or failing in specific segments such as airport trips, peak commute hours, and bad-weather demand spikes. You need to define what should be monitored daily and weekly, how to interpret the current metrics, and what conditions should trigger retraining, threshold changes, or rollback.
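One way to sketch the daily drift check and the retrain/rollback triggers is a Population Stability Index (PSI) on inputs or scores plus a simple action rule. The PSI cutoffs (0.10 / 0.25) and the RMSE-ratio cutoff below are illustrative assumptions to be tuned, not established RideNow policy:

```python
import numpy as np

def psi(baseline, current, bins=10):
    """Population Stability Index between a baseline sample
    (e.g., validation data) and a recent production sample of
    one feature or model score."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    b = np.histogram(baseline, bins=edges)[0] / len(baseline)
    # Clip current values into the baseline range so every
    # observation lands in a bin.
    clipped = np.clip(current, edges[0], edges[-1])
    c = np.histogram(clipped, bins=edges)[0] / len(current)
    b = np.clip(b, 1e-6, None)
    c = np.clip(c, 1e-6, None)
    return float(np.sum((c - b) * np.log(c / b)))

def triage(psi_value, rmse_ratio):
    """Map drift and error degradation to an action.
    Illustrative thresholds: PSI < 0.10 stable, 0.10-0.25 watch,
    > 0.25 retrain; rmse_ratio is production RMSE divided by
    validation RMSE, with > 1.5 treated as severe."""
    if psi_value > 0.25 and rmse_ratio > 1.5:
        return "rollback-review"
    if psi_value > 0.25 or rmse_ratio > 1.5:
        return "retrain"
    if psi_value > 0.10:
        return "watch"
    return "stable"
```

With the table's figures, the production-to-validation RMSE ratio is already 0.31 / 0.18 ≈ 1.72, so under these illustrative thresholds the model would sit in "retrain" territory, and in "rollback-review" if input drift is also high.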