NovaCloud is a B2B SaaS company with 45 engineers across 6 product squads. Over the last two quarters, customer-reported bugs increased from 28 to 46 per month, average feature delivery time rose from 12 to 19 days, and platform uptime fell from 99.95% to 99.82%. The CTO wants a clear metric framework to evaluate engineering team performance without encouraging teams to optimize for speed at the expense of quality.
Engineering managers currently report inconsistent metrics: some focus on story points completed, others on deployment count or incident volume. Leadership wants a standardized dashboard for quarterly reviews and weekly operating meetings. You need to define which metrics should be used, how each should be calculated, and how to interpret trade-offs among delivery velocity, reliability, and engineering quality.
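As a concrete sketch of what "standardized calculation" could mean, the snippet below computes three commonly used candidate metrics (deployment frequency, median cycle time, change failure rate) from a single per-team record. The field names, the `TeamQuarter` structure, and the choice of these three metrics are illustrative assumptions, not a prescribed framework for NovaCloud.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class TeamQuarter:
    # Hypothetical per-team record; field names are assumptions for illustration.
    name: str
    cycle_times_days: list[int]  # one entry per shipped change
    failed_changes: int          # changes that caused an incident or rollback

    @property
    def changes(self) -> int:
        return len(self.cycle_times_days)

    def metrics(self, weeks_in_quarter: int = 13) -> dict[str, float]:
        # Each metric has one fixed formula, so every team reports the same way.
        return {
            "deploys_per_week": round(self.changes / weeks_in_quarter, 2),
            "median_cycle_time_days": median(self.cycle_times_days),
            "change_failure_rate": round(self.failed_changes / self.changes, 3),
        }

team = TeamQuarter("Example", cycle_times_days=[7, 9, 12, 9, 14], failed_changes=1)
print(team.metrics())
```

Pinning each metric to one formula in shared code, rather than leaving calculation to each manager, is what makes the dashboard comparable across squads.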
Last quarter, Team Atlas shipped 38 changes with a median cycle time of 9 days and 6 Sev-2 incidents. Team Beacon shipped 24 changes with a median cycle time of 16 days and 1 Sev-2 incident. Team Comet shipped 31 changes with a median cycle time of 11 days, but its rollback rate rose from 4% to 11% after a release-process change.
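Normalizing these raw counts makes the velocity-versus-quality trade-off visible. A minimal sketch using only the figures above (Comet's Sev-2 count is not reported, so it is left out of the incidents-per-change comparison):

```python
# Raw quarterly figures as reported above.
teams = {
    "Atlas":  {"changes": 38, "sev2_incidents": 6},
    "Beacon": {"changes": 24, "sev2_incidents": 1},
}

rates = {name: t["sev2_incidents"] / t["changes"] for name, t in teams.items()}
for name, rate in rates.items():
    print(f"{name}: {rate:.3f} Sev-2 incidents per change")
# Atlas ships ~58% more changes than Beacon but at roughly 4x the incident
# rate per change: exactly the trade-off the metric framework must surface
# instead of rewarding raw throughput alone.
```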