TaskFlow is a B2B project management SaaS company with 2-week engineering sprints. Over the last 6 sprints, leadership has seen more missed sprint commitments and slower feature delivery, and the VP of Engineering wants a metric framework that surfaces problems before the sprint ends.
Recent sprint data shows planned story points averaged 82 per sprint, but completed story points fell from 78 to 61. Spillover work increased from 9% to 24% of planned points. Mid-sprint scope added after day 3 rose from 6 to 18 points per sprint. Bug tickets created during the sprint increased from 11 to 19, and average cycle time for completed tickets moved from 2.8 days to 4.1 days. Team capacity was mostly stable at 7-8 engineers, with one sprint affected by 2 days of production incident work.
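The ratios above (spillover share, scope injection) can be derived directly from the quoted figures. A minimal sketch, using only numbers stated in the text; the function names are illustrative:

```python
# Deriving the sprint-health ratios from the figures quoted above.

def spillover_rate(planned_points, spilled_points):
    """Fraction of planned points not completed, carried into the next sprint."""
    return spilled_points / planned_points

def scope_injection_rate(injected_points, planned_points):
    """Points added after sprint start (here, after day 3) relative to the commitment."""
    return injected_points / planned_points

planned = 82  # average planned story points per sprint
print(f"spillover:  {spillover_rate(planned, 0.24 * planned):.0%}")  # recent sprints: ~24%
print(f"injection:  {scope_injection_rate(18, planned):.1%}")        # 18 of 82 committed points
```

Expressing each signal as a ratio of committed points (rather than raw counts) keeps sprints with different commitment sizes comparable.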
The engineering manager asks which sprint metrics should be treated as leading indicators, how to define them precisely, and how to use them to detect execution problems by day 3-5 rather than during sprint review.
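One way to make "detect by day 3-5" concrete is a mid-sprint check that compares actual burn against a simple linear expectation and flags excessive scope injection. The sketch below is a hedged illustration: the linear-burn assumption, the 60% slack factor, and the 10% injection limit are example thresholds, not values from TaskFlow's data.

```python
# Hedged sketch of a day-3-to-5 early-warning check. Assumes a linear
# burn-down over a 2-week sprint (~10 workdays); thresholds are illustrative.

def expected_done(committed_points, day, sprint_days=10):
    """Points expected complete by `day` under a linear burn."""
    return committed_points * day / sprint_days

def flag_sprint(committed_points, done_points, injected_points, day,
                burn_slack=0.6, injection_limit=0.10):
    """Return warnings if burn lags badly or scope injection exceeds the limit."""
    warnings = []
    if done_points < burn_slack * expected_done(committed_points, day):
        warnings.append("burn-down lagging")
    if injected_points > injection_limit * committed_points:
        warnings.append("scope injection high")
    return warnings

# Day 4 of a sprint: 10 of 82 points done, 12 points injected mid-sprint.
print(flag_sprint(committed_points=82, done_points=10, injected_points=12, day=4))
```

Tuning `burn_slack` and `injection_limit` against the last few sprints' data would reduce false alarms before the check is wired into a daily report.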
Available data tables:

jira_issues: issue_id, sprint_id, issue_type, story_points, status, created_at, started_at, completed_at, assignee
sprint_metadata: sprint_id, start_date, end_date, committed_points, team_size, planned_capacity_days
scope_changes: sprint_id, issue_id, added_at, removed_at, story_points
incident_log: incident_id, date, engineer_hours_consumed, severity
bug_tickets: bug_id, created_at, linked_sprint_id, severity, resolved_at
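To show how these tables combine into per-sprint metrics, here is a small pandas sketch. Only the column names come from the schemas above; the sample rows are synthetic and exist purely to make the example runnable:

```python
# Per-sprint completed points, average cycle time, and spillover rate,
# computed from jira_issues joined with sprint_metadata.
import pandas as pd

jira_issues = pd.DataFrame({
    "issue_id": [1, 2, 3],
    "sprint_id": ["S1", "S1", "S1"],
    "story_points": [5, 8, 3],
    "status": ["Done", "Done", "In Progress"],
    "started_at": pd.to_datetime(["2024-05-01", "2024-05-02", "2024-05-03"]),
    "completed_at": pd.to_datetime(["2024-05-03", "2024-05-07", pd.NaT]),
})
sprint_metadata = pd.DataFrame({"sprint_id": ["S1"], "committed_points": [16]})

# Cycle time only makes sense for completed work.
done = jira_issues[jira_issues["status"] == "Done"].copy()
done["cycle_days"] = (done["completed_at"] - done["started_at"]).dt.days

per_sprint = done.groupby("sprint_id").agg(
    completed_points=("story_points", "sum"),
    avg_cycle_days=("cycle_days", "mean"),
).merge(sprint_metadata, on="sprint_id")
per_sprint["spillover_rate"] = (
    1 - per_sprint["completed_points"] / per_sprint["committed_points"]
)
print(per_sprint)
```

The same joins extend naturally to scope_changes (points added after a day-3 cutoff) and bug_tickets (in-sprint bug creation rate), which are the other leading indicators the data supports.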