Asana runs product development through 8 cross-functional squads, each owning a roadmap area such as onboarding, collaboration, or enterprise admin. After two quarters of mixed delivery results, the Head of Product Operations wants a consistent framework for measuring both team performance and project success without rewarding output over outcomes.
In Q2, the onboarding squad shipped 11 roadmap items, up from 7 in Q1, and improved average sprint velocity from 42 to 50 story points. However, only 2 of the 11 launches met their 30-day success criteria. Activation improved from 61% to 64%, but Day-30 retention stayed flat at 38%, and support tickets per 1,000 active users rose from 14 to 19. Another squad shipped fewer items (6) but drove a 4.5-point increase in weekly active teams and reduced bug reopen rate from 12% to 6%.
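The onboarding squad's numbers illustrate the output-vs-outcome gap directly. A minimal sketch (hypothetical helper names, not from the source) that computes the two figures from the raw Q2 data:

```python
def launch_success_rate(met_criteria: int, shipped: int) -> float:
    """Share of shipped launches that met their 30-day success criteria."""
    return met_criteria / shipped

def output_growth(current_items: int, previous_items: int) -> float:
    """Quarter-over-quarter growth in shipped roadmap items."""
    return current_items / previous_items - 1

# Onboarding squad, Q2: 11 items shipped (vs. 7 in Q1), 2 of 11 met criteria.
growth = output_growth(11, 7)
success = launch_success_rate(2, 11)
print(f"output up {growth:.0%}, but only {success:.0%} of launches met 30-day criteria")
# output up 57%, but only 18% of launches met 30-day criteria
```

A 57% jump in shipped items alongside an 18% outcome hit rate is exactly the pattern a shared scorecard needs to surface.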
Leadership is asking which metrics should be standardized across squads, how to separate leading from lagging indicators, and how to prevent metric gaming. They want a single scorecard that serves both quarterly reviews and project postmortems.
Available data sources:

- jira_issues: ticket status, story points, sprint dates, assignee, squad
- launch_reviews: project goal, target KPI, launch date, 30/60/90-day outcome
- product_usage_events: activation, feature adoption, WAU, retention events
- support_tickets: ticket volume, severity, linked feature area
- incident_log: outages, bugs, reopen rate, MTTR
- employee_pulse_survey: team health, burnout risk, role clarity
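One way the data sources above could feed a per-squad scorecard is sketched below. This is a minimal illustration, not a prescribed framework: the field names are hypothetical, the leading/lagging grouping is one plausible mapping onto the sources, and the headline formula is an assumption (success rate discounted by quality regressions).

```python
from dataclasses import dataclass

@dataclass
class SquadScorecard:
    squad: str
    # Leading indicators: move early, predict outcomes
    activation_rate: float        # from product_usage_events
    feature_adoption_rate: float  # from product_usage_events
    bug_reopen_rate: float        # from incident_log
    team_health_score: float      # from employee_pulse_survey (0-1 scale)
    # Lagging indicators: confirm outcomes, harder to game
    launch_success_rate: float    # launch_reviews: 30-day criteria met / shipped
    day30_retention: float        # from product_usage_events
    tickets_per_1k_users: float   # from support_tickets

    def outcome_delivery(self) -> float:
        """Headline number for quarterly review: share of launches meeting
        30-day criteria, discounted by the bug reopen rate so that shipping
        fast at the cost of quality does not score well."""
        return self.launch_success_rate * (1 - self.bug_reopen_rate)

# Onboarding squad, Q2 figures from the brief (reopen rate assumed at 10%).
onboarding = SquadScorecard(
    squad="onboarding",
    activation_rate=0.64,
    feature_adoption_rate=0.30,   # hypothetical, not given in the brief
    bug_reopen_rate=0.10,         # hypothetical, not given for this squad
    team_health_score=0.70,       # hypothetical, not given in the brief
    launch_success_rate=2 / 11,
    day30_retention=0.38,
    tickets_per_1k_users=19.0,
)
print(f"{onboarding.outcome_delivery():.0%}")
```

Note that story points and item counts from jira_issues appear nowhere in the headline metric; under this design they remain diagnostic context, which is the point of scoring outcomes rather than output.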