You’re the analytics partner for the Rider App engineering org at Uber. The org ships features for pricing, ETA, and checkout across iOS/Android and a shared backend. The org is ~120 engineers across 10 squads, releasing to production multiple times per day. Reliability is business-critical: checkout regressions can directly impact conversion and driver supply.
Over the last 8 weeks, leadership rolled out two changes: (1) a new code review policy (2 reviewers required for most repos) and (2) a shift from Kanban-like flow to 2-week sprints with a renewed focus on “commitment.”
At the QBR, the VP Eng flags a worrying pattern: reported velocity is up, yet delivery feels slower and defect reports are rising. Product leaders are confused: "If velocity is up, why are we slower and buggier?" Engineering leaders suspect the metrics are being gamed or are measuring different things.
You have one week to deliver a metrics readout and a plan that can be operationalized in the next sprint planning cycle.
| Source | What it contains | Grain |
|---|---|---|
| jira_issues | issue_id, type (story/bug/chore), story_points, created_at, status_change timestamps, squad_id, component | per issue |
| git_prs | pr_id, issue_id, opened_at, first_review_at, approved_at, merged_at, lines_added/deleted, reviewers_count | per PR |
| deployments | deploy_id, service, commit_sha, deployed_at, rollback_flag, deploy_type | per deploy |
| incidents | incident_id, start_at, severity, root_cause_component, linked_pr_id/commit_sha, customer_impact | per incident |
| support_contacts | contact_id, created_at, tag, platform, app_version | per contact |
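To make the grains concrete, here is a minimal sketch of two metrics these tables support: time-to-first-review from `git_prs` (a direct probe of the 2-reviewer policy) and rollback rate from `deployments` (a change-failure proxy). The sample rows are invented for illustration; only the column names come from the tables above.

```python
import pandas as pd

# Hypothetical sample rows at the per-PR grain; values are invented.
git_prs = pd.DataFrame({
    "pr_id": [1, 2, 3],
    "opened_at": pd.to_datetime(
        ["2024-05-01 09:00", "2024-05-01 10:00", "2024-05-02 08:00"]),
    "first_review_at": pd.to_datetime(
        ["2024-05-01 13:00", "2024-05-02 10:00", "2024-05-02 09:30"]),
})

# Hypothetical sample rows at the per-deploy grain.
deployments = pd.DataFrame({
    "deploy_id": [10, 11, 12, 13],
    "rollback_flag": [False, True, False, False],
})

# Hours from PR open to first review: the latency the new review
# policy is most likely to have moved.
review_latency_h = (
    git_prs["first_review_at"] - git_prs["opened_at"]
).dt.total_seconds() / 3600
median_latency = review_latency_h.median()

# Share of deploys rolled back: a coarse change-failure proxy.
rollback_rate = deployments["rollback_flag"].mean()

print(f"median time to first review: {median_latency:.1f}h")
print(f"rollback rate: {rollback_rate:.0%}")
```

In practice both metrics would be computed per squad and per week, split before/after the policy changes, so the readout can separate the review-policy effect from the sprint-cadence effect.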
Constraints: