Altana’s engineering organization supports the Altana Atlas platform, including graph data pipelines, customer-facing investigation workflows, and internal ML-enabled supply chain intelligence features. Over the last two quarters, leadership feels delivery has slowed: roadmap commitments for Atlas dropped from 82% on-time completion to 61%, while customer-reported Sev-1/Sev-2 incidents rose from 9 to 17 per quarter.
The VP of Engineering asks you to define a practical metric framework for measuring both the success and the velocity of the engineering team, without incentivizing low-quality output or rewarding vanity metrics such as raw story points closed. The following data sources are available:
| Data Source | Description | Granularity |
|---|---|---|
| jira_issues | Ticket lifecycle: created_at, started_at, merged_at, deployed_at, issue_type, team, story_points | Per issue |
| github_prs | PR open/merge timestamps, lines changed, review rounds, reviewers, revert flag | Per PR |
| deploy_log | Deployment timestamp, service, environment, rollback flag, deployment status | Per deploy |
| incident_log | Incident severity, start/end time, impacted Atlas surface, root cause category | Per incident |
| roadmap_commitments | Quarterly committed vs delivered initiatives by team | Per initiative |
| product_usage_events | Usage of shipped Atlas features: account_id, feature_name, weekly active accounts, workflow completion | Per event/account |
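As a starting point, the core delivery metrics can be derived directly from the `jira_issues` and `deploy_log` sources above. The sketch below is a minimal illustration using hypothetical rows shaped like those tables (the sample dates and rollback flags are invented for the example); it computes median lead time from `created_at` to `deployed_at` and change failure rate from the rollback flag.

```python
from datetime import datetime
from statistics import median

# Hypothetical sample rows shaped like the jira_issues and deploy_log sources.
issues = [
    {"created_at": "2024-04-01", "deployed_at": "2024-04-06"},
    {"created_at": "2024-04-02", "deployed_at": "2024-04-12"},
    {"created_at": "2024-04-03", "deployed_at": "2024-04-11"},
]
deploys = [
    {"rollback": False},
    {"rollback": True},
    {"rollback": False},
    {"rollback": False},
]

def lead_time_days(issue):
    """Days from ticket creation to deployment, per the jira_issues schema."""
    fmt = "%Y-%m-%d"
    created = datetime.strptime(issue["created_at"], fmt)
    deployed = datetime.strptime(issue["deployed_at"], fmt)
    return (deployed - created).days

median_lead_time = median(lead_time_days(i) for i in issues)
change_failure_rate = sum(d["rollback"] for d in deploys) / len(deploys)

print(median_lead_time)       # 8 (days)
print(change_failure_rate)    # 0.25
```

In practice these would be computed per team per quarter, with lead time segmented by `issue_type` so that bug-fix and feature work are not averaged together.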
Assume the Platform team’s median lead time increased from 4.5 to 8.0 days, deployment frequency fell from 22 to 11 per week, change failure rate rose from 6% to 14%, and 90-day feature adoption for newly launched Atlas workflows fell from 48% to 31%.
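A framework like this needs an explicit rule for when a metric's movement counts as a regression rather than noise. One simple approach, sketched below with the quarter-over-quarter figures quoted above, flags any metric that moved more than 10% in the wrong direction; the 10% tolerance is an assumption for illustration, not a stated Altana policy.

```python
# Quarter-over-quarter figures from the scenario above.
# name: (previous_quarter, current_quarter, higher_is_better)
metrics = {
    "median_lead_time_days": (4.5, 8.0, False),
    "deploys_per_week":      (22, 11, True),
    "change_failure_rate":   (0.06, 0.14, False),
    "feature_adoption_90d":  (0.48, 0.31, True),
}

def regressed(prev, curr, higher_is_better, tolerance=0.10):
    """Flag a metric that moved more than `tolerance` in the wrong direction."""
    delta = (curr - prev) / prev
    if higher_is_better:
        return delta < -tolerance
    return delta > tolerance

flags = {name: regressed(p, c, hib) for name, (p, c, hib) in metrics.items()}
print(flags)  # all four metrics are flagged for the Platform team
```

Pairing a velocity metric (deployment frequency) with a quality metric (change failure rate) and an outcome metric (feature adoption) in the same scorecard is what keeps any single number from being gamed in isolation.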