Business Context
Autonomous Solutions builds autonomous vehicle software used across mining, agriculture, and industrial fleets. You are the Engineering Manager for the team that owns the ASI Mobius autonomy platform services used to dispatch missions, monitor vehicle health, and execute autonomous work cycles.
Metric Scenario
Leadership says your team "shipped a lot" last quarter, but customer feedback is mixed. In Q2 your team delivered 14 roadmap items versus 9 in Q1, yet the outcome metrics barely moved: fleet-level autonomous uptime improved only from 91.2% to 91.8%, mean incident resolution time worsened from 42 to 57 minutes, and weekly active operator accounts in Mobius stayed flat at 1,240. Autonomous mission completion rate did rise from 84% to 88%, but customer-reported P1 incidents climbed from 11 to 17. The VP of Engineering asks: how should this team define success, and which metrics best show whether the team is actually creating value?
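A quick way to see the output/outcome gap is to put the Q1 and Q2 figures above side by side and compute relative changes. A minimal sketch (all numbers come from the scenario; the metric names are shorthand labels, not real table columns):

```python
# Q1 -> Q2 figures from the scenario; compute the relative change per metric.
metrics = {
    "roadmap_items_shipped":   (9, 14),
    "autonomous_uptime_pct":   (91.2, 91.8),
    "incident_resolution_min": (42, 57),
    "weekly_active_operators": (1240, 1240),
    "mission_completion_pct":  (84, 88),
    "customer_p1_incidents":   (11, 17),
}

changes = {}
for name, (q1, q2) in metrics.items():
    changes[name] = (q2 - q1) / q1 * 100  # percent change, Q1 as the base
    print(f"{name:26s} {q1:>7} -> {q2:>7}  ({changes[name]:+.1f}%)")
```

Run this way, the output side (items shipped, +55.6%) moved an order of magnitude more than the headline outcome (uptime, +0.7%), while two negative outcomes (resolution time, P1 incidents) also moved sharply, which is the tension the VP is pointing at.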
Requirements
- Define a primary success metric for the team and explain why it should be the top KPI.
- Propose 3-5 supporting metrics, including both leading and lagging indicators.
- Show how you would decompose the primary metric to diagnose whether changes come from reliability, adoption, or workflow efficiency.
- Explain trade-offs between output metrics (features shipped) and outcome metrics (customer and operational impact).
- Identify guardrail metrics to ensure that improvements in the primary metric do not come at the expense of safety or an increased support burden.
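The decomposition requirement above can be sketched as a multiplicative factor model. The factor names and all sample values here are hypothetical, chosen only to show the mechanics: a candidate primary metric (completed autonomy hours) is split into an adoption term, a reliability term, and a workflow-efficiency term, and log-differences attribute the quarter-over-quarter change to each factor.

```python
import math

# Hypothetical decomposition (values are invented for illustration):
#   completed_autonomy_hours = active_vehicles          (adoption)
#                            * autonomy_hours_per_veh   (reliability)
#                            * completion_rate          (workflow efficiency)
q1 = {"active_vehicles": 120, "autonomy_hours_per_veh": 31.0, "completion_rate": 0.84}
q2 = {"active_vehicles": 121, "autonomy_hours_per_veh": 31.2, "completion_rate": 0.88}

total_q1 = math.prod(q1.values())
total_q2 = math.prod(q2.values())
total_log_change = math.log(total_q2 / total_q1)

# Each factor's share of the overall log-change; shares sum to 1 by construction.
shares = {f: math.log(q2[f] / q1[f]) / total_log_change for f in q1}
for factor, share in shares.items():
    print(f"{factor:24s} explains {share:.0%} of the change")
```

With these toy numbers, most of the movement is attributed to completion rate rather than adoption, which is exactly the kind of diagnosis the VP's question calls for: the same headline change can come from very different underlying drivers.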
Data Available
- mobius_missions: mission_id, site_id, vehicle_id, planned_start_ts, actual_start_ts, completed_flag, failure_reason, autonomy_mode_minutes
- mobius_incidents: incident_id, severity, opened_ts, resolved_ts, root_cause_team, customer_reported_flag
- mobius_operator_sessions: operator_id, site_id, login_ts, session_duration_minutes, actions_taken
- vehicle_telemetry_daily: vehicle_id, site_id, autonomous_uptime_minutes, manual_override_count, fault_count
- release_changes: release_id, deploy_ts, feature_flag, service_name, rollback_flag
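The tables above are enough to compute most of the candidate metrics directly. A pandas sketch, using the listed column names with a few invented toy rows (the values are placeholders, not real fleet data):

```python
import pandas as pd

# Toy rows shaped like mobius_missions and mobius_incidents (columns per the
# schemas above; the sample values are invented for illustration).
missions = pd.DataFrame({
    "mission_id": [1, 2, 3, 4],
    "completed_flag": [True, True, False, True],
    "autonomy_mode_minutes": [300, 250, 40, 410],
})
incidents = pd.DataFrame({
    "incident_id": [10, 11],
    "severity": ["P1", "P2"],
    "opened_ts": pd.to_datetime(["2024-04-01 08:00", "2024-04-02 09:00"]),
    "resolved_ts": pd.to_datetime(["2024-04-01 08:50", "2024-04-02 09:30"]),
    "customer_reported_flag": [True, False],
})

# Lagging outcome: share of missions that completed.
completion_rate = missions["completed_flag"].mean()

# Lagging outcome: mean incident resolution time in minutes.
mttr_minutes = (
    (incidents["resolved_ts"] - incidents["opened_ts"])
    .dt.total_seconds().mean() / 60
)

# Guardrail: count of customer-reported P1 incidents.
p1_customer = ((incidents["severity"] == "P1")
               & incidents["customer_reported_flag"]).sum()

print(f"mission completion rate:  {completion_rate:.0%}")
print(f"mean incident resolution: {mttr_minutes:.0f} min")
print(f"customer-reported P1s:    {p1_customer}")
```

The same pattern extends to the other tables, e.g. weekly active operators from mobius_operator_sessions and uptime or override rates from vehicle_telemetry_daily.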