Abnormal AI’s Detection & Response team owns customer-facing improvements across the Abnormal Security Portal, including inbound email threat detection, post-delivery remediation, and analyst workflow automation. Over the last two quarters, the team shipped 11 features and model updates, but leadership is asking a harder question: are these releases actually improving customer outcomes, or just internal activity metrics?
You are reviewing Q2 vs Q1 performance across 1,200 enterprise tenants. Core operating metrics improved:

- median message processing latency fell from 2.8s to 1.9s
- analyst review queue time dropped 22%
- Portal weekly active admin rate rose from 61% to 68%

Customer-facing outcomes, however, are mixed:

- prevented malicious email rate increased from 94.1% to 95.0%
- false positive rate rose from 0.18% to 0.31%
- median time-to-remediate post-delivery attacks improved from 47 to 29 minutes
- gross logo retention stayed flat at 96%
- quarterly NPS moved from 41 to 39

The CRO argues the team is helping because renewals remain strong; Customer Success says admin trust is eroding due to false positives.
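One way to make the "mixed outcomes" claim concrete is to classify each metric's Q1-to-Q2 move against its direction of goodness. The sketch below uses only the values stated in the scenario; the metric names and the `classify` helper are illustrative, not part of any Abnormal system.

```python
# Classify each Q1 -> Q2 metric move as improved / regressed / flat,
# given whether a higher value is better for that metric.
# Values are taken from the scenario text; names are illustrative.

# (name, q1, q2, higher_is_better)
metrics = [
    ("prevented_malicious_email_rate_pct", 94.1, 95.0, True),
    ("false_positive_rate_pct", 0.18, 0.31, False),
    ("median_time_to_remediate_min", 47, 29, False),
    ("gross_logo_retention_pct", 96, 96, True),
    ("nps", 41, 39, True),
]

def classify(q1, q2, higher_is_better):
    """Label a metric delta relative to its direction of goodness."""
    if q1 == q2:
        return "flat"
    return "improved" if (q2 > q1) == higher_is_better else "regressed"

summary = {name: classify(q1, q2, hib) for name, q1, q2, hib in metrics}
for name, status in summary.items():
    print(f"{name}: {status}")
```

Run this way, the picture is two improvements (prevention rate, time-to-remediate), two regressions (false positives, NPS), and one flat metric (retention), which is exactly the tension between the CRO's and Customer Success's readings.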
email_detection_events: message_id, tenant_id, attack_type, model_score, verdict, delivered_flag, remediated_flag, detection_timestamp
portal_admin_activity: tenant_id, admin_id, event_type, event_timestamp, feature_surface
customer_health_snapshot: tenant_id, seats, plan_tier, renewal_date, nps, support_tickets, churn_risk
incident_outcomes: tenant_id, incident_id, user_reported_flag, time_to_remediate_minutes, business_email_compromise_flag, loss_estimate_usd
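A minimal sketch of how these tables could be joined to test the Customer Success claim: compute a per-tenant false positive rate from email_detection_events and pair it with nps from customer_health_snapshot. The sample rows are invented, and the assumption that analyst review leaves a `"false_positive"` verdict value is hypothetical; only the table and column names come from the schemas above.

```python
# Per-tenant false positive rate joined to NPS.
# Assumes (hypothetically) that post-review verdicts include "false_positive".
from collections import defaultdict

# Invented sample rows: a subset of email_detection_events columns.
email_detection_events = [
    # (tenant_id, verdict, delivered_flag)
    ("t1", "malicious", False),
    ("t1", "false_positive", False),
    ("t1", "malicious", False),
    ("t2", "malicious", False),
    ("t2", "malicious", True),
    ("t2", "malicious", False),
]

# Invented sample of customer_health_snapshot, keyed by tenant_id.
customer_health_snapshot = {"t1": {"nps": 20}, "t2": {"nps": 55}}

flagged = defaultdict(int)        # total flagged messages per tenant
false_positives = defaultdict(int)  # messages later judged false positives

for tenant_id, verdict, _delivered in email_detection_events:
    flagged[tenant_id] += 1
    if verdict == "false_positive":
        false_positives[tenant_id] += 1

# Join: FP rate alongside NPS, per tenant.
report = {
    t: {
        "fp_rate": false_positives[t] / flagged[t],
        "nps": customer_health_snapshot[t]["nps"],
    }
    for t in flagged
}

for tenant, row in sorted(report.items()):
    print(tenant, row)
```

Across 1,200 real tenants the same shape of query (FP rate vs. NPS, or vs. churn_risk) would let you test whether the false positive regression is actually driving the NPS dip, rather than arguing from aggregates.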