At Meta, a FAIR research team has spent 4 months developing a multimodal ranking approach intended to improve long-term content understanding for Facebook Feed. Midway through the project, leadership shifts priority to Instagram Reels after new evidence shows that the method performs materially better on short-form video recommendations than on Feed ranking. You are the research scientist leading execution across research, applied ML, and product partners.
The core team includes 4 research scientists, 3 ML engineers, 1 data scientist, and 1 product manager. There are 10 weeks left before the Reels org's quarterly planning checkpoint, where leadership expects either a credible launch-ready plan or a recommendation to stop investment. This pivot matters because Reels watch time is a top company priority, and the original Feed roadmap already consumed most of the allocated quarter.
The Reels ML Director wants an online experiment this quarter. The Feed PM wants to preserve at least part of the original research investment. Infra engineering is concerned about serving cost on Instagram's ranking stack. Responsible AI reviewers require a fairness and integrity assessment before any launch recommendation.
You have $180K of remaining compute budget, no additional headcount, and only 10 weeks. The model currently improves offline Reels ranking quality by 3.8% but adds 22 ms of p95 inference latency against a hard budget of 10 ms. One of the 3 ML engineers is committed 50% to an ongoing integrity incident. Data labeling support is capped at 1,200 additional videos.