Company Context
You are the PM for ForgeAI, an AI-assisted coding product inside CloudForge, a public-cloud provider competing with GitHub Copilot, Amazon CodeWhisperer, JetBrains AI, and Cursor. CloudForge’s core business is enterprise cloud infrastructure; ForgeAI is a strategic bet to increase developer stickiness and expand seat-based revenue.
ForgeAI currently ships as:
- A VS Code extension and a JetBrains plugin
- Features: inline code completion, chat-based Q&A about general programming, and “explain this code”
- Pricing: bundled free for CloudForge enterprise customers for up to 5 seats; $25/user/month for additional seats
Scale and stakes:
- 1.8M monthly active developers across CloudForge customers
- 210K weekly active IDE plugin users
- Enterprise customers include fintech and healthcare; SOC 2 Type II and HIPAA obligations exist for some tenants
- Leadership goal: grow ForgeAI to $60M ARR in 18 months and reduce churn in CloudForge’s developer platform
User / Market Scenario
User research over the last 6 weeks combined surveys (n=2,400), interviews (n=28), and telemetry.
Personas
| Persona | Share of WAU | Primary environment | Top jobs-to-be-done | Key anxieties |
|---|---|---|---|---|
| Enterprise Backend Engineer ("Evan") | 45% | Java/Kotlin, microservices, CI/CD | Ship features faster; reduce boilerplate; navigate large codebases | Leaking proprietary code; wrong suggestions in prod paths |
| Frontend Engineer ("Fiona") | 25% | TS/React, design systems | Generate UI scaffolding; refactor safely | Hallucinated APIs; style inconsistencies |
| Data/ML Engineer ("Drew") | 20% | Python, notebooks, pipelines | Write glue code; debug; generate tests | Dependency issues; reproducibility |
| Security/Platform Engineer ("Sam") | 10% | Policy-as-code, IaC | Enforce standards; detect insecure patterns | AI introducing vulnerabilities; auditability |
Competitive landscape insights
- Copilot is perceived as best-in-class for “fast autocomplete,” but enterprises cite policy control gaps.
- Cursor and similar tools win mindshare for “agentic refactors,” but are viewed as risky for regulated environments.
- JetBrains AI is strong for IDE-native workflows but has weaker cross-repo reasoning.
Problem / Opportunity
ForgeAI adoption is growing, but retention and trust are lagging:
- D7 retention for new plugin installs: 34% (target: 45%)
- D30 retention: 18% (target: 25%)
- Only 22% of WAU use ForgeAI more than 3 days/week
- “Suggestion accepted” rate is 14% overall (frontend: 18%, backend: 12%)
Qualitative findings:
- Developers love speed, but don’t trust correctness for critical code paths.
- Enterprises want governance: admin controls, data boundaries, audit logs.
- Teams complain ForgeAI lacks codebase context (internal libraries, patterns, conventions).
- Security teams report incidents where AI suggestions introduced insecure defaults (e.g., permissive CORS, weak crypto).
Leadership asks you a broad question in an interview-style setting:
“What are your thoughts on the future of AI-assisted coding?”
But you must translate that into a product vision and plan for ForgeAI.
Your Task (Deliverables)
Provide a structured response that covers:
- Vision (12–24 months): What does “AI-assisted coding” become for enterprise developers—autocomplete, chat, agents, codebase copilots, or something else? What is ForgeAI’s differentiated point of view?
- Target user + primary job-to-be-done: Choose a primary persona to win first. Justify with evidence and business impact.
- MVP proposal (next 8–10 weeks): Define 2–3 concrete capabilities you would ship, including UX surface (IDE, PR, CI) and how they work at a high level.
- Prioritization: Given a long list of possible bets (agentic refactor, test generation, codebase search/RAG, secure coding guardrails, admin governance, PR review assistant, onboarding), prioritize using a clear method (e.g., RICE/Kano). Explain trade-offs.
- Success metrics + experiment plan: Define what you would measure, how you would run an evaluation (A/B, phased rollout, enterprise pilots), and what thresholds would make you double down vs. stop.
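If you use RICE for the prioritization deliverable, the mechanics are simple to sketch. The snippet below is illustrative only: the bet names come from the list above, but every reach, impact, confidence, and effort value is a hypothetical placeholder, not research data.

```python
# Illustrative RICE scoring for a subset of the candidate bets.
# All numeric inputs are hypothetical placeholders for the exercise.

def rice(reach, impact, confidence, effort):
    """RICE score = (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort

# reach = affected users per quarter, impact on a 0.25-3 scale,
# confidence 0-1, effort in person-months (all assumed values)
bets = {
    "codebase search/RAG":      rice(150_000, 2.0, 0.8, 6),
    "secure coding guardrails": rice(210_000, 1.5, 0.7, 4),
    "agentic refactor":         rice(50_000,  3.0, 0.5, 10),
    "test generation":          rice(90_000,  1.0, 0.8, 3),
}

for name, score in sorted(bets.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:,.0f}")
```

The useful part of the exercise is less the arithmetic than forcing explicit, comparable assumptions for each bet that stakeholders can argue with.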
Constraints
- Timeline: MVP must ship in 8–10 weeks.
- Team: 6 engineers (2 IDE, 2 backend, 1 ML, 1 security) + 1 designer.
- Infra: You can use a 3rd-party foundation model, but cannot send raw source code outside the customer’s region for regulated tenants.
- Cost: Inference budget target ≤ $0.06 per active user per day at current usage.
- Policy: Must support enterprise requirements: data retention controls, audit logs, and admin-configurable allow/deny lists for repositories.
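The allow/deny-list requirement in the last constraint can be sketched as a small policy check. This is a minimal sketch under stated assumptions: the function name, glob-pattern syntax, and deny-over-allow precedence are all illustrative choices, not a specification, and a real implementation would also write an audit-log entry per decision.

```python
from fnmatch import fnmatch


def repo_permitted(repo, allow_patterns, deny_patterns):
    """Return True if the assistant may read this repository.

    Assumed policy semantics: deny patterns take precedence over
    allow patterns, and an empty allow list means "allow anything
    not explicitly denied". Patterns use shell-style globs.
    """
    if any(fnmatch(repo, p) for p in deny_patterns):
        return False
    if not allow_patterns:
        return True
    return any(fnmatch(repo, p) for p in allow_patterns)


# Example tenant policy: allow platform repos, deny PCI-scoped ones.
allow = ["platform/*"]
deny = ["*/pci-*", "secrets/*"]
print(repo_permitted("platform/api-gateway", allow, deny))  # True
print(repo_permitted("platform/pci-billing", allow, deny))  # False
```

Keeping the check server-side (rather than in the IDE plugin) makes it enforceable and auditable for regulated tenants.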
What to assume
- You have access to IDE telemetry (acceptance rates, latency, session length), but not raw code content for all customers.
- You can run private previews with 10 design partners (mix of fintech, SaaS, healthcare).
- CloudForge leadership is willing to reposition ForgeAI if you can justify a clear wedge and moat.