
Case Study Template

This document provides a structured template for documenting FCC implementation case studies, followed by two completed examples.


Template

Use this structure when documenting how FCC was applied to a specific problem domain.

[Case Study Title]

Domain: [Industry or problem area]

Date: [When the case study was conducted]

Team Size: [Number of people involved]

FCC Version: [Version used]

Context

Describe the organizational context, the team composition, and the motivation for using FCC. What was the state of the existing process before FCC was introduced?

Problem

What specific problem was FCC intended to solve? Be precise about the pain points, their frequency, and their business impact.

Approach

Describe the FCC configuration used (a configuration sketch follows this list):

  • Personas selected: Which personas were used and why? Were custom personas created?
  • Workflow graph: Which workflow graph was selected (5-node, 20-node, 24-node, or custom)?
  • Governance: What constitution rules and quality gates were configured?
  • Simulation mode: Mock or AI-powered? Which AI provider?
  • Collaboration: Were human-in-the-loop sessions used? What approval gates were set?
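
Some teams find it useful to capture these five dimensions as data alongside the prose. FCC's actual configuration format is not documented here, so the following is a hypothetical sketch using plain Python structures; every identifier in it is a placeholder, not FCC's API.

```python
# Hypothetical sketch only: models the five Approach dimensions above as
# plain Python data. All names (personas, gates, providers) are placeholders.

case_study_config = {
    "personas": ["RC", "SA", "QA"],           # built-in persona IDs used
    "custom_personas": [],                     # any personas created for this study
    "workflow_graph": "20-node",               # 5-node, 20-node, 24-node, or custom
    "governance": {
        "hard_stop_rules": ["no-placeholder-text"],
        "mandatory_patterns": ["heading-structure"],
    },
    "simulation": {"mode": "ai", "provider": "claude"},  # or "mock"
    "collaboration": {
        "human_in_the_loop": True,
        "approval_gates": ["technical-accuracy", "style-compliance"],
    },
}
```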

Personas Used

| Persona ID | Name | Role in This Case Study |
| --- | --- | --- |
| ... | ... | ... |

Results

Report measurable outcomes:

  • Quality metrics: Average quality scores from the scoring engine
  • Time metrics: Time to completion vs. baseline
  • Coverage metrics: Percentage of requirements addressed
  • Governance metrics: Gate pass rates, rule violation counts
  • Qualitative outcomes: Team satisfaction, adoption feedback

Lessons Learned

  • What worked well?
  • What was more difficult than expected?
  • What would you change for the next iteration?
  • What FCC features were most and least valuable?

Recommendations

Based on this experience, what guidance would you give to another team considering FCC for a similar problem?


Example 1: Enterprise Documentation Migration

Domain: Enterprise Software (B2B SaaS)

Date: January 2026

Team Size: 4 (2 technical writers, 1 architect, 1 product manager)

FCC Version: 0.7.0

Context

A mid-size SaaS company maintained 1,200 pages of product documentation across Confluence, Google Docs, and a legacy wiki. Documentation was inconsistent in style, outdated in content, and lacked a governance process. The team needed to consolidate all documentation into a single docs-as-code system (MkDocs) while enforcing quality standards.

Problem

  • 40% of documentation pages had not been updated in over 12 months
  • No standard style guide or review process existed
  • Customer support tickets cited inaccurate documentation as the second-most common complaint (after bugs)
  • Migration estimates ranged from 3 to 6 months using manual processes

Approach

The team used FCC to structure the migration as a multi-phase workflow:

  • Personas selected: 5 personas covering the full FCC cycle
      • RC (Research Coordinator) -- Find phase: audit existing docs, categorize by topic and freshness
      • SA (Solution Architect) -- Create phase: draft migration plan and page templates
      • WA (Writing Analyst) -- Create phase: rewrite pages in consistent Markdown
      • QA (Quality Assurance) -- Critique phase: review pages against style guide
      • GA (Governance Auditor) -- Critique phase: verify compliance with documentation standards
  • Workflow graph: Custom 12-node graph mapping audit, planning, writing, review, and publication stages
  • Governance: 3 hard-stop rules (no placeholder text, all code samples must compile, API endpoints must be current) and 5 mandatory patterns (heading structure, cross-linking, version stamps); one of the hard-stop rules is sketched after this list
  • Simulation mode: AI-powered (Claude) for first-draft generation, mock mode for testing the workflow
  • Collaboration: 4 human-in-the-loop sessions, each with 2 approval gates (technical accuracy and style compliance)
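
The hard-stop rules are the load-bearing part of this configuration. FCC's rule format is not reproduced in this case study, but a rule like "no placeholder text" reduces to a predicate over page content. A minimal sketch of that predicate, assuming pages are Markdown strings (the rule wiring itself is invented for illustration):

```python
import re

# Hypothetical sketch of the "no placeholder text" hard-stop rule from this
# case study. FCC's real rule engine is not shown in this document; this is
# just the predicate such a rule would need to evaluate.

PLACEHOLDER_PATTERNS = [
    r"\bTODO\b",
    r"\bTBD\b",
    r"\blorem ipsum\b",
    r"\[insert .*?\]",
]

def placeholder_violations(page_markdown: str) -> list[str]:
    """Return the placeholder fragments found in a page, if any."""
    hits = []
    for pattern in PLACEHOLDER_PATTERNS:
        hits.extend(re.findall(pattern, page_markdown, flags=re.IGNORECASE))
    return hits

# A hard stop means publication is blocked while this list is non-empty:
assert placeholder_violations("# Setup\nTODO: document flags") == ["TODO"]
```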

Personas Used

| Persona ID | Name | Role in This Case Study |
| --- | --- | --- |
| RC | Research Coordinator | Audited 1,200 pages, categorized by topic and staleness |
| SA | Solution Architect | Designed migration architecture and page templates |
| WA | Writing Analyst | Rewrote pages in standardized Markdown format |
| QA | Quality Assurance | Reviewed each page against the 8-point style checklist |
| GA | Governance Auditor | Verified hard-stop compliance before publication |

Results

  • Quality: Average quality score rose from 2.1 (pre-migration) to 4.3 (post-migration) on the 1-5 scale
  • Time: Migration completed in 6 weeks (vs. 3-6 month estimate for manual process)
  • Coverage: 100% of active pages migrated; 180 obsolete pages identified and archived
  • Governance: 12 hard-stop violations caught during review (all fixed before publication)
  • Customer impact: Documentation-related support tickets decreased by 35% in the 3 months following migration

Lessons Learned

  • What worked well: The governance hard-stop rules caught significant issues early. AI-generated first drafts saved approximately 60% of writing time.
  • What was more difficult than expected: Mapping existing page hierarchies to a consistent heading structure required manual judgment that the AI struggled with.
  • What would you change: Add a dedicated persona for cross-reference validation (many internal links broke during migration).
  • Most valuable features: Quality gates, collaboration sessions with approval gates, docs-as-code generator.

Recommendations

For documentation migration projects:

  1. Start with a full audit using the Find phase before planning the migration
  2. Define hard-stop governance rules for the most critical quality requirements
  3. Use AI-powered simulation for first drafts, but always include human review gates
  4. Create a custom persona for link validation if migrating between systems with different URL structures (a sketch of such a check follows this list)
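
Recommendation 4 is the one this team wished it had followed from the start. Whatever persona runs it, the underlying check is a walk over the migrated tree that flags relative links whose targets don't exist. A hedged sketch, with an assumed docs layout and link regex (FCC's persona interface is not part of this document):

```python
import re
from pathlib import Path

# Hypothetical sketch of the internal-link check a "link validation" persona
# would run. The docs path and the link regex are assumptions.

# Matches [text](relative-target), skipping absolute http(s) links.
LINK_RE = re.compile(r"\[[^\]]*\]\((?!https?://)([^)#]+)")

def broken_internal_links(docs_root: Path) -> list[tuple[Path, str]]:
    """Return (page, target) pairs for relative links that don't resolve."""
    broken = []
    for page in docs_root.rglob("*.md"):
        for target in LINK_RE.findall(page.read_text(encoding="utf-8")):
            if not (page.parent / target).exists():
                broken.append((page, target))
    return broken

if __name__ == "__main__":
    for page, target in broken_internal_links(Path("docs")):
        print(f"{page}: broken link -> {target}")
```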


Example 2: Research Literature Review Automation

Domain: Academic Research (Computer Science)

Date: February 2026

Team Size: 2 (1 PhD student, 1 faculty advisor)

FCC Version: 0.8.0

Context

A PhD student was conducting a systematic literature review for a dissertation on federated learning privacy. The review scope included 300+ papers from 5 databases (IEEE, ACM, arXiv, Springer, Elsevier). The student needed to categorize papers by methodology, identify research gaps, and synthesize findings into a structured narrative.

Problem

  • Manual paper screening was taking approximately 15 minutes per paper, projecting to 75+ hours for the full corpus
  • Inconsistent categorization criteria led to reclassification cycles
  • The advisor required a structured taxonomy of research themes that evolved as new papers were added
  • Synthesis of findings across 300+ papers was overwhelming without a systematic framework

Approach

The team used FCC to structure the literature review as a multi-agent workflow:

  • Personas selected: 4 personas customized for academic research
      • RC (Research Coordinator) -- Find phase: search strategy, database queries, deduplication
      • Custom persona "LRS" (Literature Review Specialist) -- Find phase: paper screening and categorization
      • Custom persona "TSS" (Thematic Synthesis Specialist) -- Create phase: thematic analysis and gap identification
      • QA (Quality Assurance) -- Critique phase: verify categorization consistency and citation accuracy
  • Workflow graph: Custom 10-node graph covering search, screen, categorize, analyze, synthesize, and review stages
  • Governance: 2 hard-stop rules (every paper must have a DOI or stable URL; categorization must use the predefined taxonomy) and 3 mandatory patterns; both hard-stop rules are sketched after this list
  • Simulation mode: AI-powered for paper summarization and categorization, mock mode for workflow testing
  • Collaboration: 6 collaboration sessions over 4 weeks, with the advisor reviewing categorizations at each gate
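
Both hard-stop rules here amount to validation over paper records. A minimal sketch, assuming papers are represented as simple records and a toy taxonomy (the real FCC corpus schema and the review's actual taxonomy are not documented here):

```python
from dataclasses import dataclass

# Hypothetical sketch of the two hard-stop rules from this case study.
# The Paper record and TAXONOMY values are assumptions for illustration.

@dataclass
class Paper:
    title: str
    doi: str | None = None
    stable_url: str | None = None
    category: str | None = None  # must come from the predefined taxonomy

TAXONOMY = {"secure-aggregation", "differential-privacy", "attacks", "other"}

def hard_stop_violations(paper: Paper) -> list[str]:
    """Evaluate both hard-stop rules from the Approach section."""
    violations = []
    if not (paper.doi or paper.stable_url):
        violations.append("missing DOI or stable URL")
    if paper.category is not None and paper.category not in TAXONOMY:
        violations.append(f"category {paper.category!r} not in taxonomy")
    return violations

assert hard_stop_violations(Paper(title="FedAvg")) == ["missing DOI or stable URL"]
```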

Personas Used

| Persona ID | Name | Role in This Case Study |
| --- | --- | --- |
| RC | Research Coordinator | Designed search strategy, managed database queries |
| LRS | Literature Review Specialist (custom) | Screened abstracts, applied inclusion/exclusion criteria |
| TSS | Thematic Synthesis Specialist (custom) | Identified themes, analyzed gaps, drafted synthesis |
| QA | Quality Assurance | Verified categorization consistency and citation format |

Results

  • Quality: Categorization consistency of 4.5/5.0, measured by inter-rater agreement between the AI and human reviewers
  • Time: Full review completed in 3 weeks (vs. estimated 8 weeks for manual review)
  • Coverage: 312 papers screened, 187 included, 125 excluded with documented rationale
  • Governance: 8 hard-stop violations (missing DOIs) caught and resolved
  • Taxonomy: 6 top-level themes and 23 sub-themes identified, with 4 research gaps documented

Lessons Learned

  • What worked well: The RAG pipeline was highly effective for answering questions about the corpus (e.g., "Which papers address differential privacy in federated learning?"). The knowledge graph provided a visual map of research themes.
  • What was more difficult than expected: The AI occasionally miscategorized papers when the abstract was ambiguous. Human review gates were essential for borderline cases.
  • What would you change: Use the semantic search index from the start (added in week 2) rather than keyword search. Build the knowledge graph incrementally rather than as a batch process at the end.
  • Most valuable features: RAG pipeline for corpus Q&A, knowledge graph for theme visualization, collaboration sessions for advisor review.

Recommendations

For systematic literature reviews:

  1. Define the categorization taxonomy before starting screening
  2. Use persona-aware RAG queries to answer specific research questions about the corpus
  3. Build the knowledge graph incrementally as papers are categorized
  4. Schedule regular advisor review sessions using collaboration engine approval gates
  5. Export the knowledge graph to JSON-LD for integration with reference management tools (a minimal export sketch follows this list)
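
Recommendation 5 needs nothing beyond the standard library once the themes exist. A hedged sketch, assuming a toy two-level theme graph and generic schema.org vocabulary (the export format FCC itself emits, if any, is not documented here):

```python
import json

# Hypothetical sketch of exporting a theme knowledge graph to JSON-LD.
# The vocabulary, node IDs, and theme names are assumptions, not an FCC format.

themes = {
    "privacy-mechanisms": ["differential-privacy", "secure-aggregation"],
    "threat-models": ["inference-attacks"],
}

graph = {
    "@context": {"@vocab": "https://schema.org/"},
    "@graph": [
        {
            "@id": f"theme:{top}",
            "@type": "DefinedTerm",
            "name": top,
            "hasPart": [{"@id": f"theme:{sub}"} for sub in subs],
        }
        for top, subs in themes.items()
    ],
}

# Reference managers and graph tools can then ingest the file directly.
with open("themes.jsonld", "w", encoding="utf-8") as f:
    json.dump(graph, f, indent=2)
```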


See Also