Research Methodology with FCC

The FCC Agent Team Framework provides a structured, reproducible environment for studying multi-agent collaboration, persona-based prompt engineering, and documentation workflow automation. This guide covers how to use FCC as a research instrument, design experiments around its components, and cite the framework in academic publications.

FCC as a Research Platform

FCC is not a black-box agent system. Every component is specified declaratively in YAML and JSON, loaded into Python dataclasses, and executed through a deterministic or AI-powered simulation engine. This transparency makes it suitable for several research agendas:

  • Multi-agent coordination studies -- 24 personas with 106 defined interactions across 5 relationship types (handoff, feedback, coordination, governance, champion-of)
  • Persona-based prompt engineering -- 10-component R.I.S.C.E.A.R. specifications that translate into system prompts, enabling ablation studies on which components matter most
  • Workflow graph analysis -- Three workflow graphs (5-node, 20-node, 24-node) with well-defined traversal semantics (BFS)
  • Quality assurance frameworks -- 28 quality gates with defined severity levels and pass/fail criteria
  • Behavioral profiling -- 56 persona dimensions across 9 categories, plus the 6-trait Discernment Matrix and 6-factor Design Target Factors
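The BFS traversal semantics mentioned above can be illustrated on a toy workflow graph. This is a plain-Python sketch, not FCC code; the node names mix phases that appear in this guide (Find, Create, Critique) with invented ones (Frame, Confirm) for a five-node example:

```python
from collections import deque

def bfs_order(graph, start):
    """Return the order in which BFS visits the nodes of a workflow graph."""
    visited = [start]
    queue = deque([start])
    seen = {start}
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                visited.append(nxt)
                queue.append(nxt)
    return visited

# Toy 5-node graph standing in for the base workflow
toy_graph = {
    "Frame": ["Find"],
    "Find": ["Create", "Critique"],
    "Create": ["Critique"],
    "Critique": ["Confirm"],
    "Confirm": [],
}
print(bfs_order(toy_graph, "Frame"))
# → ['Frame', 'Find', 'Create', 'Critique', 'Confirm']
```

Because BFS visits each node exactly once in level order, the same graph always yields the same traversal, which is what makes deterministic replay possible.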

Experimental Design with Deterministic Simulation

The simulation engine's deterministic mode produces predictable, reproducible traces -- essential for controlled experiments.

Setting Up a Controlled Experiment

from fcc._resources import get_personas_dir, get_scenarios_dir
from fcc.personas.registry import PersonaRegistry
from fcc.simulation.engine import SimulationEngine

# Load the full persona registry
registry = PersonaRegistry.from_yaml_directory(get_personas_dir())

# Create a deterministic simulation engine
engine = SimulationEngine(registry=registry, mode="deterministic")

# Run the same scenario multiple times
traces = []
for trial in range(10):
    trace = engine.run_scenario("GEN-001")
    traces.append(trace)

# All traces should be identical in deterministic mode
assert all(t == traces[0] for t in traces)

Independent Variables

FCC provides natural experimental controls:

| Variable | How to Manipulate | What to Measure |
| --- | --- | --- |
| Persona count | Use base (5), extended (20), or complete (24) workflow | Output quality, coverage |
| R.I.S.C.E.A.R. components | Ablate specific fields (remove constraints, style, etc.) | Behavioral deviation |
| Workflow graph | Swap between the three provided graphs | Traversal efficiency, message count |
| Quality gate thresholds | Adjust gate severity and pass criteria | False positive/negative rates |
| Simulation mode | Compare deterministic vs. AI-powered | Response variance, quality |
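An R.I.S.C.E.A.R. ablation study reduces to a loop over component removals. The sketch below operates on a plain dictionary standing in for a persona specification; the field names are illustrative (a real spec has 10 components), and in practice each variant would be rebuilt as a PersonaSpec and re-run through the scenario:

```python
# Stand-in persona spec; real R.I.S.C.E.A.R. specs have 10 components.
base_spec = {
    "role": "Research Coordinator",
    "instructions": "Gather and synthesize sources.",
    "style": "Concise, citation-heavy.",
    "constraints": "Only cite verifiable sources.",
    "examples": "Q: ... A: ...",
}

def ablations(spec):
    """Yield (removed_component, ablated_spec) pairs, one per component."""
    for key in spec:
        yield key, {k: v for k, v in spec.items() if k != key}

for removed, variant in ablations(base_spec):
    # Each variant would be rendered into a system prompt and simulated;
    # behavioral deviation is then measured against the full-spec baseline.
    print(f"ablating {removed!r}: {len(variant)} components remain")
```

One-at-a-time ablation keeps the comparison interpretable: any behavioral deviation from the baseline can be attributed to the single removed component.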

Dependent Variables

Measure outcomes through the trace format:

  • Coverage -- How many personas were activated, which FCC phases were visited
  • Message count -- Total messages exchanged, messages per persona
  • Quality scores -- Pass/fail rates on quality gates
  • Feedback loops -- Number of Critique-to-Create (or Critique-to-Find) cycles before convergence
  • Latency -- Time per persona activation in AI-powered mode
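Feedback loops, for example, can be counted directly from the ordered phase sequence of a trace. This is an illustrative helper, not part of the FCC API; it assumes the phase names used elsewhere in this guide:

```python
def count_feedback_loops(phases):
    """Count transitions from Critique back to Create or Find."""
    loops = 0
    for prev, curr in zip(phases, phases[1:]):
        if prev == "Critique" and curr in ("Create", "Find"):
            loops += 1
    return loops

# A hypothetical phase sequence extracted from trace entries
phase_sequence = ["Find", "Create", "Critique", "Create", "Critique", "Find", "Critique"]
print(count_feedback_loops(phase_sequence))  # → 2
```

The same pairwise-scan pattern extends to any adjacency-based dependent variable, such as counting handoffs between specific persona pairs.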

Trace Analysis Methodology

Every simulation run produces a JSON trace conforming to data/schemas/trace.schema.json. Traces are the primary data artifact for analysis.

Trace Structure

{
  "scenario_id": "GEN-001",
  "workflow_graph": "base_sequence",
  "mode": "deterministic",
  "started_at": "2026-03-15T10:00:00Z",
  "completed_at": "2026-03-15T10:00:01Z",
  "entries": [
    {
      "persona_id": "RC",
      "phase": "Find",
      "input": "...",
      "output": "...",
      "timestamp": "2026-03-15T10:00:00.100Z"
    }
  ],
  "validation_results": [
    {
      "rule": "fcc_phase_coverage",
      "passed": true
    }
  ]
}

Quantitative Analysis

import json
from collections import Counter

with open("trace_output.json") as f:
    trace = json.load(f)

# Phase distribution
phases = Counter(e["phase"] for e in trace["entries"])
print(f"Phase distribution: {dict(phases)}")

# Messages per persona
by_persona = Counter(e["persona_id"] for e in trace["entries"])
print(f"Activations per persona: {dict(by_persona)}")

# Quality gate pass rate
gates = trace.get("validation_results", [])
pass_rate = sum(1 for g in gates if g["passed"]) / len(gates) if gates else 0
print(f"Quality gate pass rate: {pass_rate:.1%}")

The Discernment Matrix as a Research Instrument

The Discernment Matrix provides a multi-rater evaluation framework with 6 traits and 7 rating dimensions. This structure is useful for:

Inter-Rater Reliability Studies

The 7 rating dimensions (self, peer, survey, individual weighted, organizational, external, ranked percentile) enable inter-rater reliability analysis. Compare self-ratings against peer ratings to measure self-awareness accuracy:

# `registry` is the PersonaRegistry loaded in the setup example above

persona = registry.get("RC")
for trait in persona.discernment_matrix:
    if trait.ratings.self_rating and trait.ratings.peer_rating:
        gap = trait.ratings.self_rating - trait.ratings.peer_rating
        print(f"{trait.name}: self={trait.ratings.self_rating}, "
              f"peer={trait.ratings.peer_rating}, gap={gap:+.1f}")

Trait Correlation Analysis

Examine whether certain discernment traits correlate with persona effectiveness:

  • Does higher Curiosity correlate with more thorough research outputs?
  • Does higher Responsibility correlate with better compliance gate pass rates?
  • Does higher Inclusivity correlate with more diverse stakeholder coverage?
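Each of these questions reduces to correlating a trait-score vector against an outcome vector across personas. The sketch below uses a hand-rolled Pearson correlation on invented data; a real study would pull the trait scores from the registry and the outcome measures from traces:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: Curiosity scores vs. sources cited per research task
curiosity = [3.2, 4.1, 2.8, 4.6, 3.9]
sources_cited = [5, 9, 4, 11, 8]
print(f"r = {pearson(curiosity, sources_cited):.3f}")
```

With only 24 personas, sample sizes are small; report confidence intervals or use rank-based statistics rather than relying on point estimates of r.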

Persona Dimensions for Behavioral Analysis

The 56-dimension persona profiling system provides a rich feature space for quantitative behavioral analysis.

Dimension Categories as Factor Groups

| Category | Dimensions | Research Application |
| --- | --- | --- |
| Core Persona Elements | 7 | Agent identity and role analysis |
| Behavioral and Motivational | 6 | Tool adoption and risk tolerance studies |
| Communication and Learning | 4 | Information flow pattern analysis |
| Cultural and Social | 4 | Cross-cultural interaction dynamics |
| Decision-Making and Leadership | 5 | Coordination pattern studies |
| Professional Development | 5 | Agent sustainability and growth modeling |
| Market and Regulatory | 5 | Compliance behavior analysis |
| Innovative Elements | 10 | Innovation diffusion studies |
| Advanced Attributes | 10 | Ecosystem role and governance analysis |

Feature Extraction for Machine Learning

# Extract dimension features for a persona
# (`registry` is the PersonaRegistry loaded in the setup example above)
persona = registry.get("RC")
profile = persona.dimension_profile

if profile:
    features = {}
    for category in profile.populated_categories:
        dims = profile.dimensions_for_category(category)
        for dim in dims:
            for attr in dim.attributes:
                if attr.value:
                    features[f"{category}.{dim.name}.{attr.name}"] = attr.value

    print(f"Extracted {len(features)} dimension features for {persona.name}")
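The resulting string-keyed feature dictionaries can be turned into fixed-length numeric vectors via one-hot encoding over a shared vocabulary. This is a plain-Python sketch; the feature names below are invented for illustration:

```python
def one_hot_vectors(feature_dicts):
    """Encode {feature_name: value} dicts as one-hot vectors over the
    vocabulary of all (name, value) pairs observed across the inputs."""
    vocab = sorted({(k, v) for d in feature_dicts for k, v in d.items()})
    index = {pair: i for i, pair in enumerate(vocab)}
    vectors = []
    for d in feature_dicts:
        vec = [0] * len(vocab)
        for pair in d.items():
            vec[index[pair]] = 1
        vectors.append(vec)
    return vocab, vectors

# Two hypothetical persona profiles with illustrative feature names
profiles = [
    {"Behavioral.risk_tolerance": "low", "Communication.style": "formal"},
    {"Behavioral.risk_tolerance": "high", "Communication.style": "formal"},
]
vocab, vectors = one_hot_vectors(profiles)
print(len(vocab), vectors)  # → 3 [[0, 1, 1], [1, 0, 1]]
```

Because the vocabulary is sorted, the encoding is deterministic across runs, which matters for reproducible clustering or classification experiments.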

Cross-Reference Matrix for Network Analysis

The 106-entry cross-reference matrix is a directed graph suitable for social network analysis techniques.

Network Metrics

  • Degree centrality -- Which personas have the most connections?
  • Betweenness centrality -- Which personas are critical intermediaries?
  • Clustering coefficient -- How tightly connected are persona subgroups?
  • Hub-and-spoke patterns -- Do champion personas act as network hubs?

from fcc._resources import get_personas_dir
from fcc.personas.cross_reference import CrossReferenceMatrix

matrix = CrossReferenceMatrix.from_yaml(get_personas_dir() / "cross_reference.yaml")

# Compute out-degree (downstream connections) for each persona
out_degree = {}
for pid in matrix.all_persona_ids():
    out_degree[pid] = len(matrix.downstream(pid))

# Sort by connectivity
for pid, degree in sorted(out_degree.items(), key=lambda x: -x[1]):
    print(f"{pid}: {degree} downstream connections")
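Betweenness centrality can be computed without external dependencies using Brandes' algorithm for unweighted directed graphs. The sketch below runs on a toy adjacency dict (the persona IDs are illustrative); in practice the adjacency would be built from the cross-reference matrix's downstream connections:

```python
from collections import deque

def betweenness(graph):
    """Brandes' betweenness centrality for an unweighted directed graph.

    `graph` maps each node to a list of successor nodes.
    """
    bc = {v: 0.0 for v in graph}
    for s in graph:
        stack, queue = [], deque([s])
        pred = {v: [] for v in graph}
        sigma = {v: 0 for v in graph}
        sigma[s] = 1
        dist = {v: -1 for v in graph}
        dist[s] = 0
        while queue:  # BFS phase: count shortest paths from s
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in graph}
        while stack:  # accumulation phase, in reverse BFS order
            w = stack.pop()
            for v in pred[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Toy chain: RC hands off to AW, AW hands off to QA (IDs illustrative)
toy_graph = {"RC": ["AW"], "AW": ["QA"], "QA": []}
bc = betweenness(toy_graph)
print(bc)  # AW sits on the only RC -> QA shortest path
```

For larger analyses (clustering coefficients, community detection), handing the same adjacency structure to a dedicated graph library is usually more practical than reimplementing each metric.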

Citing FCC in Academic Papers

BibTeX

@software{fcc_agent_team_2026,
  title     = {FCC Agent Team Extension: A Multi-Persona Documentation Workflow Framework},
  author    = {{Information Collective, LLC}},
  year      = {2026},
  url       = {https://github.com/rollingthunderfourtytwo-afk/l2_fcc_agent_team_ext},
  version   = {0.1.0},
  license   = {MIT}
}

APA 7th Edition

Information Collective, LLC. (2026). FCC Agent Team Extension: A multi-persona documentation workflow framework (Version 0.1.0) [Computer software]. GitHub. https://github.com/rollingthunderfourtytwo-afk/l2_fcc_agent_team_ext

See the Citation Guide for additional formats.

Ethical Considerations

When conducting research with FCC:

  1. Disclose AI involvement. If using AI-powered simulation mode, clearly state which LLM provider and model were used.
  2. Report configuration. Include the FCC version, persona registry version, workflow graph, and quality gate configuration in your methods section.
  3. Respect the Discernment Matrix values. The six traits (Humility, Professional Background, Curiosity, Taste, Inclusivity, Responsibility) encode ethical principles. Research designs should honor these values.
  4. Version pin all dependencies. Ensure exact reproducibility by locking the FCC package version and all transitive dependencies.
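For point 4, a minimal pinning workflow with pip might look like the following. The distribution name fcc-agent-team is illustrative; substitute the actual package name:

```shell
# Install a specific version, then freeze the full environment,
# capturing all transitive dependencies at exact versions
pip install "fcc-agent-team==0.1.0"
pip freeze > requirements.lock.txt

# Reproduce the exact environment later (or on another machine)
pip install -r requirements.lock.txt
```

Committing the lock file alongside the trace data lets reviewers rebuild the environment that produced a given set of results.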