R.I.S.C.E.A.R. Deep Dive for Researchers

This document provides a formal description of the R.I.S.C.E.A.R. persona specification framework, its dimension profiling methodology, and guidance for using the FCC event replay system for reproducible multi-agent research.

1. The 10-Component Specification

R.I.S.C.E.A.R. is a structured specification language for defining AI agent personas. Each persona is fully determined by ten components:

| # | Component | Field | Type | Description |
|---|-----------|-------|------|-------------|
| 1 | Role | role | str | The identity and function assigned to the agent. Describes what the persona does within the FCC workflow. |
| 2 | Inputs | inputs | list[str] | Required data, facts, and background information the persona needs to perform its role. |
| 3 | Style | style | str | Communication conventions: tone, language register, formatting rules, and presentation guidelines. |
| 4 | Constraints | constraints | list[str] | Boundaries, limitations, and mandatory rules that govern the persona's output. |
| 5 | Expected Output | expected_output | list[str] | The structure, format, and detail level of artifacts the persona must produce. |
| 6 | Archetype | archetype | str | The fundamental behavioral model the persona embodies (e.g., "The Methodical Organizer"). |
| 7 | Responsibilities | responsibilities | list[str] | Ongoing duties and ethical commitments beyond immediate deliverables. |
| 8 | Role Skills | role_skills | list[str] | Specific competencies required for effective execution. |
| 9 | Role Collaborators | role_collaborators | list[str] | Upstream and downstream interaction partners, referenced by persona ID. |
| 10 | Role Adoption Checklist | role_adoption_checklist | list[str] | Validation criteria that must be satisfied before the persona is considered operational. |

Formal Representation

A persona P is defined as a tuple:

P = (R, I, S, C, E, A, Resp, Skills, Collab, Checklist)

where each component maps to a field in the RISCEARSpec dataclass (src/fcc/personas/models.py). The specification is loaded from YAML and instantiated as a frozen (immutable) dataclass, ensuring referential integrity throughout the system.
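
The immutability claim can be demonstrated with a self-contained sketch. `MiniRISCEARSpec` below is a stand-in that borrows the ten field names from the table; it is not the real `RISCEARSpec`, whose definition may differ in detail:

```python
import dataclasses
from dataclasses import dataclass

# Minimal stand-in mirroring the ten R.I.S.C.E.A.R. fields from the table.
# Illustrative only -- the real class lives in src/fcc/personas/models.py.
@dataclass(frozen=True)
class MiniRISCEARSpec:
    role: str
    inputs: list
    style: str
    constraints: list
    expected_output: list
    archetype: str
    responsibilities: list
    role_skills: list
    role_collaborators: list
    role_adoption_checklist: list

spec = MiniRISCEARSpec(
    role="Research Crafter",
    inputs=["topic brief"],
    style="formal, citation-heavy",
    constraints=["cite all sources"],
    expected_output=["research summary"],
    archetype="The Methodical Organizer",
    responsibilities=["verify claims"],
    role_skills=["literature search"],
    role_collaborators=["BC"],
    role_adoption_checklist=["inputs validated"],
)

# frozen=True makes field rebinding raise, which is what provides the
# referential-integrity guarantee described above.
try:
    spec.role = "Something Else"
except dataclasses.FrozenInstanceError:
    print("spec is immutable")  # → spec is immutable
```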

Implementation Reference

```python
from fcc.personas.models import RISCEARSpec, PersonaSpec
from fcc.personas.registry import PersonaRegistry

# Load a persona from the registry
registry = PersonaRegistry.from_data_dir("src/fcc/data/personas")
persona = registry.get("RC")  # Research Crafter

# Access R.I.S.C.E.A.R. components
spec = persona.riscear
print(f"Role: {spec.role}")
print(f"Archetype: {spec.archetype}")
print(f"Inputs: {spec.inputs}")
print(f"Constraints: {spec.constraints}")
print(f"Expected Output: {spec.expected_output}")
print(f"Responsibilities: {spec.responsibilities}")
print(f"Skills: {spec.role_skills}")
print(f"Collaborators: {spec.role_collaborators}")
print(f"Adoption Checklist: {spec.role_adoption_checklist}")
```

2. Behavioral Profiling: Discernment Matrix and Design Target Factors

Beyond the functional R.I.S.C.E.A.R. specification, each persona is characterized by two behavioral models.

2.1 Discernment Matrix

Six traits, each rated across the seven dimensions defined in Section 2.3:

| Trait | Construct |
|-------|-----------|
| Humility | Acknowledgment of biases, limitations, and others' perspectives |
| Professional Background | Domain expertise and professional context |
| Curiosity | Drive to explore and consider new perspectives |
| Taste | Refined judgment and aesthetic sensibility |
| Inclusivity | Respect for diverse beliefs, cultures, and experiences |
| Responsibility | Application of discernment for equitable outcomes |

2.2 Design Target Factors

Six interpersonal factors modeled on the "Super Connector" archetype:

| Factor | Construct |
|--------|-----------|
| Optimism | Technology and connectivity as tools for positive change |
| Social Connectivity | Leveraging relationships via networks |
| Influence | Acting as a catalyst for action within professional networks |
| Diversity Appreciation | Valuing diverse cultures, thoughts, and people |
| Curiosity | Lifelong learning and intellectual openness |
| Leadership | Entrepreneurial mindset and natural leadership |

2.3 Seven Rating Dimensions

Both the Discernment Matrix and Design Target Factors use a shared seven-dimension rating model:

| Dimension | Description |
|-----------|-------------|
| self_rating | Self-assessment by the persona |
| peer_rating | Assessment by collaborating personas |
| survey_rating | Aggregated survey-based evaluation |
| individual_weighted_rating | Weighted composite of individual assessments |
| org_rating | Organizational-level evaluation |
| external_rating | Assessment from external stakeholders |
| ranked_percentile_rating | Normalized percentile ranking across the ecosystem |

This multi-rater model is implemented as the RatingDimensions frozen dataclass.
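
A self-contained sketch of that model, with a hypothetical composite() aggregation helper; the real RatingDimensions class may expose different methods:

```python
from dataclasses import dataclass, fields
from statistics import mean

# Minimal stand-in for the RatingDimensions frozen dataclass described above.
# The composite() helper is hypothetical, shown only to illustrate how the
# seven dimensions might be aggregated.
@dataclass(frozen=True)
class MiniRatingDimensions:
    self_rating: float
    peer_rating: float
    survey_rating: float
    individual_weighted_rating: float
    org_rating: float
    external_rating: float
    ranked_percentile_rating: float

    def composite(self) -> float:
        """Unweighted mean over all seven rating dimensions."""
        return mean(getattr(self, f.name) for f in fields(self))

# Example: a "Humility" trait rating on a 0-1 scale (values invented).
humility = MiniRatingDimensions(0.80, 0.70, 0.75, 0.78, 0.82, 0.76, 0.87)
print(round(humility.composite(), 3))  # → 0.783
```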

3. Persona Dimension Profiling Methodology

The deepest level of persona specification is the 56-dimension profile organized into 9 categories:

| # | Category | Dimensions | Focus |
|---|----------|------------|-------|
| 1 | Core Persona Elements | 7 | Agent profile, organizational role, decision authority |
| 2 | Behavioral and Motivational Factors | 6 | Tool adoption, framework preferences, risk tolerance |
| 3 | Communication and Learning Styles | 4 | Channels, information sources, learning preferences |
| 4 | Cultural and Social Influences | 4 | Operational heritage, protocol proficiency, platform engagement |
| 5 | Decision-Making and Leadership Approaches | 5 | Decision style, problem-solving, conflict resolution |
| 6 | Professional Development and Wellness | 5 | Mentorship, growth, sustainability, cross-project mobility |
| 7 | Market and Regulatory Awareness | 5 | Trends, competition, regulations, ethics |
| 8 | Innovative Persona Elements | 10 | Output trace analysis, innovation rate, crisis management |
| 9 | Advanced Persona Attributes | 10 | Ecosystem role, resource budget, RACI, data governance |
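
As a quick sanity check, the per-category counts above sum to the 56 dimensions. The dictionary keys below are illustrative labels, not necessarily the real CATEGORY_NAMES identifiers:

```python
# Dimension counts per category, copied from the table above.
# Key names are invented labels for this sketch.
category_dims = {
    "core_persona_elements": 7,
    "behavioral_and_motivational_factors": 6,
    "communication_and_learning_styles": 4,
    "cultural_and_social_influences": 4,
    "decision_making_and_leadership_approaches": 5,
    "professional_development_and_wellness": 5,
    "market_and_regulatory_awareness": 5,
    "innovative_persona_elements": 10,
    "advanced_persona_attributes": 10,
}
print(len(category_dims), sum(category_dims.values()))  # → 9 56
```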

Interpretation Guide

14 of the 56 dimensions were originally designed for consumer persona modeling and have been reinterpreted for AI documentation agents. For example:

| Original Dimension | AI Agent Interpretation |
|--------------------|-------------------------|
| Demographic Information | Agent Profile |
| Purchasing Behavior | Tool/Resource Adoption Patterns |
| Income Level | Resource Budget / Compute Allocation |

The full mapping with rationale is in data/personas/dimension_interpretation_guide.yaml.

Accessing Dimension Profiles

```python
from fcc.personas.registry import PersonaRegistry

registry = PersonaRegistry.from_data_dir("src/fcc/data/personas")
persona = registry.get("RC")

if persona.dimension_profile:
    profile = persona.dimension_profile
    for cat_name in profile.CATEGORY_NAMES:
        dims = getattr(profile, cat_name)
        print(f"{cat_name}: {len(dims)} dimensions")
        for dim in dims[:2]:  # Show first 2
            print(f"  - {dim.name}: {dim.description[:60]}...")
```

4. Reproducible Research with Event Replay

The FCC messaging system provides a complete audit trail suitable for reproducible multi-agent research.

4.1 Recording Events

```python
from fcc.messaging.bus import EventBus
from fcc.messaging.serialization import EventSerializer

bus = EventBus()
bus.start_recording()

# Run your experiment (simulation, action execution, collaboration session)
# ... all events are automatically captured ...

bus.stop_recording()
history = bus.get_history()

# Persist the event log
EventSerializer.save(history, "experiment_events.json")
print(f"Recorded {len(history)} events")
```

4.2 Replaying for Verification

Another researcher can load the event log and replay it to verify findings:

```python
from fcc.messaging.bus import EventBus
from fcc.messaging.serialization import EventSerializer, EventReplay

# Load the recorded events
events = EventSerializer.load("experiment_events.json")

# Set up analysis subscribers
persona_activations = []
gate_results = []

def collect_persona_events(e):
    if e.event_type.value.startswith("persona."):
        persona_activations.append(e)

def collect_governance_events(e):
    if e.event_type.value.startswith("governance."):
        gate_results.append(e)

bus = EventBus()
bus.subscribe(collect_persona_events)
bus.subscribe(collect_governance_events)

# Replay
replayer = EventReplay(bus)
total = replayer.replay(events)
print(f"Replayed {len(events)} events, {total} subscriber deliveries")
```

4.3 Filtered Replay for Hypothesis Testing

Replay only events matching specific criteria to isolate experimental conditions:

```python
# Replay only events from a specific simulation run
replayer.replay_filtered(
    events,
    correlation_id="experiment-run-001",
)

# Replay only events from the ActionEngine
replayer.replay_filtered(
    events,
    source="ActionEngine",
)
```
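
Conceptually, filtered replay is a predicate applied before each event is republished. A self-contained sketch of that selection logic, using a simplified stand-in event type rather than the real FCC event model:

```python
from dataclasses import dataclass

# Simplified stand-in for an FCC event; the real event model has more fields.
@dataclass(frozen=True)
class StubEvent:
    event_type: str
    source: str
    correlation_id: str

def filter_events(events, source=None, correlation_id=None):
    """Keep events matching every criterion that was supplied."""
    return [
        e for e in events
        if (source is None or e.source == source)
        and (correlation_id is None or e.correlation_id == correlation_id)
    ]

log = [
    StubEvent("action.started", "ActionEngine", "experiment-run-001"),
    StubEvent("persona.activated", "PersonaRegistry", "experiment-run-001"),
    StubEvent("action.completed", "ActionEngine", "experiment-run-002"),
]

print(len(filter_events(log, source="ActionEngine")))                # → 2
print(len(filter_events(log, correlation_id="experiment-run-001")))  # → 2
```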

4.4 Session Replay

Collaboration sessions can be replayed as structured event sequences:

```python
from fcc.collaboration.recording import SessionRecorder
from fcc.messaging.bus import EventBus

session = SessionRecorder.load_json("session_data.json")

bus = EventBus()
analysis_events = []
bus.subscribe(lambda e: analysis_events.append(e))

deliveries = SessionRecorder.replay_session(session, bus)
print(f"Session {session.session_id}: {len(session.turns)} turns, "
      f"{deliveries} event deliveries")
```

5. Cross-Reference Matrix for Interaction Analysis

The CrossReferenceMatrix enables systematic analysis of persona-to-persona interaction patterns:

```python
from fcc.personas.cross_reference import CrossReferenceMatrix

# Load from YAML
matrix = CrossReferenceMatrix.from_yaml("data/personas/cross_reference.yaml")

# Or auto-generate from persona collaboration links
# matrix = CrossReferenceMatrix.from_personas(registry)

# Query interaction patterns
upstream = matrix.upstream("BC")      # Who feeds into Blueprint Crafter?
downstream = matrix.downstream("RC")  # Where does Research Crafter's output go?
peers = matrix.peers("DE")            # Who collaborates laterally with Doc Evangelist?
```
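
Conceptually, these queries are lookups over a directed graph of persona IDs. A minimal self-contained sketch; the edges and peer links below are invented for illustration, not the real matrix contents:

```python
# Directed edges: producer -> consumer, keyed by persona ID (illustrative).
edges = {
    ("RC", "BC"),
    ("BC", "DE"),
    ("RC", "DE"),
}
# Lateral, non-directional collaboration links (illustrative).
peer_links = {frozenset({"DE", "RC"})}

def upstream(pid):
    """Personas whose output feeds into `pid`."""
    return sorted(src for src, dst in edges if dst == pid)

def downstream(pid):
    """Personas that consume `pid`'s output."""
    return sorted(dst for src, dst in edges if src == pid)

def peers(pid):
    """Lateral collaborators of `pid`."""
    return sorted(p for link in peer_links if pid in link for p in link if p != pid)

print(upstream("BC"))    # → ['RC']
print(downstream("RC"))  # → ['BC', 'DE']
print(peers("DE"))       # → ['RC']
```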

6. Citation Information

When referencing the FCC framework or R.I.S.C.E.A.R. specification in academic work:

Suggested Citation

INFORMATION COLLECTIVE, LLC. (2026). FCC Agent Team Extension: A Framework for Multi-Agent Documentation Workflows with R.I.S.C.E.A.R. Persona Specifications (Version 0.5.0) [Software]. GitHub. https://github.com/rollingthunderfourtytwo-afk/l2_fcc_agent_team_ext

BibTeX

```bibtex
@software{fcc_agent_team_2026,
  author       = {{INFORMATION COLLECTIVE, LLC}},
  title        = {{FCC Agent Team Extension: A Framework for Multi-Agent
                   Documentation Workflows with R.I.S.C.E.A.R. Persona
                   Specifications}},
  year         = {2026},
  version      = {0.5.0},
  url          = {https://github.com/rollingthunderfourtytwo-afk/l2_fcc_agent_team_ext},
  license      = {MIT}
}
```

Key Framework Attributes for Reporting

When describing the framework in a methods section, include:

  • Persona count: 102 core + 45 vertical + 23 plugin personas across 20 core categories and 6 vertical packs (170 total; 147 core+vertical)
  • Specification model: 10-component R.I.S.C.E.A.R.
  • Behavioral profiling: 6-trait Discernment Matrix + 6-factor Design Target Factors, each rated on 7 dimensions
  • Dimension profiling: 9 categories, 56 dimensions
  • Workflow graphs: 5-node (base), 20-node (extended), 24-node (complete), 55-node (extended_84)
  • Action types: 6 (scaffold, refactor, debug, test, compare, document)
  • Event types: 25 across 8 categories
  • Quality gates: 25 across all persona categories

Next Steps