# FCC as a Research Instrument
How to use the FCC Agent Team Framework as a research methodology tool for structured inquiry, multi-perspective analysis, and collaborative knowledge construction.
## Conceptual Foundation
The Find-Create-Critique cycle maps naturally to the scientific method:
| Scientific Method | FCC Phase | Activities |
|---|---|---|
| Observation | FIND | Literature review, data collection |
| Hypothesis formation | CREATE | Theory building, model design |
| Experimentation | CREATE | Implementation, data generation |
| Analysis | CRITIQUE | Evaluation, statistical testing |
| Peer review | CRITIQUE | Expert review, feedback incorporation |
| Iteration | REFINE | Revise based on critique |
| Publication | DELIVER | Final output, dissemination |
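The cycle in the table above can be sketched as a small state machine. The phase names come from the FCC mapping; the `Phase` enum and the critique/refine loop below are illustrative, not part of the FCC API.

```python
from enum import Enum

class Phase(Enum):
    FIND = "find"
    CREATE = "create"
    CRITIQUE = "critique"
    REFINE = "refine"
    DELIVER = "deliver"

def run_cycle(max_refinements: int = 2) -> list[Phase]:
    """One pass through the cycle; CRITIQUE loops back through REFINE
    before the work product is ready to DELIVER."""
    trace = [Phase.FIND, Phase.CREATE, Phase.CRITIQUE]
    for _ in range(max_refinements):
        trace.append(Phase.REFINE)
        trace.append(Phase.CRITIQUE)
    trace.append(Phase.DELIVER)
    return trace
```

With `max_refinements=1`, the trace mirrors a single revise-and-resubmit round: FIND, CREATE, CRITIQUE, REFINE, CRITIQUE, DELIVER.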
## Multi-Perspective Analysis

### Using Personas for Research Perspectives
FCC personas can represent different research perspectives on the same problem. This enables structured multi-perspective analysis:
```python
from fcc.personas.registry import PersonaRegistry
from fcc.simulation.engine import SimulationEngine

registry = PersonaRegistry.from_package_data()

# Select personas representing different research perspectives
perspectives = {
    "empirical": "research_catalyst",        # Data-driven perspective
    "theoretical": "domain_expert",          # Theory-driven perspective
    "methodological": "build_champion",      # Method-focused perspective
    "ethical": "responsible_ai_guardian",    # Ethics-focused perspective
}

for name, persona_id in perspectives.items():
    persona = registry.get(persona_id)
    print(f"\n{name.upper()} perspective ({persona.name}):")
    print(f"  Role: {persona.riscear.role}")
    print(f"  Style: {persona.riscear.style}")
    print(f"  Constraints: {persona.riscear.constraints}")
```
### Structured Disagreement
Use the cross-reference matrix to identify natural tension points between perspectives:
```python
from fcc.personas.cross_reference import CrossReferenceMatrix

matrix = CrossReferenceMatrix.from_package_data()

# Find feedback relationships (bidirectional critique)
entry = matrix.get("research_catalyst")
feedback_partners = entry.by_type.get("feedback", [])
print(f"Research Catalyst receives feedback from: {feedback_partners}")
```
## Research Design Patterns

### Pattern 1: Systematic Review Pipeline

```text
FIND:     Research Catalyst -> Competitive Intelligence Analyst
CREATE:   Build Champion (synthesize findings)
CRITIQUE: Domain Expert -> Governance Auditor (quality check)
DELIVER:  Documentation Generator (write-up)
```
Use this pattern for literature reviews and meta-analyses.
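The pipeline above can be expressed as ordered (phase, personas) stages. The persona IDs below follow the registry naming used elsewhere on this page; the stage structure itself is an illustrative sketch, not an FCC API.

```python
# Systematic review pipeline as ordered (phase, personas) stages.
# Persona IDs are assumed to match the registry naming convention
# (lowercase with underscores) shown earlier on this page.
pipeline = [
    ("FIND", ["research_catalyst", "competitive_intelligence_analyst"]),
    ("CREATE", ["build_champion"]),
    ("CRITIQUE", ["domain_expert", "governance_auditor"]),
    ("DELIVER", ["documentation_generator"]),
]

for phase, personas in pipeline:
    print(f"{phase}: {' -> '.join(personas)}")
```

Encoding the pipeline as data makes it easy to swap personas per project without changing the driver loop.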
### Pattern 2: Mixed Methods Research

```text
FIND:     Research Catalyst (qualitative) + Data Engineer (quantitative)
CREATE:   Build Champion (integrate findings)
CRITIQUE: Domain Expert (methodological review)
          + Privacy Tech Evaluator (ethical review)
DELIVER:  Documentation Generator (final report)
```
Use this pattern when combining qualitative and quantitative methods.
### Pattern 3: Action Research

```text
Cycle 1:
  FIND:     Observe current practices
  CREATE:   Design intervention
  CRITIQUE: Evaluate with stakeholders
  REFINE:   Adjust intervention

Cycle 2:
  FIND:     Observe effects of intervention
  CREATE:   Refine intervention
  CRITIQUE: Evaluate outcomes
  DELIVER:  Document findings
```
Use this pattern for participatory and action research.
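The repeated cycles can be sketched as nested phase lists, with only the final cycle ending in DELIVER. This is an illustrative flattening of the pattern, not an FCC API.

```python
# Two action-research cycles as phase lists; only the final cycle
# ends in DELIVER instead of REFINE. Illustrative sketch only.
cycles = [
    ["FIND", "CREATE", "CRITIQUE", "REFINE"],
    ["FIND", "CREATE", "CRITIQUE", "DELIVER"],
]

def run_cycles(cycles: list[list[str]]) -> list[str]:
    """Flatten the cycles into an ordered execution log."""
    return [
        f"cycle {i}: {phase}"
        for i, phases in enumerate(cycles, start=1)
        for phase in phases
    ]
```

Appending more cycles extends the study without changing the driver; only the last cycle's terminal phase changes.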
### Pattern 4: Grounded Theory

```text
Open Coding:      FIND phase with Research Catalyst
Axial Coding:     CREATE phase with Build Champion
Selective Coding: CRITIQUE phase with Domain Expert
Theory Building:  Knowledge graph construction
```
Use this pattern for qualitative research with emergent theory.
## Knowledge Graph for Research

### Modeling Research Concepts
```python
from fcc.knowledge.graph import KnowledgeGraph

kg = KnowledgeGraph()

# Research question
kg.add_node("rq1", node_type="CONCEPT", metadata={
    "type": "research_question",
    "text": "How does X affect Y?",
})

# Hypotheses
kg.add_node("h1", node_type="CONCEPT", metadata={
    "type": "hypothesis",
    "text": "X positively correlates with Y",
})
kg.add_edge("rq1", "h1", edge_type="DECOMPOSES_TO")

# Methods
kg.add_node("m1", node_type="METHOD", metadata={
    "name": "Survey analysis",
    "sample_size": 500,
})
kg.add_edge("h1", "m1", edge_type="TESTED_BY")

# Findings
kg.add_node("f1", node_type="RESULT", metadata={
    "text": "Significant positive correlation (p < 0.01)",
})
kg.add_edge("m1", "f1", edge_type="PRODUCES")
kg.add_edge("f1", "h1", edge_type="SUPPORTS")
```
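A typical query over such a graph is tracing which hypotheses a finding supports. The sketch below works over a plain edge list mirroring the graph built above, since the `KnowledgeGraph` query API is not assumed here.

```python
# Edges mirror the graph above: (source, edge_type, target).
edges = [
    ("rq1", "DECOMPOSES_TO", "h1"),
    ("h1", "TESTED_BY", "m1"),
    ("m1", "PRODUCES", "f1"),
    ("f1", "SUPPORTS", "h1"),
]

def supported_hypotheses(edges: list[tuple[str, str, str]],
                         finding: str) -> list[str]:
    """Return the hypotheses that a given finding supports."""
    return [dst for src, etype, dst in edges
            if src == finding and etype == "SUPPORTS"]
```

For example, `supported_hypotheses(edges, "f1")` returns `["h1"]`, tying the survey result back to the hypothesis it confirms.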
### Exporting for Publication
Export your research knowledge graph for supplementary materials:
```python
from fcc.knowledge.serializers import JSONLDSerializer

serializer = JSONLDSerializer()
jsonld = serializer.serialize(kg)

# Include in supplementary materials
with open("supplementary/knowledge_graph.jsonld", "w") as f:
    f.write(jsonld)
```
## Collaboration for Peer Review

### Setting Up a Review Session
```python
from fcc.collaboration.engine import CollaborationEngine
from fcc.collaboration.models import SessionConfig

config = SessionConfig(
    title="Manuscript Review - Study on X and Y",
    personas=["domain_expert", "research_catalyst", "responsible_ai_guardian"],
    approval_threshold=0.8,
)

engine = CollaborationEngine()
session = engine.create_session(config)

# Add reviewer comments as turns
engine.add_turn(
    session.id,
    persona_id="domain_expert",
    content="Methodology is sound but sample size justification is weak.",
)
engine.add_turn(
    session.id,
    persona_id="responsible_ai_guardian",
    content="Ethical implications section needs expansion.",
)
```
## Metrics for Research Quality
Use the observability layer to track research process metrics:
```python
from fcc.observability.metrics import FccMetrics

metrics = FccMetrics()

# Track research process metrics
metrics.record("papers.reviewed", 45)
metrics.record("hypotheses.tested", 3)
metrics.record("methods.applied", 2)
metrics.record("iterations.completed", 4)
metrics.record("review.score", 0.85)
```
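Recorded metrics can then be combined into a single process-quality indicator. The weighting and normalization below are illustrative choices, not part of the FCC metrics API, and would be tuned per project.

```python
# Combine recorded process metrics into one illustrative quality score.
# Weights and the nominal iteration maximum are arbitrary assumptions.
weights = {"review.score": 0.6, "iterations.completed": 0.4}
values = {"review.score": 0.85, "iterations.completed": 4}

# Normalize iteration count to [0, 1] against a nominal maximum of 5.
normalized = {
    "review.score": values["review.score"],
    "iterations.completed": min(values["iterations.completed"] / 5, 1.0),
}
quality = sum(weights[k] * normalized[k] for k in weights)
```

With the sample values above, the composite score is 0.6 * 0.85 + 0.4 * 0.8 = 0.83.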
## Citing FCC in Publications
When using FCC as a research instrument, cite it as:
```text
FCC Agent Team Framework (v1.0.1). INFORMATION COLLECTIVE, LLC.
https://github.com/rollingthunderfourtytwo-afk/l2_fcc_agent_team_ext
```
## Related Resources
- FAIR Workflow -- FAIR compliance
- Literature Review Agents -- Automated reviews
- Reproducibility Guide -- Reproducible workflows
- Notebook 16_knowledge_graphs.ipynb -- Knowledge graph construction
- Notebook 17_rag_pipeline.ipynb -- RAG pipeline for research
- Demo open_science -- Open science workflow demonstration