Frequently Asked Questions

Installation

How do I install FCC?

The simplest way is from PyPI:

pip install fcc-agent-team-ext

For development, clone the repository and install in editable mode:

git clone https://github.com/rollingthunderfourtytwo-afk/l2_fcc_agent_team_ext.git
cd l2_fcc_agent_team_ext
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"

See the Installation guide for full details.

What Python versions are supported?

Python 3.10, 3.11, 3.12, and 3.13. The framework uses modern Python features including union type syntax (str | None), dataclasses with frozen=True, and importlib.resources.files(). Python 3.9 and earlier are not supported.

What are the required dependencies?

The core dependencies (installed automatically) are:

  • pyyaml -- YAML persona and governance data loading
  • jsonschema -- Schema validation for personas, workflows, traces
  • click -- CLI framework
  • jinja2 -- Template rendering for docs-as-code generation
  • anthropic -- Anthropic Claude API client for AI simulations
  • openai -- OpenAI API client for AI simulations
  • python-dotenv -- Environment variable loading from .env files

Do I need API keys to use FCC?

No. The default simulation mode is mock, which generates deterministic responses without any API calls. API keys are only needed for live AI simulations with --no-mock. See AI Provider Configuration for setup instructions.

Can I install FCC in a Docker container?

Yes. A minimal Dockerfile:

FROM python:3.12-slim
RUN pip install fcc-agent-team-ext
CMD ["fcc", "--help"]

For development with the full test suite:

FROM python:3.12-slim
WORKDIR /app
COPY . .
RUN pip install -e ".[dev]"
CMD ["pytest"]

Usage

How do I run a simulation?

Using the CLI:

# Mock mode (default, no API keys needed)
fcc simulate --scenario GEN-001

# Live AI mode (requires ANTHROPIC_API_KEY or OPENAI_API_KEY)
fcc simulate --no-mock --scenario GEN-001

Using the Python API:

from fcc._resources import get_workflows_dir
from fcc.simulation.ai_engine import AISimulationEngine
from fcc.simulation.ai_client import AIClient, AIProvider
from fcc.workflow.graph import WorkflowGraph

graph = WorkflowGraph.from_json(str(get_workflows_dir() / "base_sequence.json"))
client = AIClient(provider=AIProvider.MOCK)
engine = AISimulationEngine(graph, ai_client=client, max_steps=20)
result = engine.run(start_node="RC", scenario_id="GEN-001")
print(f"{result.total_steps} steps, {result.total_ai_calls} AI calls")

How do I generate documentation?

fcc generate-docs --dir docs_output --personas all

This produces 1,348 documentation files (tutorials, prompts, workflows) for all 24 personas. You can filter by category (core, integration, governance, stakeholder, champions) or by a single persona ID (RC, BC, etc.).

How do I use FCC as a library instead of the CLI?

Every CLI command maps to a Python API. Import from the relevant module:

# Load personas
from fcc._resources import get_personas_dir
from fcc.personas.registry import PersonaRegistry
registry = PersonaRegistry.from_yaml_directory(get_personas_dir())

# Query cross-references
from fcc.personas.cross_reference import CrossReferenceMatrix
matrix = CrossReferenceMatrix.from_yaml(get_personas_dir() / "cross_reference.yaml")
upstream = matrix.upstream("BC")

# Load governance data
from fcc._resources import get_governance_dir
from fcc.governance.tags import TagRegistry
tags = TagRegistry.from_yaml(str(get_governance_dir() / "tag_registry.yaml"))

What are the available CLI commands?

  • fcc init -- Scaffold a new FCC project
  • fcc add-persona -- Add a persona to a project
  • fcc validate -- Validate project structure
  • fcc simulate -- Run a workflow simulation
  • fcc generate-docs -- Generate docs-as-code output
  • fcc validate-docs -- Validate generated documentation
  • fcc sitemap -- Generate a SITEMAP.md

Run fcc --help or fcc <command> --help for details. See the CLI Reference for full documentation.

Architecture

Why dataclasses instead of Pydantic?

The framework uses dataclasses with frozen=True for all model classes. The decision factors:

  • Zero extra dependencies. Dataclasses are part of the standard library. Pydantic adds a significant dependency tree and build complexity (especially v2 with its Rust extension).
  • Explicit validation. Each model provides a from_dict() classmethod that validates input explicitly, making the validation logic transparent and testable.
  • Immutability. frozen=True dataclasses are hashable and guarantee that persona specs and workflow nodes cannot be mutated after construction.
  • Simplicity. The data shapes are well-defined YAML/JSON structures validated by JSON Schema at load time. Pydantic's runtime validation is redundant when schema validation already occurs.
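The from_dict() pattern can be sketched as follows. The class and field names here are illustrative, not FCC's actual API; the point is that validation lives in plain, testable Python rather than framework machinery:

```python
from dataclasses import dataclass
from typing import Any


@dataclass(frozen=True)
class WorkflowNode:
    """Illustrative model (not FCC's real class) showing the pattern."""

    node_id: str
    label: str

    @classmethod
    def from_dict(cls, data: dict[str, Any]) -> "WorkflowNode":
        # Explicit, transparent validation: every rule is visible here.
        for key in ("node_id", "label"):
            if key not in data:
                raise ValueError(f"missing required field: {key}")
        if not isinstance(data["node_id"], str):
            raise TypeError("node_id must be a string")
        return cls(node_id=data["node_id"], label=data["label"])


node = WorkflowNode.from_dict({"node_id": "RC", "label": "Example node"})
print(node)
```

Because the class is frozen, attempting to assign to node.node_id afterwards raises a FrozenInstanceError.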

Why Click instead of argparse?

  • Composable commands. Click's @group and @command decorators make it straightforward to add new subcommands without modifying a central argument parser.
  • Automatic help generation. Click produces clean --help output with docstrings, parameter types, and defaults.
  • Testing support. Click's CliRunner enables testing CLI commands without subprocess calls.
  • Type handling. Click's parameter types (Choice, Path, IntRange) catch errors before they reach application code.

Why are there three workflow graphs?

The three graphs serve different use cases:

  • Base (5 nodes, base_sequence.json) -- Core FCC cycle with 5 personas. Best for learning and quick simulations.
  • Extended (20 nodes, extended_sequence.json) -- Adds integration, governance, and stakeholder personas. Represents a full organizational workflow.
  • Complete (24 nodes, complete_24.json) -- Adds champion personas with orchestration edges. Full framework demonstration.

What is R.I.S.C.E.A.R.?

R.I.S.C.E.A.R. is a 10-component persona specification format:

  1. Role -- What the persona does
  2. Input -- What the persona consumes
  3. Style -- How the persona communicates
  4. Constraints -- Boundaries and rules
  5. Expected Output -- What the persona produces
  6. Archetype -- The persona's identity metaphor
  7. Responsibilities -- Ongoing duties
  8. Role Skills -- Technical and domain competencies
  9. Role Collaborators -- Interaction partners
  10. Role Adoption Checklist -- Readiness criteria
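A persona file covering these components might look roughly like the sketch below. This is a hypothetical example: the persona ID, values, and exact field names are invented here, and the authoritative schema is the JSON Schema bundled with the package.

```yaml
# Hypothetical persona sketch -- field names mirror the R.I.S.C.E.A.R.
# components above; consult the bundled personas and schema for the real format.
id: XX
role: Coordinates example reviews across the team
input: Draft documents and review requests
style: Concise and checklist-driven
constraints:
  - Never approves its own output
expected_output: A structured review report
archetype: The Gatekeeper
responsibilities:
  - Track open review items to closure
role_skills:
  - Structured review techniques
role_collaborators:
  - RC
role_adoption_checklist:
  - Review workflow documented and agreed
```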

See the R.I.S.C.E.A.R. Specification for the full definition.

Comparison

How does FCC compare to LangChain?

FCC and LangChain solve different problems:

  • LangChain is a general-purpose LLM orchestration library for building chains, agents, and RAG pipelines. It focuses on connecting language models to tools and data sources.
  • FCC is a persona-driven documentation and workflow framework. It models teams of specialized personas with defined roles, constraints, and interaction patterns. The simulation engine traverses a workflow graph rather than running open-ended agent loops.

FCC is not a replacement for LangChain. If you need a general-purpose LLM toolkit, use LangChain. If you need structured multi-persona documentation workflows with governance and quality gates, use FCC.

How does FCC compare to CrewAI?

CrewAI and FCC both model teams of AI agents, but differ in scope and approach:

  • CrewAI focuses on runtime task execution: agents receive goals, use tools, and collaborate to produce results. It is optimized for dynamic, open-ended tasks.
  • FCC focuses on structured workflows with pre-defined persona specifications. The workflow graph, quality gates, and cross-reference matrix provide formal governance that CrewAI's free-form delegation does not.

FCC's strength is reproducibility and auditability. Every simulation produces a deterministic trace when run in mock mode. CrewAI's strength is flexibility for ad-hoc tasks.

How does FCC compare to AutoGen?

AutoGen (Microsoft) enables multi-agent conversations where agents can write and execute code. FCC differs in that:

  • FCC personas follow a fixed workflow graph rather than engaging in open-ended conversation.
  • FCC emphasizes documentation generation and governance, not code execution.
  • FCC's simulation engine can run entirely deterministically (mock mode) for testing and CI.

Data

Where are the persona YAML files?

Inside the package at src/fcc/data/personas/. When installed from a wheel, they are under <site-packages>/fcc/data/personas/. Use fcc._resources.get_personas_dir() to get the path programmatically:

from fcc._resources import get_personas_dir
print(get_personas_dir())

How do I add a new persona?

  1. Create or edit a YAML file in src/fcc/data/personas/ with the full R.I.S.C.E.A.R. specification.
  2. Add the persona as a node in the appropriate workflow JSON file under src/fcc/data/workflows/.
  3. Optionally add cross-reference entries in src/fcc/data/personas/cross_reference.yaml.
  4. Optionally add a dimension profile in src/fcc/data/personas/dimensions/.
  5. Run fcc validate and fcc generate-docs --personas <ID> to verify.

See the Extension Guide for a complete walkthrough with examples.

How do I use my own data files instead of the bundled ones?

All loader methods accept explicit paths. Pass your custom directory instead of the default:

from fcc.personas.registry import PersonaRegistry

# Load from your own directory
registry = PersonaRegistry.from_yaml_directory("/path/to/my/personas")

Can I export persona specs to other formats?

The persona specs are standard Python dataclasses. You can serialize them to any format:

import json
from dataclasses import asdict
from fcc._resources import get_personas_dir
from fcc.personas.registry import PersonaRegistry

registry = PersonaRegistry.from_yaml_directory(get_personas_dir())
persona = registry.get("RC")
print(json.dumps(asdict(persona), indent=2))
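Since pyyaml is already a core dependency, the same asdict() output can be dumped to YAML. The sketch below uses a stand-in dataclass (not a real FCC persona spec) so it stays self-contained; any frozen dataclass behaves the same way:

```python
from dataclasses import asdict, dataclass

import yaml  # pyyaml is already a core dependency of FCC


# Stand-in for a persona spec dataclass, for illustration only.
@dataclass(frozen=True)
class SpecStub:
    id: str
    role: str


spec = SpecStub(id="RC", role="example")
print(yaml.safe_dump(asdict(spec), sort_keys=False))
```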

Troubleshooting

The fcc command is not found after installation

Make sure the package is installed in your active virtual environment:

which python   # Should point to your venv
pip show fcc-agent-team-ext
fcc --help

If using a system Python without a virtual environment, the script may be installed in a directory not on your PATH. Use python -m fcc.scaffold.cli as an alternative.
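One way to check where console scripts land for the active interpreter is via sysconfig; if fcc is not found, confirm that this directory is on your PATH:

```shell
# Print the interpreter's console-script directory. The `fcc` entry point
# is installed here; add this directory to PATH if the command is missing.
python -c "import sysconfig; print(sysconfig.get_path('scripts'))"
```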

I get ModuleNotFoundError: No module named 'fcc'

The package is either not installed or installed in a different Python environment. Check:

pip list | grep fcc
python -c "import fcc; print(fcc.__version__)"

Tests fail after upgrading

Run make clean-cache to clear stale .pyc files and __pycache__ directories, then re-run:

make clean-cache
make test

If failures persist, check the Changelog for breaking changes and update your test code accordingly.

See the Troubleshooting guide for more detailed solutions.

Next Steps