Simulation API¶
The fcc.simulation package provides two simulation engines (deterministic
and AI-powered), a multi-provider AI client, and a prompt template system.
```mermaid
sequenceDiagram
    participant User
    participant Engine as SimulationEngine
    participant Persona as PersonaRegistry
    participant AI as AIClient
    participant Bus as EventBus
    participant Trace as MessageHistory
    User->>Engine: run(start_node, payload)
    Engine->>Bus: publish(SIMULATION_STARTED)
    loop BFS traversal
        Engine->>Persona: resolve persona spec
        Engine->>AI: complete(persona prompt)
        AI-->>Engine: AIResponse
        Engine->>Bus: publish(SIMULATION_STEP)
        Engine->>Trace: append event
    end
    Engine->>Bus: publish(SIMULATION_COMPLETED)
    Engine-->>User: AISimulationResult
```
SimulationEngine (Deterministic)¶
The deterministic engine propagates messages through a WorkflowGraph using
BFS. No API keys are required -- it operates entirely in-process and produces
a MessageHistory trace.
```python
from fcc._resources import get_workflows_dir
from fcc.workflow.graph import WorkflowGraph
from fcc.simulation.engine import SimulationEngine

graph = WorkflowGraph.from_json(get_workflows_dir() / "base_sequence.json")
engine = SimulationEngine(graph, max_steps=200, max_history=16)
history = engine.run(start_node="RC", initial_payload="API migration plan")

# Inspect events
for event in history.events:
    print(f"Step {event['step']}: {event['actor']} -- {event['edge_label']}")
```
Constructor Parameters¶
| Parameter | Type | Default | Description |
|---|---|---|---|
| `graph` | `WorkflowGraph` | (required) | The workflow to simulate |
| `max_steps` | `int` | `200` | Maximum BFS iterations before stopping |
| `max_history` | `int` | `16` | Maximum annotation history depth per message |
run()¶
```python
history = engine.run(
    start_node="RC",            # starting node ID
    initial_payload="research", # seed payload string
)
```
Returns a MessageHistory whose .events list contains dicts with the keys
step, timestamp, actor, payload, origin, edge_label,
history_len, and history.
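As a quick illustration of post-processing these records, the sketch below counts steps per actor using only the documented keys. The sample events are hand-built stand-ins, not real engine output:

```python
from collections import Counter

# Hand-built stand-in events using the documented keys (illustration only)
events = [
    {"step": 0, "actor": "Research Crafter", "edge_label": "research"},
    {"step": 1, "actor": "Blueprint Crafter", "edge_label": "draft"},
    {"step": 2, "actor": "Research Crafter", "edge_label": "revise"},
]

# Tally how many steps each actor handled
steps_per_actor = Counter(event["actor"] for event in events)
print(steps_per_actor)  # Counter({'Research Crafter': 2, 'Blueprint Crafter': 1})
```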
run_and_save()¶
Runs the simulation and writes the trace to a JSON file.
```python
history = engine.run_and_save(
    output_path="traces.json",
    start_node="RC",
    initial_payload="research",
)
```
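A saved trace can then be loaded with the standard library. The sketch below writes a stand-in trace file mirroring the event-record shape shown earlier; the actual on-disk schema is defined by run_and_save(), so treat the file contents here as illustrative:

```python
import json
import tempfile
from pathlib import Path

# Stand-in trace file mirroring the event-record shape (illustration only;
# run_and_save() defines the real on-disk schema)
trace_path = Path(tempfile.gettempdir()) / "traces_example.json"
trace_path.write_text(json.dumps([
    {"step": 0, "actor": "Research Crafter", "payload": "research"},
    {"step": 1, "actor": "Blueprint Crafter", "payload": "blueprint"},
]))

# Read the trace back for offline analysis
events = json.loads(trace_path.read_text())
print(len(events))         # 2
print(events[0]["actor"])  # Research Crafter
```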
AISimulationEngine¶
The AI engine extends BFS message passing with LLM calls at each node. When
a PersonaRegistry is provided, it generates rich persona-aware system prompts
from the R.I.S.C.E.A.R. specification.
Running with the Mock Client (No API Key Needed)¶
```python
from fcc._resources import get_workflows_dir, get_personas_dir
from fcc.personas.registry import PersonaRegistry
from fcc.workflow.graph import WorkflowGraph
from fcc.simulation.ai_client import AIClient, AIProvider
from fcc.simulation.ai_engine import AISimulationEngine

registry = PersonaRegistry.from_yaml_directory(get_personas_dir())
graph = WorkflowGraph.from_json(get_workflows_dir() / "base_sequence.json")
client = AIClient(provider=AIProvider.MOCK)

engine = AISimulationEngine(
    graph=graph,
    ai_client=client,
    max_steps=50,
    max_history=8,
    registry=registry,
)

result = engine.run(
    start_node="RC",
    initial_payload="Evaluate cloud migration strategy",
    scenario_id="DEMO-001",
    scenario_name="Cloud migration evaluation",
)

print(f"Success: {result.success}")
print(f"Steps: {result.total_steps}")
print(f"AI calls: {result.total_ai_calls}")
print(f"Total tokens: {result.total_tokens}")
print(f"Duration: {result.duration_seconds:.2f}s")

# Inspect individual events
for event in result.events:
    print(f"  Step {event['step']}: {event['actor']} ({event['actor_id']})")
    if "ai_response" in event:
        ai = event["ai_response"]
        print(f"    Model: {ai['model']}, Tokens: {ai['usage']['total_tokens']}")
```
Constructor Parameters¶
| Parameter | Type | Default | Description |
|---|---|---|---|
| `graph` | `WorkflowGraph` | (required) | The workflow to simulate |
| `ai_client` | `AIClient` or `None` | `None` (Mock) | AI provider client |
| `max_steps` | `int` | `50` | Maximum BFS iterations |
| `max_history` | `int` | `8` | Maximum message history depth |
| `registry` | `PersonaRegistry` or `None` | `None` | Enables rich persona prompts |
AISimulationResult¶
The run() method returns an AISimulationResult dataclass:
| Field | Type | Description |
|---|---|---|
| `scenario_id` | `str` | Scenario identifier |
| `scenario_name` | `str` | Human-readable name |
| `success` | `bool` | Whether the simulation completed without error |
| `events` | `list[dict]` | Per-step event records |
| `total_steps` | `int` | Number of BFS steps executed |
| `total_ai_calls` | `int` | Number of LLM API calls made |
| `total_tokens` | `int` | Cumulative token usage |
| `total_latency_ms` | `float` | Cumulative API latency in milliseconds |
| `duration_seconds` | `float` | Wall-clock duration |
| `error` | `str` or `None` | Error message if the simulation failed |
Writing Traces¶
```python
from fcc.simulation.ai_engine import write_ai_traces

# Write one or more simulation results to JSON
write_ai_traces([result], "traces_ai.json")
```
AIClient¶
The unified AIClient provides auto-detection of available providers
(Anthropic, OpenAI, Azure OpenAI) with fallback to a mock client, and
optional disk-based caching.
Provider Auto-Detection¶
```python
from fcc.simulation.ai_client import AIClient

# Auto-detect from environment variables:
#   ANTHROPIC_API_KEY                            -> Anthropic
#   OPENAI_API_KEY                               -> OpenAI
#   AZURE_OPENAI_API_KEY + AZURE_OPENAI_ENDPOINT -> Azure
#   (none)                                       -> Mock
client = AIClient()
print(client.is_available())
```
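The detection logic can be sketched as a pure function over the environment. This is a simplified stand-in, not the AIClient implementation, and it assumes the order listed above is also the precedence when several keys are present:

```python
def detect_provider(env: dict[str, str]) -> str:
    """Simplified stand-in for provider auto-detection (not AIClient
    internals). Assumes the listed order is the precedence."""
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    if env.get("OPENAI_API_KEY"):
        return "openai"
    # Azure requires both a key and an endpoint
    if env.get("AZURE_OPENAI_API_KEY") and env.get("AZURE_OPENAI_ENDPOINT"):
        return "azure"
    return "mock"

print(detect_provider({"OPENAI_API_KEY": "sk-..."}))  # openai
print(detect_provider({}))                            # mock
```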
Explicit Provider Selection¶
```python
from fcc.simulation.ai_client import AIClient, AIProvider

# Force a specific provider
client = AIClient(provider=AIProvider.ANTHROPIC)
client = AIClient(provider=AIProvider.OPENAI)
client = AIClient(provider=AIProvider.MOCK)
```
Caching¶
```python
from fcc.simulation.ai_client import AIClient

# Enable on-disk caching (avoids duplicate API calls)
client = AIClient(use_cache=True, cache_dir=".ai_cache")
```
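To see why identical requests avoid duplicate API calls, the sketch below shows a content-addressed cache key of the kind response caches typically use. This is an illustration of the general technique, not the actual AIClient cache scheme:

```python
import hashlib
import json

def cache_key(messages: list[dict], model: str, temperature: float) -> str:
    """Illustrative content-addressed cache key (a sketch of how on-disk
    response caches commonly work, not the actual AIClient scheme)."""
    # Serialize the request deterministically, then hash it
    blob = json.dumps(
        {"messages": messages, "model": model, "temperature": temperature},
        sort_keys=True,
    )
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()

msgs = [{"role": "user", "content": "Summarize cloud migration risks."}]
key1 = cache_key(msgs, "gpt-4o", 0.7)
key2 = cache_key(msgs, "gpt-4o", 0.7)
print(key1 == key2)  # True: identical requests map to the same cache entry
```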
Making Calls¶
```python
from fcc.simulation.ai_client import AIClient, AIProvider

client = AIClient(provider=AIProvider.MOCK)

# Full message-list call
response = client.complete(
    messages=[
        {"role": "system", "content": "You are a research analyst."},
        {"role": "user", "content": "Summarize cloud migration risks."},
    ],
    model="gpt-4o",
    temperature=0.7,
    max_tokens=1024,
)
print(response.content)
print(response.model)
print(response.provider)  # AIProvider.MOCK
print(response.usage)     # {"prompt_tokens": 50, "completion_tokens": 100, "total_tokens": 150}
print(response.latency_ms)

# Simplified call (system + user)
response = client.complete_simple(
    system_prompt="You are a research analyst.",
    user_message="Summarize cloud migration risks.",
)
print(response.content)
```
AIResponse¶
| Field | Type | Description |
|---|---|---|
| `content` | `str` | The response text |
| `model` | `str` | Model identifier used |
| `provider` | `AIProvider` | Which provider was used |
| `usage` | `dict[str, int]` | Token counts (`prompt_tokens`, `completion_tokens`, `total_tokens`) |
| `finish_reason` | `str` | Completion reason (default `"stop"`) |
| `cached` | `bool` | Whether the response was served from cache |
| `latency_ms` | `float` | API call latency in milliseconds |
Prompt Templates¶
The prompt system maps persona IDs to renderable templates and can generate rich system prompts from the full R.I.S.C.E.A.R. specification.
Default Prompt Templates¶
FCC ships with default prompts for the five core personas (RC, BC, DE, RB, UG); any unknown persona ID falls back to a generic template.
```python
from fcc.simulation.prompts import get_prompt_for_persona

template = get_prompt_for_persona("RC")
print(template.system_prompt)  # "You are the Research Crafter ..."
print(template.user_template)  # "Research and analyze the following: ..."
print(template.model)          # "gpt-4o"
print(template.temperature)    # 0.7
print(template.max_tokens)     # 1024

# Render to messages
messages = template.to_messages(
    payload="Evaluate microservices architecture",
    history="(No prior context)",
    actor="Research Crafter",
    context="Project kickoff",
)
for msg in messages:
    print(f"[{msg['role']}] {msg['content'][:80]}...")
```
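The unknown-ID fallback described above amounts to a lookup with a default. The sketch below is a stand-in for illustration only, with invented prompt strings, not the fcc implementation:

```python
# Stand-in registry of per-persona prompts (illustration only;
# the prompt strings here are invented placeholders)
DEFAULT_PROMPTS = {
    "RC": "You are the Research Crafter ...",
    "BC": "You are the Blueprint Crafter ...",
}
GENERIC_PROMPT = "You are a workflow specialist ..."

def prompt_for(persona_id: str) -> str:
    # Unknown persona IDs fall back to the generic template
    return DEFAULT_PROMPTS.get(persona_id, GENERIC_PROMPT)

print(prompt_for("RC"))   # persona-specific prompt
print(prompt_for("XYZ"))  # generic fallback
```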
Rich Prompts from PersonaSpec¶
When you have a full PersonaSpec (loaded from the registry), you can
generate a system prompt that incorporates all 10 R.I.S.C.E.A.R. components,
discernment traits, design target factors, and dimension profiles.
```python
from fcc._resources import get_personas_dir
from fcc.personas.registry import PersonaRegistry
from fcc.simulation.prompts import (
    build_persona_system_prompt,
    get_prompt_for_persona_spec,
)

registry = PersonaRegistry.from_yaml_directory(get_personas_dir())
rc = registry.get("RC")

# Full system prompt as a string
system_prompt = build_persona_system_prompt(rc)
print(system_prompt[:200])

# Full PromptTemplate from any PersonaSpec
template = get_prompt_for_persona_spec(rc)
messages = template.to_messages(payload="Evaluate API design", history="")
```
PromptTemplate Details¶
```python
from fcc.simulation.prompts import PromptTemplate

template = PromptTemplate(
    system_prompt="You are {actor}, a workflow specialist.",
    user_template="Process this: {payload}\n\nContext: {context}",
    model="gpt-4o",
    temperature=0.5,
    max_tokens=2048,
)

# Discover template variables
print(template.get_variables())  # {"actor", "payload", "context"}

# Render to a dict
rendered = template.render(
    actor="Research Crafter",
    payload="migration plan",
    context="sprint 3",
)
print(rendered["system"])  # "You are Research Crafter, a workflow specialist."
print(rendered["user"])    # "Process this: migration plan\n\nContext: sprint 3"

# Render to a message list ready for an AI API
messages = template.to_messages(
    actor="Research Crafter",
    payload="migration plan",
    context="sprint 3",
)
```
History Helpers¶
```python
from fcc.simulation.prompts import (
    build_context_from_history,
    format_ai_response_as_note,
)

# Build context string from message history entries
history = [
    {"actor": "Research Crafter", "note": "Identified 5 risk areas."},
    {"actor": "Blueprint Crafter", "note": "Drafted architecture blueprint."},
]
context = build_context_from_history(history, max_entries=5)
print(context)
# - [Research Crafter]: Identified 5 risk areas.
# - [Blueprint Crafter]: Drafted architecture blueprint.

# Truncate and clean AI response for storage
note = format_ai_response_as_note(
    "Very long AI response text " * 50,
    max_length=500,
)
print(len(note) <= 503)  # True (500 + "...")
```
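The truncation behavior shown above can be sketched as follows. This stand-in mirrors the documented contract (at most max_length characters plus a "..." marker) but is not the fcc implementation; the whitespace cleanup step is an assumption:

```python
def truncate_note(text: str, max_length: int = 500) -> str:
    """Stand-in mirroring the truncation contract above (illustration only,
    not the fcc implementation): cap at max_length and append "..." when
    the text was cut. The whitespace collapsing is an assumed detail."""
    cleaned = " ".join(text.split())  # collapse runs of whitespace
    if len(cleaned) <= max_length:
        return cleaned
    return cleaned[:max_length] + "..."

note = truncate_note("Very long AI response text " * 50, max_length=500)
print(len(note))  # 503 (500 + "...")
```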