AutoGen Integration¶
This tutorial shows you how to use FCC personas as AutoGen conversable agents. AutoGen's multi-agent conversation model allows agents to talk to each other in structured dialogues, which aligns well with FCC's cross-reference matrix of persona-to-persona interactions.
Prerequisites¶
Install AutoGen (e.g. `pip install pyautogen`) alongside FCC. You will also need an OpenAI API key configured in your environment or an OAI_CONFIG_LIST file.
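If you use a file rather than an environment variable, AutoGen reads OAI_CONFIG_LIST as a JSON array of model entries. The values below are placeholders only:

```json
[
  {
    "model": "gpt-4o",
    "api_key": "sk-your-key-here"
  }
]
```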
The R.I.S.C.E.A.R. to AutoGen Mapping¶
AutoGen's AssistantAgent and ConversableAgent classes configure agent behavior primarily through a system_message. FCC's build_persona_system_prompt() generates exactly this.
| AutoGen Field | FCC Source | Rationale |
|---|---|---|
| `name` | `persona.id` | Short identifier for message routing |
| `system_message` | Full R.I.S.C.E.A.R. prompt | Complete behavioral specification |
| `description` | `persona.role_title` | Human-readable agent description |
| `is_termination_msg` | Derived from `riscear.expected_output` | Detect when deliverables are complete |
Step 1: Create an AutoGen Agent from a Persona¶
```python
from fcc._resources import get_personas_dir
from fcc.personas.registry import PersonaRegistry
from fcc.simulation.prompts import build_persona_system_prompt
from autogen import AssistantAgent, UserProxyAgent

registry = PersonaRegistry.from_yaml_directory(get_personas_dir())

def fcc_to_autogen_agent(persona_id: str, llm_config: dict) -> AssistantAgent:
    """Convert an FCC PersonaSpec into an AutoGen AssistantAgent."""
    persona = registry.get(persona_id)
    system_message = build_persona_system_prompt(persona)
    return AssistantAgent(
        name=persona.id,
        system_message=system_message,
        llm_config=llm_config,
        description=f"{persona.name} - {persona.role_title}",
    )

# LLM configuration
llm_config = {
    "config_list": [{"model": "gpt-4o", "api_key": "your-api-key"}],
    "temperature": 0.7,
}

# Create the Research Crafter agent
rc_agent = fcc_to_autogen_agent("RC", llm_config)
print(f"Created agent: {rc_agent.name}")
```
Step 2: Two-Agent Conversation¶
AutoGen's core pattern is a conversation between agents. Here, a human proxy asks the Research Crafter to perform analysis:
```python
# Create a user proxy (represents the human initiating the workflow)
user_proxy = UserProxyAgent(
    name="User",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=0,
    code_execution_config=False,
)

# Start a conversation
user_proxy.initiate_chat(
    rc_agent,
    message="Research best practices for API rate limiting in microservices architectures.",
)

# Get the response
for msg in rc_agent.chat_messages[user_proxy]:
    if msg["role"] == "assistant":
        print(msg["content"][:500])
```
Step 3: Multi-Agent FCC Pipeline¶
Build a sequential pipeline where each persona processes the output of the previous one, mirroring the FCC Find-Create-Critique cycle:
```python
from autogen import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager

# Create agents for each FCC phase
rc_agent = fcc_to_autogen_agent("RC", llm_config)  # Find
bc_agent = fcc_to_autogen_agent("BC", llm_config)  # Create
de_agent = fcc_to_autogen_agent("DE", llm_config)  # Critique

# User proxy to initiate; content may be None, so guard before the substring check
user_proxy = UserProxyAgent(
    name="Coordinator",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=0,
    code_execution_config=False,
    is_termination_msg=lambda msg: "APPROVED" in (msg.get("content") or ""),
)

# Set up a group chat for the FCC cycle
group_chat = GroupChat(
    agents=[user_proxy, rc_agent, bc_agent, de_agent],
    messages=[],
    max_round=6,
    speaker_selection_method="round_robin",
)
manager = GroupChatManager(groupchat=group_chat, llm_config=llm_config)

# Start the FCC cycle
user_proxy.initiate_chat(
    manager,
    message=(
        "We need documentation for a new OAuth 2.0 integration. "
        "Research Crafter: research the topic. "
        "Blueprint Crafter: create a design document. "
        "Documentation Evangelist: review and critique the output."
    ),
)
```
Step 4: Cross-Reference-Driven Speaker Selection¶
Instead of round-robin, use the cross-reference matrix to determine who speaks next. This respects FCC's defined interaction patterns:
```python
from fcc.personas.cross_reference import CrossReferenceMatrix

matrix = CrossReferenceMatrix.from_yaml(get_personas_dir() / "cross_reference.yaml")

def fcc_speaker_selection(last_speaker, group_chat):
    """Select the next speaker based on FCC cross-reference handoffs."""
    if last_speaker.name == "Coordinator":
        return rc_agent  # Always start with the Research Crafter
    # Find the next handoff target
    handoffs = [
        e for e in matrix.downstream(last_speaker.name)
        if e.relationship_type == "handoff"
    ]
    if not handoffs:
        return None  # No more handoffs; conversation ends
    # Select the first handoff target that is in the group
    agent_names = {a.name for a in group_chat.agents}
    for handoff in handoffs:
        if handoff.target_id in agent_names:
            return next(a for a in group_chat.agents if a.name == handoff.target_id)
    return None

# Use custom speaker selection
group_chat = GroupChat(
    agents=[user_proxy, rc_agent, bc_agent, de_agent],
    messages=[],
    max_round=6,
    speaker_selection_method=fcc_speaker_selection,
)
```
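The routing logic above can be exercised without AutoGen or live FCC data by stubbing the matrix and agents. The sketch below is a minimal stand-alone version; the `Edge` shape (`target_id`, `relationship_type`) mirrors the cross-reference entries used above, but the handoff graph itself is invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Edge:
    target_id: str
    relationship_type: str

@dataclass
class Agent:
    name: str

# Hypothetical handoff graph: RC hands off to BC, BC to DE, DE ends the chain.
# BC also has a feedback edge back to RC, which handoff routing must skip.
HANDOFFS = {
    "RC": [Edge("BC", "handoff")],
    "BC": [Edge("DE", "handoff"), Edge("RC", "feedback")],
    "DE": [],
}

def next_speaker(last: Agent, agents: list[Agent]) -> Optional[Agent]:
    """Pick the first handoff target that is present in the group."""
    if last.name == "Coordinator":
        return next(a for a in agents if a.name == "RC")
    names = {a.name for a in agents}
    for edge in HANDOFFS.get(last.name, []):
        if edge.relationship_type == "handoff" and edge.target_id in names:
            return next(a for a in agents if a.name == edge.target_id)
    return None  # no handoff target: the conversation ends

group = [Agent("Coordinator"), Agent("RC"), Agent("BC"), Agent("DE")]
```

Returning `None` from a speaker-selection function is how AutoGen's `GroupChat` ends the conversation early, so an exhausted handoff chain terminates cleanly rather than looping.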
Step 5: Feedback Loops with AutoGen¶
FCC's feedback relationships (Critique back to Create/Find) map to AutoGen's nested chat patterns:
```python
def create_feedback_loop(source_agent, target_agent, max_rounds=2):
    """Register a nested chat for FCC feedback interactions."""
    feedback_entries = matrix.between(source_agent.name, target_agent.name)
    feedback_entries = [e for e in feedback_entries if e.relationship_type == "feedback"]
    if not feedback_entries:
        return
    interaction_desc = feedback_entries[0].interaction
    source_agent.register_nested_chats(
        [
            {
                "recipient": target_agent,
                "message": (
                    f"Feedback ({interaction_desc}): Please review and address "
                    "the following issues identified in the previous output."
                ),
                "max_turns": max_rounds,
            }
        ],
        trigger=target_agent,
    )

# Register the DE -> BC feedback loop
create_feedback_loop(de_agent, bc_agent)
Advanced: Champion as GroupChatManager¶
FCC champion personas naturally serve as AutoGen `GroupChatManager`s. The champion's `orchestrates` list defines the agents it manages:
```python
# Build a champion-managed group
rchm = registry.get("RCHM")
team_ids = rchm.orchestrates  # ["RC", "CIA", "STE", "RIC"]
team_agents = [fcc_to_autogen_agent(pid, llm_config) for pid in team_ids]

# Create a custom manager with the champion persona
champion_system = build_persona_system_prompt(rchm)
research_group = GroupChat(
    agents=[user_proxy] + team_agents,
    messages=[],
    max_round=len(team_ids) * 2,
)
champion_manager = GroupChatManager(
    groupchat=research_group,
    llm_config=llm_config,
    system_message=champion_system,
)

user_proxy.initiate_chat(
    champion_manager,
    message="Conduct a comprehensive research phase for API gateway documentation.",
)
```
Adding Persona Constraints as Termination Conditions¶
Use R.I.S.C.E.A.R. data, here the role-adoption checklist, to build termination logic:

```python
def build_termination_check(persona_id: str):
    """Build a termination checker from the persona's adoption checklist."""
    persona = registry.get(persona_id)
    checklist = persona.riscear.role_adoption_checklist

    def is_termination_msg(msg):
        content = msg.get("content") or ""
        # Check if the agent has signaled completion explicitly
        if "DELIVERABLE COMPLETE" in content:
            return True
        # Otherwise count how many checklist items are echoed in the message
        completed = sum(1 for item in checklist if item.lower()[:30] in content.lower())
        return completed >= len(checklist) * 0.8  # 80% checklist completion

    return is_termination_msg

# Apply to an agent
rc_agent = AssistantAgent(
    name="RC",
    system_message=build_persona_system_prompt(registry.get("RC")),
    llm_config=llm_config,
    is_termination_msg=build_termination_check("RC"),
)
```
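The 80%-completion heuristic is easy to verify in isolation. The sketch below reproduces it with a stubbed checklist; the items and sample message are invented for illustration, not taken from a real persona:

```python
# Hypothetical adoption checklist standing in for riscear.role_adoption_checklist
CHECKLIST = [
    "Gather primary sources",
    "Summarize key findings",
    "List open questions",
    "Cite all references",
]

def is_complete(content: str, checklist: list[str], threshold: float = 0.8) -> bool:
    """True if the explicit marker appears or enough checklist items are echoed."""
    if "DELIVERABLE COMPLETE" in content:
        return True
    # An item counts as done when its first 30 characters appear in the message
    done = sum(1 for item in checklist if item.lower()[:30] in content.lower())
    return done >= len(checklist) * threshold

# A message that echoes all four items clears the 80% bar
msg = (
    "Gather primary sources: done. Summarize key findings: done. "
    "List open questions: done. Cite all references: done."
)
```

Substring matching is a crude proxy for actual task completion; in production you would likely want the agent to emit a structured status block instead.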
Complete Example: Full 5-Persona AutoGen Workflow¶
```python
from fcc._resources import get_personas_dir
from fcc.personas.registry import PersonaRegistry
from fcc.simulation.prompts import build_persona_system_prompt
from autogen import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager

registry = PersonaRegistry.from_yaml_directory(get_personas_dir())

llm_config = {
    "config_list": [{"model": "gpt-4o", "api_key": "your-api-key"}],
    "temperature": 0.7,
}

# Create all five core agents (fcc_to_autogen_agent is defined in Step 1)
agents = {}
for pid in ["RC", "BC", "DE", "RB", "UG"]:
    agents[pid] = fcc_to_autogen_agent(pid, llm_config)

user_proxy = UserProxyAgent(
    name="Coordinator",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=0,
    code_execution_config=False,
)

group_chat = GroupChat(
    agents=[user_proxy] + list(agents.values()),
    messages=[],
    max_round=10,
    speaker_selection_method="round_robin",
)
manager = GroupChatManager(groupchat=group_chat, llm_config=llm_config)

user_proxy.initiate_chat(
    manager,
    message="Produce complete documentation for a Kubernetes deployment pipeline.",
)
```
Tips for Production Use¶
- Use `ConversableAgent` for governance personas. Governance personas (DGS, PTE, AMS) can be configured with `human_input_mode="ALWAYS"` to require human approval before proceeding.
- Map `coordination` relationships to bidirectional nested chats. When two personas have a coordination relationship in the cross-reference matrix, register nested chats in both directions.
- Use the Discernment Matrix for agent evaluation. After a conversation completes, use the 6 discernment traits as an evaluation rubric for agent output quality.
- Log conversations as FCC traces. Convert AutoGen's `chat_messages` to the FCC trace format for compatibility with the simulation analysis toolchain.
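As a starting point for the last tip, the sketch below flattens an AutoGen-style chat history into a simple list of trace entries. The `{"turn", "speaker", "content"}` schema is a placeholder, not the real FCC trace format, and the sample history is invented:

```python
def chat_history_to_trace(history: list[dict], initiator: str) -> list[dict]:
    """Flatten a chat history into a minimal, ordered trace.

    Assumes each message carries 'content' plus optional 'name'/'role' keys,
    as AutoGen message dicts do; adapt the output schema to the actual FCC
    trace format before feeding it to the analysis toolchain.
    """
    trace = []
    for i, msg in enumerate(history):
        # Prefer the explicit agent name; fall back to the initiator for user turns
        speaker = msg.get("name") or (initiator if msg.get("role") == "user" else "assistant")
        trace.append({"turn": i, "speaker": speaker, "content": msg.get("content") or ""})
    return trace

# Invented two-turn history for demonstration
history = [
    {"role": "user", "content": "Research rate limiting.", "name": "Coordinator"},
    {"role": "assistant", "content": "Findings: ...", "name": "RC"},
]
```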
Related Resources¶
- Integration Overview -- General integration pattern
- LangChain Integration -- Chain-based approach
- CrewAI Integration -- Role-based crew approach
- Cross-Reference Matrix -- Persona interaction data