LangChain Integration¶
This tutorial shows you how to use FCC personas as LangChain agents. You will load a persona from the FCC registry, extract its R.I.S.C.E.A.R. specification, build a system prompt, and create a fully configured LangChain agent. By the end, you will have a working multi-agent pipeline driven by FCC persona definitions.
Prerequisites¶
You will need the FCC package and the LangChain distributions used below (`langchain`, `langchain-openai`, `langchain-core`) installed, as well as an OpenAI API key (or substitute any LangChain-compatible LLM).
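A minimal environment setup sketch (the FCC distribution name is not specified in this tutorial, so only the standard LangChain PyPI packages are shown; the API key value is a placeholder):

```shell
# Install the LangChain packages used in this tutorial
pip install langchain langchain-openai langchain-core

# The agent examples read the OpenAI key from the environment
export OPENAI_API_KEY="sk-..."
```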
Step 1: Load a Persona¶
Start by loading the FCC persona registry and selecting a persona.
```python
from fcc._resources import get_personas_dir
from fcc.personas.registry import PersonaRegistry

registry = PersonaRegistry.from_yaml_directory(get_personas_dir())
rc = registry.get("RC")  # Research Crafter

print(f"Loaded: {rc.name} ({rc.id})")
print(f"Phase: {rc.fcc_phase}")
print(f"Archetype: {rc.riscear.archetype}")
```
The output confirms the persona's name, ID, FCC phase, and R.I.S.C.E.A.R. archetype.
Step 2: Build the System Prompt¶
FCC provides a built-in function that assembles all R.I.S.C.E.A.R. components into a rich system prompt.
```python
from fcc.simulation.prompts import build_persona_system_prompt

system_prompt = build_persona_system_prompt(rc)
print(system_prompt[:300])
```
The generated prompt includes the persona's role, archetype, style, responsibilities, constraints, expected output, skills, collaboration patterns, discernment traits, and design target factors. This gives the LLM comprehensive behavioral guidance.
If you prefer a minimal prompt, you can compose one manually from selected R.I.S.C.E.A.R. fields:
```python
def build_minimal_prompt(persona):
    r = persona.riscear
    constraints = "\n".join(f"- {c}" for c in r.constraints)
    outputs = "\n".join(f"- {o}" for o in r.expected_output)
    return (
        f"You are the {persona.name}, a {persona.role_title}.\n\n"
        f"## Role\n{r.role}\n\n"
        f"## Style\n{r.style}\n\n"
        f"## Constraints\n{constraints}\n\n"
        f"## Expected Output\n{outputs}"
    )
```
Step 3: Create a LangChain Agent¶
Use the system prompt as the LangChain agent's system message.
```python
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# Build the prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", system_prompt),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

# Create the LLM
llm = ChatOpenAI(model="gpt-4o", temperature=0.7)

# Create the agent (no tools in this example)
agent = create_openai_functions_agent(llm, tools=[], prompt=prompt)
executor = AgentExecutor(agent=agent, tools=[], verbose=True)

# Run the agent
result = executor.invoke({"input": "Analyze the authentication patterns in our API"})
print(result["output"])
```
Step 4: Multi-Persona Pipeline¶
The real power of FCC integration comes from chaining multiple personas. Use the cross-reference matrix to determine the pipeline topology.
```python
from fcc.personas.cross_reference import CrossReferenceMatrix

matrix = CrossReferenceMatrix.from_yaml(get_personas_dir() / "cross_reference.yaml")

# Find the downstream handoffs from the Research Crafter
handoffs = [e for e in matrix.downstream("RC") if e.relationship_type == "handoff"]
for h in handoffs:
    print(f"RC -> {h.target_id}: {h.interaction}")
```
Output:

```
RC -> BC: Delivers research inventory; clarifies context
RC -> DE: Provides annotated references for cross-linking
RC -> RB: Flags operational scenarios for automation
RC -> UG: Compiles user pain points for onboarding guides
RC -> TS: Provides requirements for traceability linking
```
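Handoff edges like these form a directed graph, so a pipeline runner can derive a valid execution order with a standard topological sort. A minimal sketch using only the Python standard library (the edge list below is a hand-copied stand-in, not loaded from the FCC matrix):

```python
from graphlib import TopologicalSorter

# Hand-copied handoff edges (source -> target), mirroring the output above
handoffs = [("RC", "BC"), ("RC", "DE"), ("RC", "RB"), ("RC", "UG"), ("RC", "TS")]

# TopologicalSorter expects a mapping of node -> set of predecessors
graph: dict[str, set[str]] = {}
for src, dst in handoffs:
    graph.setdefault(dst, set()).add(src)
    graph.setdefault(src, set())

# Every persona appears after all personas that hand off to it
order = list(TopologicalSorter(graph).static_order())
print(order)  # RC comes first; the order of its targets is unspecified
```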
Now build a two-persona chain where the Research Crafter's output feeds into the Blueprint Crafter:
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# Load both personas
rc = registry.get("RC")
bc = registry.get("BC")
rc_prompt = build_persona_system_prompt(rc)
bc_prompt = build_persona_system_prompt(bc)

# Research Crafter chain
rc_chain = (
    ChatPromptTemplate.from_messages([
        ("system", rc_prompt),
        ("human", "Research the following topic:\n\n{topic}"),
    ])
    | ChatOpenAI(model="gpt-4o", temperature=0.7)
    | StrOutputParser()
)

# Blueprint Crafter chain (receives RC output)
bc_chain = (
    ChatPromptTemplate.from_messages([
        ("system", bc_prompt),
        ("human", "Create a blueprint from this research:\n\n{research}"),
    ])
    | ChatOpenAI(model="gpt-4o", temperature=0.7)
    | StrOutputParser()
)

# Execute the pipeline
research_output = rc_chain.invoke({"topic": "API rate limiting strategies"})
blueprint_output = bc_chain.invoke({"research": research_output})

print("--- Research Output ---")
print(research_output[:500])
print("\n--- Blueprint Output ---")
print(blueprint_output[:500])
```
Step 5: Add Constraints as Validation¶
FCC constraints can serve as post-processing validation. After an agent produces output, check it against the persona's constraint list:
```python
def validate_against_constraints(output: str, persona) -> list[str]:
    """Check output against R.I.S.C.E.A.R. constraints. Returns violations."""
    violations = []
    for constraint in persona.riscear.constraints:
        # In production, use an LLM-based evaluator here.
        # This is a simplified example.
        if "version-controlled" in constraint.lower() and "version" not in output.lower():
            violations.append(f"Constraint may be violated: {constraint}")
    return violations

violations = validate_against_constraints(blueprint_output, bc)
if violations:
    print("Constraint violations detected:")
    for v in violations:
        print(f"  - {v}")
```
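As the comment in the example notes, production validation should use an LLM-based evaluator. One way to keep that unit-testable is to inject the judge as a callable, so tests can substitute a stub for the LLM call. A sketch of that pattern (the judge interface is an assumption for illustration, not an FCC API):

```python
from typing import Callable

def validate_with_judge(
    output: str,
    constraints: list[str],
    judge: Callable[[str, str], bool],
) -> list[str]:
    """Return the constraints the judge flags as violated.

    judge(output, constraint) returns True when the constraint is satisfied;
    in production it would wrap an LLM call, in tests it can be a stub.
    """
    return [c for c in constraints if not judge(output, c)]

# Stub judge: treat a constraint as satisfied if any of its words appear
def keyword_judge(output: str, constraint: str) -> bool:
    return any(word.lower() in output.lower() for word in constraint.split())

violations = validate_with_judge(
    "All blueprints are version-controlled.",
    ["Outputs must be version-controlled", "Cite every source"],
    keyword_judge,
)
print(violations)  # only the citation constraint is flagged
```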
Complete Example: Three-Persona FCC Pipeline¶
This example chains the Research Crafter (Find), Blueprint Crafter (Create), and Documentation Evangelist (Critique) into a full FCC cycle:
```python
from fcc._resources import get_personas_dir
from fcc.personas.registry import PersonaRegistry
from fcc.simulation.prompts import build_persona_system_prompt
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

registry = PersonaRegistry.from_yaml_directory(get_personas_dir())
llm = ChatOpenAI(model="gpt-4o", temperature=0.7)

def make_chain(persona_id, input_label):
    persona = registry.get(persona_id)
    system = build_persona_system_prompt(persona)
    return (
        ChatPromptTemplate.from_messages([
            ("system", system),
            ("human", f"Process this {input_label}:\n\n{{content}}"),
        ])
        | llm
        | StrOutputParser()
    )

# Build the pipeline
find_chain = make_chain("RC", "research request")
create_chain = make_chain("BC", "research inventory")
critique_chain = make_chain("DE", "blueprint for review")

# Execute the full FCC cycle
topic = "Microservice authentication with OAuth 2.0 and API gateways"
research = find_chain.invoke({"content": topic})
blueprint = create_chain.invoke({"content": research})
critique = critique_chain.invoke({"content": blueprint})
print(critique)
```
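The three `invoke` calls can be generalized to any persona sequence by folding the chains left to right. A sketch of that reducer pattern (chain objects are assumed only to expose `.invoke`, matching the LangChain runnable interface; the stub chains below stand in for real LLM-backed ones):

```python
from functools import reduce

def run_pipeline(chains, topic: str) -> str:
    """Feed each chain's output into the next, starting from the topic."""
    return reduce(lambda content, chain: chain.invoke({"content": content}), chains, topic)

# Stub chains that tag their input, standing in for LLM-backed chains
class StubChain:
    def __init__(self, name: str):
        self.name = name

    def invoke(self, inputs: dict) -> str:
        return f"{self.name}({inputs['content']})"

result = run_pipeline([StubChain("RC"), StubChain("BC"), StubChain("DE")], "oauth")
print(result)  # DE(BC(RC(oauth)))
```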
Tips for Production Use¶
- **Use the full system prompt.** The `build_persona_system_prompt()` function includes Discernment Matrix traits and Design Target Factors, which improve behavioral consistency.
- **Map expected_output to output parsers.** The `riscear.expected_output` list tells you what format the agent should produce. Use LangChain structured output parsers to enforce this.
- **Wire cross-references as message routes.** The cross-reference matrix `downstream()` and `upstream()` methods map directly to LangChain's `RunnablePassthrough` and branching patterns.
- **Version-pin your personas.** Lock the FCC package version in production to ensure persona definitions do not change between runs.
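The second tip can be approximated even without a structured output parser: check that each item in `riscear.expected_output` is reflected in the agent's response. A keyword-matching sketch (the expected-output strings here are invented for illustration, not taken from an FCC persona):

```python
def missing_expected_sections(output: str, expected_output: list[str]) -> list[str]:
    """Return expected-output items whose phrase never appears in the output."""
    lowered = output.lower()
    return [item for item in expected_output if item.lower() not in lowered]

expected = ["risk register", "architecture diagram"]
draft = "The blueprint includes an architecture diagram and a rollout plan."
print(missing_expected_sections(draft, expected))  # ['risk register']
```

A structured output parser gives stronger guarantees, but a check like this is a cheap first gate in a pipeline.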
Related Resources¶
- Integration Overview -- General integration pattern
- CrewAI Integration -- Alternative framework with role-based agents
- AutoGen Integration -- Conversable agent approach
- R.I.S.C.E.A.R. Specification -- Full specification reference