Prompt Engineering Guide¶
This guide explains how the FCC framework generates prompts from R.I.S.C.E.A.R. specifications. You will learn how the prompt construction pipeline works, which customization points are available, and how to build multi-persona prompt chains.
The Prompt Construction Pipeline¶
FCC builds prompts through a well-defined pipeline that transforms structured persona data into LLM-ready system prompts. The pipeline is implemented in fcc.simulation.prompts.
Pipeline Overview¶
graph LR
YAML[Persona YAML] --> PS[PersonaSpec]
PS --> RISCEAR[RISCEARSpec]
PS --> DM[Discernment Matrix]
PS --> DTF[Design Target Factors]
PS --> DIM[Dimension Profile]
RISCEAR --> BUILD[build_persona_system_prompt]
DM --> BUILD
DTF --> BUILD
DIM --> BUILD
BUILD --> PROMPT[System Prompt]
PROMPT --> TEMPLATE[PromptTemplate]
TEMPLATE --> MESSAGES[LLM Messages]
Step 1: Load the Persona¶
The pipeline starts with a PersonaSpec loaded from YAML via the PersonaRegistry:
from fcc._resources import get_personas_dir
from fcc.personas.registry import PersonaRegistry
registry = PersonaRegistry.from_yaml_directory(get_personas_dir())
persona = registry.get("RC") # Research Crafter
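A quick sanity check confirms what was loaded; the attribute names here are the same ones the prompt builders use later in this guide:

print(persona.name, persona.role_title)
# Research Crafter Senior Analyst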
Step 2: Build the System Prompt¶
The build_persona_system_prompt() function assembles all available R.I.S.C.E.A.R. components into a structured system prompt. The assembly order is intentional -- it follows the order in which LLMs process instructions most effectively:
- Identity header -- Name, ID, and role title
- Role -- What the persona does (primary behavioral anchor)
- Archetype -- The behavioral model (shapes personality)
- Style -- How the persona communicates (tone and format)
- Responsibilities -- What the persona is accountable for
- Constraints -- What the persona must NOT do (guardrails)
- Expected Output -- What deliverables to produce (success criteria)
- Skills -- Competencies the persona brings (capability context)
- Collaboration -- Who the persona works with (relationship awareness)
- Discernment traits -- Ethical and professional compass
- Design Target Factors -- Behavioral profile characteristics
- Persona Dimensions -- Detailed attribute values
from fcc.simulation.prompts import build_persona_system_prompt
system_prompt = build_persona_system_prompt(persona)
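You can inspect the assembled prompt directly; its opening line matches the identity header shown in the anatomy section below:

print(system_prompt.splitlines()[0])
# You are the Research Crafter (RC), a Senior Analyst.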
Step 3: Wrap in a PromptTemplate¶
The system prompt is wrapped in a PromptTemplate that adds variable substitution and rendering:
from fcc.simulation.prompts import get_prompt_for_persona_spec
template = get_prompt_for_persona_spec(persona)
# The template includes:
# - system_prompt: The full R.I.S.C.E.A.R. prompt
# - user_template: "As the {name}, process the following:\n\n{payload}\n\nContext:\n{history}"
Step 4: Render to Messages¶
The template renders into an LLM-ready message list:
messages = template.to_messages(
payload="Analyze the authentication module",
history="Previous research covered API endpoints",
)
# Returns:
# [
# {"role": "system", "content": "You are the Research Crafter (RC)..."},
# {"role": "user", "content": "As the Research Crafter, process..."},
# ]
Anatomy of a Generated Prompt¶
Here is the structure of a prompt generated for the Research Crafter, annotated with which R.I.S.C.E.A.R. component produces each section:
You are the Research Crafter (RC), a Senior Analyst. ← Identity (name, id, role_title)
## Role ← riscear.role
Senior analyst mapping capabilities and curating research
inventories. Gathers, organizes, and synthesizes all
relevant information at the start of a project.
## Archetype ← riscear.archetype
The Investigator
## Style ← riscear.style
Analytical, structured lists, annotated references,
version-controlled.
## Responsibilities ← riscear.responsibilities
- Curate automatable knowledge bases
- Bridge agent-human information gaps
- Map capabilities for architectural runway planning
- Maintain machine-parseable research inventories
## Constraints ← riscear.constraints
- Relevant to project scope
- No duplication of existing research
- Accessible in shared repository
- Tagged with capability IDs
## Expected Output ← riscear.expected_output
- Capability matrix (features, descriptions, sources)
- Research inventory with annotated references
- Traceability matrix (requirements mapped to evidence)
- Capability tags for aggregation and reporting
## Skills ← riscear.role_skills
- Research methodology and systematic literature review
- Data analysis and synthesis
- Knowledge curation and taxonomy design
## Collaboration ← riscear.role_collaborators
- Delivers research inventory to Blueprint Crafter (BC)
- Provides annotated references to Documentation Evangelist (DE)
## Discernment ← discernment_matrix
- Humility: Acknowledges biases in research methodology
- Curiosity: Explores multiple perspectives and sources
## Design Target Factors ← design_target_factors
- Optimism: Approaches research with constructive framing
- Leadership: Guides research direction for downstream teams
## Persona Dimensions ← dimension_profile
- Agent Profile: Documentation-focused research analyst
- Tool/Resource Adoption: YAML, JSON Schema, Python
Customization Points¶
Custom System Prompts¶
Override the default prompt template for specific personas:
from fcc.simulation.prompts import PromptTemplate, DEFAULT_PERSONA_PROMPTS
# Replace the Research Crafter's default prompt
DEFAULT_PERSONA_PROMPTS["RC"] = PromptTemplate(
system_prompt="You are a research analyst specializing in security...",
user_template="Analyze the security implications of:\n\n{payload}",
model="gpt-4o",
temperature=0.5,
)
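Assuming get_prompt_for_persona_spec() falls back to DEFAULT_PERSONA_PROMPTS when no explicit template is supplied (a reading suggested by the names, not documented behavior), later lookups pick up the override:

template = get_prompt_for_persona_spec(registry.get("RC"))
# template.system_prompt now begins "You are a research analyst..."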
Selective Component Inclusion¶
Build a custom prompt that includes only certain R.I.S.C.E.A.R. components:
def build_minimal_prompt(persona) -> str:
"""Build a prompt using only role, style, and constraints."""
r = persona.riscear
constraints = "\n".join(f"- {c}" for c in r.constraints)
return (
f"You are the {persona.name}, a {persona.role_title}.\n\n"
f"## Role\n{r.role}\n\n"
f"## Style\n{r.style}\n\n"
f"## Constraints\n{constraints}"
)
def build_full_prompt(persona) -> str:
"""Build the complete prompt with all components."""
from fcc.simulation.prompts import build_persona_system_prompt
return build_persona_system_prompt(persona)
This selective approach is useful for ablation studies -- you can measure how removing specific components affects output quality.
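A minimal ablation sketch, assuming the registry from Step 1; before involving an LLM at all, you can compare the two prompt variants directly:

persona = registry.get("RC")
for label, builder in (("minimal", build_minimal_prompt), ("full", build_full_prompt)):
    prompt = builder(persona)
    print(f"{label}: {len(prompt)} characters")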
Temperature and Model Configuration¶
Each PromptTemplate carries model configuration:
template = PromptTemplate(
system_prompt=system_prompt,
user_template="Process:\n\n{payload}",
model="gpt-4o", # Model selection
temperature=0.7, # Sampling temperature
max_tokens=1024, # Response length limit
)
Recommended temperature settings by FCC phase:
| Phase | Temperature | Rationale |
|---|---|---|
| Find | 0.7-0.9 | Encourages exploration and diverse research perspectives |
| Create | 0.5-0.7 | Balances creativity with structural consistency |
| Critique | 0.2-0.4 | Prioritizes precision and consistency in reviews |
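One way to encode this table is a small helper. The phase keys and midpoint values below are illustrative choices, not an FCC API, and the sketch assumes a PromptTemplate exposes its fields as attributes:

# Midpoints of the recommended ranges above (illustrative values).
PHASE_TEMPERATURES = {"find": 0.8, "create": 0.6, "critique": 0.3}

def template_for_phase(persona, phase: str) -> PromptTemplate:
    base = get_prompt_for_persona_spec(persona)
    return PromptTemplate(
        system_prompt=base.system_prompt,
        user_template=base.user_template,
        model=base.model,
        temperature=PHASE_TEMPERATURES[phase],
    )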
Multi-Persona Prompt Chains¶
FCC's cross-reference matrix defines which personas communicate. Use this to build prompt chains where one persona's output feeds the next.
Sequential Chain¶
from fcc.simulation.prompts import get_prompt_for_persona_spec, build_context_from_history
# Step 1: Research Crafter produces research
rc = registry.get("RC")
rc_template = get_prompt_for_persona_spec(rc)
rc_messages = rc_template.to_messages(
payload="API authentication best practices",
history="",
)
# ... send to LLM, get response ...
rc_output = "Research findings about OAuth 2.0, API keys, and JWT tokens..."
# Step 2: Blueprint Crafter consumes research
bc = registry.get("BC")
bc_template = get_prompt_for_persona_spec(bc)
# Build context from the Research Crafter's output
history = [{"actor": "RC", "note": rc_output}]
context = build_context_from_history(history)
bc_messages = bc_template.to_messages(
payload=rc_output,
history=context,
)
Feedback Loop Chain¶
When the Critique phase sends feedback back to the Create phase:
# Step 3: Documentation Evangelist critiques the blueprint
de = registry.get("DE")
de_template = get_prompt_for_persona_spec(de)
# ... get critique output ...
de_output = "Blueprint needs more detail on error handling..."
# Step 4: Blueprint Crafter revises based on feedback
history.append({"actor": "DE", "note": de_output})
context = build_context_from_history(history)
revision_messages = bc_template.to_messages(
payload=f"Revise the blueprint based on this feedback:\n{de_output}",
history=context,
)
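Both chains repeat the same activate-record-pass-on pattern, which you can factor into a helper. Here call_llm is a stand-in for whatever chat-completion client you use, not an FCC API:

def run_persona(persona_id: str, payload: str, history: list) -> str:
    """Render a persona's messages, call the LLM, and record the output."""
    template = get_prompt_for_persona_spec(registry.get(persona_id))
    messages = template.to_messages(
        payload=payload,
        history=build_context_from_history(history),
    )
    output = call_llm(messages)  # hypothetical: your LLM client here
    history.append({"actor": persona_id, "note": output})
    return output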
Variable Extraction¶
Use get_variables() to discover which variables a template requires:
template = get_prompt_for_persona_spec(persona)
variables = template.get_variables()
print(f"Required variables: {variables}")
# Output: Required variables: {'payload', 'history'}
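Because get_variables() returns a set, it also supports a cheap pre-render check; a small defensive sketch:

provided = {"payload": "Analyze the authentication module", "history": ""}
missing = template.get_variables() - provided.keys()
if missing:
    raise ValueError(f"Missing template variables: {missing}")
messages = template.to_messages(**provided)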
Best Practices¶
- Always use the full prompt for production. The minimal prompts are useful for testing and ablation, but the full R.I.S.C.E.A.R. prompt produces the most consistent behavior.
- Include the history context. The {history} variable provides conversation continuity across persona activations. Omitting it degrades multi-step workflows.
- Respect the archetype. Each persona's archetype (Investigator, Architect, Editor, Operator, Guide, etc.) sets the behavioral tone. Custom prompts should not contradict the archetype.
- Use format_ai_response_as_note() to truncate and clean LLM responses before passing them as history to downstream personas. This prevents context window overflow; see the sketch below.
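A minimal sketch of that truncation step applied to the feedback chain above (the import location is an assumption; adjust it to wherever the helper actually lives):

from fcc.simulation.prompts import format_ai_response_as_note

# Clean and truncate the raw critique before it enters shared history.
note = format_ai_response_as_note(de_output)
history.append({"actor": "DE", "note": note})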
Related Resources¶
- Prompt Gallery -- Ready-to-use prompts for all 24 personas
- Domain-Specific Prompt Sets -- Adapted prompts for different contexts
- R.I.S.C.E.A.R. Specification -- The underlying specification format
- Understanding Personas -- How personas are structured