Integration Patterns Guide¶
This guide describes patterns for integrating FCC with external tools, frameworks, CI/CD systems, observability platforms, and deployment environments.
LangChain Integration¶
FCC personas can serve as specialized agents within a LangChain pipeline. Map each persona's R.I.S.C.E.A.R. specification to a LangChain agent configuration.
Persona as LangChain Agent¶
from langchain.agents import AgentExecutor
from langchain.chat_models import ChatOpenAI
from fcc.personas.registry import PersonaRegistry
from fcc._resources import get_personas_dir
registry = PersonaRegistry.from_yaml_directory(get_personas_dir())
persona = registry.get("RC")
# Build system prompt from R.I.S.C.E.A.R.
system_prompt = f"""You are {persona.name}.
Role: {persona.riscear.role}
Style: {persona.riscear.style}
Constraints: {'; '.join(persona.riscear.constraints)}
Expected Output: {persona.riscear.expected_output}
"""
llm = ChatOpenAI(model="gpt-4", temperature=0.7)
# Use system_prompt as the agent's system message
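One way to finish the wiring, sketched with `(role, content)` tuples; the exact message format accepted by `invoke` depends on your LangChain version, and the system prompt here is a stand-in for the one built above:

```python
# Stand-in for the system_prompt built from R.I.S.C.E.A.R. above (illustrative)
system_prompt = "You are RC.\nRole: Research Coordinator"

# Recent LangChain chat models accept (role, content) tuples as messages
messages = [
    ("system", system_prompt),
    ("human", "Summarize the open research questions."),
]
# reply = llm.invoke(messages)  # llm from the snippet above
```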
FCC Workflow as LangChain Chain¶
Map FCC workflow nodes to LangChain chain steps. Each node in the workflow graph becomes a step in the chain, with the persona's R.I.S.C.E.A.R. specification driving the prompt at each step.
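The node-to-step mapping can be sketched without any framework dependency: each node renders its persona fields into the prompt that drives one chain step. The `WorkflowStep` fields and the personas below are illustrative assumptions, not FCC's actual workflow API:

```python
from dataclasses import dataclass

# Illustrative stand-in for an FCC workflow node (field names are assumptions)
@dataclass
class WorkflowStep:
    persona: str
    role: str
    action: str

def step_prompt(step: WorkflowStep) -> str:
    """Render one workflow node as the prompt for a chain step."""
    return f"You are {step.persona}.\nRole: {step.role}\nTask: {step.action}"

# A two-step Find -> Create chain
workflow = [
    WorkflowStep("RC", "Research Coordinator", "Find relevant prior work"),
    WorkflowStep("AR", "Architect", "Create a design proposal"),
]
prompts = [step_prompt(s) for s in workflow]
```

Each rendered prompt would then be passed to a LangChain chain step (an `LLMChain` or a `Runnable`, depending on version) in sequence.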
Knowledge Graph as LangChain Tool¶
Expose the FCC knowledge graph as a LangChain tool for structured entity lookup. A plain function like the one below can be registered via LangChain's `Tool` wrapper (name, func, description) so an agent can call it:
from fcc.knowledge.builders import build_full_fcc_graph
graph = build_full_fcc_graph(persona_registry=registry)
def fcc_graph_lookup(query: str) -> str:
"""Look up FCC entities in the knowledge graph."""
results = []
for node in graph.all_nodes():
if query.lower() in node.label.lower():
results.append(f"{node.node_type.value}: {node.label}")
return "\n".join(results[:10]) if results else "No results found"
CrewAI Integration¶
FCC workflows map naturally to CrewAI crews, with personas as agents and workflow actions as tasks.
Mapping FCC to CrewAI¶
| FCC Concept | CrewAI Concept | How to Map |
|---|---|---|
| PersonaSpec | Agent | Use R.I.S.C.E.A.R. for role, goal, backstory |
| WorkflowAction | Task | Use action description and execution_steps |
| FCC Phase (Find/Create/Critique) | Process stages | Group tasks by phase |
| CrossReferenceMatrix | Agent interactions | Define collaboration rules |
Example¶
from crewai import Agent, Task, Crew
from fcc.personas.registry import PersonaRegistry
from fcc._resources import get_personas_dir
registry = PersonaRegistry.from_yaml_directory(get_personas_dir())
# Build CrewAI agents from FCC personas
agents = []
for persona in registry.by_category("core"):
agent = Agent(
role=persona.riscear.role,
goal=persona.riscear.expected_output,
backstory=persona.riscear.archetype,
)
agents.append(agent)
CI/CD Integration¶
GitHub Actions¶
# .github/workflows/fcc-validate.yml
name: FCC Validation
on: [push, pull_request]
jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install FCC
run: pip install -e ".[dev]"
- name: Validate personas
run: fcc validate --dir src/fcc/data/personas/
- name: Run tests
run: pytest tests/ -v --cov=src/fcc --cov-fail-under=99
- name: Lint
run: ruff check src/ tests/
- name: Generate docs
run: fcc generate-docs --dir /tmp/docs --personas all
GitLab CI¶
# .gitlab-ci.yml
stages:
- validate
- test
- docs
validate-personas:
stage: validate
image: python:3.12
script:
- pip install -e ".[dev]"
- fcc validate --dir src/fcc/data/personas/
test:
stage: test
image: python:3.12
script:
- pip install -e ".[dev]"
- pytest tests/ -v --cov=src/fcc --cov-report=xml
artifacts:
reports:
coverage_report:
coverage_format: cobertura
path: coverage.xml
generate-docs:
stage: docs
image: python:3.12
script:
- pip install -e ".[dev]"
- fcc generate-docs --dir public/ --personas all
artifacts:
paths:
- public/
Pre-commit Hook¶
# .pre-commit-config.yaml
repos:
- repo: local
hooks:
- id: fcc-validate
name: Validate FCC Personas
entry: fcc validate --dir src/fcc/data/personas/
language: python
pass_filenames: false
files: "src/fcc/data/personas/.*\\.yaml$"
Observability Platforms¶
OpenTelemetry¶
FCC includes optional OpenTelemetry integration. Enable it by installing opentelemetry-api and opentelemetry-sdk:
from fcc.observability.integration import (
instrument_simulation_engine,
instrument_action_engine,
)
from fcc.observability.tracing import FccTracer
from fcc.observability.exporters import JsonFileSpanExporter
tracer = FccTracer(service_name="fcc-production")
# Instrument engines
instrument_simulation_engine(tracer)
instrument_action_engine(tracer)
# Export spans to JSON
exporter = JsonFileSpanExporter(output_dir="/var/log/fcc/spans/")
Datadog¶
Forward FCC telemetry to Datadog via the OpenTelemetry Collector (the example below routes traces; add a metrics pipeline analogously):
# otel-collector-config.yaml
receivers:
otlp:
protocols:
grpc:
endpoint: "0.0.0.0:4317"
exporters:
datadog:
api:
key: "${DD_API_KEY}"
service:
pipelines:
traces:
receivers: [otlp]
exporters: [datadog]
Prometheus¶
Expose FCC metrics as Prometheus-compatible endpoints:
from fcc.observability.metrics import FccMetrics
metrics = FccMetrics()
metrics.record("simulation_duration_ms", 1250.0)
metrics.record("personas_loaded", 102)
# Export metrics snapshot
snapshot = metrics.snapshot()
# Convert to Prometheus format in your metrics server
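A small helper can do that conversion; this is a sketch assuming `snapshot()` yields a flat name-to-value mapping, and it emits everything as a gauge for simplicity:

```python
def to_prometheus_text(snapshot: dict[str, float], prefix: str = "fcc") -> str:
    """Render a flat name -> value snapshot in Prometheus text exposition format."""
    lines = []
    for name, value in sorted(snapshot.items()):
        metric = f"{prefix}_{name}"
        lines.append(f"# TYPE {metric} gauge")  # gauge as a catch-all for this sketch
        lines.append(f"{metric} {value}")
    return "\n".join(lines) + "\n"

print(to_prometheus_text({"personas_loaded": 102, "simulation_duration_ms": 1250.0}))
```

Serve the returned string from a `/metrics` HTTP endpoint and point a Prometheus scrape job at it.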
IDE Integration¶
VS Code Tasks¶
// .vscode/tasks.json
{
"version": "2.0.0",
"tasks": [
{
"label": "FCC: Validate Personas",
"type": "shell",
"command": "fcc validate --dir src/fcc/data/personas/",
"group": "build",
"problemMatcher": []
},
{
"label": "FCC: Run Tests",
"type": "shell",
"command": "pytest tests/ -v --tb=short",
"group": "test"
},
{
"label": "FCC: Generate Docs",
"type": "shell",
"command": "fcc generate-docs --dir docs/ --personas all",
"group": "build"
},
{
"label": "FCC: Dashboard",
"type": "shell",
"command": "fcc dashboard ecosystem",
"group": "none"
}
]
}
IntelliJ / PyCharm Run Configurations¶
Create run configurations for common FCC operations:
- Validate: Script path = module fcc.scaffold.cli, Parameters = validate --dir src/fcc/data/personas/
- Test: pytest configuration targeting tests/
- Dashboard: Script path = module fcc.scaffold.cli, Parameters = dashboard ecosystem
Docker Deployment¶
Dockerfile¶
FROM python:3.12-slim
WORKDIR /app
COPY pyproject.toml .
COPY src/ src/
RUN pip install --no-cache-dir .
# Optional: install AI dependencies
# RUN pip install --no-cache-dir ".[ai]"
EXPOSE 8080 8765
CMD ["fcc", "--help"]
Docker Compose for Full Stack¶
version: "3.9"
services:
fcc-api:
build: .
ports:
- "8080:8080"
environment:
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
volumes:
- ./data:/app/data
fcc-ws-bridge:
build: .
command: python -m fcc.protocols.ws_bridge
ports:
- "8765:8765"
fcc-frontend:
build:
context: ./frontend
ports:
- "3000:3000"
depends_on:
- fcc-ws-bridge
Kubernetes Deployment¶
Deployment Manifest¶
apiVersion: apps/v1
kind: Deployment
metadata:
name: fcc-agent-team
spec:
replicas: 2
selector:
matchLabels:
app: fcc-agent-team
template:
metadata:
labels:
app: fcc-agent-team
spec:
containers:
- name: fcc
image: fcc-agent-team:latest
ports:
- containerPort: 8080
env:
- name: ANTHROPIC_API_KEY
valueFrom:
secretKeyRef:
name: fcc-secrets
key: anthropic-api-key
resources:
requests:
memory: "256Mi"
cpu: "250m"
limits:
memory: "512Mi"
cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
name: fcc-agent-team
spec:
selector:
app: fcc-agent-team
ports:
- port: 8080
targetPort: 8080
ConfigMap for Persona Data¶
apiVersion: v1
kind: ConfigMap
metadata:
name: fcc-persona-config
data:
custom_personas.yaml: |
personas:
- id: CUSTOM
name: Custom Persona
category: core
fcc_phase: Create
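The ConfigMap is only useful once it is mounted into the pod; a sketch of the volume wiring for the Deployment's pod spec (the mount path is illustrative; point your persona loader or `fcc validate --dir` at it):

```yaml
# Fragment of the Deployment's pod spec (spec.template.spec)
      containers:
      - name: fcc
        volumeMounts:
        - name: persona-config
          mountPath: /app/config/personas
          readOnly: true
      volumes:
      - name: persona-config
        configMap:
          name: fcc-persona-config
```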
API Gateway Integration¶
Nginx Reverse Proxy¶
upstream fcc_backend {
server fcc-api:8080;
}
upstream fcc_websocket {
server fcc-ws-bridge:8765;
}
server {
listen 80;
location /api/fcc/ {
proxy_pass http://fcc_backend/;
}
location /ws/ {
proxy_pass http://fcc_websocket/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
MCP Server as API Endpoint¶
Expose the MCP server behind an API gateway so external LLM hosts can discover FCC tools:
from fcc.protocols.mcp.server import FccMcpServer
server = FccMcpServer()
# List available tools for API discovery
tools = server.list_tools()
resources = server.list_resources()
prompts = server.list_prompts()
# These can be served as JSON from an HTTP endpoint
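A minimal way to serve those listings over HTTP, using only the standard library. The `CATALOG` dict below stands in for the real `list_tools()` / `list_resources()` / `list_prompts()` results, and the port is illustrative:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for server.list_tools() / list_resources() / list_prompts()
CATALOG = {
    "tools": [{"name": "fcc_graph_lookup"}],
    "resources": [],
    "prompts": [],
}

class CatalogHandler(BaseHTTPRequestHandler):
    """Serve the tool catalog as JSON so external LLM hosts can discover it."""

    def do_GET(self):
        body = json.dumps(CATALOG).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("", 8080), CatalogHandler).serve_forever()
```

Behind the Nginx gateway above, this endpoint would sit at `/api/fcc/` alongside the rest of the backend.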
See Also¶
- Performance Tuning Guide -- Optimization strategies
- Architecture -- System architecture overview
- Extension Guide -- Building custom plugins
- Protocol Bridge Patterns -- Cross-protocol event routing