AI Providers¶
FCC supports multiple AI provider backends. You can select a provider explicitly through the
AIClient class, globally through the FCC_DEFAULT_PROVIDER environment variable, or
per scenario through ai_config overrides.
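For example, the two global mechanisms look like this (a minimal sketch; the import path is the one used in the Programmatic use section below):

```python
import os

from fcc.simulation.ai_client import AIClient

# Option 1: pick the provider explicitly in code.
client = AIClient(provider="anthropic")

# Option 2: set a process-wide default and let auto-detection pick it up.
os.environ["FCC_DEFAULT_PROVIDER"] = "openai"
client = AIClient()  # no explicit provider, so the env var decides
```

Per-scenario overrides are covered in their own section further down this page.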
Provider matrix (v1.1.0)¶
| Provider | Type | Auto-detection trigger | Default model | Use case |
|---|---|---|---|---|
| `anthropic` | Hosted | `ANTHROPIC_API_KEY` set | `claude-sonnet-4-6` | Production hosted Claude |
| `openai` | Hosted | `OPENAI_API_KEY` set | `gpt-4o` | Production hosted GPT |
| `azure_openai` | Hosted | `AZURE_OPENAI_API_KEY` set | `gpt-4o` (deployment-scoped) | Enterprise Azure |
| **`ollama`** | Local | `OLLAMA_BASE_URL` set | `llama3.2:latest` | Local development, privacy-first |
| **`litellm`** | Universal | `LITELLM_DEFAULT_MODEL` set | `ollama/llama3.2` | One client → 100+ backends |
| `mock` | Testing | (always available) | `mock-model` | CI, deterministic tests, examples |
The two bold rows (`ollama`, `litellm`) ship as plugin packages
under `plugins/fcc-{ollama,litellm}-plugin/`. Install them with
`pip install -e ./plugins/fcc-ollama-plugin` or via
`make install-dev`.
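After installing a plugin, you can confirm that it resolves through the registry. This is a minimal sketch built from the PluginRegistry and AIClient calls shown under Programmatic use below:

```python
from fcc.plugins.registry import PluginRegistry
from fcc.simulation.ai_client import AIClient

registry = PluginRegistry()
registry.discover()  # finds installed fcc-* plugin packages

# If the Ollama plugin was discovered, this resolves to its client class;
# otherwise AIClient falls back to MockAIClient (see routing internals below).
client = AIClient(provider="ollama", plugin_registry=registry)
```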
Decision flowchart¶
flowchart TD
Start([What do you want?]) --> Q1{Hosted or local?}
Q1 -->|Hosted| Q2{Which vendor?}
Q1 -->|Local| Q3{Single user or team?}
Q1 -->|Both / mix| LiteLLM[Use LiteLLM]
Q2 -->|Anthropic| A[Set ANTHROPIC_API_KEY]
Q2 -->|OpenAI| O[Set OPENAI_API_KEY]
Q2 -->|Azure| Az[Set AZURE_OPENAI_API_KEY + ENDPOINT]
Q3 -->|Single user| Ollama[Use Ollama]
Q3 -->|Team / production| Q4{High concurrency?}
Q4 -->|Yes| vLLM[Use vLLM v1.1.1+ via LiteLLM]
Q4 -->|No| Ollama
A --> Done([Done])
O --> Done
Az --> Done
Ollama --> Done
LiteLLM --> Done
vLLM --> Done
Provider routing internals¶
sequenceDiagram
participant App as Your code
participant Client as AIClient
participant Reg as PluginRegistry
participant Plugin as Provider plugin
participant Backend as Real backend
App->>Client: AIClient(provider="ollama")
Client->>Reg: get_plugins(PROVIDERS)
Reg-->>Client: [OllamaPlugin, LiteLLMPlugin, ...]
Client->>Plugin: get_provider_id() == "ollama"?
Plugin-->>Client: yes → get_client_class()
Client->>Client: instantiate OllamaClient
App->>Client: complete(messages, model)
Client->>Plugin: OllamaClient.complete(...)
Plugin->>Backend: openai SDK call (custom base_url)
Backend-->>Plugin: ChatCompletion
Plugin-->>Client: AIResponse(provider=OLLAMA)
Client-->>App: AIResponse
When you construct AIClient(provider="ollama", plugin_registry=registry),
the lookup flow is:
- Plugin lookup first — `_get_plugin_providers()` queries the registered `AIProviderPlugin` instances for one whose `get_provider_id()` returns `"ollama"`.
- Built-in fallback — if no plugin claims the id, fall back to `_BUILTIN_PROVIDERS` (which holds Anthropic / OpenAI / Azure / Mock).
- `MockAIClient` as final fallback — if neither path resolves the id, return a `MockAIClient` so the simulation engine never crashes.
Plugins always win over built-ins with the same id — this is how a future plugin could provide an enhanced Anthropic client without breaking the built-in.
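In pseudocode, the resolution order looks roughly like the sketch below. The function name, its parameters, and the dict shapes are illustrative placeholders rather than FCC internals; only get_provider_id() and get_client_class() are taken from the sequence above:

```python
# Hypothetical sketch of AIClient's provider resolution; not the actual source.
def resolve_client_class(provider_id, provider_plugins, builtin_providers, mock_client_class):
    """Return the client class for provider_id, mirroring the fallback chain."""
    # 1. Plugins win: the first plugin whose id matches is used, which is also
    #    how a plugin can override a built-in that shares the same id.
    for plugin in provider_plugins:
        if plugin.get_provider_id() == provider_id:
            return plugin.get_client_class()
    # 2. Built-in fallback (Anthropic / OpenAI / Azure / Mock).
    if provider_id in builtin_providers:
        return builtin_providers[provider_id]
    # 3. Final fallback so the simulation engine never crashes.
    return mock_client_class
```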
LiteLLM fan-out¶
LiteLLM is a single FCC plugin that proxies to 100+ backends via the
provider/model model-string convention:
flowchart LR
Sim[FCC simulation] --> Client[LiteLLMClient]
Client --> LiteLLM[litellm.completion]
LiteLLM -->|anthropic/...| A[Anthropic API]
LiteLLM -->|openai/...| O[OpenAI API]
LiteLLM -->|azure/...| Az[Azure OpenAI]
LiteLLM -->|bedrock/...| B[AWS Bedrock]
LiteLLM -->|vertex_ai/...| V[Google Vertex AI]
LiteLLM -->|gemini/...| G[Gemini API]
LiteLLM -->|ollama/...| Ol[Ollama localhost]
LiteLLM -->|litellm_proxy/...| vL[vLLM self-hosted]
LiteLLM -->|huggingface/...| HF[HuggingFace Inference]
LiteLLM -->|cohere/...| C[Cohere]
LiteLLM -->|together_ai/...| T[Together AI]
LiteLLM -->|... 90 more| etc[100+ backends]
One plugin install (pip install fcc-litellm-plugin) gives you access
to every backend in the LiteLLM provider list — and switching backends
is a one-line env var change.
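In practice the switch looks like this (a minimal sketch; the registry wiring matches the Programmatic use section below, and the model strings follow LiteLLM's provider/model convention):

```python
import os

from fcc.plugins.registry import PluginRegistry
from fcc.simulation.ai_client import AIClient

# Point every completion at local Ollama ...
os.environ["LITELLM_DEFAULT_MODEL"] = "ollama/llama3.2"
# ... or flip the same variable to a hosted backend, e.g.:
# os.environ["LITELLM_DEFAULT_MODEL"] = "anthropic/claude-sonnet-4-6"

registry = PluginRegistry()
registry.discover()
client = AIClient(provider="litellm", plugin_registry=registry)
```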
Auto-detection rules¶
When you construct AIClient without an explicit provider:
from fcc.simulation.ai_client import AIClient
client = AIClient() # Auto-detects based on environment
The detection order (first match wins) is:
1. `FCC_DEFAULT_PROVIDER` env var (if set to a non-`mock` value)
2. `ANTHROPIC_API_KEY` → `anthropic`
3. `OPENAI_API_KEY` → `openai`
4. `AZURE_OPENAI_API_KEY` → `azure_openai`
5. Plugin providers whose env-var hint is set (e.g. `OLLAMA_BASE_URL`, `LITELLM_DEFAULT_MODEL`)
6. `mock` as the final fallback
Important: plugins are NOT probed. Even if Ollama is running on
`localhost:11434`, FCC will not select it unless `OLLAMA_BASE_URL` is explicitly set in your environment. This conservative behavior was a deliberate v1.1.0 decision to keep the framework from hijacking the mock fallback for users who happen to have a local LLM installed.
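To opt in to a plugin provider during auto-detection, set its hint explicitly (a minimal sketch; the URL is Ollama's conventional local endpoint mentioned above):

```python
import os

from fcc.plugins.registry import PluginRegistry
from fcc.simulation.ai_client import AIClient

# Plugins are never probed; the env-var hint has to be set explicitly.
os.environ["OLLAMA_BASE_URL"] = "http://localhost:11434"

registry = PluginRegistry()
registry.discover()
client = AIClient(plugin_registry=registry)  # auto-detection now resolves to "ollama"
```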
Per-scenario provider override¶
Scenarios can pin a specific provider/model regardless of the global default:
{
"scenarios": [
{
"id": "EXP-001",
"name": "Local model experiment",
"type": "ai",
"description": "...",
"objectives": ["Compare local vs hosted output quality"],
"setup": {
"initial_input": "...",
"start_node": "RC",
"personas_involved": ["RC", "BC", "DE"],
"ai_config": {
"provider": "ollama",
"model": "llama3.2:latest",
"temperature": 0.7,
"max_tokens": 1024
}
},
"validation_rules": []
}
]
}
The simulation engine builds a fresh AIClient for the named provider
when ai_config.provider is present.
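Conceptually, the override works roughly like the sketch below. The function name and scenario dict access are hypothetical, not the engine's actual code; the keys match the JSON above:

```python
from fcc.simulation.ai_client import AIClient

# Hypothetical illustration of per-scenario override handling.
def client_for_scenario(scenario, default_client):
    ai_config = scenario.get("setup", {}).get("ai_config", {})
    if "provider" in ai_config:
        # A fresh client pinned to the scenario's provider.
        return AIClient(provider=ai_config["provider"])
    return default_client  # no override: keep the global default
```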
Programmatic use¶
from fcc.simulation.ai_client import AIClient
# Explicit provider
client = AIClient(provider="ollama")
# With caching enabled (per-call response cache on disk)
client = AIClient(provider="anthropic", use_cache=True, cache_dir=".cache/")
# Plugin discovery (the registry must be passed in for plugin lookups)
from fcc.plugins.registry import PluginRegistry
registry = PluginRegistry()
registry.discover()
client = AIClient(provider="litellm", plugin_registry=registry)
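A call then flows through complete(), per the routing sequence diagram above. The keyword names and message shape below are assumptions; the return value is the AIResponse shown in that diagram:

```python
# Assumed call shape based on complete(messages, model) in the sequence diagram.
response = client.complete(
    messages=[{"role": "user", "content": "Draft a change summary."}],
    model="ollama/llama3.2",
)
print(response)  # an AIResponse from the selected provider
```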
See also¶
- Ollama walkthrough — install Ollama, pull a model, use it from FCC
- LiteLLM walkthrough — universal routing for 100+ backends
- `docs/deployment/` — running FCC in containers
- CHANGELOG: v1.1.0 — provider auto-detection changes from v1.0.x