
Multi-Team Governance

This guide addresses the governance problem that emerges when FCC adoption spreads from one team to several. Each team accumulates its own workflow customizations, persona rosters, quality thresholds, and documentation conventions. Without deliberate coordination, divergence compounds until cross-team collaboration becomes painful.

The guide presents three governance models, a set of RACI patterns, persona-sharing strategies, multi-team quality gates, and conflict-resolution protocols. Pick the model that matches your organization's culture before layering on the specific patterns.


Governance Models

There is no single correct governance model for multi-team FCC adoption. The right choice depends on organizational structure, trust levels, and the cost of misalignment. The three common models are federated, hierarchical, and mesh.

Federated Model

Each team operates autonomously within a shared constitution. A lightweight coordination body defines global standards (persona naming, quality gate thresholds, ADR format) but leaves workflow choices to individual teams.

When federated works

  • Teams have high trust and similar maturity
  • Domains differ significantly (payments vs search vs ML)
  • Leadership values team autonomy over uniformity
  • Coordination costs matter more than consistency

Strengths: high team autonomy, fast local decisions, domain-appropriate customization.

Weaknesses: drift accumulates, cross-team handoffs require translation, onboarding across teams is slower.

Hierarchical Model

A central platform team owns the canonical FCC configuration. Product teams consume pre-built persona rosters, workflow templates, and governance packages. Customization requires platform-team approval.

When hierarchical works

  • Compliance or regulatory requirements demand uniformity
  • Teams have mixed maturity and need guardrails
  • Organization has strong platform/product split
  • Cost of inconsistency is high (shared infrastructure, audits)

Strengths: strong consistency, clear ownership of standards, easier audits and onboarding.

Weaknesses: platform team becomes a bottleneck, slow to adapt to domain-specific needs, risk of producer-consumer resentment.

Mesh Model

Teams adopt FCC organically, share patterns through communities of practice, and escalate conflicts to a rotating council. No central authority -- just peer influence and documented agreements.

When mesh works

  • Strong engineering culture with senior engineers
  • High documentation discipline
  • Teams value peer learning over top-down mandates
  • Organization tolerates some permanent inconsistency

Strengths: organic evolution, patterns emerge bottom-up, high senior engineer engagement.

Weaknesses: slow convergence, depends on strong participation, lacks clear accountability.

Model Comparison

| Dimension | Federated | Hierarchical | Mesh |
|---|---|---|---|
| Decision speed | Fast local, slow global | Fast centrally, slow locally | Slow across the board |
| Consistency | Medium | High | Low-medium |
| Autonomy | High | Low | Highest |
| Onboarding cost | Medium | Low | High |
| Audit readiness | Medium | High | Low |
| Best team count | 3-15 | 5-50+ | 3-10 |
| Culture fit | Trust + similar maturity | Mixed maturity | Strong senior IC culture |

Three-Team Governance Structure (Example)

Below is a concrete example of federated governance with three product teams sharing a platform team, coordinated by a lightweight council.

```mermaid
flowchart TD
    Council[Governance Council<br/>weekly 30-min<br/>1 rep per team]

    Council -->|sets standards| Platform[Platform Team<br/>owns FCC config,<br/>shared personas]

    Platform -->|provides| TeamA[Team Alpha<br/>Product: Payments]
    Platform -->|provides| TeamB[Team Beta<br/>Product: Search]
    Platform -->|provides| TeamC[Team Gamma<br/>Product: Analytics]

    TeamA -->|escalates| Council
    TeamB -->|escalates| Council
    TeamC -->|escalates| Council

    subgraph Shared[Shared Assets]
        SP[Shared Personas:<br/>GCA, SMC, DGS, CO]
        QG[Shared Quality Gates]
        ADR[ADR Template]
        KG[Federated KG]
    end

    Platform -.maintains.-> Shared
    TeamA -.consumes.-> Shared
    TeamB -.consumes.-> Shared
    TeamC -.consumes.-> Shared

    classDef council fill:#e1f5ff,stroke:#0277bd,stroke-width:3px;
    classDef platform fill:#fff3e0,stroke:#e65100,stroke-width:2px;
    classDef team fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px;
    classDef shared fill:#fce4ec,stroke:#880e4f,stroke-width:1px;
    class Council council;
    class Platform platform;
    class TeamA,TeamB,TeamC team;
    class SP,QG,ADR,KG shared;
```

In this structure, the Council meets for 30 minutes weekly. Each team sends one rotating representative. The Council does not make design decisions -- it sets cross-cutting standards and resolves escalations when teams disagree on shared assets.


Shared Responsibility Matrices (RACI)

A RACI matrix clarifies who is Responsible, Accountable, Consulted, and Informed for each activity that spans multiple teams. Below are three common multi-team activities with recommended RACI assignments.

RACI 1: Shared Persona Updates

When one team proposes a change to a shared persona (for example, tightening GCA's constitution), who decides?

| Activity | Product Teams | Platform Team | Governance Council | Compliance |
|---|---|---|---|---|
| Propose persona change | R | C | I | C |
| Evaluate impact | C | R | A | C |
| Approve change | I | R | A | C |
| Implement change | I | R | I | I |
| Communicate to teams | C | R | A | I |

A = Accountable (final decision), R = Responsible (does the work), C = Consulted, I = Informed.

RACI 2: Cross-Team Workflow Changes

When a workflow crosses team boundaries (e.g., Payments uses Search's SLO output), who owns changes?

| Activity | Upstream Team | Downstream Team | Platform Team | Council |
|---|---|---|---|---|
| Propose interface change | R | C | C | I |
| Assess breaking impact | C | R | C | I |
| Approve change | A | C | I | C |
| Implement change | R | C | I | I |
| Migrate downstream | C | R | C | I |
| Deprecate old interface | R | I | C | I |

RACI 3: Compliance Audit Findings

When a compliance audit produces findings that touch multiple teams:

| Activity | Finding Owner Team | Compliance Team | Platform Team | Exec Sponsor |
|---|---|---|---|---|
| Triage finding | C | R | I | I |
| Root cause analysis | R | C | C | I |
| Remediation plan | R | A | C | C |
| Execute remediation | R | C | C | I |
| Validate closure | C | R | C | A |

RACI hygiene

  • Each activity should have exactly one A (Accountable)
  • Too many Rs on one row means diffuse responsibility
  • Too many Cs slow decisions without improving quality
  • Revisit RACI quarterly -- team ownership shifts naturally
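These hygiene rules can be checked mechanically. Below is a minimal sketch; the function name and thresholds are illustrative, not part of FCC:

```python
# Hypothetical helper: validate RACI hygiene rules for one matrix row.
# Each row maps role name -> one of "R", "A", "C", "I".

def check_raci_row(activity, assignments, max_r=2, max_c=3):
    """Return hygiene warnings for one RACI row (role -> R/A/C/I)."""
    codes = list(assignments.values())
    warnings = []
    if codes.count("A") != 1:
        warnings.append(f"{activity}: needs exactly one A, found {codes.count('A')}")
    if codes.count("R") > max_r:
        warnings.append(f"{activity}: {codes.count('R')} Rs -- diffuse responsibility")
    if codes.count("C") > max_c:
        warnings.append(f"{activity}: {codes.count('C')} Cs -- decisions will be slow")
    return warnings

good = {"Product Teams": "I", "Platform Team": "R", "Council": "A", "Compliance": "C"}
bad = {"Product Teams": "R", "Platform Team": "R", "Council": "R", "Compliance": "C"}
print(check_raci_row("Approve change", good))   # -> []
print(check_raci_row("Implement change", bad))  # -> two warnings (no A, too many Rs)
```

Running a check like this as part of the quarterly RACI review catches drift before it causes a conflict.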

Cross-Team Persona Sharing

Some personas make sense per-team (e.g., Research Crafter for domain research). Others make sense as shared services (e.g., Governance Compliance Auditor for organization-wide policy).

Shared Persona Candidates

| Persona | Why Shared | Who Owns |
|---|---|---|
| GCA | Consistent policy enforcement | Compliance team |
| SMC | Program-increment alignment | Program management |
| DGS | Data handling standards | Data platform team |
| PCA | Protocol compliance | Platform team |
| JDA | Cross-organization JV | Strategy team |

Per-Team Persona Candidates

| Persona | Why Per-Team | Rationale |
|---|---|---|
| RC | Domain-specific research | Payments research differs from search research |
| BC | Team architecture style | Each team has its own design patterns |
| DE | Team voice/style | Documentation voice is team-specific |
| CO | Team rituals | Handoff protocols differ by team |

Shared Persona Governance Pattern

```python
# Platform team defines the shared persona config in
# /shared-personas/gca.yaml (version-controlled, signed)

from fcc.api import PersonaRegistry

# Each product team loads shared + local personas
registry = PersonaRegistry()
registry.load_from_dir("/shared-personas/")      # shared
registry.load_from_dir("/team-alpha/personas/")  # local

# Shared personas are override-protected
gca = registry.get("GCA")
assert gca.doc_context["ownership"] == "platform-team"
```

Constitution Tiering for Shared Personas

Shared personas use 3-tier governance:

  • Hard-stop clauses -- teams cannot override (e.g., "no PII in logs")
  • Mandatory clauses -- teams can tighten but not relax
  • Preferred clauses -- teams can override with documented justification

Document tier assignments in the shared persona YAML so teams know what they can customize.
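The tier rules above can also be enforced at load time when a team applies its overrides. A minimal sketch under assumed names (`apply_override` is not an FCC API):

```python
# Illustrative enforcement of the three constitution tiers when a team
# customizes a shared persona clause. Names are assumptions for this sketch.

def apply_override(clause, tier, override, justification=None, tightens=False):
    """Return the effective clause text after applying team override rules."""
    if tier == "hard-stop":
        raise PermissionError(f"hard-stop clause cannot be overridden: {clause}")
    if tier == "mandatory" and not tightens:
        raise PermissionError(f"mandatory clause may be tightened, not relaxed: {clause}")
    if tier == "preferred" and justification is None:
        raise ValueError(f"preferred override needs documented justification: {clause}")
    return override

# Tightening a mandatory clause is allowed:
print(apply_override("review SLO docs within 5 days", "mandatory",
                     "review SLO docs within 2 days", tightens=True))
```

A hard-stop clause like "no PII in logs" would raise `PermissionError` no matter what the team passes, which is the point of the top tier.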


Quality Gates for Multi-Team Deliverables

When a deliverable crosses teams, quality gates must be aggregated and layered, not independently applied.

Layered Gate Pattern

```mermaid
flowchart LR
    Deliv[Multi-Team<br/>Deliverable] --> L1{Team Gate<br/>Tier 1}
    L1 -->|pass| L2{Cross-Team<br/>Gate Tier 2}
    L2 -->|pass| L3{Platform/<br/>Compliance Gate<br/>Tier 3}
    L3 -->|pass| Done[Shipped]

    L1 -->|fail| Revise1[Revise in Team]
    L2 -->|fail| Revise2[Revise at Interface]
    L3 -->|fail| Revise3[Revise Platform Concerns]

    Revise1 --> L1
    Revise2 --> L1
    Revise3 --> L1

    classDef gate fill:#fff9c4,stroke:#f57f17,stroke-width:2px;
    classDef result fill:#c8e6c9,stroke:#2e7d32;
    classDef revise fill:#ffebee,stroke:#b71c1c;
    class L1,L2,L3 gate;
    class Done result;
    class Revise1,Revise2,Revise3 revise;
```

Each tier has different accountable personas:

| Tier | Personas | Concern |
|---|---|---|
| Tier 1 (Team) | BV, team lead | Local correctness |
| Tier 2 (Cross-Team) | SIA, CO | Interface integrity |
| Tier 3 (Platform) | GCA, DGS, PCA | Org-wide standards |
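The layered flow reduces to a simple sequential gate runner: run tiers in order, stop at the first failure, and note which tier the revision belongs to. A sketch (gate names and checks are placeholders, not FCC APIs):

```python
# Minimal sketch of the layered gate pattern. Each gate is a predicate
# over the deliverable; a failure returns the tier where revision happens.

def run_layered_gates(deliverable, gates):
    """Run tiered gates in order; return (passed, failed_tier)."""
    for tier, gate in gates:
        if not gate(deliverable):
            return False, tier  # revise here, then re-enter at Tier 1
    return True, None

gates = [
    ("tier-1-team",       lambda d: d["tests_pass"]),
    ("tier-2-cross-team", lambda d: d["interface_stable"]),
    ("tier-3-platform",   lambda d: d["compliance_ok"]),
]
deliverable = {"tests_pass": True, "interface_stable": False, "compliance_ok": True}
print(run_layered_gates(deliverable, gates))  # -> (False, 'tier-2-cross-team')
```

Note that after any failure the deliverable re-enters at Tier 1, matching the diagram: a fix at the interface can break local correctness.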

Aggregation Rules

When multiple teams contribute to one deliverable, aggregate their quality scores as follows:

  • Minimum rule -- deliverable score = min(team scores). Conservative; any team's issue blocks release.
  • Weighted average -- deliverable score = sum(team_score * contribution_weight). Pragmatic; tolerates minor local gaps.
  • Any-fail rule -- deliverable fails if any team fails mandatory gates. Safe for regulated contexts.

Avoid average-only

Simple averaging hides localized quality failures. One team scoring 1/5 and four teams scoring 5/5 still averages to 4.2/5, but the 1/5 is a release blocker.
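The three aggregation rules can be sketched in a few lines; with equal weights, the weighted rule reproduces the misleading 4.2 from the example above:

```python
# Sketch of the three aggregation rules (scores on a 1-5 scale).
# Function and parameter names are illustrative, not an FCC API.

def aggregate(team_scores, rule="minimum", weights=None, mandatory_pass=None):
    """Aggregate per-team quality scores for a multi-team deliverable."""
    if rule == "minimum":
        return min(team_scores)                # any team's issue blocks release
    if rule == "weighted":
        return sum(s * w for s, w in zip(team_scores, weights))
    if rule == "any-fail":
        return all(mandatory_pass)             # True only if every team passed
    raise ValueError(f"unknown rule: {rule}")

scores = [1, 5, 5, 5, 5]
print(aggregate(scores))                                      # -> 1 (release blocker visible)
print(aggregate(scores, rule="weighted", weights=[0.2] * 5))  # ~4.2 (blocker hidden)
print(aggregate(scores, rule="any-fail",
                mandatory_pass=[True, True, False]))          # -> False
```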


Conflict Resolution Patterns

Disagreements between teams are inevitable. The goal is not to prevent conflict but to resolve it quickly with minimal damage to relationships.

Pattern 1: Documented Disagree-and-Commit

When teams cannot reach consensus on a shared decision:

  1. Each team publishes a position document (max 1 page) with rationale.
  2. Governance Council reviews positions in the weekly meeting.
  3. Council picks a direction; losing team documents disagreement in a DACI record.
  4. Decision is reviewed at 90-day checkpoint with outcome data.
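A DACI record for step 3 might be sketched as a small dataclass with the 90-day checkpoint built in (field names are illustrative, not an FCC schema):

```python
# Hypothetical DACI record: Driver, Approver, Contributors, Informed,
# plus the dissent note and an automatic 90-day review date.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class DaciRecord:
    decision: str
    driver: str
    approver: str        # the Governance Council in this pattern
    contributors: list
    informed: list
    dissent: str         # the losing team's documented disagreement
    decided_on: date = field(default_factory=date.today)

    @property
    def review_date(self):
        return self.decided_on + timedelta(days=90)

rec = DaciRecord("Adopt shared GCA v2", "team-alpha", "governance-council",
                 ["team-beta"], ["team-gamma"], "Beta prefers the v1 constitution",
                 decided_on=date(2025, 1, 10))
print(rec.review_date)  # -> 2025-04-10
```

Capturing the dissent and the review date in the same record is what makes step 4 happen: the checkpoint is scheduled at decision time, not remembered later.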

Pattern 2: Timeboxed Escalation Ladder

```mermaid
flowchart TD
    IC1[IC to IC<br/>2 business days] -->|unresolved| Lead1[Lead to Lead<br/>2 business days]
    Lead1 -->|unresolved| Mgr[Manager to Manager<br/>3 business days]
    Mgr -->|unresolved| Council[Governance Council<br/>next weekly meeting]
    Council -->|unresolved| Exec[Executive Sponsor<br/>1 business day]

    classDef level fill:#e8eaf6,stroke:#283593,stroke-width:2px;
    class IC1,Lead1,Mgr,Council,Exec level;
```

Each level has a fixed timebox. If unresolved, escalation is automatic -- not a judgment on the participants. This removes blame from escalation.
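Because escalation is automatic, the deadlines can be computed the moment a conflict is opened. A minimal sketch that counts weekdays only (no holiday calendar; names are illustrative):

```python
# Compute automatic escalation deadlines from the ladder's fixed timeboxes.
# Only the fixed business-day rungs are modeled; the Council rung is
# "next weekly meeting" and is scheduled separately.
from datetime import date, timedelta

LADDER = [("IC to IC", 2), ("Lead to Lead", 2), ("Manager to Manager", 3)]

def add_business_days(start, days):
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Mon-Fri
            days -= 1
    return current

def escalation_schedule(opened, ladder=LADDER):
    deadline, schedule = opened, []
    for level, timebox in ladder:
        deadline = add_business_days(deadline, timebox)
        schedule.append((level, deadline))
    return schedule

for level, due in escalation_schedule(date(2025, 3, 3)):  # opened on a Monday
    print(level, due)
```

Publishing the computed dates up front reinforces the no-blame framing: everyone can see when the conflict will move, regardless of who is involved.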

Pattern 3: Persona-Mediated Arbitration

Use Collaboration Orchestrator (CO) and Blueprint Validator (BV) to mediate technical disagreements.

  1. CO convenes both teams with a structured agenda.
  2. Each side presents position and evidence (10 minutes each).
  3. BV scores positions on rubric: correctness, maintainability, cost, risk.
  4. If BV scores diverge by more than 1 point, proceed with the higher-scored option, documenting caveats.
  5. If scores are within 1 point, default to the simpler option.
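Steps 4-5 reduce to a small decision rule. A sketch (the rubric scoring itself is BV's job; this only applies the threshold):

```python
# Illustrative decision rule for persona-mediated arbitration.
# score_a / score_b are BV's rubric scores for the two positions.

def arbitrate(score_a, score_b, simpler="A", threshold=1.0):
    """Pick the clearly higher-scored option; otherwise default to the simpler one."""
    if abs(score_a - score_b) > threshold:
        return "A" if score_a > score_b else "B"
    return simpler

print(arbitrate(4.5, 2.8))               # divergence > 1 point -> "A"
print(arbitrate(3.9, 3.5, simpler="B"))  # within 1 point -> simpler option "B"
```

The explicit "default to simpler" tiebreak matters: without it, near-equal scores re-open the argument instead of closing it.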

Common Conflict Sources

| Conflict | Resolution Pattern |
|---|---|
| "Your interface breaks ours" | Timeboxed Escalation + DACI |
| "We need a different GCA constitution" | Council decision, tier analysis |
| "Our workflow doesn't fit the template" | Persona-Mediated Arbitration |
| "You changed the shared persona without asking" | RACI review, process fix |
| "We cannot meet the shared quality gate" | Tier 2 gate adjustment, council review |

Federation Tooling

FCC's federation module provides tooling to track cross-team ownership, namespaces, and change history.

```python
from fcc.api import EventBus, NamespaceRegistry, FederationRegistry

# Register each team's namespace
ns = NamespaceRegistry()
ns.register("team-alpha", "payments", "https://payments.internal/")
ns.register("team-beta", "search", "https://search.internal/")
ns.register("team-gamma", "analytics", "https://analytics.internal/")

# Federation registry tracks cross-team entities
fed = FederationRegistry(namespace_registry=ns)

# Resolve a shared persona across team namespaces
# (EventBus broadcasts shared-asset usage events; see Getting Started)
fed.resolve_entity("GCA", source_namespace="platform")
```

Use the federation registry when:

  • Teams share personas and need to track usage
  • Cross-team change impact assessment is required
  • Compliance needs to know which teams use which regulated personas

Governance Anti-Patterns

Multi-team governance failure modes

The Ivory Tower -- Platform team defines all standards without product-team input. Standards become disconnected from reality. Teams work around them covertly.

The Committee of Everything -- Every decision goes to the Council. Decision velocity crashes. Teams disengage.

The Shadow Duplicate -- Each team builds its own GCA, its own quality gates, its own templates. No sharing, no reuse, inconsistent audits.

The Perpetual Pilot -- Governance decisions never harden into binding standards. Everything is "being evaluated" indefinitely.

The Absent Sponsor -- No executive owns the multi-team governance. When conflicts escalate, nothing happens.


Getting Started With Multi-Team Governance

  1. Pick a governance model -- federated, hierarchical, or mesh.
  2. Designate an accountable role -- platform lead, principal engineer, or director.
  3. Publish the initial RACI -- start with shared personas and cross-team workflows.
  4. Run the council for 8 weeks -- observe what escalates before formalizing.
  5. Instrument shared assets -- use event bus to track usage.
  6. Hold quarterly retros -- governance models need periodic tuning.

Next Steps