# Grading Rubrics for FCC Assignments
Six ready-to-use rubrics for common FCC-based assignments. Each rubric scores 5 dimensions across 4 levels (Emerging, Developing, Proficient, Exemplary). Use them directly or adapt them to your course weighting.
These rubrics align with the 30+ quality gates defined in `src/fcc/data/governance/quality_gates.yaml` and the assessment rubrics in `src/fcc/data/docs/assessment_rubrics.yaml`.
## Scoring convention
- Emerging (1): significant gaps; revision required
- Developing (2): functional but missing depth
- Proficient (3): meets all stated requirements
- Exemplary (4): exceeds requirements with insight or polish
Sum dimensions for a raw score; convert to letter grade per course policy.
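As a worked illustration of that conversion, here is a minimal Python sketch; the letter-grade cutoffs are placeholders, not a prescribed policy, so substitute your own.

```python
# Minimal sketch: sum per-dimension scores (1-4) and convert to a letter grade.
# The cutoffs below are illustrative placeholders; substitute your course policy.

def letter_grade(dimension_scores: list[int],
                 cutoffs=((0.9, "A"), (0.8, "B"), (0.7, "C"), (0.6, "D"))) -> str:
    if not all(1 <= s <= 4 for s in dimension_scores):
        raise ValueError("each dimension is scored 1 (Emerging) to 4 (Exemplary)")
    raw = sum(dimension_scores)                   # e.g. 5 dimensions -> raw score out of 20
    fraction = raw / (4 * len(dimension_scores))  # share of the maximum possible score
    for threshold, grade in cutoffs:
        if fraction >= threshold:
            return grade
    return "F"

# Example: a persona assignment scored 3, 3, 4, 2, 3 -> 15/20 -> 0.75 -> "C"
print(letter_grade([3, 3, 4, 2, 3]))
```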
## Rubric 1: Custom Persona YAML
Use after weeks 3-4 of the 12-week curriculum.
| Dimension | Emerging (1) | Developing (2) | Proficient (3) | Exemplary (4) |
|---|---|---|---|---|
| R.I.S.C.E.A.R. completeness | Missing 3+ of the 10 components | Missing 1-2 components or one is shallow | All 10 present and non-trivial | All 10 present; each reflects domain expertise and cross-references literature |
| Role clarity | Role overlaps existing persona or is vague | Role is distinct but scope is fuzzy | Clearly differentiated role with testable boundaries | Role occupies a previously underserved niche with rationale |
| Quality-gate alignment | No gates wired; schema fails | Passes schema but no gates referenced | All applicable quality gates referenced in responsibilities | New quality-gate candidate proposed with justification |
| Collaborator graph | No collaborators listed | Flat list, no edge types | Typed collaborators with reasonable edge types | Collaborators form a coherent team; upstream/downstream roles justified |
| Adoption checklist | Generic or missing | Checklist present but vague | Concrete, actionable checklist matching role | Checklist is usable by a real team on day one |
**Common pitfalls**

- Missing `archetype` or `role_adoption_checklist` (see common mistakes M6)
- Category collision with the 20 core categories
- Responsibilities that are actually `role_skills`
Cross-links: `quality_gates.yaml`, `custom-persona-design-guide.md`.
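For a quick structural screen before scoring the first dimension, something like the following can flag missing top-level keys. The key names are taken from the pitfalls above plus assumed placeholders (`responsibilities`, `collaborators`); the authoritative required-key list lives in the persona schema, not in this sketch.

```python
# Minimal pre-grading sketch: flag structural gaps in a submitted persona YAML.
# REQUIRED_KEYS mixes names from the pitfalls above with assumed placeholders;
# consult the persona schema for the real list.
import yaml  # PyYAML

REQUIRED_KEYS = {"archetype", "role_adoption_checklist", "responsibilities", "collaborators"}

def missing_keys(path: str) -> set[str]:
    with open(path, encoding="utf-8") as fh:
        persona = yaml.safe_load(fh) or {}
    return REQUIRED_KEYS - set(persona)

gaps = missing_keys("my_persona.yaml")  # hypothetical submission path
if gaps:
    print(f"Emerging/Developing territory: missing {sorted(gaps)}")
```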
## Rubric 2: Scenario Authoring
Use after week 6.
| Dimension | Emerging (1) | Developing (2) | Proficient (3) | Exemplary (4) |
|---|---|---|---|---|
| Testability | No pass/fail criteria | Criteria stated but not checkable | Every criterion maps to a quality gate or metric | Criteria + expected ranges + tolerance |
| Coverage | One persona, one phase | All 3 phases but thin | All 3 phases with >= 3 distinct personas | Rich persona selection with justification per slot |
| Realism | Toy/contrived inputs | Plausible but generic | Grounded in a real domain problem | Real data + documented provenance |
| Reproducibility | No seed, no provider config | Seed set; provider unspecified | Seed + provider + model + temperature all set | Includes mock fallback for CI reproducibility |
| Workflow choice | No workflow reference | Default workflow only | Workflow variant chosen and justified | Custom workflow + explanation of node selection |
**Common pitfalls**

- Omitting `setup.ai_config` (causes silent provider drift across runs)
- Quality-gate ID typos
- Missing `inputs:` block
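A quick reproducibility screen along these lines can catch the first pitfall before a scenario is run twice. The `setup.ai_config` path comes from the pitfall list above; the pinned field names are assumptions to adapt to the actual scenario schema.

```python
# Minimal sketch for the Reproducibility dimension: confirm a scenario pins
# seed, provider, model, and temperature under setup.ai_config.
# Field names other than setup.ai_config are illustrative assumptions.
import yaml  # PyYAML

PINNED_FIELDS = ("seed", "provider", "model", "temperature")

def reproducibility_gaps(scenario_path: str) -> list[str]:
    with open(scenario_path, encoding="utf-8") as fh:
        scenario = yaml.safe_load(fh) or {}
    ai_config = (scenario.get("setup") or {}).get("ai_config") or {}
    return [field for field in PINNED_FIELDS if field not in ai_config]

missing = reproducibility_gaps("scenario.yaml")  # hypothetical submission path
print("Proficient" if not missing else f"Not yet pinned: {missing}")
```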
## Rubric 3: Plugin Development
Use after week 10.
| Dimension | Emerging (1) | Developing (2) | Proficient (3) | Exemplary (4) |
|---|---|---|---|---|
| Contract correctness | Doesn't load via entry point | Loads but wrong plugin_type | Correct type + all required methods | Type + methods + contract tests + idempotency |
| Tests | No tests | Smoke test only | Unit tests + >= 80% coverage | Unit + integration + failure-mode tests |
| Docs | No docstring, no README | Module docstring only | Full API docs + usage example | Docs + tutorial + troubleshooting section |
| Dependency hygiene | Adds unpinned core deps | Pinned but broad | Pinned, narrow, extras-only | Works with zero extras; optional deps guarded by import |
| Observability | No events, no traces | Some debug prints | Publishes relevant events | Events + traces + metrics + constitution tier |
**Common pitfalls**

- Entry point in `pyproject.toml` references wrong module path (see M14)
- Forgetting the `plugin_type` class attribute (see M15)
- Leaking credentials into logged events
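To make the contract-correctness row concrete, here is a hypothetical plugin skeleton illustrating the first two pitfalls. The `plugin_type` value, method name, and entry-point group are illustrative assumptions; the real contract is defined by the FCC plugin interfaces and the plugin development guide.

```python
# Hypothetical plugin skeleton (not the FCC contract itself):
# it shows a plugin_type class attribute and an entry point that must
# reference the module's real import path.

class WordCountAnalyzer:
    plugin_type = "analyzer"          # M15: forgetting this attribute breaks discovery

    def run(self, text: str) -> dict:
        # Trivial payload so the skeleton is runnable end to end.
        return {"word_count": len(text.split())}

# pyproject.toml sketch (M14: the entry point must match the actual module path;
# the group name "fcc.plugins" is an assumption):
#
# [project.entry-points."fcc.plugins"]
# word_count = "my_package.word_count:WordCountAnalyzer"
```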
## Rubric 4: Workflow Visualization
Use after week 5 or week 11.
| Dimension | Emerging (1) | Developing (2) | Proficient (3) | Exemplary (4) |
|---|---|---|---|---|
| Architecture clarity | Boxes with no labels | Labeled but cluttered | Clean, readable, legend present | Layered diagram with phases clearly zoned |
| Graph correctness | Wrong node types | Correct nodes, wrong edges | Valid per workflow schema | Validates AND generates a diff against default variants |
| Node/edge fidelity | Fewer nodes than required | Required count met; generic | Each node has justification | Nodes tied to specific R.I.S.C.E.A.R. responsibilities |
| Phase zoning | FIND/CREATE/CRITIQUE not visible | Labeled phases | Phases color-coded and counted | Phases + sub-phases with transition rationale |
| Accessibility | No alt text, low contrast | Alt text present | Alt text + high contrast + legend | Full WCAG AA; text-only fallback included |
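A small check like the following can make the phase-zoning row mechanical to grade. The node structure (a list of dicts with `id` and `phase`) is an illustrative assumption rather than the actual workflow schema.

```python
# Minimal sketch for the Phase zoning dimension: every node in the submitted
# workflow should sit in exactly one of the three phases.
PHASES = {"FIND", "CREATE", "CRITIQUE"}

def unzoned_nodes(workflow: dict) -> list[str]:
    """Return the ids of nodes whose phase is missing or not a known phase."""
    return [node["id"] for node in workflow.get("nodes", [])
            if node.get("phase") not in PHASES]

example = {"nodes": [{"id": "survey", "phase": "FIND"},
                     {"id": "draft", "phase": "CREATE"},
                     {"id": "review", "phase": None}]}
print(unzoned_nodes(example))  # ['review'] -> phase zoning not yet Proficient
```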
## Rubric 5: Compliance Audit Report (EU AI Act Mapping)
Use after week 11 or in a dedicated compliance module.
| Dimension | Emerging (1) | Developing (2) | Proficient (3) | Exemplary (4) |
|---|---|---|---|---|
| Risk classification | Category not assigned | Assigned but not justified | Correct category + justification referencing Reg 2024/1689 articles | Classification with counter-scenarios and sensitivity analysis |
| Requirement mapping | < 50% of applicable requirements cited | 50-80% cited | All applicable cited with evidence | All cited + NIST AI RMF crosswalk + gap analysis |
| Evidence quality | Assertions without artifacts | Some artifacts | Every requirement has traceable evidence | Evidence graph built with compliance.evidence_graph |
| Remediation plan | None | Vague roadmap | Prioritized plan with owners | Plan + acceptance criteria + re-audit trigger |
| Reproducibility | Manual audit only | Semi-scripted | Uses `CompliancePipeline` | Fully automated audit integrated with CI |
Cross-links: `src/fcc/data/compliance/eu_ai_act_requirements.yaml`, `src/fcc/data/compliance/nist_ai_rmf_mapping.yaml`.
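The requirement-mapping bands can be computed mechanically once the applicable set is fixed. A minimal sketch follows, with placeholder requirement IDs and the assumption that the applicable set is derived from `eu_ai_act_requirements.yaml` for the system's risk category.

```python
# Minimal sketch for the Requirement mapping dimension: coverage is the share
# of applicable requirements that the report cites with evidence.
# Requirement IDs below are placeholders for illustration.

def coverage_level(applicable: set[str], cited: set[str]) -> str:
    fraction = len(applicable & cited) / len(applicable)
    if fraction == 1.0:
        return "Proficient or above (all applicable requirements cited)"
    if fraction >= 0.5:
        return "Developing (50-80% cited)"
    return "Emerging (< 50% cited)"

print(coverage_level({"art9", "art10", "art13", "art14"},
                     {"art9", "art13"}))  # 2 of 4 cited -> Developing
```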
## Rubric 6: Capstone Project
Use at end of semester (week 12).
| Dimension | Emerging (1) | Developing (2) | Proficient (3) | Exemplary (4) |
|---|---|---|---|---|
| Integration breadth | 1-2 subsystems used | 3-4 subsystems | 5-6 subsystems (personas, workflows, plugin, event bus, KG, docs) | All 7 required subsystems + at least one advanced (RAG/federation/compliance) |
| Custom personas | Template-only | 2 custom personas | 3+ custom, well-specified | 3+ with dimension profiles and discernment matrix |
| Custom plugin | Non-functional | Functional, single type | Functional + tested + documented | Cross-plugin orchestration (2+ plugin types interacting) |
| Event-bus evidence | No events | Events fire but unfiltered | >= 5 event types published and consumed | Full subscriber with filtering, serialization, and replay demo |
| Presentation | Unclear demo | Demo works, narrative weak | Clear demo + architecture walkthrough | Publication-quality narrative + design decisions + lessons learned |
Scoring guidance: a Capstone should earn >= 3 on every dimension to be considered Proficient overall. Dimensions at 1 should block the grade until revised; this communicates that integration is the point of the capstone.
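For the event-bus evidence row, graders can count distinct event types from a captured run. A minimal sketch, assuming one JSON event per line with a `type` field (adapt to the actual event serialization used in the capstone):

```python
# Minimal sketch for the Event-bus evidence dimension: count distinct event
# types in a captured run log. The JSON-lines format and "type" field are
# assumptions for illustration only.
import json

def distinct_event_types(log_path: str) -> set[str]:
    with open(log_path, encoding="utf-8") as fh:
        return {json.loads(line).get("type") for line in fh if line.strip()}

types = distinct_event_types("capstone_events.jsonl")  # hypothetical capture file
print(f"{len(types)} event types observed; Proficient requires >= 5")
```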
## Common pitfalls (across all rubrics)
- Using mock mode for a live demo -- fine for CI, confusing on stage. Require AI mode with `temperature=0` for capstone demos.
- No version pinning -- students should cite the FCC version explicitly.
- Missing cross-references -- reward student work that links back to `docs/for-beginners/`, guidebook chapters, and ADRs.
- Uneven depth -- a beautiful persona plus a toy workflow is a 2, not a 3.
## Related resources
- 12-week curriculum
- Assessment strategies
- Lab exercise bank
- Student workbook
- `src/fcc/data/governance/quality_gates.yaml`
- `src/fcc/data/docs/assessment_rubrics.yaml`