DevOps Deployment Chain

Duration: 60 minutes | Difficulty: Intermediate | Pattern: Sequential Chain

This scenario demonstrates a CI/CD pipeline for ML model deployment, from user story definition through pipeline construction, deployment automation, and monitoring setup.

Scenario Overview

Problem: A trained ML model needs to be deployed to production with a CI/CD pipeline, automated testing, deployment scripting, and observability. The deployment must be repeatable and auditable.

Goal: Execute a four-persona DevOps chain that produces a deployment specification, CI/CD pipeline, automation scripts, and monitoring configuration.

Persona Team

Persona                   ID    Role                                              Category
User Stories Specialist   JUS   Defines deployment requirements as user stories   app_development
Pipeline Builder          PBD   Constructs CI/CD pipeline definitions             devops
DevOps Engineer           DVE   Implements deployment infrastructure              devops
Automation Scripter       ASC   Creates automation and monitoring scripts         devops
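The chain runs strictly left to right: each persona's output becomes the next persona's input. A minimal sketch of that hand-off order (the persona IDs come from the table above; the pairing helper is illustrative, not part of the FCC API):

```python
# Sequential chain order, matching the sender/receiver fields used below.
CHAIN = ["JUS", "PBD", "DVE", "ASC"]

def handoffs(chain):
    """Return (sender, receiver) pairs for a sequential chain,
    starting from the orchestrator."""
    senders = ["orchestrator"] + chain[:-1]
    return list(zip(senders, chain))

print(handoffs(CHAIN))
# [('orchestrator', 'JUS'), ('JUS', 'PBD'), ('PBD', 'DVE'), ('DVE', 'ASC')]
```

These pairs correspond exactly to the `sender`/`receiver` values in the four messages constructed in Phases 1-4.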

Setup

from fcc.personas.registry import PersonaRegistry
from fcc.simulation.engine import SimulationEngine
from fcc.simulation.messages import SimulationMessage
from fcc.messaging.bus import EventBus
from fcc.messaging.events import Event, EventType

registry = PersonaRegistry.from_yaml_directory("src/fcc/data/personas")
bus = EventBus()
engine = SimulationEngine(registry=registry, mode="deterministic")

deployment_context = {
    "model": "churn_predictor_v2",
    "framework": "scikit-learn",
    "serving": "FastAPI + Docker",
    "target_env": "Kubernetes (GKE)",
    "latency_sla": "< 100ms p99",
    "availability_sla": "99.9%",
}

Phase 1: Deployment Requirements

The User Stories Specialist defines deployment requirements:

jus_message = SimulationMessage(
    sender="orchestrator",
    receiver="JUS",
    content=(
        f"Define user stories for deploying: {deployment_context['model']}\n"
        f"Framework: {deployment_context['framework']}\n"
        f"Serving: {deployment_context['serving']}\n"
        f"Target: {deployment_context['target_env']}\n"
        f"SLAs: {deployment_context['latency_sla']} latency, "
        f"{deployment_context['availability_sla']} availability\n\n"
        "Write user stories covering:\n"
        "- As a data scientist, I want automated model packaging\n"
        "- As an ML engineer, I want blue-green deployments\n"
        "- As an SRE, I want automated health checks\n"
        "- As a product owner, I want deployment rollback capability\n"
        "Include acceptance criteria for each story."
    ),
    phase="find",
)

user_stories = engine.step(jus_message)
print(f"User Stories: {len(user_stories.content)} chars")

Phase 2: CI/CD Pipeline Construction

The Pipeline Builder creates the pipeline definition:

pbd_message = SimulationMessage(
    sender="JUS",
    receiver="PBD",
    content=(
        f"Build a CI/CD pipeline based on these requirements:\n\n"
        f"{user_stories.content[:500]}\n\n"
        "Pipeline stages:\n"
        "1. Lint and unit tests (pytest, ruff)\n"
        "2. Model validation (input/output schema check)\n"
        "3. Integration tests (API endpoint tests)\n"
        "4. Container build (Docker multi-stage)\n"
        "5. Security scan (container + dependency)\n"
        "6. Staging deployment and smoke tests\n"
        "7. Production canary deployment\n"
        "8. Full production rollout\n\n"
        "Produce: GitHub Actions workflow YAML, stage definitions, "
        "and artifact specifications."
    ),
    phase="create",
)

pipeline_spec = engine.step(pbd_message)
print(f"Pipeline Spec: {len(pipeline_spec.content)} chars")
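The eight stages above carry ordering invariants, for example that the security scan must pass before anything reaches staging. One way to make such invariants checkable is to encode the stages as data; a sketch (the stage names and helper are illustrative, not part of FCC):

```python
# Stage order from the pipeline prompt above, as a checkable list.
STAGES = [
    "lint_and_unit_tests",
    "model_validation",
    "integration_tests",
    "container_build",
    "security_scan",
    "staging_deploy",
    "canary_deploy",
    "production_rollout",
]

def precedes(stages, earlier, later):
    """True if `earlier` runs before `later` in the stage list."""
    return stages.index(earlier) < stages.index(later)

# Invariants implied by the prompt: scan before staging, canary before rollout.
assert precedes(STAGES, "security_scan", "staging_deploy")
assert precedes(STAGES, "canary_deploy", "production_rollout")
```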

Phase 3: Deployment Infrastructure

The DevOps Engineer implements the deployment infrastructure:

dve_message = SimulationMessage(
    sender="PBD",
    receiver="DVE",
    content=(
        f"Implement the deployment infrastructure for:\n"
        f"Target: {deployment_context['target_env']}\n\n"
        f"Pipeline spec:\n{pipeline_spec.content[:500]}\n\n"
        "Implement:\n"
        "- Dockerfile (multi-stage, minimal image)\n"
        "- Kubernetes manifests (Deployment, Service, HPA)\n"
        "- Helm chart for parameterized deployment\n"
        "- Blue-green deployment strategy\n"
        "- Secrets management (external-secrets-operator)\n"
        "- Network policies and RBAC\n"
        "Produce: Infrastructure-as-Code specifications."
    ),
    phase="create",
)

infra_spec = engine.step(dve_message)
print(f"Infrastructure Spec: {len(infra_spec.content)} chars")
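The blue-green strategy requested above keeps two identical environments and routes traffic to the idle one only after its health check passes. A minimal sketch of that switch decision (hypothetical helper, not part of the generated infrastructure spec):

```python
# Blue-green switch: promote the idle color only when it is healthy;
# otherwise keep traffic on the currently active color.
def switch(active, idle, idle_healthy):
    """Return the color that should receive production traffic."""
    return idle if idle_healthy else active

print(switch("blue", "green", idle_healthy=True))   # green
print(switch("blue", "green", idle_healthy=False))  # blue
```

The real traffic flip would happen at the Kubernetes Service selector or ingress level; this sketch only captures the decision rule.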

Phase 4: Automation and Monitoring

The Automation Scripter creates monitoring and automation:

asc_message = SimulationMessage(
    sender="DVE",
    receiver="ASC",
    content=(
        f"Create automation scripts and monitoring for:\n"
        f"Model: {deployment_context['model']}\n"
        f"Latency SLA: {deployment_context['latency_sla']}\n"
        f"Availability SLA: {deployment_context['availability_sla']}\n\n"
        f"Infrastructure:\n{infra_spec.content[:500]}\n\n"
        "Create:\n"
        "- Health check scripts (liveness, readiness probes)\n"
        "- Prometheus metrics exporter configuration\n"
        "- Grafana dashboard definitions (latency, throughput, errors)\n"
        "- Alert rules (SLA violations, model drift, error rate)\n"
        "- Automated rollback script (triggered by alert)\n"
        "- Log aggregation configuration (structured JSON logging)\n"
        "Produce: Monitoring-as-Code specifications."
    ),
    phase="create",
)

monitoring_spec = engine.step(asc_message)
print(f"Monitoring Spec: {len(monitoring_spec.content)} chars")
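The alert rules above gate on the "< 100ms p99" latency SLA. A sketch of the underlying check using the standard nearest-rank percentile (the sample latencies are made up for illustration):

```python
import math

def p99(latencies_ms):
    """Nearest-rank p99: the smallest sample with at least 99% of
    observations at or below it."""
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.99 * len(ordered))
    return ordered[rank - 1]

samples = [12, 15, 18, 22, 30, 45, 60, 80, 95, 98]
print(p99(samples))        # 98
print(p99(samples) < 100)  # True -> SLA met
```

In production this comparison would live in a Prometheus alert rule over a latency histogram rather than in application code; the sketch just shows the arithmetic the rule encodes.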

Deployment Readiness Assessment

from fcc.collaboration.scoring import ScoringEngine

scorer = ScoringEngine()

readiness_scores = {
    "requirements": scorer.score_text(user_stories.content),
    "pipeline": scorer.score_text(pipeline_spec.content),
    "infrastructure": scorer.score_text(infra_spec.content),
    "monitoring": scorer.score_text(monitoring_spec.content),
}

overall = sum(readiness_scores.values()) / len(readiness_scores)

print("\nDeployment Readiness:")
for area, score in readiness_scores.items():
    status = "READY" if score >= 0.6 else "NEEDS WORK"
    print(f"  {area}: {score:.2f} [{status}]")
print(f"  Overall: {overall:.2f}")

# Deployment decision: the gate threshold (0.5) is deliberately looser
# than the per-area READY bar (0.6), so areas marked NEEDS WORK can
# still pass the gate if they clear 0.5.
deployment_ready = all(s >= 0.5 for s in readiness_scores.values())
print(f"\nDeployment {'APPROVED' if deployment_ready else 'BLOCKED'}")

bus.publish(Event(
    event_type=EventType.COLLABORATION_GATE_DECIDED,
    source="devops_deployment",
    payload={
        "model": deployment_context["model"],
        "ready": deployment_ready,
        "scores": readiness_scores,
    },
))

Deployment Summary

import json

deployment_summary = {
    "model": deployment_context["model"],
    "target": deployment_context["target_env"],
    "pipeline_stages": 8,
    "personas_involved": ["JUS", "PBD", "DVE", "ASC"],
    "artifacts": {
        "user_stories": len(user_stories.content),
        "ci_cd_pipeline": len(pipeline_spec.content),
        "infrastructure": len(infra_spec.content),
        "monitoring": len(monitoring_spec.content),
    },
    "readiness": readiness_scores,
    "deployment_ready": deployment_ready,
}

print(json.dumps(deployment_summary, indent=2))

Exercises

  1. Add rollback scenario: Simulate a production alert triggering the automated rollback script.
  2. Multi-environment: Extend the pipeline to deploy to dev, staging, and production with environment-specific configurations.
  3. Governance gate: Add a governance review step between staging and production deployment.
  4. Observability integration: Use the FCC observability module (fcc.observability) to trace the deployment workflow itself.
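A starting point for Exercise 1: feed a synthetic firing alert into a rollback decision. The alert shape and helper are hypothetical, not FCC API; the alert names mirror the rules requested in Phase 4.

```python
# Rollback trigger sketch: roll back only on firing alerts from the
# rule set defined in the monitoring spec (SLA, error rate, drift).
ROLLBACK_ALERTS = {"latency_sla_violation", "error_rate_high", "model_drift"}

def should_rollback(alert):
    """True if this alert should trigger the automated rollback script."""
    return alert["state"] == "firing" and alert["name"] in ROLLBACK_ALERTS

alert = {"name": "latency_sla_violation", "state": "firing"}
print(should_rollback(alert))  # True
```

From here, the exercise would wire `should_rollback` to a `SimulationMessage` sent back to DVE requesting the rollback procedure.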

Summary

In this scenario you executed a DevOps deployment chain:

  • JUS defined deployment requirements as user stories with acceptance criteria
  • PBD constructed an 8-stage CI/CD pipeline (GitHub Actions)
  • DVE implemented Kubernetes deployment infrastructure with Helm charts
  • ASC created monitoring, alerting, and automated rollback scripts
  • A readiness assessment evaluated deployment preparedness

Next Steps