CSY105 Week 12 Beginner

Week Content

Programming for Security


Week Overview

Week 12 is your capstone: build a complete, professional security tool that demonstrates technical depth, responsible handling, and clear documentation.

  • Choose a capstone track aligned with your interests
  • Design a modular architecture and data flow
  • Implement, test, and document a 500+ line tool
  • Deliver a report, demo video, and ethical use agreement
  • Present findings and defend design choices

Ethical Warning: Capstone tools must only run on authorized systems. Violations of scope or data handling rules can lead to course failure and disciplinary action.

Real-World Context: Security teams evaluate tooling on reliability, evidence quality, and documentation clarity. The best tools are boring, safe, and predictable in production.

Section 1: Project Options and Scope

Choose One Capstone Track

Option | Focus | Core Modules
Network Security Scanner Suite | Port scans, service detection, reporting | Scanner, service profiler, CVE matcher, report builder
Web Application Security Tester | OWASP testing and crawler | Crawler, input validator, vulnerability checks, report
Security Operations Automation Platform | Logs, anomalies, alerting | Ingest, normalize, detect, enrich, dashboard
Incident Response Toolkit | Artifact collection and triage | Collector, IOC extractor, timeline, report

Scope Statement Template

Project Name: ____________________________________
Scope: Authorized lab-only testing on 192.168.56.0/24
Out of Scope: Internet targets, production data
Data Handling: Local storage only, redact PII
Success Criteria: Tool runs end-to-end with report output

Decision Matrix

Option | Interest | Difficulty | Tools Used | Score
Network |    4     |     3      |   5        | 12
Web     |    5     |     4      |   4        | 13
SOC     |    3     |     4      |   5        | 12
IR      |    4     |     3      |   4        | 11
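The Score column is simply the row sum of the criteria, with equal weights assumed. A quick sketch (values copied from the matrix above) makes the comparison explicit:

```python
# Decision-matrix scoring: each option's score is the sum of its
# criteria values (equal weights assumed, as in the matrix above).
options = {
    "Network": {"interest": 4, "difficulty": 3, "tools_used": 5},
    "Web": {"interest": 5, "difficulty": 4, "tools_used": 4},
    "SOC": {"interest": 3, "difficulty": 4, "tools_used": 5},
    "IR": {"interest": 4, "difficulty": 3, "tools_used": 4},
}

scores = {name: sum(values.values()) for name, values in options.items()}
best = max(scores, key=scores.get)  # "Web" scores 13, the highest
```

If you weight criteria differently (for example, interest counts double), multiply each value by its weight before summing; the structure stays the same.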

Section 2: Architecture and Data Flow

Modular Design Principles

  • Separate data collection, analysis, and reporting
  • Define clear interfaces between modules
  • Use typed result objects for consistency
  • Keep configuration in one place

Reference Architecture (Text Diagram)

[Input] --> [Collector] --> [Normalizer] --> [Analyzer] --> [Reporter]
   |             |                |                |              |
 config       logs             schema          alerts         report.md
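Mirroring the diagram, a minimal end-to-end wiring might look like the sketch below. All four stages are stubs standing in for the real modules, and the event fields (`host`, `message`) are invented for illustration:

```python
#!/usr/bin/env python3
"""
Minimal pipeline sketch: collector -> normalizer -> analyzer -> reporter.
"""
from __future__ import annotations

from typing import Dict, List


def collect() -> List[Dict[str, str]]:
    """Stub collector: a real module would read config-driven sources."""
    return [{"message": "Failed login", "host": "lab-01"}]


def normalize(events: List[Dict[str, str]]) -> List[Dict[str, str]]:
    """Stub normalizer: map raw events onto a common schema."""
    return [
        {"host": e.get("host", "unknown"), "message": e.get("message", "")}
        for e in events
    ]


def analyze(events: List[Dict[str, str]]) -> List[Dict[str, str]]:
    """Stub analyzer: flag events that look like failures."""
    return [e for e in events if "failed" in e["message"].lower()]


def report(alerts: List[Dict[str, str]]) -> str:
    """Stub reporter: render alerts as plain text lines."""
    return "\n".join(f"- {a['host']}: {a['message']}" for a in alerts)


if __name__ == "__main__":
    print(report(analyze(normalize(collect()))))
```

Because each stage only consumes the previous stage's output, you can replace any stub with a real module without touching the others.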

Project Skeleton Layout

capstone/
├── README.md
├── config/
│   └── settings.json
├── src/
│   ├── main.py
│   ├── core/
│   │   ├── config.py
│   │   ├── logging.py
│   │   └── results.py
│   ├── modules/
│   │   ├── collector.py
│   │   ├── analyzer.py
│   │   └── reporter.py
│   └── utils/
│       ├── validators.py
│       └── rate_limit.py
├── data/
├── reports/
└── tests/

Result Envelope (Typed)

#!/usr/bin/env python3
"""
Shared result envelope for consistent module outputs.
"""
from __future__ import annotations

from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class Result:
    """
    Standard result object for module outputs.
    """
    ok: bool
    data: Dict[str, str]
    error: Optional[str] = None

    def to_dict(self) -> Dict[str, object]:
        """Serialize result to a JSON-ready dict (native types preserved)."""
        return {
            "ok": self.ok,
            "data": self.data,
            "error": self.error or "",
        }


if __name__ == "__main__":
    print(Result(ok=True, data={"status": "ready"}).to_dict())

Section 3: Project Planning and Milestones

Milestone Plan (Example)

Day 1: Select project + define scope
Day 2: Build skeleton + config + logging
Day 3: Implement core modules
Day 4: Testing + report generation
Day 5: Documentation + demo video

Task Backlog Template (CSV)

id,task,owner,status,eta
1,Define scope,student,done,2h
2,Build config loader,student,in_progress,3h
3,Implement analyzer,student,todo,6h

Risk Register

Risk | Impact | Mitigation
Scope creep | Late delivery | Lock scope early
API rate limits | Missing enrichment | Cache results
Incomplete testing | Hidden bugs | Test plan + checklist

Section 4: Core Implementation Patterns

Configuration Loader

#!/usr/bin/env python3
"""
Load configuration with validation.
"""
from __future__ import annotations

import json
from dataclasses import dataclass
from typing import List


@dataclass
class Settings:
    """
    Project settings with safe defaults.
    """
    scope: List[str]
    timeout: int
    output_dir: str


def load_settings(path: str) -> Settings:
    """
    Read settings from JSON and validate.
    """
    with open(path, "r", encoding="utf-8") as handle:
        data = json.load(handle)

    scope = data.get("scope", [])
    if not isinstance(scope, list) or not scope:
        raise ValueError("scope must be a non-empty list")

    timeout = int(data.get("timeout", 5))
    if timeout < 1:
        raise ValueError("timeout must be positive")

    output_dir = data.get("output_dir", "./reports")
    return Settings(scope=scope, timeout=timeout, output_dir=output_dir)


if __name__ == "__main__":
    print(load_settings("config/settings.json"))

Structured Logging

#!/usr/bin/env python3
"""
Create structured JSON logs for auditability.
"""
from __future__ import annotations

import json
from datetime import datetime, timezone
from typing import Dict


def log_event(event_type: str, detail: Dict[str, str], path: str = "run.log") -> None:
    """
    Append a JSON log entry.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,
        "detail": detail,
    }
    with open(path, "a", encoding="utf-8") as handle:
        handle.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    log_event("startup", {"module": "main"})

Retry Wrapper

#!/usr/bin/env python3
"""
Retry wrapper for unstable operations.
"""
from __future__ import annotations

import time
from typing import Callable, TypeVar


T = TypeVar("T")


def retry(func: Callable[[], T], retries: int = 3, delay: float = 0.5) -> T:
    """
    Retry a function with backoff.
    """
    last_error: Exception | None = None
    for attempt in range(retries):
        try:
            return func()
        except Exception as exc:
            last_error = exc
            time.sleep(delay * (attempt + 1))
    raise RuntimeError(f"Operation failed after retries: {last_error}")


if __name__ == "__main__":
    print(retry(lambda: "ok"))

Section 5: Security Controls

Scope Allowlist Validator

#!/usr/bin/env python3
"""
Enforce scope allowlists for safety.
"""
from __future__ import annotations

from ipaddress import ip_network, ip_address
from typing import List


def in_scope(target: str, allowed: List[str]) -> bool:
    """
    Return True if target is in allowed scope.
    """
    target_ip = ip_address(target)
    for block in allowed:
        if target_ip in ip_network(block):
            return True
    return False


if __name__ == "__main__":
    allowed = ["192.168.56.0/24"]
    print(in_scope("192.168.56.10", allowed))

Rate Limiter (Token Bucket)

#!/usr/bin/env python3
"""
Rate-limit scanning actions to reduce risk.
"""
from __future__ import annotations

import time


class TokenBucket:
    """
    Simple token bucket implementation.
    """

    def __init__(self, rate: float, capacity: int) -> None:
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last_check = time.time()

    def consume(self, tokens: int = 1) -> bool:
        """Consume tokens if available."""
        now = time.time()
        delta = now - self.last_check
        self.tokens = min(self.capacity, self.tokens + delta * self.rate)
        self.last_check = now

        if self.tokens >= tokens:
            self.tokens -= tokens
            return True
        return False


if __name__ == "__main__":
    bucket = TokenBucket(rate=2, capacity=5)
    print(bucket.consume())

Safe Output Writer

#!/usr/bin/env python3
"""
Write outputs only to approved directories.
"""
from __future__ import annotations

from pathlib import Path
from typing import Dict


def safe_write(output_dir: str, filename: str, content: str) -> Path:
    """
    Write content to a safe output directory.
    """
    base = Path(output_dir).resolve()
    base.mkdir(parents=True, exist_ok=True)
    path = (base / filename).resolve()

    if base not in path.parents:
        raise ValueError("Unsafe output path")

    path.write_text(content, encoding="utf-8")
    return path


if __name__ == "__main__":
    safe_write("./reports", "summary.txt", "ok")

Section 6: Track-Specific Starter Kits

Option A: Network Scanner (Safe)

#!/usr/bin/env python3
"""
Simple TCP port scanner with allowlist and rate limiting.
"""
from __future__ import annotations

import socket
import time
from typing import Dict, List

from scope import in_scope
from rate_limit import TokenBucket


def scan_ports(target: str, ports: List[int], allowed: List[str]) -> Dict[int, str]:
    """
    Return port status for allowed targets only.
    """
    if not in_scope(target, allowed):
        raise ValueError("Target out of scope")

    results: Dict[int, str] = {}
    bucket = TokenBucket(rate=5, capacity=10)

    for port in ports:
        while not bucket.consume():
            time.sleep(0.05)  # yield briefly instead of busy-waiting for a token

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(0.5)
        try:
            sock.connect((target, port))
            results[port] = "open"
        except (socket.timeout, OSError):
            results[port] = "closed"
        finally:
            sock.close()

    return results


if __name__ == "__main__":
    print(scan_ports("192.168.56.10", [22, 80, 443], ["192.168.56.0/24"]))

Option B: Web Tester (Safe)

#!/usr/bin/env python3
"""
Basic URL crawler with scope limits.
"""
from __future__ import annotations

from typing import List, Set

import requests
from bs4 import BeautifulSoup


def crawl(start_url: str, max_pages: int = 20) -> List[str]:
    """
    Crawl a site for internal links.
    """
    visited: Set[str] = set()
    to_visit = [start_url]
    links: List[str] = []

    while to_visit and len(visited) < max_pages:
        url = to_visit.pop()
        if url in visited:
            continue
        visited.add(url)

        try:
            response = requests.get(url, timeout=5)
            response.raise_for_status()
        except requests.RequestException:
            continue

        soup = BeautifulSoup(response.text, "html.parser")
        for anchor in soup.find_all("a", href=True):
            href = anchor["href"]
            if href.startswith("/"):
                full = start_url.rstrip("/") + href
                if full not in visited:
                    to_visit.append(full)
                    links.append(full)

    return links


if __name__ == "__main__":
    print(crawl("http://localhost:8080"))

Option C: SOC Automation

#!/usr/bin/env python3
"""
Normalize and alert on suspicious log events.
"""
from __future__ import annotations

from typing import Dict, List


def normalize(event: Dict[str, str]) -> Dict[str, str]:
    """
    Normalize event to a consistent schema.
    """
    return {
        "timestamp": event.get("timestamp", ""),
        "host": event.get("host", "unknown"),
        "user": event.get("user", "unknown"),
        "message": event.get("message", ""),
    }


def detect_failed_login(event: Dict[str, str]) -> bool:
    """
    Detect failed login patterns.
    """
    return "failed" in event.get("message", "").lower()


def process_events(events: List[Dict[str, str]]) -> List[Dict[str, str]]:
    """
    Return alert list.
    """
    alerts = []
    for raw in events:
        event = normalize(raw)
        if detect_failed_login(event):
            alerts.append({"severity": "medium", "event": event})
    return alerts


if __name__ == "__main__":
    sample = [{"timestamp": "2026-01-20", "message": "Failed login"}]
    print(process_events(sample))

Option D: Incident Response Toolkit

#!/usr/bin/env python3
"""
Collect basic triage artifacts from a host.
"""
from __future__ import annotations

import json
import platform
from typing import Dict

import psutil


def collect_host_info() -> Dict[str, str]:
    """
    Return host metadata for triage.
    """
    return {
        "hostname": platform.node(),
        "os": platform.platform(),
        "users": str(len(psutil.users())),
        "processes": str(len(psutil.pids())),
    }


def save_report(info: Dict[str, str], path: str) -> None:
    """
    Save triage report as JSON.
    """
    with open(path, "w", encoding="utf-8") as handle:
        json.dump(info, handle, indent=2)


if __name__ == "__main__":
    save_report(collect_host_info(), "triage.json")

Section 7: Testing and Validation

Test Plan Checklist

  • Unit tests for core modules
  • Integration test for end-to-end workflow
  • Negative tests (invalid inputs, out-of-scope)
  • Performance check (time-limited run)

Simple Test Harness

#!/usr/bin/env python3
"""
Minimal test runner for core modules.
"""
from __future__ import annotations

from typing import Callable, Dict, List


def run_tests(tests: List[Callable[[], None]]) -> Dict[str, int]:
    """
    Execute tests and summarize results.
    """
    passed = 0
    failed = 0
    for test in tests:
        try:
            test()
            passed += 1
        except Exception as exc:
            failed += 1
            print(f"[FAIL] {test.__name__}: {exc}")
    return {"passed": passed, "failed": failed}


if __name__ == "__main__":
    def test_example() -> None:
        assert 1 + 1 == 2

    print(run_tests([test_example]))

Scope Enforcement Test

#!/usr/bin/env python3
"""
Validate scope enforcement logic.
"""
from __future__ import annotations

from scope import in_scope


def test_scope() -> None:
    """
    Assert in-scope behavior.
    """
    allowed = ["192.168.56.0/24"]
    assert in_scope("192.168.56.10", allowed) is True
    assert in_scope("8.8.8.8", allowed) is False


if __name__ == "__main__":
    test_scope()
    print("Scope tests passed")

Section 8: Reporting and Presentation

Executive Summary Template

Executive Summary:
- Tool purpose and scope
- Key findings or outputs
- Security impact and recommended actions
- Limitations and future work

HTML Report Generator

#!/usr/bin/env python3
"""
Generate a simple HTML report.
"""
from __future__ import annotations

from datetime import datetime, timezone
from typing import Dict


def build_report(findings: Dict[str, str]) -> str:
    """
    Build an HTML report string.
    """
    rows = "".join(
        f"<tr><td>{key}</td><td>{value}</td></tr>"
        for key, value in findings.items()
    )
    return f"""
    <html>
      <body>
        <h1>Capstone Report</h1>
        <p>Generated: {datetime.now(timezone.utc).isoformat()}</p>
        <table border='1'>
          <tr><th>Finding</th><th>Detail</th></tr>
          {rows}
        </table>
      </body>
    </html>
    """


if __name__ == "__main__":
    html = build_report({"alerts": "5", "errors": "0"})
    with open("report.html", "w", encoding="utf-8") as handle:
        handle.write(html)

Demo Video Outline

  • Introduce project and scope (30s)
  • Show tool architecture (60s)
  • Run tool end-to-end (2-4 min)
  • Explain results and limitations (1 min)

Section 9: Ethics and Compliance

Ethical Use Agreement (Excerpt)

I will only run this tool on authorized systems.
I will not store or exfiltrate sensitive data.
I will follow the defined scope and report issues responsibly.

Data Handling Rules

  • Redact PII before sharing reports
  • Store logs locally and delete after grading
  • Use hashes instead of raw sensitive values
  • Document consent and authorization
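One way to apply the hash rule above is a small redaction helper. The salt value and the 12-character truncation here are arbitrary choices for this sketch, not course requirements:

```python
#!/usr/bin/env python3
"""
Replace sensitive values with salted hash tokens before reporting.
"""
from __future__ import annotations

import hashlib


def redact(value: str, salt: str = "course-lab") -> str:
    """Return a short, salted SHA-256 token for a sensitive value."""
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
    return digest[:12]


if __name__ == "__main__":
    # The same input always maps to the same token, so findings stay
    # correlatable across a report without exposing the raw value.
    print(redact("alice@example.com"))
```

Keep the salt out of shared reports; without it, tokens for guessable values (usernames, emails) could be reversed by brute force.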

Section 10: Option Deep Dive - Network Scanner Suite

Module Checklist

  • Port scanner with rate limiting
  • Service banner detection
  • Local CVE matcher (offline JSON)
  • HTML/Markdown reporting
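The checklist calls for HTML/Markdown reporting; an HTML generator appears in Section 8, and a Markdown counterpart might look like the sketch below (the function name and table layout are this sketch's own choices):

```python
#!/usr/bin/env python3
"""
Render port-scan findings as a Markdown table.
"""
from __future__ import annotations

from typing import Dict


def build_markdown_report(findings: Dict[int, str]) -> str:
    """Return a Markdown report string for port -> status findings."""
    lines = [
        "# Scan Report",
        "",
        "| Port | Status |",
        "| ---- | ------ |",
    ]
    for port, status in sorted(findings.items()):
        lines.append(f"| {port} | {status} |")
    return "\n".join(lines)


if __name__ == "__main__":
    print(build_markdown_report({22: "open", 80: "closed"}))
```

Markdown output is easy to diff and review in version control, which is why many teams generate it alongside HTML.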

Banner Grabber (Safe)

#!/usr/bin/env python3
"""
Grab a banner from a TCP service for identification.
"""
from __future__ import annotations

import socket
from typing import Optional


def grab_banner(host: str, port: int, timeout: float = 1.0) -> Optional[str]:
    """
    Return a service banner or None if unavailable.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"\r\n")  # nudge the service to send its banner
            data = sock.recv(1024)
            return data.decode(errors="ignore").strip()
    except OSError:
        return None


if __name__ == "__main__":
    print(grab_banner("192.168.56.10", 22))

Offline CVE Matcher (JSON)

#!/usr/bin/env python3
"""
Match service banners to a local CVE feed.
"""
from __future__ import annotations

import json
from typing import Dict, List


def load_cves(path: str) -> List[Dict[str, str]]:
    """
    Load CVE data from a local JSON file.
    """
    with open(path, "r", encoding="utf-8") as handle:
        return json.load(handle)


def match_cves(banner: str, cves: List[Dict[str, str]]) -> List[Dict[str, str]]:
    """
    Return CVEs that match a banner substring.
    """
    hits = []
    for entry in cves:
        if entry.get("product", "").lower() in banner.lower():
            hits.append(entry)
    return hits


if __name__ == "__main__":
    data = load_cves("cves.json")
    print(match_cves("OpenSSH_8.2", data))

Section 11: Option Deep Dive - Web Application Tester

Module Checklist

  • Crawler with scope limits
  • Input parameter discovery
  • OWASP checks (safe, lab-only)
  • Report with evidence snippets

Parameter Extractor

#!/usr/bin/env python3
"""
Extract query parameters from URLs for testing.
"""
from __future__ import annotations

from urllib.parse import urlparse, parse_qs
from typing import Dict, List


def extract_params(url: str) -> Dict[str, List[str]]:
    """
    Return query parameters as a dict.
    """
    parsed = urlparse(url)
    return parse_qs(parsed.query)


if __name__ == "__main__":
    print(extract_params("http://localhost/search?q=test&lang=en"))

Safe Injection Probe (Lab Only)

#!/usr/bin/env python3
"""
Send a benign test payload and look for reflection.
"""
from __future__ import annotations

import requests
from typing import Dict


def reflection_probe(url: str, param: str) -> Dict[str, str]:
    """
    Test a parameter for reflected output (non-exploitative).
    """
    payload = "TEST_REFLECT"
    try:
        response = requests.get(url, params={param: payload}, timeout=5)
        response.raise_for_status()
    except requests.RequestException as exc:
        return {"error": str(exc)}

    if payload in response.text:
        return {"status": "reflected", "param": param}
    return {"status": "not_reflected", "param": param}


if __name__ == "__main__":
    print(reflection_probe("http://localhost/search", "q"))

Section 12: Option Deep Dive - SOC Automation

Module Checklist

  • Multi-source log ingestion
  • Normalization to common schema
  • Baseline anomaly detection
  • Alert enrichment and dashboard
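Enrichment can be sketched as joining alerts against a local asset inventory. The `ASSETS` table and its fields below are invented for illustration; a real tool might load this mapping from config:

```python
#!/usr/bin/env python3
"""
Enrich alerts with local asset context.
"""
from __future__ import annotations

from typing import Dict

# Hypothetical asset inventory; real data would come from config or a CMDB export.
ASSETS: Dict[str, Dict[str, str]] = {
    "lab-01": {"owner": "blue-team", "criticality": "high"},
}


def enrich(alert: Dict[str, str]) -> Dict[str, str]:
    """Attach asset context to an alert; unknown hosts get safe defaults."""
    asset = ASSETS.get(
        alert.get("host", ""),
        {"owner": "unknown", "criticality": "low"},
    )
    return {**alert, **asset}


if __name__ == "__main__":
    print(enrich({"host": "lab-01", "severity": "medium"}))
```

Enriched fields like criticality let the dashboard sort alerts by business impact rather than raw severity alone.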

Baseline Builder

#!/usr/bin/env python3
"""
Build a baseline for failed logins per user.
"""
from __future__ import annotations

import pandas as pd


def build_baseline(path: str) -> pd.DataFrame:
    """
    Load CSV and compute per-user baseline.
    """
    df = pd.read_csv(path)
    df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce")
    df = df.dropna(subset=["timestamp"])

    failures = df[df["outcome"] == "failed"]
    baseline = failures.groupby("username").size().reset_index(name="failed_count")
    return baseline.sort_values("failed_count", ascending=False)


if __name__ == "__main__":
    print(build_baseline("normalized_logins.csv").head())

Alert Aggregator

#!/usr/bin/env python3
"""
Aggregate alerts by severity for reporting.
"""
from __future__ import annotations

from collections import Counter
from typing import Dict, List


def summarize_alerts(alerts: List[Dict[str, str]]) -> Dict[str, int]:
    """
    Return severity counts.
    """
    counter = Counter(a.get("severity", "unknown") for a in alerts)
    return dict(counter)


if __name__ == "__main__":
    print(summarize_alerts([{"severity": "high"}, {"severity": "low"}]))

Section 13: Option Deep Dive - Incident Response Toolkit

Module Checklist

  • System info and process snapshot
  • File hash collection
  • IOC extraction
  • Timeline report
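For the IOC extraction item, a minimal regex pass over collected text can pull IPv4-style indicators. This is a sketch only: real toolkits cover many more IOC types (domains, hashes, URLs), and this pattern does not validate octet ranges:

```python
#!/usr/bin/env python3
"""
Extract IPv4-style indicators from collected text.
"""
from __future__ import annotations

import re
from typing import List

# Matches dotted-quad shapes; does not reject octets above 255.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")


def extract_ips(text: str) -> List[str]:
    """Return unique IPv4-looking strings in first-seen order."""
    seen: List[str] = []
    for match in IPV4.findall(text):
        if match not in seen:
            seen.append(match)
    return seen


if __name__ == "__main__":
    print(extract_ips("Beacon to 10.0.0.5 then 10.0.0.5 and 192.168.56.10"))
```

Preserving first-seen order keeps extracted indicators aligned with the timeline report.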

File Hash Collector

#!/usr/bin/env python3
"""
Hash files in a directory for triage.
"""
from __future__ import annotations

import hashlib
from pathlib import Path
from typing import Dict


def sha256_file(path: Path) -> str:
    """
    Return SHA256 hash for a file.
    """
    hasher = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            hasher.update(chunk)
    return hasher.hexdigest()


def collect_hashes(root: str) -> Dict[str, str]:
    """
    Return dict of file path -> hash.
    """
    results = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                results[str(path)] = sha256_file(path)
            except OSError:
                continue
    return results


if __name__ == "__main__":
    print(collect_hashes("./samples"))

Timeline Generator (CSV)

#!/usr/bin/env python3
"""
Create a file activity timeline from metadata.
"""
from __future__ import annotations

from pathlib import Path
from typing import List, Dict


def build_timeline(root: str) -> List[Dict[str, str]]:
    """
    Return list of file events with timestamps.
    """
    events = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            stat = path.stat()
            events.append({
                "file": str(path),
                "modified": str(stat.st_mtime),
                # On POSIX systems st_ctime is inode change time, not creation time.
                "created": str(stat.st_ctime),
            })
        except OSError:
            continue
    return events


if __name__ == "__main__":
    print(build_timeline("./samples")[:3])

Section 14: Metrics and Visualization

Quick KPI Summary

  • Total targets scanned
  • Alerts generated
  • Critical findings
  • Average runtime

Chart Example (Matplotlib)

#!/usr/bin/env python3
"""
Plot findings by severity.
"""
from __future__ import annotations

import matplotlib.pyplot as plt


def plot_severity(counts: dict) -> None:
    """
    Plot bar chart of severity counts.
    """
    labels = list(counts.keys())
    values = list(counts.values())
    plt.bar(labels, values)
    plt.title("Findings by Severity")
    plt.xlabel("Severity")
    plt.ylabel("Count")
    plt.tight_layout()
    plt.savefig("severity_chart.png")


if __name__ == "__main__":
    plot_severity({"low": 5, "medium": 3, "high": 1})

Section 15: Packaging and Release

CLI Entrypoint

#!/usr/bin/env python3
"""
Command-line interface for the capstone tool.
"""
from __future__ import annotations

import argparse


def build_parser() -> argparse.ArgumentParser:
    """
    Build CLI parser with subcommands.
    """
    parser = argparse.ArgumentParser(description="Capstone security tool")
    parser.add_argument("--config", default="config/settings.json")
    sub = parser.add_subparsers(dest="command", required=True)
    sub.add_parser("scan")
    sub.add_parser("report")
    return parser


def main() -> None:
    """
    Run CLI.
    """
    parser = build_parser()
    args = parser.parse_args()
    print(f"Command: {args.command}")


if __name__ == "__main__":
    main()

requirements.txt Example

requests==2.32.3
psutil==6.0.0
pandas==2.2.2
matplotlib==3.9.0

Virtual Environment Steps

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

Versioning Note

  • Use semantic versioning: MAJOR.MINOR.PATCH
  • Tag releases after each milestone
  • Document changes in a CHANGELOG

Section 16: Rubric and Submission Checklist

Evaluation Criteria

Category | Weight | Evidence
Functionality | 40% | Working tool demo
Code quality | 20% | Linting, structure
Documentation | 15% | README + report
Security practices | 15% | Scope, logs, controls
Presentation | 10% | Demo video

Submission Checklist

  • Tool runs end-to-end with no errors
  • README includes setup, usage, and scope
  • Report generated with evidence
  • Demo video uploaded
  • Ethical use agreement signed

Section 17: Common Pitfalls and Fixes

Frequent Capstone Issues

  • Unbounded scans: Always enforce scope allowlists
  • Silent failures: Log errors and exceptions with context
  • Missing evidence: Save raw outputs and hashes
  • Overcomplex scope: Cut features if delivery risk is high

Health Check Endpoint

#!/usr/bin/env python3
"""
Add a basic health check to validate readiness.
"""
from __future__ import annotations

from datetime import datetime, timezone
from typing import Dict


def health_check() -> Dict[str, str]:
    """
    Return readiness information.
    """
    return {
        "status": "ok",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }


if __name__ == "__main__":
    print(health_check())

Lab 12: Capstone Build Sprint (Project Week)

Lab Safety: Your tool must only target authorized lab systems. Do not scan the public internet.

Lab Part 1: Proposal and Scope (20-30 min)

Objective: Choose a capstone track and define scope.

Requirements:

  • Select one of the four project options
  • Write a scope statement and decision matrix
  • Define success criteria and limitations

Success Criteria: Proposal approved by instructor.

Hint: Scope statement
Scope: Lab-only 192.168.56.0/24
Out of Scope: Production systems, public IPs
Data: Logs stored locally, PII redacted

Lab Part 2: Architecture & Skeleton (30-40 min)

Objective: Build the project skeleton and core modules.

Requirements:

  • Create config loader and logging
  • Define module interfaces and results
  • Stub collector, analyzer, reporter

Success Criteria: Running main.py executes the pipeline with mock data.

Hint: Result envelope
result = Result(ok=True, data={"status": "mock"})
print(result.to_dict())

Lab Part 3: Implementation & Testing (60-90 min)

Objective: Implement core features and validate behavior.

Requirements:

  • Complete collection and analysis modules
  • Enforce allowlists and rate limiting
  • Create unit tests for core logic

Success Criteria: Tests pass and output is stable.

Hint: Test plan
- invalid target rejected
- out-of-scope IP blocked
- report generated successfully

Lab Part 4: Documentation & Demo (30-45 min)

Objective: Deliver professional documentation and demo.

Requirements:

  • Write README with setup and usage
  • Generate HTML/Markdown report
  • Record a 5-10 min demo video

Success Criteria: Project package is presentation-ready.

Hint: README outline
# Project Title
## Overview
## Installation
## Usage
## Safety and Scope
## Outputs

Stretch Challenges (Optional)

  • Implement a plugin system for new checks
  • Add CSV export alongside HTML reports
  • Build a simple web dashboard for results

Hint: Plugin registry
PLUGINS = {}

def register(name: str, handler) -> None:
    PLUGINS[name] = handler

Capstone Complete! You have delivered a full security tool with documentation, tests, and ethical controls.

Deliverables:

  • Working tool (500+ lines of code)
  • Professional documentation (README + report)
  • Demo video (5-10 minutes)
  • Presentation slides (if required)
  • Ethical use agreement

Additional Resources

Presentation Tips

  • Start with the problem statement
  • Show architecture before the demo
  • Explain limitations and future work

Key Takeaways

  • Capstone tools must be safe, scoped, and auditable
  • Documentation and evidence are as important as code
  • Testing and validation protect users and systems
  • Clear reporting drives actionable security outcomes
  • Ethical controls are mandatory for offensive capabilities

Week 12 Quiz

Test your understanding of capstone project requirements.

Format: 10 multiple-choice questions. Passing score: 70%. Time: Untimed.
