
Week 11 Quiz

Test your understanding of the weekly concepts.

Format: 10 multiple-choice questions. Passing score: 70%. Time: Untimed.

CSY304 Week 11 Advanced


IoT & Embedded Systems Security


Opening Framing

After ten weeks of breaking IoT devices, we shift perspective to building them securely. We know attacks work because developers make mistakes—buffer overflows, hardcoded keys, cleartext protocols.

The Shift Left: Traditional security testing happens at the end (penetration testing). This is too late. Fixing a bug at design time costs $1, fixing it in code costs $10, and fixing it in production costs $10,000 (product recall). We must move security to the left of the timeline.

Week Learning Outcomes:
  • Architect a Secure Boot Chain from ROM to User Space.
  • Implement a DevSecOps pipeline for embedded C/C++ (SAST/DAST).
  • Fuzz protocols using AFL++ to find zero-day crashes.
  • Generate an SBOM (Software Bill of Materials) to track supply chain risks.
  • Sign firmware packages using asymmetric cryptography (PKI).

1) The Secure Boot Chain: Foundation of Trust

If you cannot trust the code running on the device, nothing else matters. Secure boot ensures that only authorized (signed) code executes.


[ THE CHAIN OF TRUST ]

      (Power On)
          |
    [ BOOT ROM ] (Immutable, Hardware Root of Trust)
          |      Verifies Stage 1 Signature (Public Key Hash in Fuses)
          v
    [ STAGE 1 BOOTLOADER ] (SPL / FSBL)
          |      Verifies Stage 2 Signature (U-Boot/LittleKernel)
          v
    [ STAGE 2 BOOTLOADER ] (U-Boot)
          |      Verifies Kernel Signature (ITB/FIT Image)
          v
    [ LINUX KERNEL ]
          |      Verifies Root Filesystem (dm-verity)
          v
    [ USER SPACE ] (Application)
                    

Key Concepts

  • Root of Trust (RoT): The component that is implicitly trusted. Usually Mask ROM + eFuse (OTP).
  • eFuse (One-Time Programmable): Physical fuses burned during manufacturing. Contains the hash of the Public Key. If the firmware signature doesn't match this hash, the CPU halts.
  • Anti-Rollback: An eFuse counter prevents downgrading to an older, vulnerable firmware version.
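The chain above can be modeled in a few lines. This is a toy Python sketch of the verify step each stage performs, with a stand-in signature check (a real boot ROM does an RSA/ECDSA verify in immutable mask-ROM code; all names here are illustrative):

```python
import hashlib

# Toy model of one boot-stage verification (illustration only; NOT real crypto).

def burn_fuses(public_key: bytes) -> bytes:
    """At manufacturing: store the SHA-256 of the public key in OTP fuses."""
    return hashlib.sha256(public_key).digest()

def verify_next_stage(fuse_hash, public_key, image, signature, verify_sig):
    # 1. The key shipped alongside the image must match the fused hash.
    if hashlib.sha256(public_key).digest() != fuse_hash:
        return False                      # wrong key -> halt boot
    # 2. The signature over the image must check out under that key.
    return verify_sig(public_key, image, signature)

# Stand-in "signature" = hash of key||image (placeholder for RSA/ECDSA verify).
fake_verify = lambda pk, img, sig: sig == hashlib.sha256(pk + img).digest()

pk = b"demo-public-key"
fuses = burn_fuses(pk)
img = b"stage1-bootloader"
sig = hashlib.sha256(pk + img).digest()
assert verify_next_stage(fuses, pk, img, sig, fake_verify)
assert not verify_next_stage(fuses, b"attacker-key", img, sig, fake_verify)
```

The key point: only the *hash* of the public key lives in fuses, so the (larger) key itself can sit in mutable flash while remaining tamper-evident.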

The Signing Ceremony

How do we actually sign firmware? We need an offline root CA: the signing private key never touches a networked machine (ideally it lives in an HSM).

OpenSSL Signing Workflow

# 1. Generate Private Key (Keep this OFFLINE/HSM!)
openssl genrsa -out private_key.pem 2048

# 2. Extract Public Key (Embed this in Device/Bootloader)
openssl rsa -in private_key.pem -pubout -out public_key.pem

# 3. Hash the Firmware (informational -- the sign step below hashes internally)
sha256sum firmware.bin > firmware.hash

# 4. Sign the Hash (Create Signature)
openssl dgst -sha256 -sign private_key.pem -out firmware.sig firmware.bin

# 5. Verify (On Device)
openssl dgst -sha256 -verify public_key.pem -signature firmware.sig firmware.bin
                    

2) Hardware Security Modules (HSM) vs TPM vs SE

Where do you store the secrets? (AWS Keys, Private Keys).

| Component                            | Cost    | Use Case                  | Strengths                                               |
| ------------------------------------ | ------- | ------------------------- | ------------------------------------------------------- |
| TPM 2.0                              | $1-3    | PC/Server/Gateway         | Standardized API, rich feature set (PCRs).              |
| Secure Element (SE)                  | $0.50   | IoT endpoints (ATECC608A) | Cheap, ultra-low power, I2C interface.                  |
| PUF (Physically Unclonable Function) | $$$     | High assurance            | Keys derived from silicon physics; impossible to clone. |
| TrustZone (TEE)                      | $0 (IP) | ARM Cortex-A/M            | Virtual separation (Secure World vs Normal World).      |

Rule: Never store keys in the filesystem (even if encrypted). Store them in hardware.

3) The DevSecOps Pipeline

Automate security checks into the CI/CD pipeline: security scales only if it is automated.

A) Secret Scanning (Pre-Commit)

Stop developers from committing AWS keys.


# Tools: TruffleHog, GitLeaks
# Example Gitleaks config (.gitleaks.toml)
[[rules]]
description = "AWS Access Key"
regex = '''(A3T[A-Z0-9]|AKIA|AGPA|AIDA|AROA|AIPA|ANPA|ANVA|ASIA)[A-Z0-9]{16}'''
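The regex is worth sanity-checking before the pipeline trusts it. A quick test using the same pattern (the key below is AWS's well-known documentation example key, not a live credential):

```python
import re

# Same pattern as the Gitleaks rule above.
pattern = re.compile(
    r'(A3T[A-Z0-9]|AKIA|AGPA|AIDA|AROA|AIPA|ANPA|ANVA|ASIA)[A-Z0-9]{16}')

# AKIAIOSFODNN7EXAMPLE is the example access key ID from AWS docs.
assert pattern.search("aws_key = AKIAIOSFODNN7EXAMPLE")
assert pattern.search("no credentials here") is None
```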
                    

B) SAST (Static Application Security Testing)

Analyzes source code without running it.

Semgrep Rule (Banned Functions)

rules:
  - id: ban-strcpy
    patterns:
      - pattern: strcpy(...)
    message: "Use strncpy or strlcpy instead. strcpy is unsafe."
    languages: [c, cpp]
    severity: ERROR
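Conceptually, the rule is pattern matching over source code. A naive grep-style stand-in makes the idea concrete (illustrative Python; real SAST like Semgrep matches on the parsed AST, so it is not fooled by formatting tricks):

```python
import re

# Naive stand-in for the banned-function rule: flag strcpy() calls by line.
BANNED = re.compile(r'\bstrcpy\s*\(')

def scan(source: str) -> list[int]:
    """Return 1-based line numbers containing a banned call."""
    return [i + 1 for i, line in enumerate(source.splitlines())
            if BANNED.search(line)]

c_code = '''#include <string.h>
void f(char *in) {
    char buf[50];
    strcpy(buf, in);   /* should be flagged */
}'''
assert scan(c_code) == [4]
```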
                    

C) Automated Pipeline Example (GitLab CI)

.gitlab-ci.yml

stages:
  - build
  - test
  - security

compile_firmware:
  stage: build
  script:
    - make all

static_analysis:
  stage: security
  image: returntocorp/semgrep
  script:
    - semgrep --config=p/c --error
  allow_failure: false  # Break build on security fail

secret_detection:
  stage: security
  image: zricethezav/gitleaks
  script:
    - gitleaks detect --source=. -v

dependency_scan:
  stage: security
  image: aquasec/trivy
  script:
    - trivy fs . --scanners vuln
                    

D) SCA & SBOM

Problem: Your code is 10% yours, 90% open source. The "Log4j" problem.

Solution: SBOM (Software Bill of Materials).

Example SBOM (CycloneDX JSON)

{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {
      "type": "library",
      "name": "openssl",
      "version": "1.1.1t",
      "purl": "pkg:deb/debian/openssl@1.1.1t",
      "licenses": [
        {
          "license": {
            "id": "OpenSSL"
          }
        }
      ]
    },
    {
      "type": "library",
      "name": "cjson",
      "version": "1.7.15",
      "purl": "pkg:github/DaveGamble/cJSON@1.7.15"
    }
  ]
}
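As a sketch of how such a file gets consumed, the snippet below (stdlib Python; component list abbreviated from the example above) extracts a name@version inventory, which is what a scanner matches against CVE feeds:

```python
import json

# Minimal reader for a CycloneDX SBOM: turn components into name@version pairs.
sbom_json = '''
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"type": "library", "name": "openssl", "version": "1.1.1t",
     "purl": "pkg:deb/debian/openssl@1.1.1t"},
    {"type": "library", "name": "cjson", "version": "1.7.15",
     "purl": "pkg:github/DaveGamble/cJSON@1.7.15"}
  ]
}
'''
sbom = json.loads(sbom_json)
inventory = [f'{c["name"]}@{c["version"]}' for c in sbom["components"]]
assert inventory == ["openssl@1.1.1t", "cjson@1.7.15"]
```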
                    

4) Threat Modeling: STRIDE in Action

Before writing code, we must understand what we are building and how it can be attacked.

Example: Smart Door Lock

| Threat                 | Property Violated | Attack Vector                            | Mitigation                           |
| ---------------------- | ----------------- | ---------------------------------------- | ------------------------------------ |
| Spoofing               | Identity          | "I am the User's Phone." (replay attack) | Challenge-response auth, mutual TLS. |
| Tampering              | Integrity         | Modify firmware to bypass checks.        | Secure boot (signed firmware).       |
| Repudiation            | Non-repudiation   | "I didn't open the door."                | Secure audit logs (signed logs).     |
| Information Disclosure | Confidentiality   | Sniffing the WiFi key exchange.          | WPA3, TLS 1.3, encrypted storage.    |
| Denial of Service      | Availability      | Jamming the WiFi/BLE signal.             | Local keypad backup, auto-lock.      |
| Elevation of Privilege | Authorization     | Guest becomes admin.                     | RBAC (Role-Based Access Control).    |
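The challenge-response mitigation in the Spoofing row can be sketched with a shared-key HMAC (illustrative Python; a production lock would use mutual TLS or a vendor pairing protocol, and the function names here are made up for the sketch):

```python
import hashlib
import hmac
import secrets

# Challenge-response: a fresh nonce per attempt defeats replay of old responses.

def new_challenge() -> bytes:
    return secrets.token_bytes(16)        # unpredictable nonce

def respond(shared_key: bytes, challenge: bytes) -> bytes:
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = respond(shared_key, challenge)
    return hmac.compare_digest(expected, response)  # constant-time compare

key = b"lock-and-phone-shared-secret"
c1 = new_challenge()
r1 = respond(key, c1)
assert verify(key, c1, r1)
# Replaying r1 against a NEW challenge fails -- this is the anti-replay property.
assert not verify(key, new_challenge(), r1)
```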

5) Fuzzing: The Nuclear Option

Fuzzing involves throwing millions of random (malformed) inputs at a program to crash it. It is how a large share of memory corruption bugs are found today.

Types of Fuzzing

  • Mutation-based ("dumb"): randomly mutate valid seed inputs.
  • Generation-based ("smart"): generate inputs from a grammar or protocol spec.
  • Coverage-guided: instrument the target and keep mutations that reach new code paths (AFL++, libFuzzer).

AFL++ Workflow

# 1. Compile with Instrumentation
afl-gcc -o vulnerable_parser parser.c

# 2. Prepare Corpus (Valid seed inputs)
mkdir in; echo '{"key":"value"}' > in/seed.json

# 3. Run Fuzzer
afl-fuzz -i in -o out ./vulnerable_parser @@

# AFL UI:
# [total paths]: 120
# [crashes]: 5  <-- VICTORY
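AFL's coverage feedback is what makes it effective, but the core mutate-and-observe loop is simple. A toy, coverage-blind mutational fuzzer against a stand-in target (all names illustrative; the "crash" is simulated with an exception):

```python
import random

def target(data: bytes):
    # Stand-in parser that "crashes" on inputs starting with b'M' and >50 bytes,
    # mirroring the lab's vulnerable strcpy path.
    if data[:1] == b"M" and len(data) > 50:
        raise RuntimeError("simulated buffer overflow")

def mutate(seed: bytes) -> bytes:
    """Apply a few random bit flips / byte inserts to a seed input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 8)):
        if random.random() < 0.5 and data:
            i = random.randrange(len(data))
            data[i] ^= 1 << random.randrange(8)          # flip one bit
        else:
            data.insert(random.randrange(len(data) + 1),
                        random.randrange(256))           # insert one byte
    return bytes(data)

random.seed(0)                      # reproducible run
seed = b"M" + b"A" * 49             # valid-ish input near the crash boundary
crashes = []
for _ in range(10_000):
    candidate = mutate(seed)
    try:
        target(candidate)
    except RuntimeError:
        crashes.append(candidate)   # what AFL would save to out/crashes/
assert crashes
```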
                    

6) Memory Safety: Rust vs C

70% of all vulnerabilities (Microsoft & Chrome stats) are Memory Safety issues (Buffer Overflows, Use-After-Free).

C / C++

  • Manual memory management (`malloc`/`free`).
  • No bounds checking by default.
  • Undefined behavior is common.
  • Verdict: Powerful but like a chainsaw without a guard.

Rust

  • Borrow Checker enforces memory safety at compile time.
  • No Null Pointers.
  • No Data Races.
  • Verdict: The future of secure systems programming.

Recommendation: For new IoT development, prefer Rust (Embassy / TockOS) over C. If you must use C, use strict static analysis and sanitizers (ASan).

7) Secure Firmware Update (OTA)

You must be able to patch vulnerabilities. But the update mechanism itself is a huge attack vector.

The Update Triangle

  1. Authenticity: Is this update from us? (Signature Verification).
  2. Integrity: Is the file corrupted? (Hash Verification).
  3. Anti-Rollback: Is this an old, buggy version? (Version Checking).
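The three checks can be sketched end to end (Python, with an HMAC standing in for the PKI signature from Section 1; field names and the helper are illustrative, not a real OTA client API):

```python
import hashlib
import hmac
import json

def validate_update(pkg: dict, device_key: bytes, current_version: int) -> str:
    meta = pkg["meta"]
    # 1. Authenticity: signature over the metadata blob.
    signed = json.dumps(meta, sort_keys=True).encode()
    expected = hmac.new(device_key, signed, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, pkg["signature"]):
        return "reject: bad signature"
    # 2. Integrity: payload hash must match the signed metadata.
    if hashlib.sha256(pkg["payload"]).hexdigest() != meta["sha256"]:
        return "reject: corrupted payload"
    # 3. Anti-rollback: never install an older (or equal) version.
    if meta["version"] <= current_version:
        return "reject: rollback attempt"
    return "accept"

key = b"device-update-key"
payload = b"new firmware image"
meta = {"version": 7, "sha256": hashlib.sha256(payload).hexdigest()}
sig = hmac.new(key, json.dumps(meta, sort_keys=True).encode(),
               hashlib.sha256).hexdigest()
pkg = {"meta": meta, "payload": payload, "signature": sig}

assert validate_update(pkg, key, current_version=6) == "accept"
assert validate_update(pkg, key, current_version=7) == "reject: rollback attempt"
```

Signing the version number along with the hash is what makes the anti-rollback check trustworthy: an attacker cannot just edit the metadata.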

Guided Lab: Fuzzing with AFL++

Objective: Use AFL++ to find a buffer overflow in a simple C parser.

Prerequisites: Docker installed.

Step 1: The Vulnerable Code (`vuln.c`)


#include <stdio.h>
#include <string.h>

void process_data(char *input) {
    char buffer[50];
    if (input[0] == 'M') {
        // VULNERABILITY: Unbounded strcpy if input starts with 'M'
        strcpy(buffer, input); 
    }
}

int main(int argc, char **argv) {
    // Read the input file supplied by the fuzzer (@@ on the AFL command line)
    if (argc < 2) return 1;
    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;
    char input[1024];
    size_t n = fread(input, 1, sizeof(input) - 1, f);
    fclose(f);
    input[n] = '\0';  // NUL-terminate so strcpy has a defined end
    process_data(input);
    return 0;
}
                    

Step 2: Fuzzing Setup


# Pull AFL++ Docker
docker run -it --rm aflplusplus/aflplusplus

# Compile
afl-clang-fast vuln.c -o vuln

# Seed
mkdir in; echo "Hello" > in/seed.txt

# Fuzz!
afl-fuzz -i in -o out ./vuln @@
                    

Observation: Within seconds, AFL will find that inputs starting with 'M' followed by >50 chars cause a crash. It will store the crashing input in `out/crashes/`.

XP REWARD: +800 XP (Code Breaker)

Appendix A: Secure C Coding Standards

Appendix B: Crypto Implementation Checklist

| Do's                                    | Don'ts                                             |
| --------------------------------------- | -------------------------------------------------- |
| Use AES-GCM (authenticated encryption). | Don't use AES-ECB (patterns remain visible).       |
| Use SHA-256 or SHA-3.                   | Don't use MD5 or SHA-1 (collisions possible).      |
| Use PBKDF2 / Argon2 for passwords.      | Don't use plain hashes for passwords.              |
| Use /dev/urandom or a hardware TRNG.    | Don't use rand() or time-based seeds.              |
| Verify certificates (chain of trust).   | Don't set "SkipVerify: true" just to make it work. |
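As one concrete instance of the password row, Python's stdlib PBKDF2 (a minimal sketch; the 600,000 iteration count follows current OWASP guidance for PBKDF2-HMAC-SHA256, and the password is obviously a placeholder):

```python
import hashlib
import os

# Derive a password verifier: random salt from the OS CSPRNG, many iterations.
salt = os.urandom(16)                    # never rand()/time-based seeds
dk = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 600_000)

assert len(dk) == 32                     # digest-sized derived key
# Same password + same salt -> same key (this is how verification works):
assert hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 600_000) == dk
```

Store salt + iteration count + derived key; never the password or a plain hash of it.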

Appendix C: Vulnerability Disclosure Policy (VDP)

Every company needs a SECURITY.md at the root of their repo. It tells researchers how to report bugs without getting sued.

Template: SECURITY.md

# Security Policy

## Supported Versions

| Version | Supported          |
| ------- | ------------------ |
| 5.1.x   | :white_check_mark: |
| 5.0.x   | :x:                |

## Reporting a Vulnerability

We take the security of our systems seriously.

If you believe you have found a security vulnerability, please report it to us as described below.

**Please do not report security vulnerabilities through public GitHub issues.**

### Reporting Process
1. Email your findings to **security@example.com**.
2. Include steps to reproduce the issue.
3. We will acknowledge receipt within 48 hours.
4. We will provide an estimated timeline for the fix.

### Safe Harbor
We pledge not to initiate legal action against researchers for penetrating or attempting to penetrate our systems as long as they adhere to this policy.
                    

Appendix D: OWASP IoT Top 10 Mapping

How this course mapped to the top threats.

| OWASP Risk                                   | Week Covered                        |
| -------------------------------------------- | ----------------------------------- |
| I1. Weak, Guessable, or Hardcoded Passwords  | Week 04 (Extraction) & Week 05 (RE) |
| I2. Insecure Network Services                | Week 02 (Protocols) & Week 08 (API) |
| I3. Insecure Ecosystem Interfaces            | Week 08 (Cloud/API) & Week 09 (ICS) |
| I4. Lack of Secure Update Mechanism          | Week 11 (Secure Dev)                |
| I5. Use of Insecure or Outdated Components   | Week 11 (SBOM)                      |
| I6. Insufficient Privacy Support             | Week 10 (Medical IoT)               |
| I7. Insecure Data Transfer and Storage       | Week 05 (Flash Dumps)               |
| I8. Lack of Device Management                | Week 09 (ICS/SCADA)                 |
| I9. Insecure Default Settings                | Week 01 (IoT Fundamentals)          |
| I10. Lack of Physical Hardening              | Week 06 (Hardware Hacking)          |

Appendix E: Incident Response Playbook

What happens when you do get hacked?

Resources

OWASP IoT Security Testing Guide

The gold standard for methodology.

AFL++ (Fuzzing)

The most advanced fuzzer available.

CycloneDX (SBOM)

Standard for Software Bill of Materials.