Opening Framing: Boundaries Are Not Natural
In Week 2, you learned that processes are execution contexts. In Week 3, that identity is contextual. In Week 4, that data persists beyond identity. Now we ask: what keeps all of these separate? What prevents one process from reading another's memory, one user's data from leaking to another?
People often speak about "system boundaries" as if they exist the way walls exist in a building. But boundaries inside computers are not natural objects. They are rules.
A computer's components are physically connected. Electrical signals flow. Memory is reused. CPU time is shared. Devices are multiplexed.
If separation exists, it exists because the operating system enforces it.
This week teaches a mental model you must never forget:
Isolation is an enforced illusion.
1) What Is Memory, Really?
In casual talk, memory is "where a program keeps data while it runs." That description is true but insufficient for security reasoning.
Memory is the resource that makes execution real:
- Code is loaded into memory
- Variables and secrets exist in memory
- Keys, tokens, credentials, and session state live in memory
If an attacker can read or influence memory they shouldn't, many higher-level security controls become irrelevant.
Mental model: If the filesystem is the system's long-term memory, RAM is its present tense. Compromise memory, and you compromise everything that's currently happening.
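To make that concrete, here is a minimal CPython sketch (using the ctypes module; the token value is made up for illustration) showing that a "secret in a variable" is just bytes at an address, recoverable by anything that can read that address:

```python
import ctypes

# A credential held "in a variable" is really bytes at a location in
# the process's address space. The token here is purely illustrative.
secret = ctypes.create_string_buffer(b"hunter2")
addr = ctypes.addressof(secret)

# Anything that can read this address recovers the secret verbatim.
leaked = ctypes.string_at(addr, 7)
print(hex(addr), leaked)  # the address varies per run; the bytes do not
```

Here the process reads its own buffer, but a debugger, a core dump, or a memory-disclosure bug can do the same from outside — which is exactly why memory is worth protecting.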
2) Why Isolation Matters
Modern systems run many processes at once. Some are yours. Some belong to other users. Some belong to the operating system itself.
Isolation answers the question:
How can the system allow sharing of hardware without allowing interference?
Security depends on the idea that:
- Your process cannot read another process's memory
- Your process cannot overwrite system memory
- One process crashing should not collapse everything else
If isolation fails, "permissions" become theatre — because permissions are enforced by the OS, and the OS itself is implemented in memory.
Key insight: memory isolation is the foundation that makes all other security controls possible. If processes can read each other's memory, file permissions, network controls, and authentication all become meaningless.
3) Boundaries Are Policed, Not Guaranteed
Here is the uncomfortable point: the boundary between "allowed" and "not allowed" is not a physical barrier. It is a set of checks performed by the system.
Every check can fail:
- Through bugs: buffer overflows, use-after-free, integer overflows
- Through complexity: the more code enforcing boundaries, the more potential flaws
- Through configuration: debugging features, shared memory segments, permissive settings
- Through resource exhaustion: memory pressure can cause systems to behave unexpectedly
- Through hardware: CPU vulnerabilities like Meltdown and Spectre bypass software isolation
This is why experienced defenders do not treat isolation as absolute. They treat it as a guarantee with conditions.
Mental model: isolation is a promise the system tries to keep, not a law of nature.
4) Preview: Isolation at Scale
The isolation concepts you've learned this week are foundational. They return in expanded form:
- Week 6 (Services): services run in isolated contexts with their own identities
- Week 8 (Networking): network isolation determines what can communicate with what
- Week 11 (Virtualisation): VMs and containers are isolation taken to the extreme — entire operating systems separated from each other
The principle remains constant: isolation is enforced separation. Whether it is memory between processes, network segments between hosts, or entire operating systems between VMs, the question is always the same: what enforces the boundary, and what happens when it fails?
Key insight: Week 5 completes the foundation. Everything from Week 6 onward builds on processes, identity, persistence, and isolation. You now have the conceptual tools to analyse any system.
Real-World Context: When Isolation Fails
Memory isolation failures have caused some of the most severe vulnerabilities in computing history:
Meltdown and Spectre (2018): These CPU vulnerabilities allowed processes to read memory they shouldn't have access to — including kernel memory and other processes' data. The isolation enforced by the operating system was bypassed by exploiting how CPUs speculatively execute instructions. Every major processor vendor was affected. The "enforced illusion" of memory isolation was broken at the hardware level.
Heartbleed (2014): A bug in OpenSSL allowed attackers to read up to 64KB of server memory per request. That memory could contain private keys, passwords, session tokens — whatever happened to be in memory at the time. The boundary between "data I should see" and "data I shouldn't see" was violated through a simple bounds-checking error.
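The flawed pattern can be modelled in a few lines of Python (a toy model, not OpenSSL's actual code; the "memory" layout and names are invented): a reply routine trusts the caller's claimed length instead of the real payload size, so a short request with an inflated length leaks whatever happens to sit next to it.

```python
# Toy model of a Heartbleed-style over-read. `memory` stands in for the
# server's heap: the 4-byte request payload sits directly next to
# unrelated secret data.
memory = bytearray(b"PING" + b"-----BEGIN PRIVATE KEY-----")

def heartbeat_reply(claimed_len: int) -> bytes:
    # BUG: echoes claimed_len bytes without checking it against the
    # actual payload length (4). The fix is a single bounds check.
    return bytes(memory[:claimed_len])

honest = heartbeat_reply(4)    # returns only the payload
leaked = heartbeat_reply(31)   # payload plus 27 adjacent secret bytes
print(honest, leaked)
```

The boundary between "data I should see" and "data I shouldn't" existed only as that missing comparison.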
Buffer Overflow Exploits: Classic buffer overflows allow attackers to write beyond allocated memory boundaries, potentially overwriting return addresses to hijack execution. This entire class of vulnerability exists because memory boundaries are enforced by software checks that can be bypassed. MITRE ATT&CK documents exploitation for privilege escalation as technique T1068.
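The write-side version of the same failure can be sketched the same way (again a toy model: a Python bytearray standing in for a native stack frame, with an invented layout; real overflows corrupt machine memory, but the shape of the bug is identical): an unchecked copy past an 8-byte buffer silently overwrites the "saved return address" next to it.

```python
# Toy stack frame: an 8-byte input buffer sits directly below a saved
# return address (here the 4 bytes 0xDEADBEEF, little-endian).
frame = bytearray(b"\x00" * 8 + b"\xef\xbe\xad\xde")

def copy_input(data: bytes) -> None:
    # BUG: copies len(data) bytes with no check against the
    # 8-byte buffer size.
    frame[:len(data)] = data

copy_input(b"A" * 8)            # fits: the return address is untouched
print(frame[8:])                # still the original address bytes
copy_input(b"A" * 8 + b"BBBB")  # overflow: the address is overwritten
print(frame[8:])                # now attacker-chosen bytes
```

An attacker who controls those four bytes controls where execution resumes — the essence of the exploit class.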
Common thread: isolation is only as strong as its enforcement. Hardware bugs, software bugs, and configuration errors can all break the boundaries we depend on. Defenders must assume boundaries can fail and design accordingly.
Guided Lab: Observing Memory and Contention
This lab stays conceptual: we are not exploiting memory. We are observing how memory behaves under normal pressure, and how resource competition can create instability and risk.
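As a starting point for that observation (Unix-only, since it uses the resource module; the 50 MB figure is arbitrary), this sketch shows a process's resident memory growing as it allocates and touches pages:

```python
import resource

def peak_rss() -> int:
    # Peak resident set size of this process: reported in kilobytes
    # on Linux, bytes on macOS.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = peak_rss()
hog = bytearray(50 * 1024 * 1024)   # ~50 MB; the size is arbitrary
for i in range(0, len(hog), 4096):  # touch every page so it becomes resident
    hog[i] = 1
after = peak_rss()
print(f"peak RSS before: {before}, after: {after}")
```

Watching these numbers while other processes compete for the same RAM is the point of the lab: memory is a shared, finite resource, and pressure on it is visible from the outside.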