CSY102 Week 09 (Beginner)


Computing Systems & OS


Opening Framing: Security Without Evidence Is Belief

Weeks 6–8 taught you how systems run in the background, act in the future, and expose interfaces to the network. But one critical problem remains:

How do we know what actually happened?

If a system is compromised, the attacker's first strategic advantage is not just access — it is uncertainty. Confusion buys time. If defenders don't know what happened, they can't respond effectively. They can't scope the damage. They can't prove anything to anyone.

Systems are not secured by control alone — they are secured by truth. This week you learn how systems record events, what logs can and cannot prove, and how attackers try to rewrite reality.

Logging is the system's memory of events. Without it, security is guesswork.

Mental Model: The Black Box Recorder

In aviation, a black box does not prevent crashes. It makes crashes explainable. After something goes wrong, investigators can reconstruct what happened, understand why, and prevent future incidents.

Logs function similarly in computing:

  • They don't guarantee safety: logging doesn't stop attacks
  • They enable reconstruction: what happened, in what order
  • They allow accountability: who did what, and when
  • They turn mystery into investigation: evidence replaces speculation

But a black box is only useful if it records the right signals, remains intact, and can be trusted. A corrupted or incomplete black box is worse than useless — it provides false confidence.

Observability is broader than logging: it's your ability to infer system behavior from all available signals — logs, metrics, traces, alerts, state changes. Security depends on answering: What happened? When? Who triggered it? What changed? How confident are we in these answers?

Mental model: logs are the system's black box. They don't prevent incidents, but without them, you can't investigate, learn, or prove anything.

1) What Logs Are (and What They Are Not)

Logs are records of events emitted by software. They are not reality — they are claims about reality. This distinction is crucial for security reasoning.

A log entry claims:

  • Something occurred: according to some software component
  • At some time: according to the system clock at that moment
  • In some context: a process, user, session, or request identifier
  • With some outcome: success, failure, or specific result

The critical security question becomes: Who controls the claim?

If the software making the claim is compromised, the log may be a lie. If the clock is manipulated, the timestamp may be wrong. If the context is spoofed, the attribution may be false. Logs are evidence, but evidence can be fabricated, altered, or destroyed.

Key insight: logs are claims about events, not the events themselves. Trust in logs depends on trust in the systems that generate them.
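A minimal sketch makes the "claims, not events" point concrete. The snippet below fabricates an auth-log-style line in a throwaway file; the hostname, user, and IP are invented, and the timestamp is whatever we choose. Nothing about the line itself distinguishes it from a genuine sshd entry.

```shell
# A log entry is only text: anyone with write access can "claim" anything.
# Fabricate a plausible-looking sshd entry in a throwaway file (hypothetical
# host, user, and IP), with a timestamp of our choosing.
fake_log=$(mktemp)
printf '%s myhost sshd[1234]: Accepted password for alice from 10.0.0.5 port 22 ssh2\n' \
  "$(date '+%b %e %H:%M:%S')" >> "$fake_log"
cat "$fake_log"    # indistinguishable in form from a real auth.log line
```

What separates a real entry from this forgery is not its content but its provenance: where the file lives and who could write to it.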

2) Where Logs Come From: Layers of Evidence

Modern systems generate logs at multiple layers, each with different trust properties and coverage:

  • Application logs: "My service received request X from user Y" — most detailed but controlled by the application
  • System/service logs: "Daemon started/stopped/crashed" — OS-level visibility into service lifecycle
  • Authentication logs: "Login succeeded/failed for user Z" — critical for tracking access and intrusion attempts
  • Network logs: "Connection from IP A to port B" — captures traffic patterns regardless of application behavior
  • Audit/security logs: "Permission changed on file X" — tracks sensitive operations for compliance and forensics
  • Kernel/hardware logs: "Memory error detected" — lowest level, hardest to tamper with

Each layer has different trust properties. Application logs are detailed but easily manipulated by a compromised application. Kernel logs are harder to fake but less detailed about application behavior.

Attackers will target the layer they can control or silence. A sophisticated attacker who compromises an application may alter its logs while kernel-level evidence remains intact.

Key insight: defense in depth applies to logging too. Multiple layers of logs make it harder for attackers to erase all evidence.
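To see these layers on a real machine, the commands below show where each one typically surfaces on a Debian/Ubuntu-style system with systemd. Paths and tools vary by distro (the nginx path is just one application-layer example), so every command is guarded and safe to paste as-is.

```shell
# Where each evidence layer typically lives (Debian/Ubuntu-style defaults;
# failures are silenced because paths and tools vary by distro):
ls -l /var/log/auth.log       2>/dev/null || true  # authentication layer
ls -l /var/log/syslog         2>/dev/null || true  # system/service layer
journalctl -k -n 5 --no-pager 2>/dev/null || true  # kernel layer (ring buffer)
ls -l /var/log/audit/         2>/dev/null || true  # audit layer (auditd, if installed)
ls -l /var/log/nginx/         2>/dev/null || true  # one application-layer example
```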

3) Why Attackers Target Logging

In many real compromises, attackers do not simply try to "avoid leaving evidence." They actively work to control it:

  • Disable: turn off logging services or reduce verbosity settings — future actions won't be recorded
  • Blind: overload the system with noise so real events are lost in the flood of meaningless entries
  • Erase: delete local log files — if logs aren't backed up remotely, they're gone forever
  • Distort: modify timestamps or rewrite entries — create false timelines or alibis
  • Impersonate: make malicious activity look like routine automation — hide in plain sight among legitimate scheduled tasks

Weeks 6–7 should make this clear: background services and scheduled tasks are ideal "cover stories." If attackers can make their activity look like normal cron jobs or service operations, defenders may never notice.

The first thing sophisticated attackers do after gaining access is assess what logging exists and how to neutralize it. Logging is a threat to them — so they treat it as a target.

Key insight: attackers don't just avoid detection — they actively work to undermine the evidence systems that could expose them.
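One detectable trace of the "disable" tactic is a silence in the record. The sketch below, run on a fabricated same-day log (ISO timestamps, invented events), flags any gap longer than an hour between consecutive entries; a long gap can mean a quiet service, or logging that was switched off and back on.

```shell
# Sketch: flag suspicious silences in a timestamped log (fabricated sample).
sample=$(mktemp)
cat > "$sample" <<'EOF'
2024-05-01T10:00:01 service started
2024-05-01T10:00:05 job ok
2024-05-01T13:42:10 job ok
2024-05-01T13:42:12 service stopped
EOF
gaps=$(awk '{
  split($1, t, /[-T:]/)
  secs = t[4]*3600 + t[5]*60 + t[6]        # seconds since midnight
  if (prev != "" && secs - prev > 3600)    # flag silences longer than an hour
    print "GAP of " secs - prev "s before: " $0
  prev = secs
}' "$sample")
echo "$gaps"    # prints: GAP of 13325s before: 2024-05-01T13:42:10 job ok
```

A gap alone proves nothing; it is a prompt to ask what *should* have been logged during that window.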

4) Time, Ordering, and Confidence

Logs are strongly dependent on time. But time in computing is fragile:

  • Clocks drift: without synchronization, systems diverge by seconds or minutes
  • Systems disagree: different machines may have different clock settings
  • Attackers manipulate: timestamps can be changed before or after logging
  • Timezones confuse: UTC vs local time creates reconstruction errors

Therefore investigations rely not only on timestamps, but also on:

  • Ordering: what clearly happened before/after, even if exact times are uncertain
  • Correlation: matching signals across multiple sources — network logs, application logs, authentication logs
  • Consistency: whether multiple independent logs agree on what happened
  • Sequence numbers: log entries numbered in order, independent of timestamps

Evidence becomes stronger when it is duplicated across independent systems. If your application logs, network firewall logs, and authentication server logs all agree that user X connected at time T, you have corroboration. If only one source says it, you have a claim.

Key insight: timestamps matter, but corroboration across independent sources matters more. Attackers can forge one log; forging many synchronized logs is harder.
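The corroboration idea can be sketched in a few lines. Both excerpts below are fabricated; in practice they would be an authentication log and a firewall or flow log held on a separate system. The check asks a simple question: does an independent source also mention the client IP named in the claim?

```shell
# Sketch: corroborate a single-source claim against an independent source.
auth=$(mktemp); fw=$(mktemp)
cat > "$auth" <<'EOF'
2024-05-01T10:15:02 Accepted password for alice from 10.0.0.5
EOF
cat > "$fw" <<'EOF'
2024-05-01T10:15:01 ALLOW tcp 10.0.0.5 -> 192.168.1.10:22
EOF
ip=$(awk '{print $NF}' "$auth")          # client IP named in the auth claim
if grep -q "$ip" "$fw"; then verdict=CORROBORATED; else verdict=UNCORROBORATED; fi
echo "$verdict: $ip"                     # prints: CORROBORATED: 10.0.0.5
```

Real correlation also has to tolerate clock skew between sources (here the two entries disagree by one second), which is why NTP synchronisation matters.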

5) Centralisation: The "Outside the Host" Principle

A log stored only on the compromised machine is fragile evidence. If attackers control the host, they can often control the record. Local logs are the first thing sophisticated attackers delete.

A powerful defensive principle is:

Store evidence outside the thing you are investigating.

This is why organisations centralise logs:

  • Forward logs in real time: to a SIEM or log aggregator the attacker can't reach
  • Use append-only storage: write-once media or immutable cloud storage
  • Separate access controls: system admins shouldn't be able to delete security logs
  • Monitor the logging pipeline: alert if log forwarding stops unexpectedly

Centralisation is not about convenience — it is about integrity. Logs that exist only on the compromised system are logs the attacker controls.

Modern observability extends beyond logs to include metrics (CPU, memory, network), traces (request paths through services), state changes (new users, new tasks), and alerts (thresholds and anomaly detection). Attackers can hide from one signal, but hiding from multiple independent signals is far harder.

Key insight: the value of a log is inversely proportional to the attacker's ability to modify or delete it. Remote, immutable storage is essential.
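As one concrete illustration, forwarding with rsyslog takes a single rule. This is a hedged sketch, assuming an rsyslog-based system; `loghost.example.org` is a placeholder for your collector, and real deployments should prefer TLS transport and queueing on top of this.

```
# /etc/rsyslog.d/60-forward.conf -- forward all messages to a central collector
# ("loghost.example.org" is a placeholder; @@ means TCP, a single @ means UDP)
*.*  @@loghost.example.org:514
```

Once this is in place, deleting the local copy no longer deletes the evidence.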

Real-World Context: Logging in Security Incidents

The importance of logging becomes clear in real-world incidents:

SolarWinds Attack (2020): The attackers specifically targeted logging and monitoring systems. They disabled security tools, avoided noisy operations, and timed their activity to blend with normal traffic. Organisations with centralised, immutable logging were better able to reconstruct the attack timeline. Those relying on local logs often found them deleted or corrupted.

Log4j Vulnerability (2021): CVE-2021-44228 turned logging itself into an attack vector. The Log4j library — used to create log entries — contained a vulnerability that allowed remote code execution. The irony: the very system meant to record events became the entry point for compromise. This demonstrated that logging infrastructure is part of the attack surface.

Ransomware Operations: Modern ransomware routinely targets backup systems and log servers before encrypting primary data. Attackers understand that defenders need logs for incident response — so they destroy them first. Organisations with off-site, immutable log storage recover faster.

A defender who cannot reconstruct events is not defending — they are guessing. Logs are claims about events, not events themselves, but they are often the only evidence available after a compromise.

Common thread: in each case, logging was either a critical defensive asset or a specific target. Attackers understand the value of evidence — do you?

Guided Lab: Exploring System Evidence

This lab focuses on discovery and analysis. You will examine how your system records events and evaluate the reliability of that evidence.

Lab Objective

Locate and examine logs from multiple sources on your system. Understand what each log records, how timestamps work, and what limitations exist.


Step 1: Locate Log Sources

On Linux:

ls -la /var/log/                    # List available log files
journalctl --list-boots             # List boot sessions (systemd)
sudo head -n 50 /var/log/syslog     # View recent system logs (Debian/Ubuntu)
sudo head -n 50 /var/log/messages   # View recent system logs (RHEL/CentOS)

On Windows: Open Event Viewer (eventvwr.msc) and browse Windows Logs

Observe: how many different log sources exist? What types of events does each record?

Step 2: Examine Log Structure

Pick one log source and analyze its entries:

On Linux:

journalctl -n 20 --no-pager         # Recent 20 entries with full detail
journalctl -u ssh                   # Logs for one service ("sshd" on RHEL-family)

On Windows: Click any event → Details tab → XML View

Step 3: Test Log Generation

Generate an event and find it in the logs:

On Linux:

logger "CSY102 test event from $(whoami)"    # Write to syslog
journalctl -n 5                              # Find your entry

On Windows: Create a failed login attempt, then find it in Security logs

Observe: how quickly did the event appear? What details were captured?

Reflection (mandatory)

  1. If an attacker had root/admin access, which logs could they modify or delete?
  2. What events on your system are NOT being logged that probably should be?
  3. How would you detect if logging itself had been tampered with?

Lab: Logs as Evidence (Not Truth)

Goal: observe how systems record events, compare logs from different layers, and reason about their reliability as evidence.

Choose ONE path (Linux or Windows). Both are valid.

Linux Path (safe commands)

  1. List available logs and identify at least three different log files:
    ls -la /var/log/
    file /var/log/*
  2. Examine authentication logs specifically:
    sudo tail -n 30 /var/log/auth.log     # Debian/Ubuntu
    sudo tail -n 30 /var/log/secure       # RHEL/CentOS
  3. Compare timestamps across different logs:
    sudo head -n 5 /var/log/syslog
    sudo head -n 5 /var/log/auth.log
    Note: are they synchronized? Same timezone?
  4. Check who can modify these logs:
    ls -la /var/log/auth.log
    ls -la /var/log/syslog
  5. Concept question: If this system were compromised and the attacker had root, which logs could they alter? What would remain as evidence?

Windows Path (built-in tools)

  1. Open Event Viewer (eventvwr.msc) and expand Windows Logs.
  2. Examine each log category and record what it contains:
    • Application: software events and errors
    • Security: authentication, authorization, audit events
    • System: OS and driver events
  3. Find a recent Security event (login or logout) and record:
    • Event ID (e.g., 4624 = successful login, 4625 = failed login)
    • Timestamp and timezone
    • Account name and domain
    • Logon type (interactive, network, service)
  4. Check log properties: Right-click Security → Properties. Note maximum size and retention policy.
  5. Concept question: Why does Windows separate logs into different channels? What's the security benefit of the Security log requiring special permissions?

Deliverable (submit):

Checkpoint Questions

  1. Explain the difference between an event and a log entry. Why is this distinction important for security?
  2. Why are logs vulnerable once an attacker controls the host? What can they do to the evidence?
  3. What is the security value of having multiple independent sources of logs? How does correlation strengthen evidence?
  4. How does time (timestamps) both help and hinder investigations? What problems can arise?
  5. How does Week 9's focus on evidence connect to Week 6's services, Week 7's scheduling, and Week 8's network exposure?

Week 09 Outcome Check

By the end of this week, you should be able to explain:

Next week: Updates, patching, and supply chain trust — how systems evolve safely, how they become compromised through "legitimate" channels, and why update mechanisms are both vital and dangerous.

🎯 Hands-On Labs (Free & Essential)

Practice log analysis and evidence collection before moving to reading resources.

🎮 TryHackMe: Intro to SOC

What you'll do: Explore alert triage and how logs support investigations.
Why it matters: Observability only works when logs are interpreted correctly.
Time estimate: 1-1.5 hours

Start TryHackMe Intro to SOC →

📝 Lab Exercise: Log Review Checklist

Task: Review `auth.log` or Windows Event Logs and identify five failed login events.
Deliverable: Timestamp list + one hypothesis about attacker behavior.
Why it matters: Logs are only useful if you can extract patterns and timelines.
Time estimate: 45-60 minutes
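The extraction step of this checklist can be sketched as below. The excerpt is fabricated (real `auth.log` files need sudo), and the field positions match the classic syslog line format, where the source IP sits four fields from the end.

```shell
# Sketch of the checklist task on a fabricated auth.log excerpt.
sample=$(mktemp)
cat > "$sample" <<'EOF'
May  1 09:12:01 host sshd[410]: Failed password for root from 203.0.113.9 port 40122 ssh2
May  1 09:12:03 host sshd[410]: Failed password for root from 203.0.113.9 port 40124 ssh2
May  1 09:12:05 host sshd[412]: Failed password for invalid user admin from 203.0.113.9 port 40130 ssh2
May  1 09:30:11 host sshd[498]: Accepted password for alice from 10.0.0.5 port 50000 ssh2
EOF
# Timestamp + source IP for every failed attempt (IP is 4th field from the end):
failed=$(grep 'Failed password' "$sample" | awk '{print $1, $2, $3, $(NF-3)}')
echo "$failed"
```

Even in this tiny sample, three failures from one IP within seconds, trying both real and invalid accounts, already supports a hypothesis (password guessing), which is exactly the deliverable.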

🏁 PicoCTF Practice: Forensics (Log Artifacts)

What you'll do: Analyze basic artifacts to extract evidence and timelines.
Why it matters: Forensic thinking strengthens log interpretation.
Time estimate: 1-2 hours

Start PicoCTF Forensics →

🧠 Lab: Memory Forensics Mini-Triage

What you'll do: Use Volatility to list running processes from a sample memory image.
Why it matters: Memory captures reveal evidence that logs miss (injected code, hidden processes).
Time estimate: 2-3 hours

Start Memory Samples →

💡 Lab Tip: Correlate at least two data sources (auth logs + system logs) to strengthen conclusions.

🛡️ Secure Configuration & Log Integrity

Logs are only useful if they are complete and trustworthy. Secure configuration protects log integrity and ensures evidence cannot be silently altered.

Logging hardening checklist:
- Enable authentication and security auditing
- Protect log files with strict permissions
- Centralize logs to a separate system
- Enforce time synchronization (NTP)
- Alert on log clearing or tampering
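Two of these checklist items can be sketched safely. The permissions step below runs against a throwaway file rather than the real targets under `/var/log/`, and the time-sync check assumes a systemd system (the command is guarded because the tool may be absent).

```shell
# Strict permissions, demonstrated on a throwaway file (real targets: /var/log/*):
log=$(mktemp)
chmod 640 "$log"           # owner read/write, group read, others nothing
stat -c '%a' "$log"        # prints: 640
# NTP status on systemd systems (guarded: timedatectl may not exist):
timedatectl show -p NTPSynchronized 2>/dev/null || true
```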

📚 Building on CSY101 Week 13: Threat model attacker log-evasion techniques. CSY204: Forensics relies on preserved logs and memory evidence.

Resources



Logs do not make systems secure. They make systems explainable. Security without explainability is belief, not defence. A defender who cannot reconstruct events is guessing, not investigating.

Weekly Reflection

Reflection Prompt (200-300 words):

Imagine you are investigating a potential security incident on a Linux or Windows system. The system administrator reports "something strange" but can't be specific. You have access to the system's logs.

Describe your approach to using logs as evidence:

Connect your investigation approach to this week's concept that "logs are claims about reality, not reality itself." How does this understanding change how you interpret log evidence?

A strong response will describe a systematic approach, name specific log sources and event types, acknowledge the limitations of log evidence, and demonstrate understanding of why corroboration and integrity matter.


Week 09 Quiz

Test your understanding of the weekly concepts.

Format: 10 multiple-choice questions. Passing score: 70%. Time: Untimed.

Take Quiz