Threat Hunting Fundamentals
Mental Model
"Assume breach, then prove yourself wrong. Alerts find known threats; hunting finds the threats that
evade your alerts."
— Threat Hunting Principle
Traditional security waits for alerts. But sophisticated adversaries specifically
design their operations to avoid triggering alerts—they use legitimate tools,
blend into normal traffic, and move slowly to stay under detection thresholds.
Threat hunting flips the paradigm: instead of waiting, hunters proactively
search for evidence of compromise that automated systems miss.
Learning Outcomes
By the end of this week, you will be able to:
LO1: Define threat hunting and distinguish it from reactive detection and
monitoring
LO2: Formulate testable hunting hypotheses from threat intelligence,
ATT&CK coverage gaps, and detection blind spots
LO3: Identify data sources and telemetry required for effective hunting
LO4: Apply hunting techniques including stacking, clustering, and behavioral
analysis
LO5: Execute systematic hunts and convert findings into detection improvements
1. What Is Threat Hunting?
Threat hunting is the proactive, hypothesis-driven search for threats that
have evaded existing security controls. Unlike monitoring (which waits for
alerts) or incident response (which reacts to confirmed incidents), hunting
actively seeks evidence of compromise before alerts fire.
Hunting vs. Monitoring vs. Incident Response
┌─────────────────────────────────────────────────────────────────┐
│ SECURITY OPERATIONS PARADIGMS COMPARED │
├─────────────────────────────────────────────────────────────────┤
│ │
│ MONITORING (Reactive - Alert-Driven) │
│ ──────────────────────────────────── │
│ • Waits for alerts to trigger │
│ • Rule-based detection │
│ • Automated at scale │
│ • Finds KNOWN threats │
│ • High volume, low touch │
│ │
│ INCIDENT RESPONSE (Reactive - Event-Driven) │
│ ─────────────────────────────────────────── │
│ • Responds to confirmed incidents │
│ • Investigation and containment │
│ • Human-intensive │
│ • Finds CONFIRMED threats │
│ • Low volume, high touch │
│ │
│ THREAT HUNTING (Proactive - Hypothesis-Driven) │
│ ────────────────────────────────────────────── │
│ • Actively seeks evidence │
│ • Hypothesis-based exploration │
│ • Human creativity + data analysis │
│ • Finds UNKNOWN threats │
│ • Medium volume, high expertise │
│ │
│ ═══════════════════════════════════════════════════════════ │
│ All three are necessary. They complement, not replace. │
│ ═══════════════════════════════════════════════════════════ │
│ │
└─────────────────────────────────────────────────────────────────┘
The Hunting Maturity Model (HMM)
Organizations progress through maturity levels in their hunting
capabilities. The commonly used model (from Sqrrl, cited in this week's
resources) defines five levels:
HMM0 (Initial): relies on automated alerting; little routine data collection
HMM1 (Minimal): searches collected data for threat intelligence indicators
HMM2 (Procedural): regularly follows hunting procedures created by others
HMM3 (Innovative): develops new hunting procedures
HMM4 (Leading): automates successful procedures into new detections
Key Insight: You can't hunt what you can't see. Data
collection and retention are prerequisites—if you don't have endpoint
telemetry, you can't hunt for process injection. If you don't retain
logs for 90 days, you can't find slow-moving adversaries.
2. Hypothesis-Driven Hunting
Effective hunts start with a hypothesis—a testable statement about adversary
behavior that you seek to prove or disprove. Without hypotheses, hunting
becomes aimless log browsing.
Hypothesis Structure
┌─────────────────────────────────────────────────────────────────┐
│ HUNTING HYPOTHESIS TEMPLATE │
├─────────────────────────────────────────────────────────────────┤
│ │
│ "If [THREAT/TECHNIQUE] is present in our environment, │
│ we would expect to see [OBSERVABLE EVIDENCE] │
│ in [DATA SOURCE]." │
│ │
│ ─────────────────────────────────────────────────────────── │
│ │
│ EXAMPLE HYPOTHESES: │
│ │
│ 1. Credential Theft │
│ "If attackers are using Mimikatz in our environment, │
│ we would expect to see lsass.exe memory access │
│ from unusual processes in our EDR telemetry." │
│ │
│ 2. Lateral Movement │
│ "If attackers are moving laterally via PsExec, │
│ we would expect to see PSEXESVC.exe service creation │
│ in Windows event logs on multiple systems." │
│ │
│ 3. Data Exfiltration │
│ "If attackers are staging data for exfiltration, │
│ we would expect to see large archive file creation │
│ in unusual directories in our file monitoring." │
│ │
│ 4. Persistence │
│ "If attackers have established scheduled task persistence, │
│ we would expect to see tasks created by non-admin │
│ processes in Windows Event ID 4698." │
│ │
└─────────────────────────────────────────────────────────────────┘
Hypothesis Sources
Where do hunting hypotheses come from?
Threat Intelligence: TTPs from threat reports about actors targeting your
industry. Example: "APT29 uses WMI for persistence—let's hunt for
suspicious WMI subscriptions."
MITRE ATT&CK: techniques you haven't validated detection for. Example:
"We have no detection for T1055 Process Injection—let's hunt for
indicators."
Incidents/Near-Misses: techniques seen in your environment or at industry
peers. Example: "Our peer was hit via malicious Office macros—let's hunt
for similar activity."
Detection Gaps: known blind spots in your monitoring. Example: "We don't
monitor PowerShell on workstations—let's hunt there."
Anomaly Investigation: unusual patterns noticed during other work.
Example: "I noticed unusual DNS queries—let's hunt for DNS tunneling."
Red Team Results: techniques that succeeded in assessments. Example: "Red
team used Kerberoasting—let's hunt for evidence in production."
Key Insight: A hunt without a hypothesis is just browsing
logs. Hypotheses focus effort, make hunts measurable, and ensure you're
looking for threats that matter to your organization.
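The "if X, expect Y in Z" template can also be captured as a structured record so hypotheses can be queued and tracked in a backlog. A minimal Python sketch; the class and field names are illustrative, not part of any hunting tool:

```python
from dataclasses import dataclass

@dataclass
class HuntHypothesis:
    threat: str       # [THREAT/TECHNIQUE], e.g. a specific tool or behavior
    evidence: str     # [OBSERVABLE EVIDENCE] you expect to find
    data_source: str  # [DATA SOURCE] that would contain the evidence
    attack_id: str    # MITRE ATT&CK technique mapping

    def statement(self) -> str:
        """Render the hypothesis in the standard template form."""
        return (f"If {self.threat} is present in our environment, "
                f"we would expect to see {self.evidence} "
                f"in {self.data_source}.")

h = HuntHypothesis(
    threat="Mimikatz credential theft",
    evidence="lsass.exe memory access from unusual processes",
    data_source="EDR telemetry",
    attack_id="T1003.001",
)
print(h.statement())
```

Keeping hypotheses as records rather than prose makes the backlog sortable by priority and mappable to ATT&CK coverage.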
3. Hunting Techniques
Multiple analytical approaches help find hidden threats. Effective hunters
combine techniques based on available data and hypothesis requirements.
Technique 1: IOC Searching
Search for known-bad indicators from threat intelligence:
# IOC-Based Searching Examples (Splunk SPL)
# Search for known malicious file hashes
index=edr file_hash IN (
"a1b2c3d4e5f6...",
"b2c3d4e5f6a1...",
"c3d4e5f6a1b2..."
)
# Search for known C2 domains
index=dns query IN (
"evil-domain.com",
"malware-c2.net",
"*.badactor.org"
)
# Search for known attacker IPs
index=firewall dest_ip IN (
"203.0.113.15",
"198.51.100.22"
)
| stats count by src_ip, dest_ip, dest_port
# Limitations of IOC searching:
# - Only finds KNOWN threats
# - IOCs change frequently (especially IPs/domains)
# - Reactive rather than proactive
# - Best for: validating intel, checking for known campaigns
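Outside a SIEM, the same IOC check is a set-membership test over event fields. A hedged Python sketch with made-up indicator values; a real hunt would load indicators from a threat intelligence feed:

```python
# Illustrative indicator sets; real values come from threat intel feeds
KNOWN_BAD_HASHES = {"a1b2c3d4e5f6"}
KNOWN_BAD_DOMAINS = {"evil-domain.com", "malware-c2.net"}

def matches_ioc(event: dict) -> bool:
    """True if the event contains any known-bad hash or domain."""
    if event.get("file_hash") in KNOWN_BAD_HASHES:
        return True
    domain = event.get("dns_query", "")
    # Exact domain match, plus subdomains (covers the *.domain wildcard case)
    return any(domain == d or domain.endswith("." + d)
               for d in KNOWN_BAD_DOMAINS)

events = [
    {"file_hash": "ffffffff", "dns_query": "updates.vendor.com"},
    {"file_hash": "a1b2c3d4e5f6", "dns_query": ""},                 # hash hit
    {"file_hash": "00000000", "dns_query": "cdn.evil-domain.com"},  # domain hit
]
hits = [e for e in events if matches_ioc(e)]  # two of three events match
```

The limitation noted above is visible here: the function can only flag values already in the sets, which is why IOC searching finds known threats only.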
Technique 2: Stacking (Frequency Analysis)
Find rare events that might indicate malicious activity. Principle:
attackers do unusual things that stand out statistically.
# Stacking Examples - Finding Rare Events
# Rare processes across environment
index=edr event_type=process_start
| stats count by process_name
| where count < 5
| sort count
# Rare outbound destinations
index=firewall direction=outbound action=allow
| stats count dc(src_ip) as unique_sources by dest_ip
| where count < 10 AND unique_sources < 3
| sort count
# Rare scheduled tasks (potential persistence)
index=windows EventCode=4698
| stats count by TaskName
| where count = 1
# Rare services created
index=windows EventCode=7045
| stats count by ServiceName
| where count < 3
# Rare parent-child process relationships
index=edr event_type=process_start
| stats count by parent_process_name, process_name
| where count < 5
| sort count
# IMPORTANT: Rare ≠ Malicious
# Stacking identifies candidates for investigation
# Human analysis determines if rare = suspicious
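Stripped to its core, stacking is just counting and thresholding. A minimal Python sketch over a toy event list (the process names and threshold are illustrative):

```python
from collections import Counter

# (host, process_name) pairs from process-start telemetry; toy data
events = [
    ("ws01", "chrome.exe"), ("ws02", "chrome.exe"), ("ws03", "chrome.exe"),
    ("ws01", "svchost.exe"), ("ws02", "svchost.exe"),
    ("ws07", "rundli32.exe"),   # one-off, typo-squatted name
]

THRESHOLD = 2  # flag anything seen fewer than this many times
counts = Counter(name for _, name in events)
rare = sorted(name for name, n in counts.items() if n < THRESHOLD)
# rare now holds only "rundli32.exe": a candidate for investigation,
# not a verdict -- a human still decides whether rare means suspicious
```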
Technique 3: Clustering (Pattern Grouping)
Group similar activities and look for anomalies within clusters:
# Clustering Examples - Finding Anomalies in Groups
# Group processes by parent, find unusual children
index=edr event_type=process_start parent_process_name="explorer.exe"
| stats count values(process_name) as children by host
| where count > 20
| mvexpand children
| stats count by children
| where count < 5
# Group network connections by process
index=edr event_type=network_connection
| stats sum(bytes_out) as total_bytes
dc(dest_ip) as unique_destinations
by process_name
| where unique_destinations > 50 OR total_bytes > 1000000000
| sort -total_bytes
# Group authentications by source IP
index=windows EventCode=4624
| stats count dc(TargetUserName) as unique_users by IpAddress
| where unique_users > 10
| sort -unique_users
# Look for: Outliers in clusters
# Unexpected groupings
# Rare combinations within normal groups
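The parent/child grouping above can be sketched as a two-level count: cluster by parent process, then flag rare children within each cluster. Toy data; the cutoff of 5 mirrors the query above:

```python
from collections import defaultdict, Counter

# (parent_process, child_process) pairs; illustrative data
pairs = (
    [("explorer.exe", "chrome.exe")] * 40
    + [("explorer.exe", "winword.exe")] * 15
    + [("winword.exe", "powershell.exe")]   # Office spawning a shell: rare
)

children_of: dict[str, Counter] = defaultdict(Counter)
for parent, child in pairs:
    children_of[parent][child] += 1

# Within each parent cluster, surface children seen fewer than 5 times
outliers = [(p, c) for p, kids in children_of.items()
            for c, n in kids.items() if n < 5]
# Only the winword.exe -> powershell.exe pair survives the cutoff
```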
Technique 4: Behavioral Analysis
Look for suspicious behavior patterns regardless of specific IOCs:
# Behavioral Analysis Examples
# PowerShell with encoded commands (common evasion)
index=windows EventCode=4104
| search ScriptBlockText="*-enc*" OR ScriptBlockText="*-encoded*"
OR ScriptBlockText="*FromBase64*"
| table _time, ComputerName, ScriptBlockText
# Processes running from suspicious directories
index=edr event_type=process_start
| where match(process_path, "(?i)\\\\(temp|tmp|appdata\\\\local\\\\temp)\\\\")
| stats count by process_name, process_path, host
# Admin tools from non-admin systems
index=edr process_name IN ("psexec.exe", "wmic.exe", "net.exe", "nltest.exe")
| where NOT match(host, "(?i)(admin|jump|bastion)")
| stats count by host, process_name, user
# Network connections from unusual processes
index=edr event_type=network_connection
| where process_name IN ("notepad.exe", "calc.exe", "mspaint.exe")
| stats count by process_name, dest_ip, dest_port
# LSASS access (credential theft indicator)
index=edr event_type=process_access target_process="lsass.exe"
| where NOT source_process IN ("csrss.exe", "services.exe", "svchost.exe")
| stats count by source_process, host
# Focus: What attackers DO, not what files they use
# Behavior persists even when tools change
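The encoded-PowerShell pattern above reduces to a case-insensitive regex over script-block text. A Python sketch; the sample strings are invented:

```python
import re

# Flags and function names commonly used to hide PowerShell payloads
ENCODED = re.compile(r"-enc(oded(command)?)?\b|frombase64", re.IGNORECASE)

script_blocks = [
    "Get-Process | Sort-Object CPU",                 # benign admin activity
    "powershell.exe -NoP -Enc SQBFAFgA",             # encoded command flag
    "[System.Convert]::FromBase64String($payload)",  # manual decode
]
suspicious = [s for s in script_blocks if ENCODED.search(s)]
# The last two blocks match; no file hash or IOC was needed,
# because the check targets behavior rather than a specific tool
```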
Technique 5: Baseline Deviation
Compare current activity against established baselines:
# Baseline Deviation Examples
# Processes that appeared recently (not in baseline)
index=edr event_type=process_start earliest=-24h
| stats count by process_name
| search NOT [
search index=edr event_type=process_start earliest=-30d latest=-7d
| stats count by process_name
| fields process_name
]
# Users authenticating from new locations
index=windows EventCode=4624 earliest=-24h
| search NOT [
    search index=windows EventCode=4624 earliest=-90d latest=-7d
    | stats count by TargetUserName, IpAddress
    | fields TargetUserName, IpAddress
  ]
| stats values(IpAddress) as new_ips by TargetUserName
# Unusual outbound data volumes
index=firewall direction=outbound earliest=-24h
| stats sum(bytes) as daily_bytes by src_ip
| join src_ip [
    search index=firewall direction=outbound earliest=-30d latest=-1d
    | bin _time span=1d
    | stats sum(bytes) as day_bytes by src_ip, _time
    | stats avg(day_bytes) as avg_daily by src_ip
  ]
| where daily_bytes > (avg_daily * 10)
# Requires: Good baseline data (30-90 days)
# Understanding of "normal" in your environment
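At its simplest, baseline deviation is a set difference (new entities) or a ratio test (volume spikes). A Python sketch with invented values, mirroring the first and third queries above:

```python
import statistics

# New-process check: seen in the last 24h but absent from the 30-day baseline
baseline_procs = {"chrome.exe", "svchost.exe", "outlook.exe", "teams.exe"}
recent_procs = {"chrome.exe", "svchost.exe", "nc64.exe"}
new_procs = sorted(recent_procs - baseline_procs)
# nc64.exe is new: investigate (could also be a legitimate install)

# Volume check: today's outbound bytes vs. 10x the baseline daily average
baseline_daily_bytes = [2.1e9, 1.8e9, 2.4e9, 2.0e9]  # ~2 GB/day is normal
today_bytes = 41e9                                    # 41 GB today
avg_daily = statistics.mean(baseline_daily_bytes)
exfil_candidate = today_bytes > avg_daily * 10        # well past the threshold
```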
4. Data Sources for Hunting
Different data sources enable different types of hunts. Understanding
what each source provides helps target hypotheses appropriately. The
examples this week draw on EDR process and network telemetry, Windows
event logs (authentication, service and scheduled task creation,
PowerShell script blocks), DNS query logs, and firewall/proxy logs.
5. Executing and Documenting Hunts
Systematic execution and documentation ensure hunts are repeatable,
measurable, and valuable beyond finding individual threats.
Hunt Execution Workflow
┌─────────────────────────────────────────────────────────────────┐
│ HUNT EXECUTION WORKFLOW │
├─────────────────────────────────────────────────────────────────┤
│ │
│ PHASE 1: PREPARATION │
│ ───────────────────── │
│ □ Define hypothesis clearly │
│ □ Identify required data sources │
│ □ Verify data availability and retention │
│ □ Determine time scope (last 7 days? 30 days?) │
│ □ Plan initial queries │
│ │
│ PHASE 2: EXECUTION │
│ ────────────────── │
│ □ Run initial broad query │
│ □ Review result volume—too many? Refine. │
│ □ Apply filters to reduce false positives │
│ □ Stack/cluster results to find outliers │
│ □ Document interesting findings │
│ │
│ PHASE 3: ANALYSIS │
│ ───────────────── │
│ □ For each candidate: Is this expected? Suspicious? Unknown? │
│ □ Gather additional context for suspicious items │
│ □ Correlate across data sources │
│ □ Determine: Finding, No Finding, or Need More Data │
│ │
│ PHASE 4: RESPONSE (if finding) │
│ ────────────────────────────── │
│ □ Escalate to incident response if confirmed threat │
│ □ Scope the compromise │
│ □ Preserve evidence │
│ □ Coordinate containment │
│ │
│ PHASE 5: IMPROVEMENT │
│ ───────────────────── │
│ □ Create/update detection rule for finding │
│ □ Document hunt for future reference │
│ □ Share learnings with team │
│ □ Identify data gaps encountered │
│ □ Queue related hypotheses │
│ │
└─────────────────────────────────────────────────────────────────┘
Hunt Documentation Template
THREAT HUNT DOCUMENTATION
═════════════════════════
HUNT METADATA
─────────────
Hunt ID: TH-2025-042
Hunter: [Name]
Date: 2025-01-15
Duration: 4 hours
Status: Completed - No Finding
HYPOTHESIS
──────────
Statement: "If attackers are using Kerberoasting in our environment,
we would expect to see TGS requests for SPNs associated
with service accounts from workstations."
Source: Red team report identified Kerberoasting as viable
ATT&CK: T1558.003 - Kerberoasting
Priority: High (credential access)
DATA SOURCES
────────────
Primary: Domain Controller Security Logs (Event 4769)
Secondary: EDR process telemetry
Time Range: Last 30 days
Availability: ✓ All required data available
QUERIES EXECUTED
────────────────
Query 1 - Baseline TGS requests:
index=windows EventCode=4769
| stats count by ServiceName, ClientAddress
| where NOT match(ClientAddress, "^10\\.10\\.(1|2)\\.")
    (filter excludes the 10.10.1.x and 10.10.2.x server subnets)
Query 2 - Anomalous encryption types:
index=windows EventCode=4769 TicketEncryptionType=0x17
| stats count by ServiceName, AccountName, ClientAddress
Query 3 - High-volume requesters:
index=windows EventCode=4769
| stats dc(ServiceName) as unique_spns by ClientAddress
| where unique_spns > 10
FINDINGS
────────
Result: No malicious activity identified
Observations:
- 3 workstations showed >10 unique SPN requests
- Investigated: All were IT admin workstations running
legitimate management tools
- No anomalous encryption downgrade attempts
- No requests from unexpected sources
False Positives Identified:
- ServiceNow integration queries multiple SPNs (expected)
- Backup software service account (documented)
DETECTION IMPROVEMENTS
──────────────────────
□ Created alert for: TGS requests with RC4 encryption (0x17)
from non-admin workstations
□ Added exclusions for known legitimate high-volume requesters
□ Documented baseline for future comparison
FOLLOW-UP ACTIONS
─────────────────
□ Schedule re-hunt in 90 days
□ Expand hypothesis to include AS-REP roasting
□ Verify Kerberos logging on all DCs
6. Building a Hunting Program
Individual hunts provide value; a hunting program provides sustained,
improving capability.
Program Components
┌─────────────────────────────────────────────────────────────────┐
│ HUNTING PROGRAM ELEMENTS │
├─────────────────────────────────────────────────────────────────┤
│ │
│ PEOPLE │
│ ────── │
│ • Dedicated hunters (not just spare SOC time) │
│ • Training and skill development │
│ • Career path for hunters │
│ • Collaboration with threat intel, IR, red team │
│ │
│ PROCESS │
│ ─────── │
│ • Regular hunting cadence (weekly/monthly) │
│ • Hypothesis backlog and prioritization │
│ • Documentation standards │
│ • Metrics and reporting │
│ • Feedback loop to detection engineering │
│ │
│ TECHNOLOGY │
│ ────────── │
│ • Hunting platform (SIEM, EDR, data lake) │
│ • Query tools and automation │
│ • Hypothesis and hunt tracking │
│ • Collaboration and knowledge sharing │
│ │
│ INTELLIGENCE │
│ ──────────── │
│ • Threat intel integration │
│ • ATT&CK coverage mapping │
│ • Industry threat awareness │
│ • Internal incident learnings │
│ │
└─────────────────────────────────────────────────────────────────┘
Hunting Program Metrics
Activity: hunts completed per month; hours spent hunting. Shows the
program is active and resourced.
Coverage: ATT&CK techniques hunted; percentage of the environment
covered. Shows the breadth of hunting.
Findings: threats found; incidents initiated from hunts. Demonstrates
value (but low numbers are okay).
Improvements: detection rules created; gaps identified. Shows hunting
improves overall security.
Efficiency: time to complete a hunt; false positive rate. Shows the
program is maturing.
Important: "Threats found" is not the primary success
metric. A hunt that finds nothing but results in better detection rules
is successful. Hunting value includes: threats found, detection improvements,
visibility gaps identified, and increased defender knowledge.
Week 09 Quiz
Test your understanding of Threat Hunting Fundamentals.
Question 1
Why are monitoring, incident response, and threat hunting all necessary,
rather than alternatives to one another?
Reveal Answer
Monitoring finds known threats at scale; incident response handles
confirmed threats; hunting proactively finds the unknown threats that
evade existing rules.
Hunting improves monitoring—findings become new detection rules.
Together they provide defense against both commodity and advanced threats.
Question 2
Create a hunting hypothesis for detecting potential data exfiltration
in a corporate environment. Include the hypothesis statement, data
sources needed, and initial query approach.
Reveal Answer
Hypothesis:
"If attackers are staging and exfiltrating data from our environment,
we would expect to see unusual archive file creation followed by
large outbound data transfers to uncommon destinations."
Data sources needed:
File monitoring / EDR telemetry (archive creation events)
Firewall/proxy logs (outbound bytes by destination)
Network flow data (volume patterns)
Initial queries:
Find archive file creation in unusual locations (not user Downloads)
Stack outbound destinations by total bytes transferred
Correlate: systems with recent archive creation + large outbound transfers
Filter out known backup and file sharing destinations
Question 3
Explain the "stacking" technique in threat hunting. When is it most
effective, and what are its limitations?
Reveal Answer
Stacking technique:
Stacking (frequency analysis) counts occurrences of events and identifies
rare outliers. Based on the principle that attackers do unusual things
that stand out statistically in large datasets.
Most effective when:
Large, relatively homogeneous environment (rare really is rare)
Good baseline data for comparison
Hunting for activities that should be uncommon (new processes, unusual destinations)
Initial triage to identify candidates for deeper investigation
Limitations:
Rare ≠ malicious—many false positives require human analysis
Attackers using common tools may not appear rare
Doesn't work well in diverse environments where everything is "rare"
Requires sufficient data volume for statistics to be meaningful
Question 4
A hunt completes with no threats found. The hypothesis was well-formed
and the data was available. Is this hunt a success or failure? What
should happen next?
Reveal Answer
This hunt is a SUCCESS, not a failure.
Why it's successful:
Provides confidence that the specific threat isn't present (currently)
Validated that detection capability exists for this technique
Hunter gained expertise in this attack pattern
Documentation enables future re-hunting
What should happen next:
Document the hunt thoroughly (queries, observations, baseline)
Create or validate detection rule for ongoing monitoring
Schedule re-hunt (quarterly?) to check again
Refine hypothesis—are there related techniques to hunt?
Share learnings with team
💡 Key insight
Measuring hunting success solely by threats
found creates perverse incentives and undervalues the program.
Question 5
You want to hunt for Kerberoasting attacks (T1558.003) in your
environment. What data sources do you need, and what indicators
would you look for?
Reveal Answer
Data sources:
Domain Controller security logs (Event ID 4769, TGS requests)
EDR process telemetry (secondary)
Indicators to look for:
TGS requests using RC4 encryption (TicketEncryptionType 0x17),
suggesting an encryption downgrade for offline cracking
Single clients requesting an unusually high number of unique SPNs
SPN requests originating from workstations rather than servers
(see the hunt documentation example earlier in this week's content)
Apply hypothesis-driven hunting before moving to reading resources.
🎮 TryHackMe: Threat Hunting
What you'll do: Build and test hunting hypotheses using real telemetry.
Why it matters: Practicing the hunt loop is the fastest way to build intuition.
Time estimate: 2-3 hours
What you'll do: Run hunt queries over a public dataset and document findings.
Why it matters: You learn to pivot, refine, and validate at scale.
Time estimate: 2-3 hours
Task: Translate one hunt finding (or hypothesis) into a Sigma rule.
Why it matters: Hunting value compounds when you ship detections.
Time estimate: 60-90 minutes
What you'll do: Build a hunt hypothesis for a poisoned update scenario.
Why it matters: Supply chain hunts require different telemetry and pivots.
Time estimate: 60-90 minutes
💡 Lab Tip: Document false positives early so you can refine hypotheses quickly.
🧩 Supply Chain Hunt Patterns
Hunting for supply chain compromise focuses on update behavior,
build pipeline artifacts, and unexpected signer changes.
Supply chain hunt signals:
- New or rare update signer
- Build pipeline account anomalies
- Unexpected dependency version changes
- Update traffic to unusual domains
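A first-pass check for the signer signal can be a simple baseline comparison over update metadata. A hedged Python sketch; the product names, signer strings, and field names are invented for illustration:

```python
# Known-good signers per product, recorded from a trusted baseline period
BASELINE_SIGNERS = {
    "AcmeAgent": {"Acme Software Inc."},
    "WidgetSync": {"Widget Corp."},
}

def signer_anomalies(updates: list[dict]) -> list[dict]:
    """Flag updates signed by a party not in the product's baseline."""
    return [u for u in updates
            if u["signer"] not in BASELINE_SIGNERS.get(u["product"], set())]

updates = [
    {"product": "AcmeAgent", "signer": "Acme Software Inc.",  "version": "4.2"},
    {"product": "AcmeAgent", "signer": "Acme Softwares Ltd.", "version": "4.3"},
]
flagged = signer_anomalies(updates)  # the look-alike signer is flagged
```

As with stacking, a flag here is a pivot point for investigation (who pushed the update, from where), not proof of compromise.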
📚 Building on CSY101 Week-13: Threat model update
paths and trust relationships.
Resources
Required
Sqrrl Threat Hunting Reference (Archive)
Foundational threat hunting methodology from the team that
pioneered modern hunting practices.
Reflect on the relationship between threat hunting and automated
detection. How does hunting complement (rather than replace)
traditional security monitoring? What skills and mindset does
effective hunting require beyond technical query abilities?
Consider: How would you justify a threat hunting program to
leadership if hunts often come back "clean" with no findings?
What value does hunting provide beyond finding active threats?