Opening Framing
Intelligence doesn't happen by accident. It follows a structured process that transforms requirements into actionable knowledge. Without this process, analysts chase interesting threats rather than relevant ones, produce reports no one reads, and fail to answer the questions stakeholders actually have.
The intelligence lifecycle provides a framework for systematic intelligence production. It begins with understanding what decisions need support, continues through collection and analysis, and culminates in dissemination that drives action. The cycle then repeats, incorporating feedback to improve future intelligence.
This week covers the complete intelligence lifecycle, with special focus on requirements development—the most critical and often neglected phase. You'll learn to translate organizational needs into intelligence requirements, design collection strategies, and ensure intelligence production aligns with stakeholder needs.
Key insight: Intelligence without requirements is research. Requirements without feedback is guessing. The lifecycle connects them into continuous improvement.
1) The Intelligence Lifecycle
The intelligence lifecycle provides a systematic approach to producing actionable intelligence. While variations exist, most frameworks share common phases:
Intelligence Lifecycle Phases:
┌─────────────────────────────────────────────────────────────┐
│ │
│ ┌──────────────┐ │
│ │ DIRECTION │ ← What do we need to know? │
│ │ (Planning & │ │
│ │ Requirements)│ │
│ └──────┬───────┘ │
│ │ │
│ ▼ │
│ ┌──────────────┐ │
│ │ COLLECTION │ ← Gather relevant data │
│ └──────┬───────┘ │
│ │ │
│ ▼ │
│ ┌──────────────┐ │
│ │ PROCESSING │ ← Normalize, correlate, enrich │
│ └──────┬───────┘ │
│ │ │
│ ▼ │
│ ┌──────────────┐ │
│ │ ANALYSIS │ ← Evaluate, interpret, assess │
│ └──────┬───────┘ │
│ │ │
│ ▼ │
│ ┌──────────────┐ │
│ │DISSEMINATION │ ← Deliver to consumers │
│ └──────┬───────┘ │
│ │ │
│ ▼ │
│ ┌──────────────┐ │
│ │ FEEDBACK │ ← Evaluate effectiveness │
│ └──────┬───────┘ │
│ │ │
│ └─────────────────────────────────────────────┐ │
│ │ │
│ ┌──────────────┐ │ │
│ │ DIRECTION │ ◄───────────────────────────────────┘ │
│ └──────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
Phase Details:
1. DIRECTION (Planning & Requirements):
┌─────────────────────────────────────────────────────────┐
│ Activities: │
│ - Identify stakeholder needs │
│ - Define intelligence requirements │
│ - Prioritize collection efforts │
│ - Allocate resources │
│ │
│ Outputs: │
│ - Intelligence Requirements Document │
│ - Collection Plan │
│ - Priority Intelligence Requirements (PIRs) │
└─────────────────────────────────────────────────────────┘
2. COLLECTION:
┌─────────────────────────────────────────────────────────┐
│ Activities: │
│ - Gather data from identified sources │
│ - Monitor threat feeds and reports │
│ - Collect internal security data │
│ - Conduct OSINT gathering │
│ │
│ Outputs: │
│ - Raw data and reports │
│ - Indicator feeds │
│ - Source documentation │
└─────────────────────────────────────────────────────────┘
3. PROCESSING:
┌─────────────────────────────────────────────────────────┐
│ Activities: │
│ - Normalize data formats │
│ - Deduplicate indicators │
│ - Enrich with context │
│ - Correlate across sources │
│ │
│ Outputs: │
│ - Structured data in TIP │
│ - Enriched indicators │
│ - Correlated datasets │
└─────────────────────────────────────────────────────────┘
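The processing activities above can be sketched in a few lines of Python. This is a minimal illustration, not a real TIP pipeline: the field names (`type`, `value`, `context`) and the enrichment lookup are hypothetical.

```python
def normalize(raw):
    """Normalize format: trim whitespace and lowercase so duplicates match."""
    return {"type": raw["type"], "value": raw["value"].strip().lower()}

def process(raw_indicators, enrichment):
    """Normalize, deduplicate, and enrich raw indicators from multiple feeds."""
    seen = set()
    processed = []
    for raw in raw_indicators:
        ind = normalize(raw)
        key = (ind["type"], ind["value"])
        if key in seen:          # deduplicate across sources
            continue
        seen.add(key)
        # enrich with context, e.g. campaign reporting from another source
        ind["context"] = enrichment.get(ind["value"], "no context available")
        processed.append(ind)
    return processed

feeds = [
    {"type": "domain", "value": "Evil-C2.example "},
    {"type": "domain", "value": "evil-c2.example"},   # duplicate after normalization
    {"type": "ip", "value": "203.0.113.7"},
]
context = {"evil-c2.example": "seen in phishing lures, March 2024"}
result = process(feeds, context)   # two indicators survive, one enriched
```

Real platforms add far more (validity windows, source tagging, STIX conversion), but the normalize-dedupe-enrich order is the core of this phase.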
4. ANALYSIS:
┌─────────────────────────────────────────────────────────┐
│ Activities: │
│ - Evaluate source reliability │
│ - Assess information credibility │
│ - Identify patterns and trends │
│ - Develop assessments and judgments │
│ - Answer intelligence requirements │
│ │
│ Outputs: │
│ - Intelligence assessments │
│ - Threat profiles │
│ - Campaign analysis │
│ - Trend reports │
└─────────────────────────────────────────────────────────┘
5. DISSEMINATION:
┌─────────────────────────────────────────────────────────┐
│ Activities: │
│ - Package for appropriate audience │
│ - Classify and mark appropriately │
│ - Deliver through appropriate channels │
│ - Ensure receipt and understanding │
│ │
│ Outputs: │
│ - Intelligence reports │
│ - Briefings │
│ - IOC feeds │
│ - Alerts and warnings │
└─────────────────────────────────────────────────────────┘
6. FEEDBACK:
┌─────────────────────────────────────────────────────────┐
│ Activities: │
│ - Collect consumer feedback │
│ - Assess intelligence accuracy │
│ - Evaluate timeliness and relevance │
│ - Identify improvement opportunities │
│ │
│ Outputs: │
│ - Consumer surveys │
│ - Accuracy assessments │
│ - Process improvements │
│ - Updated requirements │
└─────────────────────────────────────────────────────────┘
Key insight: The lifecycle is continuous, not linear. Feedback informs new requirements, and analysis may reveal collection gaps requiring new direction.
2) Intelligence Requirements
Intelligence requirements define what the organization needs to know. Well-crafted requirements focus collection and analysis on relevant threats:
Requirements Hierarchy:
┌─────────────────────────────────────────────────────────────┐
│ PRIORITY INTELLIGENCE REQUIREMENTS (PIRs) │
│ │
│ Highest-priority questions requiring immediate attention │
│ Typically 3-5 questions approved by leadership │
│ │
│ Example: │
│ "What ransomware groups are actively targeting healthcare │
│ organizations, and what are their initial access methods?" │
├─────────────────────────────────────────────────────────────┤
│ SPECIFIC INTELLIGENCE REQUIREMENTS (SIRs) │
│ │
│ Detailed questions supporting PIRs │
│ Actionable and measurable │
│ │
│ Examples supporting above PIR: │
│ - Which ransomware groups targeted healthcare in Q1 2024? │
│ - What phishing lures are these groups using? │
│ - What vulnerabilities are being exploited for access? │
│ - What is the typical time from access to encryption? │
├─────────────────────────────────────────────────────────────┤
│ ESSENTIAL ELEMENTS OF INFORMATION (EEIs) │
│ │
│ Specific data points needed to answer SIRs │
│ Guides collection activities │
│ │
│ Examples: │
│ - Ransomware group names and aliases │
│ - Phishing email subjects and sender domains │
│ - Exploited CVE numbers │
│ - Attack timeline data │
└─────────────────────────────────────────────────────────────┘
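The PIR → SIR → EEI hierarchy maps naturally onto nested records. A sketch using Python dataclasses (the field names are illustrative, not a standard schema):

```python
from dataclasses import dataclass, field

@dataclass
class EEI:
    description: str              # a specific collectible data point

@dataclass
class SIR:
    question: str                 # detailed question supporting a PIR
    eeis: list = field(default_factory=list)

@dataclass
class PIR:
    question: str                 # leadership-approved priority question
    stakeholder: str
    sirs: list = field(default_factory=list)

pir = PIR(
    question="What ransomware groups are actively targeting healthcare "
             "organizations, and what are their initial access methods?",
    stakeholder="CISO",
    sirs=[SIR("Which ransomware groups targeted healthcare in Q1 2024?",
              eeis=[EEI("ransomware group names and aliases")])],
)
```

Storing requirements as structured data (rather than prose in a document) makes it possible to trace every collection task and report back to the PIR it serves.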
Developing Requirements:
Requirements Development Process:
Step 1: Identify Stakeholders
┌─────────────────────────────────────────────────────────────┐
│ Who needs intelligence? │
│ │
│ - Executive leadership (strategic decisions) │
│ - Security management (program priorities) │
│ - SOC/IR teams (detection and response) │
│ - Vulnerability management (patch priorities) │
│ - Risk management (risk assessments) │
│ - Business units (specific concerns) │
└─────────────────────────────────────────────────────────────┘
Step 2: Conduct Stakeholder Interviews
┌─────────────────────────────────────────────────────────────┐
│ Questions to ask: │
│ │
│ - What decisions do you make that need threat context? │
│ - What keeps you up at night regarding security? │
│ - What threats are you most concerned about? │
│ - What information would help you do your job better? │
│ - How do you prefer to receive intelligence? │
│ - What timeframe do you need intelligence for? │
└─────────────────────────────────────────────────────────────┘
Step 3: Analyze Business Context
┌─────────────────────────────────────────────────────────────┐
│ Consider: │
│ │
│ - Industry vertical and typical threats │
│ - Geographic presence and regional threats │
│ - Technology stack and associated vulnerabilities │
│ - Regulatory requirements │
│ - Business partnerships and supply chain │
│ - Intellectual property and valuable data │
│ - Recent incidents and near-misses │
└─────────────────────────────────────────────────────────────┘
Step 4: Draft Requirements
┌─────────────────────────────────────────────────────────────┐
│ Good requirements are: │
│ │
│ S - Specific (clear, unambiguous) │
│ M - Measurable (can assess if answered) │
│ A - Achievable (realistically answerable) │
│ R - Relevant (matters to stakeholder) │
│ T - Time-bound (has deadline or cadence) │
└─────────────────────────────────────────────────────────────┘
Step 5: Prioritize and Validate
┌─────────────────────────────────────────────────────────────┐
│ Prioritization criteria: │
│ │
│ - Impact if not answered │
│ - Urgency of decision supported │
│ - Number of stakeholders served │
│ - Feasibility of collection │
│ │
│ Validate with stakeholders before finalizing │
└─────────────────────────────────────────────────────────────┘
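The four prioritization criteria can be combined into a simple weighted score. The weights and the 1-5 rating scale below are assumptions for illustration; real programs set these with leadership.

```python
def priority_score(impact, urgency, stakeholders_served, feasibility,
                   weights=(0.35, 0.30, 0.20, 0.15)):
    """Weighted sum of 1-5 ratings for the four prioritization criteria.
    The weights are illustrative, not a standard."""
    ratings = (impact, urgency, stakeholders_served, feasibility)
    return round(sum(w * r for w, r in zip(weights, ratings)), 2)

candidates = {
    "Ransomware targeting healthcare": priority_score(5, 5, 4, 4),
    "Medical device threats":          priority_score(4, 3, 2, 2),
}
ranked = sorted(candidates, key=candidates.get, reverse=True)
```

The point is not the arithmetic but the discipline: scoring forces explicit trade-offs that are then validated with stakeholders before finalizing.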
Requirements Examples:
Example PIRs by Industry:
FINANCIAL SERVICES:
PIR-1: What threat actors are targeting financial institutions
for monetary theft, and what methods are they using?
PIR-2: What emerging fraud techniques should we prepare for?
PIR-3: Are there threats to our specific payment platforms?
HEALTHCARE:
PIR-1: What ransomware groups are targeting healthcare, and
how can we detect their initial access?
PIR-2: Are there threats to our medical device ecosystem?
PIR-3: What data theft campaigns target patient records?
MANUFACTURING:
PIR-1: What nation-state actors target our industry for IP theft?
PIR-2: Are there threats to our industrial control systems?
PIR-3: What supply chain compromises could affect our operations?
GOVERNMENT:
PIR-1: What APT groups are targeting government agencies, and
what are their objectives?
PIR-2: What influence operations target our organization?
PIR-3: Are there insider threat indicators we should monitor?
Example SIRs (Supporting Healthcare PIR-1):
SIR-1.1: Which specific ransomware families have encrypted
healthcare organizations in the past 90 days?
SIR-1.2: What initial access vectors did these attacks use?
SIR-1.3: What is the average dwell time before encryption?
SIR-1.4: What are known C2 infrastructure indicators?
SIR-1.5: What endpoint artifacts indicate pre-ransomware activity?
Key insight: Requirements should be reviewed quarterly at minimum. Threat landscapes change, and requirements must evolve to remain relevant.
3) Collection Management
Collection management ensures the right data is gathered from the right sources to answer intelligence requirements:
Collection Planning:
Collection Strategy Components:
1. SOURCE IDENTIFICATION
┌─────────────────────────────────────────────────────────────┐
│ For each requirement, identify: │
│ │
│ - What sources might have this information? │
│ - What sources do we have access to? │
│ - What gaps exist in our source coverage? │
│ - What new sources should we acquire? │
└─────────────────────────────────────────────────────────────┘
2. COLLECTION TASKING
┌─────────────────────────────────────────────────────────────┐
│ Assign collection tasks: │
│ │
│ - Who will collect from each source? │
│ - What specific data should be gathered? │
│ - What is the collection frequency? │
│ - How will data be delivered? │
└─────────────────────────────────────────────────────────────┘
3. RESOURCE ALLOCATION
┌─────────────────────────────────────────────────────────────┐
│ Balance resources: │
│ │
│ - Analyst time for collection activities │
│ - Budget for commercial sources │
│ - Technology for automated collection │
│ - Partnerships for shared collection │
└─────────────────────────────────────────────────────────────┘
Collection Source Matrix:
Mapping Sources to Requirements:
Example Matrix:
                    │  PIR-1   │  PIR-2   │  PIR-3   │
                    │Ransomware│Med Device│Data Theft│
────────────────────┼──────────┼──────────┼──────────┤
Internal SIEM       │    ●     │    ●     │    ●     │
Internal Incidents  │    ●     │    ○     │    ●     │
Industry ISAC       │    ●     │    ●     │    ●     │
CISA Alerts         │    ●     │    ○     │    ○     │
Vendor Reports      │    ●     │    ●     │    ●     │
Commercial TIP      │    ●     │    ○     │    ●     │
Dark Web Monitor    │    ○     │    ○     │    ●     │
OSINT               │    ●     │    ○     │    ●     │
● = Primary source (high relevance)
○ = Secondary source (some relevance)
(blank) = Not applicable
Gap Analysis:
- PIR-2 (Medical Devices): Limited external sources
Action: Join ICS-CERT, engage device vendors
- PIR-3 (Data Theft): Dark web monitoring is secondary
Action: Evaluate commercial dark web service
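A source matrix like the one above can be kept as data, so gap analysis becomes a query instead of a manual review. A sketch (the scores below are a simplified, hypothetical version of the matrix, encoding 2 = primary, 1 = secondary, 0 = not applicable):

```python
# Source-to-PIR matrix: 2 = primary, 1 = secondary, 0 = not applicable.
matrix = {
    "Internal SIEM":    {"PIR-1": 2, "PIR-2": 2, "PIR-3": 2},
    "Industry ISAC":    {"PIR-1": 2, "PIR-2": 2, "PIR-3": 2},
    "CISA Alerts":      {"PIR-1": 2, "PIR-2": 1, "PIR-3": 1},
    "Dark Web Monitor": {"PIR-1": 1, "PIR-2": 0, "PIR-3": 1},
}

def coverage_gaps(matrix, min_primaries=3):
    """Flag PIRs covered by fewer primary sources than the threshold."""
    pirs = next(iter(matrix.values())).keys()
    gaps = {}
    for pir in pirs:
        primaries = sum(1 for src in matrix.values() if src[pir] == 2)
        if primaries < min_primaries:
            gaps[pir] = primaries
    return gaps

gaps = coverage_gaps(matrix)   # PIR-2 and PIR-3 fall below the threshold
```

The threshold of three primary sources is an arbitrary example; the useful output is the list of PIRs whose collection plan needs new sources.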
Collection Disciplines:
Intelligence Collection Disciplines:
OSINT (Open Source Intelligence):
┌─────────────────────────────────────────────────────────────┐
│ Sources: Public websites, social media, news, research │
│ Strengths: Low cost, broad coverage, legal │
│ Weaknesses: Noise, verification challenges │
│ Tools: Maltego, Shodan, social media APIs │
└─────────────────────────────────────────────────────────────┘
SIGINT (Signals Intelligence) - Internal:
┌─────────────────────────────────────────────────────────────┐
│ Sources: Network traffic, DNS logs, proxy logs │
│ Strengths: High relevance, real-time │
│ Weaknesses: Volume, privacy considerations │
│ Tools: SIEM, NDR, DNS monitoring │
└─────────────────────────────────────────────────────────────┘
HUMINT (Human Intelligence):
┌─────────────────────────────────────────────────────────────┐
│ Sources: Industry contacts, ISACs, conferences │
│ Strengths: Context, early warning, trust relationships │
│ Weaknesses: Unscalable, subjective │
│ Methods: Networking, information sharing groups │
└─────────────────────────────────────────────────────────────┘
TECHINT (Technical Intelligence):
┌─────────────────────────────────────────────────────────────┐
│ Sources: Malware analysis, reverse engineering, forensics │
│ Strengths: Detailed technical insight │
│ Weaknesses: Requires expertise, time-consuming │
│ Tools: Sandbox, disassemblers, forensic tools │
└─────────────────────────────────────────────────────────────┘
Vendor/Commercial Intelligence:
┌─────────────────────────────────────────────────────────────┐
│ Sources: TIP platforms, threat feeds, research services │
│ Strengths: Processed, contextualized, timely │
│ Weaknesses: Cost, may not be specific to your org │
│ Examples: Recorded Future, Mandiant, CrowdStrike │
└─────────────────────────────────────────────────────────────┘
Key insight: Collection without requirements is hoarding. Every collection effort should trace to a specific intelligence need.
4) Analysis and Production
Analysis transforms collected data into intelligence that answers requirements and supports decisions:
Analysis Process:
Step 1: EVALUATE
┌─────────────────────────────────────────────────────────────┐
│ Assess source and information quality: │
│ │
│ Source Reliability: │
│ - Track record of accuracy │
│ - Access to information │
│ - Potential biases │
│ │
│ Information Credibility: │
│ - Consistency with other sources │
│ - Logical coherence │
│ - Verifiability │
└─────────────────────────────────────────────────────────────┘
Step 2: INTEGRATE
┌─────────────────────────────────────────────────────────────┐
│ Combine information from multiple sources: │
│ │
│ - Correlate indicators across sources │
│ - Identify patterns and relationships │
│ - Resolve conflicting information │
│ - Fill gaps with additional collection │
└─────────────────────────────────────────────────────────────┘
Step 3: INTERPRET
┌─────────────────────────────────────────────────────────────┐
│ Derive meaning from integrated data: │
│ │
│ - What does this mean for our organization? │
│ - What is the adversary trying to accomplish? │
│ - What is the likely course of action? │
│ - What are alternative explanations? │
└─────────────────────────────────────────────────────────────┘
Step 4: ASSESS
┌─────────────────────────────────────────────────────────────┐
│ Develop judgments and recommendations: │
│ │
│ - What is our confidence level? │
│ - What are the implications? │
│ - What actions should be taken? │
│ - What should we continue monitoring? │
└─────────────────────────────────────────────────────────────┘
Analytical Techniques:
Structured Analytic Techniques:
ANALYSIS OF COMPETING HYPOTHESES (ACH):
┌─────────────────────────────────────────────────────────────┐
│ Purpose: Evaluate multiple explanations objectively │
│ │
│ Process: │
│ 1. Identify all possible hypotheses │
│ 2. List evidence and arguments │
│ 3. Create matrix: hypotheses vs. evidence │
│ 4. Assess consistency (++, +, -, --, N/A) │
│ 5. Identify diagnostics (evidence that distinguishes) │
│ 6. Draw conclusions based on inconsistencies │
│ │
│ Example Application: │
│ H1: Attack by APT29 for espionage │
│ H2: Attack by cybercriminal for ransomware │
│ H3: Attack by insider for data theft │
│ │
│ Evidence: TTP analysis, targeting, timing, tools used │
└─────────────────────────────────────────────────────────────┘
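The ACH matrix lends itself to a small scoring sketch. The hypotheses mirror the example above; the evidence items and +1/0/-1 scale are illustrative (analysts often use ++/+/-/-- instead). The key ACH move is preserved: rank by fewest inconsistencies, not most confirmations.

```python
# ACH matrix: +1 = evidence consistent with hypothesis, -1 = inconsistent,
# 0 = neutral or not applicable. Evidence items are hypothetical.
matrix = {
    "H1: APT29 espionage":    {"TTP overlap": 1,  "ransom note": -1, "data staged": 1},
    "H2: ransomware crew":    {"TTP overlap": 0,  "ransom note": 1,  "data staged": 0},
    "H3: insider data theft": {"TTP overlap": -1, "ransom note": -1, "data staged": 1},
}

def rank_hypotheses(matrix):
    """Rank by inconsistency count, fewest first (the ACH decision rule)."""
    inconsistencies = {h: sum(1 for score in evidence.values() if score < 0)
                       for h, evidence in matrix.items()}
    return sorted(inconsistencies.items(), key=lambda kv: kv[1])

ranking = rank_hypotheses(matrix)   # H2 survives with zero inconsistencies
```

Automating the count does not replace the analytic judgment in steps 1-2 and 5-6; it only makes the bookkeeping honest.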
KEY ASSUMPTIONS CHECK:
┌─────────────────────────────────────────────────────────────┐
│ Purpose: Identify and test underlying assumptions │
│ │
│ Process: │
│ 1. List assumptions in current analysis │
│ 2. Assess each assumption's validity │
│ 3. Consider impact if assumption is wrong │
│ 4. Identify how to test assumptions │
│ │
│ Example: │
│ Assumption: "Adversary will use same C2 infrastructure" │
│ Validity: Medium - adversaries do rotate infrastructure │
│ Impact if wrong: Miss detection of new campaign │
│ Test: Monitor for similar TTPs with different infrastructure│
└─────────────────────────────────────────────────────────────┘
RED TEAM ANALYSIS:
┌─────────────────────────────────────────────────────────────┐
│ Purpose: Challenge analysis from adversary perspective │
│ │
│ Process: │
│ 1. Adopt adversary viewpoint │
│ 2. Identify how adversary might counter our analysis │
│ 3. Consider adversary deception possibilities │
│ 4. Stress-test defensive recommendations │
└─────────────────────────────────────────────────────────────┘
Expressing Confidence:
Intelligence Confidence Levels:
ODNI (US Intelligence Community) Standard:
High Confidence:
- Based on high-quality information
- Multiple independent sources agree
- Strong logical basis
- Few alternative explanations
Moderate Confidence:
- Based on credibly sourced information
- Not fully corroborated
- Logical interpretation but gaps exist
- Some alternative explanations possible
Low Confidence:
- Based on limited or fragmentary information
- Cannot be corroborated
- Significant gaps in knowledge
- Multiple alternative explanations
Expressing Uncertainty:
Words of Estimative Probability:
┌────────────────────┬─────────────────────────────────────┐
│ Term │ Approximate Probability │
├────────────────────┼─────────────────────────────────────┤
│ Almost certain │ 90-99% │
│ Highly likely │ 80-90% │
│ Likely │ 60-80% │
│ Roughly even │ 40-60% │
│ Unlikely │ 20-40% │
│ Highly unlikely │ 10-20% │
│ Remote possibility │ 1-10% │
└────────────────────┴─────────────────────────────────────┘
Example Statement:
"We assess with moderate confidence that APT29 is likely
(60-80%) to target vaccine research organizations in the
next 90 days, based on recent targeting patterns and
geopolitical context."
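To keep report language consistent, the estimative-probability table can be encoded as a lookup so every numeric estimate maps to the same term. A sketch of that table as code:

```python
# Probability floors taken from the table above, highest band first.
BANDS = [
    (0.90, "almost certain"),
    (0.80, "highly likely"),
    (0.60, "likely"),
    (0.40, "roughly even"),
    (0.20, "unlikely"),
    (0.10, "highly unlikely"),
    (0.01, "remote possibility"),
]

def estimative_term(p):
    """Return the estimative term for probability p (0.0 to 1.0)."""
    for floor, term in BANDS:
        if p >= floor:
            return term
    return "remote possibility"   # below 1%: lowest band

estimative_term(0.70)   # "likely", matching the example statement above
```

A shared lookup like this prevents one analyst's "likely" from meaning 55% while another's means 85%.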
Key insight: Stating confidence levels is essential. Consumers need to understand how much weight to give your assessments when making decisions.
5) Dissemination and Feedback
Intelligence must reach the right people in the right format at the right time to drive action:
Dissemination Principles:
AUDIENCE AWARENESS:
┌─────────────────────────────────────────────────────────────┐
│ Consider: │
│ - Technical sophistication │
│ - Decision-making authority │
│ - Time constraints │
│ - Preferred format │
│ - Security clearance/need-to-know │
└─────────────────────────────────────────────────────────────┘
TIMELINESS:
┌─────────────────────────────────────────────────────────────┐
│ Balance: │
│ - Speed vs. accuracy │
│ - Preliminary vs. final │
│ - Urgent alerts vs. scheduled reporting │
│ │
│ Rule: Imperfect intelligence now often beats │
│ perfect intelligence too late │
└─────────────────────────────────────────────────────────────┘
ACTIONABILITY:
┌─────────────────────────────────────────────────────────────┐
│ Include: │
│ - Clear recommendations │
│ - Specific actions to take │
│ - Indicators to implement │
│ - Questions answered │
└─────────────────────────────────────────────────────────────┘
Intelligence Products:
Common Intelligence Products:
TACTICAL PRODUCTS:
┌─────────────────────────────────────────────────────────────┐
│ IOC Reports │
│ - IP addresses, domains, hashes │
│ - Context for each indicator │
│ - Confidence and validity period │
│ - Detection guidance │
│ │
│ Audience: SOC, detection engineering │
│ Frequency: As needed, often daily │
└─────────────────────────────────────────────────────────────┘
OPERATIONAL PRODUCTS:
┌─────────────────────────────────────────────────────────────┐
│ Threat Actor Profiles │
│ - Actor overview and attribution │
│ - Motivation and objectives │
│ - TTPs mapped to ATT&CK │
│ - Infrastructure patterns │
│ - Detection and mitigation guidance │
│ │
│ Campaign Reports │
│ - Campaign timeline and scope │
│ - Targets and victimology │
│ - Attack chain analysis │
│ - IOCs and detection rules │
│ │
│ Audience: IR, threat hunting, security management │
│ Frequency: As campaigns identified │
└─────────────────────────────────────────────────────────────┘
STRATEGIC PRODUCTS:
┌─────────────────────────────────────────────────────────────┐
│ Threat Landscape Reports │
│ - Industry threat overview │
│ - Trend analysis │
│ - Emerging threats │
│ - Strategic recommendations │
│ │
│ Risk Assessments │
│ - Threat likelihood analysis │
│ - Potential impact assessment │
│ - Prioritized risk register │
│ │
│ Audience: Executives, board, security leadership │
│ Frequency: Quarterly, annually │
└─────────────────────────────────────────────────────────────┘
Feedback Collection:
Feedback Mechanisms:
FORMAL FEEDBACK:
┌─────────────────────────────────────────────────────────────┐
│ Consumer Surveys: │
│ - Was the intelligence useful? │
│ - Did it answer your questions? │
│ - Was it timely enough? │
│ - What additional information would help? │
│ - How did you use this intelligence? │
│ │
│ Effectiveness Reviews: │
│ - Did intelligence lead to action? │
│ - Were predictions accurate? │
│ - Did detections based on intel fire? │
│ - Were incidents prevented or detected? │
└─────────────────────────────────────────────────────────────┘
INFORMAL FEEDBACK:
┌─────────────────────────────────────────────────────────────┐
│ Regular Touchpoints: │
│ - Weekly meetings with key consumers │
│ - Participation in SOC/IR discussions │
│ - Post-incident debriefs │
│ - Informal conversations │
└─────────────────────────────────────────────────────────────┘
METRICS-BASED FEEDBACK:
┌─────────────────────────────────────────────────────────────┐
│ Track: │
│ - Report read rates │
│ - IOC match rates in SIEM │
│ - Detection rules triggered │
│ - Incidents prevented/detected │
│ - Time from intel to detection │
└─────────────────────────────────────────────────────────────┘
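Two of the metrics above are straightforward to compute from SIEM exports. A minimal sketch, assuming you can extract the published/first-detected timestamps per indicator (the data shapes here are hypothetical):

```python
from datetime import datetime

def ioc_match_rate(iocs_published, iocs_matched):
    """Fraction of published IOCs that ever matched in the SIEM."""
    return iocs_matched / iocs_published if iocs_published else 0.0

def mean_intel_to_detection_days(events):
    """events: list of (published, first_detected) datetime pairs."""
    deltas = [(detected - published).days for published, detected in events]
    return sum(deltas) / len(deltas) if deltas else None

events = [
    (datetime(2024, 3, 1), datetime(2024, 3, 4)),    # 3 days
    (datetime(2024, 3, 10), datetime(2024, 3, 11)),  # 1 day
]
rate = ioc_match_rate(500, 35)                 # 7% of IOCs ever matched
lag = mean_intel_to_detection_days(events)     # average 2 days intel-to-detection
```

A low match rate is itself feedback: it may mean the feed is irrelevant to your environment, which should flow back into collection priorities.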
Using Feedback:
- Refine requirements based on gaps
- Adjust collection priorities
- Improve analysis techniques
- Modify product formats
- Enhance dissemination timing
Key insight: Feedback closes the loop. Without it, you're producing intelligence in a vacuum, uncertain if it helps anyone.
Real-World Context
Case Study: Healthcare Intelligence Requirements
A hospital system implemented a threat intelligence program starting with requirements development. Stakeholder interviews revealed: The CISO worried about ransomware disrupting patient care. The CIO was concerned about medical device vulnerabilities. The compliance officer needed to understand threats for risk assessments. The SOC wanted actionable indicators. From these needs, three PIRs emerged: ransomware threats to healthcare, medical device vulnerabilities, and healthcare data theft campaigns. Collection sources were mapped—ISAC membership proved most valuable, supplemented by CISA alerts and internal incident data. Monthly reports to leadership and weekly IOC feeds to SOC addressed different consumer needs. Quarterly requirements review ensured ongoing relevance.
Case Study: Failed Intelligence Program
A financial services company purchased an expensive commercial threat intelligence platform without defining requirements. Analysts ingested thousands of indicators daily with no prioritization. Leadership received generic threat reports that didn't address their specific concerns. The SOC was overwhelmed with alerts from indicators irrelevant to their environment. After two years, the program was cancelled as "not providing value." Post-mortem revealed: no requirements process, no consumer engagement, no feedback mechanisms. They had data, not intelligence.
Requirements Template:
Intelligence Requirements Document Template:
ORGANIZATION: [Name]
PERIOD: [Q1 2024, etc.]
APPROVED BY: [Leadership name]
DATE: [Date]
PRIORITY INTELLIGENCE REQUIREMENTS:
PIR-1: [Question]
Stakeholder: [Who needs this]
Decision Supported: [What decision]
Timeframe: [Ongoing, specific date]
Collection Sources: [Primary sources]
Supporting SIRs:
- SIR-1.1: [Specific question]
- SIR-1.2: [Specific question]
- SIR-1.3: [Specific question]
PIR-2: [Question]
[Same structure]
PIR-3: [Question]
[Same structure]
STANDING INTELLIGENCE REQUIREMENTS:
- [Ongoing monitoring topics]
REVIEW SCHEDULE:
- Quarterly requirement review
- Monthly collection assessment
- Weekly production meeting
The difference between successful and failed intelligence programs usually comes down to requirements and feedback—not tools or data sources.
Guided Lab: Requirements Development
In this lab, you'll develop intelligence requirements for a hypothetical organization, practicing the full requirements development process.
Lab Environment:
- Scenario description (provided organization profile)
- Requirements template
- Collection source reference
Exercise Steps:
- Review the organization profile (industry, assets, concerns)
- Identify key stakeholders and their decision needs
- Draft 3-5 Priority Intelligence Requirements
- Develop Supporting Intelligence Requirements for each PIR
- Map collection sources to requirements
- Identify collection gaps and propose solutions
- Define appropriate products for each consumer
Reflection Questions:
- How did stakeholder needs shape your requirements?
- What trade-offs did you make in prioritization?
- How would you validate these requirements with stakeholders?
Week Outcome Check
By the end of this week, you should be able to:
- Explain all phases of the intelligence lifecycle
- Develop Priority Intelligence Requirements from stakeholder needs
- Create Supporting Intelligence Requirements and EEIs
- Map collection sources to intelligence requirements
- Apply structured analytic techniques
- Express analytical confidence appropriately
- Design intelligence products for different audiences
- Implement feedback mechanisms for continuous improvement
🎯 Hands-On Labs (Free & Essential)
Build the intelligence lifecycle workflow before moving to reading resources.
🎮 TryHackMe: Cyber Defense Frameworks
What you'll do: Learn structured defense frameworks and lifecycle concepts.
Why it matters: Frameworks help align intel outputs with decisions.
Time estimate: 1.5-2 hours
📝 Lab Exercise: PIR/SIR Drafting
Task: Write 3 Priority Intelligence Requirements and 2 Supporting IRs each.
Deliverable: Requirement list mapped to stakeholders and decisions.
Why it matters: Good requirements prevent wasted collection effort.
Time estimate: 60-90 minutes
🧭 CISA Advisories: Collection-to-Product Mapping
What you'll do: Map one advisory to collection sources and outputs.
Why it matters: Connects lifecycle stages to real intel products.
Time estimate: 60-90 minutes
🧩 Lab: Supply Chain PIRs
What you'll do: Draft PIRs focused on third-party and vendor risk.
Deliverable: 3 PIRs tied to supplier criticality and exposure.
Why it matters: Supply chain questions require clear requirements.
Time estimate: 60-90 minutes
💡 Lab Tip: Define a consumer and decision for every intelligence product you create.
🧩 Third-Party Intelligence Requirements
Requirements must capture vendor risk, dependency exposure, and business-critical suppliers. This is where supply chain intelligence starts.
Example supply chain PIRs:
- Which vendors have admin access to production systems?
- Which suppliers were recently breached or exploited?
- What dependencies lack patch SLAs or SBOMs?
📚 Building on CSY101 Week-14: Align third-party risk monitoring with audit requirements.
Resources
Lab
Complete the following lab exercises to practice intelligence lifecycle and requirements development.