Week Introduction
Cybersecurity is not about hackers and firewalls. It is about systems under uncertainty — managing risk when you cannot predict every threat, control every user, or eliminate every vulnerability.
This week establishes the foundational lens: cybersecurity as a discipline of risk reasoning, systems thinking, and trade-off justification. Everything you learn for the next three years builds on this foundation.
Learning Outcomes (Week 1 Focus)
By the end of this week, you should be able to:
- LO1 - Systems Thinking: Explain cybersecurity problems as properties of systems, not isolated technical failures
- LO2 - Asset & Risk Reasoning: Identify what needs protection and why, using the CIA triad and risk formula (Threat × Vulnerability × Impact)
- LO3 - Framework Application: Map security activities to the NIST Cybersecurity Framework's five functions (Identify, Protect, Detect, Respond, Recover)
- LO4 - Professional Ethics: Explain ethical responsibilities in cybersecurity, including responsible disclosure and legal boundaries
- LO5 - Risk Communication: Translate technical security concepts into business language for non-technical stakeholders
- LO8 - Integration: Begin constructing a coherent security narrative (foundation for later synthesis)
Lesson 1.1 · Cybersecurity Is a Systems Problem
Common misconception: "Cybersecurity = preventing hackers from breaking in."
Reality: Cybersecurity is about protecting socio-technical systems — systems built by humans, operated by humans, and connected to other systems. Security failures emerge from design choices, not just attacker skill.
Example: When a hospital's patient records are exposed, the problem is rarely "the hacker was too good." It's usually: weak access controls (design), misconfigured databases (operation), or unpatched software (maintenance). The system was vulnerable by design.
Why this matters: If you think security is about technology alone, you'll miss the human, process, and organizational factors that create most vulnerabilities. Systems thinking means asking: "What assumptions does this system make? Where do they fail?"
Lesson 1.2 · The CIA Triad (Foundational Risk Model)
The CIA Triad is the oldest and most durable model in cybersecurity. It defines three fundamental properties that security aims to protect:
- Confidentiality: Information is accessible only to those authorized to access it.
  Failure example: Patient medical records posted publicly due to misconfigured cloud storage.
- Integrity: Information and systems are accurate and have not been tampered with.
  Failure example: An attacker modifies transaction amounts in a banking system.
- Availability: Authorized users can access systems and data when needed.
  Failure example: A ransomware attack encrypts hospital systems during an emergency.
These three properties are often in tension. Maximizing confidentiality can reduce availability; maximizing availability can weaken integrity checks. Your job is to balance these trade-offs based on context.
Lesson 1.3 · Perfect Security Does Not Exist (Trade-offs Are Inevitable)
The security trilemma: You cannot simultaneously maximize security, usability, and cost-effectiveness. Every system makes trade-offs.
Real-world examples:
- Multi-factor authentication (MFA): Increases security (harder to compromise) but reduces usability (more steps to log in). Organizations must decide: Is the friction worth the protection?
- Full disk encryption: Protects confidentiality if a laptop is stolen, but slightly reduces performance and complicates recovery if the user forgets their password.
- Open vs. closed networks: An air-gapped (isolated) network is very secure but sacrifices convenience and connectivity. A fully open network is convenient but exposes more attack surface.
Mature cybersecurity professionals don't seek "perfect security." They seek acceptable risk — reducing exposure to a level the organization can justify given its resources, mission, and threat environment.
Lesson 1.4 · Risk = Threat × Vulnerability × Impact
Risk is not binary (safe/unsafe). It is a function of three variables:
- Threat: What or who could cause harm? (Attackers, accidents, natural events)
- Vulnerability: What weakness could they exploit? (Bugs, misconfigurations, human error)
- Impact: How bad would the damage be? (Data loss, financial harm, reputational damage)
Example: A university learning management system (LMS) stores student grades.
- Threat: Motivated students (or external attackers) who want to change grades
- Vulnerability: Weak authentication (passwords only, no MFA) and insufficient access logging
- Impact: Grade tampering undermines academic integrity, potential legal consequences, loss of accreditation
Risk calculation: Even if the vulnerability exists, if there is no credible threat (e.g., the system is offline and air-gapped), risk is low. Conversely, even a small vulnerability becomes high-risk if the impact is catastrophic (e.g., medical device failure).
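The multiplicative relationship above can be sketched as a simple scoring function. The 0-5 rating scale and the specific ratings for the LMS example are illustrative assumptions, not a standard scoring scheme:

```python
def risk_score(threat: int, vulnerability: int, impact: int) -> int:
    """Combine the three risk factors multiplicatively (each rated 0-5).

    A zero in any factor drives risk to zero: no credible threat,
    no exploitable weakness, or no meaningful impact means no risk.
    """
    for name, value in [("threat", threat),
                        ("vulnerability", vulnerability),
                        ("impact", impact)]:
        if not 0 <= value <= 5:
            raise ValueError(f"{name} must be between 0 and 5, got {value}")
    return threat * vulnerability * impact

# LMS grade-tampering example from above (illustrative ratings)
lms = risk_score(threat=4, vulnerability=4, impact=5)        # 80 out of a possible 125

# Same vulnerability and impact, but the system is offline and air-gapped,
# so there is no credible threat
air_gapped = risk_score(threat=0, vulnerability=4, impact=5)  # 0
```

The multiplication (rather than addition) captures the point in the paragraph above: if any one factor is absent, the overall risk collapses, no matter how large the other two are.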
Lesson 1.5 · Why "Checklist Security" Fails
Trap for beginners: Thinking security is about following a checklist ("install antivirus, enable firewall, done").
Checklists assume static threats and universal solutions. Real systems are dynamic, contexts vary, and attackers adapt. A control that works for one organization might be useless (or harmful) for another.
The alternative is principle-based security: understand the "why" behind controls, then adapt them to your specific system, threats, and constraints.
Lesson 1.6 · The NIST Cybersecurity Framework (Your Professional Roadmap)
Why this matters: The NIST Cybersecurity Framework (CSF) is the most widely adopted security framework globally. Fortune 500 companies, government agencies, and security professionals use this structure to organize security programs. Understanding it now gives you a mental model you'll use for your entire career.
The Five Core Functions: NIST CSF organizes all security activities into five functions:
1. Identify: Understand your assets, risks, and vulnerabilities.
   Question to ask: "What needs protection, and what are the risks?"
   Example activities: Asset inventory, risk assessment, threat identification
2. Protect: Implement controls to reduce risk.
   Question to ask: "How do we prevent or reduce the likelihood of incidents?"
   Example activities: Access controls, encryption, security training, network segmentation
3. Detect: Discover security events when they occur.
   Question to ask: "How quickly can we identify when something goes wrong?"
   Example activities: Log monitoring, intrusion detection, anomaly detection
4. Respond: Take action when incidents are detected.
   Question to ask: "What do we do when an incident happens?"
   Example activities: Incident response plans, containment, forensics, communication
5. Recover: Restore systems and services after an incident.
   Question to ask: "How do we return to normal operations and improve?"
   Example activities: Backup restoration, business continuity, lessons learned
These five functions form a cycle, not a checklist. You continuously identify new risks, protect against them, detect incidents, respond to them, and recover while learning. This week's focus is primarily on Identify (understanding what you're protecting and why).
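One way to internalize the five functions is to tag everyday security activities with the function each one serves. The mapping below is a sketch; the activity names and their categorization are illustrative, drawn from the example activities listed above:

```python
from collections import defaultdict

CSF_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

# Illustrative mapping of security activities to NIST CSF functions
activities = {
    "asset inventory": "Identify",
    "risk assessment": "Identify",
    "access controls": "Protect",
    "encryption": "Protect",
    "security training": "Protect",
    "log monitoring": "Detect",
    "intrusion detection": "Detect",
    "incident response plan": "Respond",
    "backup restoration": "Recover",
    "lessons learned": "Recover",
}

# Group activities by function to see the shape of a security program
by_function = defaultdict(list)
for activity, function in activities.items():
    by_function[function].append(activity)

for function in CSF_FUNCTIONS:
    print(f"{function}: {', '.join(by_function[function])}")
```

A gap in any one function (say, nothing under Detect) is itself a finding: the program prevents incidents but would never notice one in progress.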
Connection to this week: The CIA Triad (Lesson 1.2) maps to the "Protect" function. The risk formula (Lesson 1.4) maps to the "Identify" function. You're already learning the framework—now you have professional vocabulary for it.
Lesson 1.7 · Professional Ethics in Cybersecurity
Why start with ethics? Cybersecurity professionals have extraordinary power: access to sensitive data, ability to bypass controls, knowledge of vulnerabilities. With this power comes ethical responsibility. Understanding professional ethics isn't optional—it's foundational.
Core Ethical Principles (ACM Code of Ethics):
- Do no harm: Your skills should protect, not exploit. Even in offensive security roles (penetration testing, red teams), you operate with explicit authorization and defined scope.
- Respect privacy: Just because you can access data doesn't mean you should. Access only what's necessary for your authorized role.
- Act professionally: Maintain confidentiality, disclose responsibly, honor agreements (NDAs, contracts, scope limits).
- Contribute to society: Use your knowledge to improve security for everyone, not just those who can pay. This includes responsible disclosure of vulnerabilities.
Real-world example - Responsible Disclosure: Imagine you discover a critical vulnerability in a popular website while practicing your skills. The ethical path:
- Do NOT exploit it for personal gain
- Do NOT publicly disclose it immediately (gives attackers a blueprint)
- Privately notify the organization with details to fix it
- Give them reasonable time to patch (typically 90 days)
- Only then (if appropriate) publish findings to help the community learn
Legal boundaries: "Testing" on systems you don't own or have permission to test is illegal in most jurisdictions (Computer Fraud and Abuse Act in the US, Computer Misuse Act in the UK). Always get explicit written authorization before security testing.
Professional certifications and ethics: Organizations like (ISC)², EC-Council, and ISACA require certified professionals to agree to codes of ethics. Violations can result in certification revocation and legal consequences.
Lesson 1.8 · Communicating Risk to Non-Technical Stakeholders
Essential skill: Technical security knowledge is only valuable if you can explain it to decision-makers who control budgets and priorities. Executives, board members, and business leaders rarely have deep technical backgrounds.
Common mistake: "We need to patch CVE-2024-12345 with CVSS score 9.8 affecting our Apache Struts servers because of remote code execution via OGNL injection."
Better: "Our public-facing web servers have a critical vulnerability. Attackers could gain complete control of these systems and access customer data. This is the same type of vulnerability that led to the Equifax breach (143 million records exposed, $700M+ cost). We need to apply the security update this week. The risk is high, and the fix is available."
Framework for risk communication:
- What's at risk? (Customer data, revenue, reputation—not "the database")
- What could happen? (Data breach, system downtime, regulatory fines—concrete impacts)
- How likely is it? (High/Medium/Low with context, not just numbers)
- What do we do about it? (Clear recommendations with costs and timelines)
- What happens if we don't act? (Business consequences, not just technical)
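The five questions above can be treated as a fill-in template for an executive risk statement. The function below is a sketch of that idea; the field names and wording are illustrative, not a standard reporting format:

```python
def risk_brief(asset: str, scenario: str, likelihood: str,
               recommendation: str, inaction: str) -> str:
    """Assemble a short executive risk statement from the five questions."""
    return (
        f"At risk: {asset}. "
        f"What could happen: {scenario}. "
        f"Likelihood: {likelihood}. "
        f"Recommended action: {recommendation}. "
        f"If we do nothing: {inaction}."
    )

# Example based on the web-server vulnerability above
print(risk_brief(
    asset="customer data on our public-facing web servers",
    scenario="attackers could take complete control of these systems",
    likelihood="High (a fix exists but is not yet applied)",
    recommendation="apply the security update this week",
    inaction="a breach with regulatory fines and reputational damage",
))
```

Notice that none of the fields mention CVE numbers, CVSS scores, or exploit mechanics: every value answers a business question.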
Practice opportunity: In Lab 1, you identified assets and CIA properties. Practice explaining ONE of your findings to someone without a technical background (friend, family member). Can they understand the risk without knowing what SQL injection or buffer overflows are?
Self-Check Questions (Test Your Understanding)
Answer these in your own words (2-3 sentences each):
- What makes cybersecurity a "systems problem" rather than just a technology problem?
- Explain the CIA Triad and give one real-world example of each property failing.
- Why is "perfect security" impossible? Give one concrete trade-off example.
- In the risk formula (Threat × Vulnerability × Impact), why do all three factors matter? Can you have high risk with a low-impact event?
- What is the difference between a security checklist and principle-based security?
- Which of the five NIST CSF functions (Identify, Protect, Detect, Respond, Recover) is most relevant to the CIA Triad? Explain your reasoning.
- Why is responsible disclosure important? What could happen if you immediately published a critical vulnerability you discovered?
- Translate this technical statement for a non-technical executive: "We need to implement multi-factor authentication because password-only authentication has a high risk of credential theft."
Lab 1 · Systems Thinking: Mapping Assets, Boundaries, and Risk
Time estimate: 30-45 minutes
Objective: Apply systems thinking to a real system you use daily. By the end, you will identify what needs protection (assets), where risk exists (vulnerabilities), and what failure looks like (impact).