Last quarter, we facilitated a tabletop exercise for a financial services client. Their CISO was confident the IR team could handle a ransomware scenario. Forty-five minutes in, three participants couldn’t articulate what lateral movement looked like in their environment, and nobody could map the simulated attack to a single MITRE ATT&CK technique. The exercise exposed more gaps than a vulnerability scan on an unpatched Exchange server.
That engagement changed how we build every MITRE ATT&CK tabletop exercise at SSE. What follows is the checklist we now run through before, during, and after every scenario — whether it’s a board-level walkthrough or a deeply technical SOC drill.
Why ATT&CK Belongs in Every Tabletop Scenario
The MITRE ATT&CK framework — Adversarial Tactics, Techniques, and Common Knowledge — is a globally recognized repository of adversary behaviors. It catalogs real-world TTPs observed across hundreds of threat groups, from APT29 to FIN7 to Scattered Spider.
When you anchor tabletop scenarios to ATT&CK, you stop designing exercises around hypothetical threats and start testing against documented adversary tradecraft. That distinction matters. A scenario built around “what if ransomware hits us” is vague. A scenario built around T1566.001 (Spearphishing Attachment) → T1059.001 (PowerShell execution) → T1021.002 (SMB lateral movement) → T1486 (Data Encrypted for Impact) gives defenders specific detection points to evaluate.
ATT&CK also solves the tool-centric bias problem. Scenarios designed to test a specific SIEM or EDR product give you a narrow assessment that doesn’t reflect your actual threat landscape. The framework forces you to think in terms of adversary behavior, not product capabilities.
Pre-Exercise Checklist: Building the Scenario
Checkpoint 1 — Threat Intelligence Alignment
Pass criteria: You’ve identified 2-3 threat groups that actively target your industry and mapped their known TTPs from ATT&CK.
Fail criteria: You picked techniques because they sounded interesting, not because threat intel supports them as relevant to your sector.
Start with the ATT&CK Groups page. If you’re a healthcare org, look at FIN12’s ransomware operations. Financial services? Study APT38’s techniques. We ran a tabletop for an education sector client and built the entire scenario around Rhysida’s documented TTPs — the same group that had hit three universities in the preceding six months. That’s not hypothetical. That’s preparation.
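One lightweight way to operationalize this checkpoint is to intersect the documented TTP sets of the groups you've shortlisted: techniques that multiple relevant groups share are strong scenario candidates. The group names and technique sets below are illustrative placeholders; in practice, pull them from the ATT&CK Groups pages or your CTI feed.

```python
# Hypothetical TTP sets for two groups relevant to our sector.
# Replace with technique lists sourced from ATT&CK Groups pages or a CTI feed.
GROUP_TTPS = {
    "Rhysida": {"T1566.001", "T1059.001", "T1021.002", "T1486"},
    "FIN12":   {"T1566.001", "T1059.001", "T1003.001", "T1486"},
}

def shared_techniques(groups: dict[str, set[str]]) -> set[str]:
    """Techniques observed across every listed group: strong scenario candidates."""
    sets = list(groups.values())
    common = set(sets[0])
    for s in sets[1:]:
        common &= s
    return common

if __name__ == "__main__":
    # Techniques both groups use, sorted for a stable reading order.
    print(sorted(shared_techniques(GROUP_TTPS)))
```

Techniques that fall outside the intersection aren't irrelevant, but the overlap is where one scenario tests your posture against multiple adversary profiles at once.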
Checkpoint 2 — Kill Chain Coverage
Pass criteria: Your scenario covers at least 4-5 ATT&CK tactics (columns in the matrix), creating a realistic attack chain from initial access through impact.
Fail criteria: The scenario only tests one phase, like “we got phished, now what?”
Map your scenario left to right across the ATT&CK matrix. A strong exercise might progress through Initial Access → Execution → Persistence → Lateral Movement → Exfiltration. Each node in the chain should reference a specific technique ID. During one engagement, we plotted three overlapping attack chains on a single matrix visualization — it showed the client exactly where their detection coverage had gaps across multiple adversary profiles.
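The coverage bar in this checkpoint is easy to check mechanically. The sketch below assumes a simple list of (technique, tactic) pairs; the tactic assignments shown are illustrative, and a production version would resolve them from the ATT&CK STIX data rather than hard-coding them.

```python
# Sketch of a kill-chain coverage check. Tactic assignments are hard-coded
# here for illustration; resolve them from ATT&CK STIX data in practice.
SCENARIO_CHAIN = [
    ("T1566.001", "initial-access"),
    ("T1059.001", "execution"),
    ("T1547.001", "persistence"),
    ("T1021.002", "lateral-movement"),
    ("T1048",     "exfiltration"),
]

def tactic_coverage(chain: list[tuple[str, str]]) -> set[str]:
    """Distinct ATT&CK tactics the scenario touches."""
    return {tactic for _, tactic in chain}

def meets_coverage_bar(chain: list[tuple[str, str]], minimum: int = 4) -> bool:
    """Pass criteria from Checkpoint 2: at least `minimum` tactics covered."""
    return len(tactic_coverage(chain)) >= minimum
```

A chain that fails this check, say three phishing injects all under Initial Access, is the "we got phished, now what?" failure mode in code form.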
Checkpoint 3 — Multi-Role Participation
Pass criteria: Your participant list includes the CISO, security analysts, network administrators, legal counsel, risk management, and PR/communications.

Fail criteria: Only the SOC team is in the room.
Tabletop exercises reveal communication breakdowns, role confusion, and procedural gaps. That only works when the right people are present. A ransomware scenario that doesn’t include legal counsel and PR means you’re skipping the hardest decisions — do we pay, do we disclose, who talks to the press. We always push clients to include non-technical stakeholders. The technical team knows how to isolate a host. The question is whether leadership knows when to authorize it.
Checkpoint 4 — No Tool Bias
Pass criteria: The scenario tests defensive capabilities across multiple layers without naming specific vendor products.
Fail criteria: The scenario is structured to validate a recent tool purchase.
This is a hard rule. When scenarios are biased toward testing a specific tool, the assessment becomes a product demo, not a security evaluation. Write injects that describe adversary behavior — “the attacker executes encoded PowerShell commands to enumerate domain trusts” — and let participants discuss how their stack would detect or miss it. If you need help designing tool-agnostic scenarios tailored to your environment, reach out to our team.
During-Exercise Checklist: Facilitating the Scenario
Checkpoint 5 — Inject Realism
Pass criteria: Each inject maps to a specific ATT&CK technique and includes alternative outcomes based on participant decisions.
Fail criteria: Injects are linear with only one correct answer.
Real breaches don’t follow a script. Your tabletop shouldn’t either. Build decision trees where participant choices lead to different technique chains. If the SOC detects the initial PowerShell execution (T1059.001), the attacker pivots to WMI (T1047). If they miss it, the attacker proceeds to credential dumping (T1003). This approach turns passive listeners into active participants — they argue, plan, and adapt, exactly like they’d need to during a real incident.
Checkpoint 6 — Time Pressure and Consequences
Pass criteria: The scenario includes realistic time constraints and escalation triggers.
Fail criteria: Participants have unlimited time to deliberate with no consequences for delays.
In February 2024, a ransomware group encrypted 40TB of hospital data in under four hours. The entry point was an unpatched VPN appliance. Your tabletop needs to simulate that pressure. We set timers on critical decision points — you have 10 minutes to decide whether to isolate the finance subnet. Delayed decisions trigger new injects. Hesitation has a cost.
Checkpoint 7 — Detection Mapping in Real Time
Pass criteria: For each technique presented, participants can articulate what log sources, alerts, or threat hunting procedures would detect it.
Fail criteria: Participants describe responses but can’t explain how they’d detect the activity in the first place.
This is where most teams stumble. Every technique page on the ATT&CK site lists detection guidance, data sources, and mitigations. If your team can’t map T1021.002 (SMB/Windows Admin Shares) to specific log sources in your SIEM, that’s a finding. Document it.
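A worksheet for this checkpoint can be as simple as a technique-to-log-source mapping that the facilitator fills in live. The log sources below are assumptions about a typical Windows environment, not claims about any particular product; any technique left without an entry is, by definition, a finding.

```python
# Illustrative detection mapping. Log sources assume a typical Windows
# environment; swap in whatever your SIEM actually ingests.
DETECTION_MAP = {
    "T1059.001": ["PowerShell script block logging (event 4104)",
                  "process creation (event 4688 / Sysmon event 1)"],
    "T1021.002": ["SMB session logs",
                  "logon events 4624, type 3 (network)",
                  "network flow data to port 445"],
}

def undetected(techniques: list[str]) -> list[str]:
    """Techniques in the scenario with no mapped log source: each is a finding."""
    return [t for t in techniques if not DETECTION_MAP.get(t)]
```

Running `undetected` over the full scenario chain at the end of the exercise gives you the raw input for the Checkpoint 8 heat map.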
Post-Exercise Checklist: Turning Findings into Action
Checkpoint 8 — Gap Analysis Against ATT&CK
Pass criteria: You produce a heat map showing which ATT&CK techniques your team detected, partially detected, or missed entirely.
Fail criteria: The after-action report says “the team performed well” with no measurable data.
We build ATT&CK Navigator layers for every client engagement. Green for detected, yellow for partially detected, red for missed. One managed services client discovered they had zero detection capability across the entire Lateral Movement tactic column. That single visualization drove a six-figure investment in network monitoring — because the gap was undeniable when mapped against real adversary behavior.
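Generating a layer like this is mostly JSON plumbing. The sketch below follows the Navigator layer-file structure as we understand it (a `techniques` array of objects with `techniqueID` and `color` fields); verify the schema version your Navigator deployment expects before importing, and treat the hex colors as arbitrary choices.

```python
import json

# Sketch of an ATT&CK Navigator layer from exercise results.
# Green = detected, yellow = partial, red = missed. Verify field names
# against the layer-format version your Navigator build expects.
COLORS = {"detected": "#8bc34a", "partial": "#ffeb3b", "missed": "#f44336"}

def build_layer(name: str, results: dict[str, str]) -> str:
    """Serialize detection outcomes as a Navigator-style layer JSON string."""
    layer = {
        "name": name,
        "domain": "enterprise-attack",
        "description": "Tabletop exercise detection heat map",
        "techniques": [
            {"techniqueID": tid, "color": COLORS[outcome], "comment": outcome}
            for tid, outcome in results.items()
        ],
    }
    return json.dumps(layer, indent=2)
```

Feeding the same results dictionary into each quarterly exercise's layer makes improvement (or stagnation) visible at a glance when layers are compared side by side.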
Checkpoint 9 — Remediation Prioritization
Pass criteria: Gaps are prioritized based on threat intelligence relevance, not just severity.
Fail criteria: Every gap is marked “critical” with no ranking.
Not every missed technique is equally dangerous. Prioritize based on which threat groups target your industry and which techniques they actually use. A gap in detecting T1053.005 (Scheduled Task) matters more if APT groups active in your sector rely on it. This is where threat intelligence and tabletop exercises converge — the exercise identifies gaps, and CTI tells you which gaps to fix first.
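The convergence described above can be expressed as a simple relevance score: weight each gap by how many sector-relevant groups use the technique. The group data below is illustrative; a real version would draw from the same CTI sources used in Checkpoint 1.

```python
# Prioritization sketch: rank detection gaps by how many sector-relevant
# threat groups use each technique. Group/technique data is illustrative.
SECTOR_GROUPS = {
    "FIN12":   {"T1053.005", "T1486"},
    "Rhysida": {"T1053.005", "T1021.002"},
}

def prioritize(gaps: list[str]) -> list[str]:
    """Order gaps so techniques used by more relevant groups come first."""
    def relevance(tid: str) -> int:
        return sum(tid in ttps for ttps in SECTOR_GROUPS.values())
    return sorted(gaps, key=relevance, reverse=True)
```

A richer version might also factor in the technique's position in the kill chain, but even this crude count beats a report where every gap is stamped "critical."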
Checkpoint 10 — Scenario Evolution Plan
Pass criteria: You have a schedule for follow-up exercises that introduce more advanced techniques based on the evolving threat landscape.
Fail criteria: The exercise is treated as a one-time compliance checkbox.
Threat actors evolve. Your exercises must too. As your team gains experience, introduce supply chain compromise scenarios, multi-vector simultaneous attacks, and cloud-specific attack chains. Each iteration should reference updated ATT&CK techniques and reflect the latest threat research. We schedule quarterly exercises for our IT consulting clients, with each round building on the gaps identified in the previous session.
One Caveat Worth Stating
ATT&CK is not a silver bullet. The framework catalogs known techniques — it doesn’t predict novel tradecraft. A tabletop exercise built entirely on ATT&CK still misses zero-day exploitation chains and techniques that haven’t been publicly documented yet. Use ATT&CK as your foundation, but leave room in your scenarios for the unexpected. The best exercises include at least one inject that forces creative thinking beyond the matrix.
The Practical Takeaway
Run this 10-point checklist against your next tabletop exercise. If you’re failing more than two checkpoints, your exercises are probably generating false confidence rather than real preparedness. The goal isn’t to check a compliance box — it’s to find out where your team breaks before an actual adversary does it for you. Map every scenario to ATT&CK technique IDs, involve the right stakeholders, document gaps with data, and iterate. That’s how you turn a meeting room exercise into operational security improvement.