Red teams attack. Blue teams defend. Organisations that run structured adversarial exercises between the two are 2.5x more likely to detect advanced persistent threats before data exfiltration (SANS Adversarial Operations Report, 2025). The model is borrowed from military war games, and it works because it forces your defences to prove themselves against a skilled, motivated adversary rather than a checklist.
This article covers how the two teams operate, how purple teaming bridges them, what metrics to track, and how to build a programme from scratch.
What Does the Red Team Do?
A red team is a group of offensive security professionals who simulate real attacks against your organisation. They think and act like a threat actor.
Responsibilities:
- Emulate the TTPs of threat actors relevant to your industry
- Exploit weaknesses in systems, processes, and human behaviour
- Run phishing campaigns, vishing calls, and physical intrusion attempts
- Operate covertly to test detection capabilities
- Identify attack chains from initial access to critical objectives
- Report findings with attack narratives, detection gaps, and remediation priorities
Skill profile: Network and infrastructure exploitation, web/mobile application security, cloud security (AWS, Azure, GCP), Active Directory attacks, custom tool development, C2 management, social engineering, physical intrusion, OPSEC, threat intelligence.
Typical experience: 7+ years in information security, versus 4+ for pen testers (CREST, 2025). Typical certifications: CREST CCSAS/CCSAM, OSEP, OSCE3, GXPN.
Across 500+ engagements, RedTeam Partners has breached 9 in 10 targets. A skilled red team will almost always get in. The question is how far and how fast, and whether anyone noticed.
For a full overview: What Is Red Teaming? The Complete Guide.
What Does the Blue Team Do?
The blue team is your defensive security function. They protect information systems against both real and simulated attacks.
Responsibilities:
- Operate SIEM, EDR, and NDR platforms
- Detect suspicious activity through alert triage, log analysis, and threat hunting
- Respond to incidents: contain, eradicate, recover
- Manage vulnerabilities: identify, prioritise, coordinate remediation
- Design security architecture and defence-in-depth strategies
- Integrate threat intelligence feeds and IOCs
- Harden systems against known attack paths
- Conduct digital forensics
Skill profile: Security monitoring, network traffic analysis, log correlation, incident response, forensics, threat intelligence, security architecture, vulnerability management, malware analysis.
Certifications: GCIA, GCIH, GCFE, GCFA, OSDA, BTL1/BTL2, Microsoft SC-200, AWS Security Specialty.
The Fundamental Blue Team Challenge
Blue teams face an asymmetry problem: the attacker needs to find one way in. The defender must cover every path. Median dwell time globally is 11 days (Mandiant M-Trends, 2025), down from 16 days in 2023. Progress, but not parity.
How Do Red and Blue Teams Work Together?
The value is not in either team working alone. It is in the friction between them.
The Exercise Cycle
- Planning. Red team and a trusted agent (typically the CISO) define scope, objectives, and rules of engagement. Blue team is not informed.
- Red team operations. Reconnaissance, initial access, lateral movement, objective execution. Full stealth.
- Blue team response. Normal operations continue. Any alerts from red team activity should trigger investigation and response as if the threat were real.
- Documentation. Both teams log everything. Red team: techniques, timestamps, outcomes. Blue team: alerts, investigations, response actions.
- Debrief. Both teams review the engagement together. Red team walks through the attack narrative. Blue team identifies what they caught and what they missed.
- Remediation. The organisation builds a prioritised improvement plan targeting detection gaps, response weaknesses, and control deficiencies.
- Retest. The next engagement validates that improvements hold up.
Metrics That Matter
| Metric | What It Measures | Direction |
|---|---|---|
| MTTD | Mean time to detect: how fast the blue team spots red team activity | Should decrease |
| MTTR | Mean time to respond: how fast containment and remediation happen | Should decrease |
| Detection coverage | % of red team techniques that triggered alerts | Should increase |
| Alert accuracy | True positive ratio for red team activity | Should increase |
| Kill chain progression | How far the red team got before detection | Detection should happen earlier |
| Objective achievement | Whether the red team hit its flags | Should decrease |
| Response effectiveness | Did containment actually stop the red team? | Should increase |
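As an illustration, these metrics fall straight out of the engagement logs both teams keep. The sketch below is a minimal example under assumed inputs: the record structure and field names are hypothetical, not a standard log schema.

```python
from datetime import datetime, timedelta

# Hypothetical engagement log: one record per red team technique executed.
# Field names are illustrative, not a standard schema.
techniques = [
    {"id": "T1566.001", "executed": datetime(2025, 3, 3, 9, 0),
     "detected": datetime(2025, 3, 3, 11, 30), "contained": datetime(2025, 3, 3, 15, 0)},
    {"id": "T1021.002", "executed": datetime(2025, 3, 4, 10, 0),
     "detected": None, "contained": None},  # missed entirely
    {"id": "T1003.001", "executed": datetime(2025, 3, 5, 8, 0),
     "detected": datetime(2025, 3, 5, 8, 45), "contained": datetime(2025, 3, 5, 10, 0)},
]

detected = [t for t in techniques if t["detected"]]

# Detection coverage: share of executed techniques that triggered an alert.
coverage = len(detected) / len(techniques)

# MTTD: mean delay between execution and first detection (detected techniques only).
mttd = sum((t["detected"] - t["executed"] for t in detected), timedelta()) / len(detected)

# MTTR: mean delay between detection and containment.
mttr = sum((t["contained"] - t["detected"] for t in detected), timedelta()) / len(detected)

print(f"coverage={coverage:.0%} mttd={mttd} mttr={mttr}")
```

Tracking these per engagement, rather than per technique, is what makes the "improvement over time" trend visible to leadership.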
NIST SP 800-53 Rev. 5, Control CA-8 Enhancement 2 explicitly recommends using red team exercises to assess blue team capabilities and drive measurable improvements.
What Is a Purple Team?
Purple teaming is not a separate team. It is a collaborative function that blends red team offence with blue team defence. Red plus blue equals purple.
How It Works
- Red team executes a specific technique (mapped to MITRE ATT&CK).
- Blue team tries to detect it with existing tools.
- Detected: document the source, alert fidelity, and response time.
- Not detected: both teams figure out why and implement new detection.
- Red team re-runs the technique to validate the fix.
- Repeat for the next technique.
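The cycle above can be sketched as a loop over planned techniques. Everything here is illustrative: `run_technique`, `is_detected`, and `build_detection` are placeholder callbacks standing in for real red team tooling and SIEM/EDR queries, not real APIs.

```python
# Illustrative purple team loop: execute a technique, check detection,
# build a rule if missed, then re-run to validate the fix.
def purple_team_cycle(techniques, run_technique, is_detected, build_detection):
    results = {}
    for tech_id in techniques:
        run_technique(tech_id)                 # red team executes the technique
        if is_detected(tech_id):
            results[tech_id] = "detected"      # record source, fidelity, response time
            continue
        build_detection(tech_id)               # both teams implement a new detection
        run_technique(tech_id)                 # red team re-runs to validate
        results[tech_id] = "detected-after-fix" if is_detected(tech_id) else "gap"
    return results

# Toy stand-ins: pretend only T1566.001 is detected until a rule is built.
detections = {"T1566.001"}
report = purple_team_cycle(
    ["T1566.001", "T1021.002"],
    run_technique=lambda t: None,
    is_detected=lambda t: t in detections,
    build_detection=lambda t: detections.add(t),
)
print(report)  # {'T1566.001': 'detected', 'T1021.002': 'detected-after-fix'}
```

Anything that ends the loop as a `"gap"` is an honest finding in its own right: a technique you now know you cannot detect with current tooling.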
What Purple Teaming Delivers
- Real-time feedback instead of waiting for an end-of-engagement report
- Rapid detection improvement through iterative test-fix-validate cycles
- Knowledge transfer from red to blue. Offensive insight sharpens defensive instinct
- Cost efficiency. Significant detection gains in shorter timeframes
- Granular technique-level visibility into your detection posture
Organisations running quarterly purple team exercises achieve a 62% improvement in ATT&CK detection coverage within 12 months, a 41% reduction in MTTD, and a 3.2x increase in actionable detection rules deployed per quarter (SANS Purple Team Report, 2025).
Red Team vs Blue Team vs Purple Team
| Dimension | Red Team | Blue Team | Purple Team |
|---|---|---|---|
| Role | Attack | Defend | Improve together |
| Objective | Test resilience through realistic attacks | Protect and respond | Maximise detection and response |
| Approach | Covert, objective-based | Continuous monitoring | Iterative technique testing with feedback |
| Duration | 4 to 12 week engagements | Permanent function | 1 to 5 day exercises or ongoing programme |
| SOC awareness | Not informed | Is the SOC | Participates actively |
| Output | Attack narrative, detection gap analysis | Incident reports, security metrics | Detection rules, coverage matrices |
| Frequency | Annual or semi-annual | Continuous | Quarterly or post-red team |
| Team type | Usually external | Internal or MSSP | Joint red + blue |
| ATT&CK use | Emulate techniques | Detect techniques | Map, test, close gaps |
Building a Red/Blue Team Programme
Step 1: Check Your Foundations
Before you schedule a red team engagement, confirm you have:
- A functioning SOC (internal or outsourced)
- SIEM, EDR, and NDR deployed and configured
- A documented incident response plan
- A history of regular pen testing
- Trained, certified security staff
Red teaming an immature blue team produces obvious findings without driving real improvement. You need a baseline to test against.
Step 2: Set Programme Objectives
- Detection improvement: increase ATT&CK coverage
- Response speed: reduce MTTD and MTTR
- Resilience validation: confirm you can withstand specific threat scenarios
- Compliance: meet TIBER-EU, CBEST, DORA requirements
- Board reporting: generate evidence of security effectiveness
Step 3: Document Rules of Engagement
Cover scope (in and out), prohibited activities, emergency stop procedures, communication protocols, legal authorisation, and data handling.
Step 4: Choose Your Red Team Model
Internal red team. Year-round availability. Deep organisational knowledge. Expensive to staff and retain. Risk of losing objectivity over time.
External red team. Fresh perspective. Diverse experience across industries. Higher per-engagement cost. No organisational bias.
Hybrid. Internal knowledge plus external expertise. Most mature organisations land here.
Distribution: 28% internal only, 47% external only, 25% hybrid (SANS Security Operations Survey, 2025).
Step 5: Resource Your Blue Team
NIST recommends blue team investment at 3 to 5x the red team budget.
- Technology. SIEM, EDR on all endpoints, NDR, SOAR.
- Training. ATT&CK-based detection engineering. GCIA, GCIH, GCFA, OSDA.
- Processes. Incident response playbooks. Escalation procedures. Communication plans.
- Threat intelligence. Feed integration for detection priority.
- Threat hunting. Proactive search for activity that evades automated detection.
Step 6: Execute on a Cadence
| Activity | Frequency | Duration |
|---|---|---|
| Vulnerability assessment | Continuous | Ongoing |
| Pen testing | Quarterly (rotating scope) | 1 to 4 weeks |
| Purple team | Quarterly | 2 to 5 days |
| Red team | Annual or semi-annual | 4 to 12 weeks |
| Tabletop exercise | Semi-annual | Half day to full day |
| Blue team training | Monthly | 2 to 4 hours |
Common Programme Failures
Blue team demoralisation. The red team keeps winning. Defenders feel hopeless. Fix: frame exercises as improvement opportunities. Celebrate detection successes. A skilled red team will always achieve some access. The metric is improvement over time, not perfection.
Scope too narrow. Politics or stability concerns restrict the red team. Result: unrealistic exercises. Fix: expand scope gradually. Use assumed breach for sensitive environments. Document the risk of artificial constraints for leadership.
Blue team under-resourced. You invested in red teaming without equipping your defenders. Same gaps surface every year. Fix: follow the NIST 3-5x blue team investment ratio.
No remediation follow-through. Findings documented but never fixed. 38% of organisations fail to remediate more than half of red team findings within 12 months (Mandiant, 2025). Fix: executive-sponsored tracking. Clear owners. Deadlines. Remediation progress in leadership reporting.
Teams operating in silos. No knowledge transfer between offence and defence. Fix: run purple team exercises. Joint training. Technique demos. Shared documentation.
MITRE ATT&CK in Practice
Red Team Usage
- Select techniques relevant to the target’s threat landscape
- Build adversary emulation plans from ATT&CK group profiles
- Log all activity by technique ID
Blue Team Usage
- Build and tune detection rules per ATT&CK technique
- Map existing detection against the full matrix to find gaps
- Develop threat hunting queries based on technique patterns
- Classify incidents using ATT&CK terminology
Purple Team Usage
- Maintain detection coverage matrices (red/green/yellow by technique)
- Prioritise testing by techniques most used against your industry
- Track coverage improvement over time
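A coverage matrix can be as simple as a mapping of technique IDs to status. The sketch below is a hypothetical example mirroring the red/green/yellow scheme above; in practice the technique list would come from ATT&CK data filtered to your threat landscape.

```python
from collections import Counter

# Illustrative coverage matrix: ATT&CK technique ID -> detection status.
# "green" = reliably detected, "yellow" = partial/low fidelity, "red" = gap.
matrix = {
    "T1566.001": "green",   # spearphishing attachment
    "T1021.002": "yellow",  # SMB/admin shares lateral movement
    "T1003.001": "red",     # LSASS memory credential dumping
    "T1059.001": "green",   # PowerShell execution
}

counts = Counter(matrix.values())
coverage = counts["green"] / len(matrix)

print(f"{counts['green']} green, {counts['yellow']} yellow, {counts['red']} red")
print(f"full coverage: {coverage:.0%}")
```

Re-generating this summary after each purple team cycle gives you the quarter-on-quarter coverage trend in one number.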
Organisations that use ATT&CK to guide their programmes achieve 56% higher detection coverage for relevant adversary techniques (MITRE, 2025).
Real-World Outcomes
European bank, TIBER-EU. Red team gained initial access through treasury department phishing. Blue team missed lateral movement for 8 days. After investment in EDR, AD monitoring, and a threat hunting team, the next TIBER-EU exercise detected lateral movement in 6 hours. 97% improvement.
Swiss hospital group, assumed breach. Red team went from a compromised workstation to patient health records in 72 hours. Blue team detected 2 of 14 techniques. After purple teaming, detection coverage rose from 14% to 71% of tested ATT&CK techniques in 6 months.
European energy company, IT/OT. Red team pivoted from corporate IT into operational technology through misconfigured segmentation. Led to a full redesign of the IT/OT boundary and deployment of OT-specific monitoring.
Key Statistics
- 2.5x more likely to detect APTs with structured red/blue programmes (SANS, 2025)
- 11 days median dwell time globally, down from 16 in 2023 (Mandiant, 2025)
- 62% ATT&CK detection improvement with quarterly purple teaming (SANS, 2025)
- 56% higher detection coverage when ATT&CK guides the programme (MITRE, 2025)
- 38% of organisations fail to remediate >50% of red team findings within 12 months (Mandiant, 2025)
- 74% faster breach detection for organisations with red team programmes (IBM, 2025)
- USD 4.44M average breach cost (IBM, 2025)
Frequently Asked Questions
Do I need both a red team and a blue team?
Every organisation needs a defensive security function. Not every organisation needs a dedicated red team. Most use an external provider for periodic engagements while maintaining a permanent blue team.
Can a small organisation run these exercises?
Yes. Use an external red team for focused or assumed breach engagements. Partner with an MSSP for blue team capabilities. Purple team exercises deliver significant value even with limited resources.
How do I measure programme success?
Track MTTD and MTTR trends. Measure ATT&CK detection coverage. Monitor remediation rates. Assess blue team response quality between exercises. The key metric: are your defences measurably improving each cycle?
Should red and blue teams communicate during an exercise?
During a red team engagement: no. The blue team should not know testing is occurring. The red team leader maintains a channel with a trusted agent for safety. During purple team exercises: open communication is the whole point.
What if the blue team detects the red team?
Good. The red team may adapt and continue with different techniques, as a real attacker would. If detection leads to effective containment, the engagement may end early. Both outcomes produce useful data.
How do I justify the cost to leadership?
Average breach cost: USD 4.44 million (IBM, 2025). Red team engagement cost: USD 50,000 to 500,000. The maths is straightforward. Add regulatory requirements, cyber insurance benefits, and the competitive advantage of demonstrable security maturity.
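The expected-value arithmetic behind that justification can be made explicit. The breach cost and engagement cost range come from the figures above; the annual breach probabilities below are purely illustrative assumptions, not data from any cited report.

```python
# Back-of-envelope breach-cost comparison. Breach cost and engagement cost
# come from the article; the annual breach probabilities are assumptions
# chosen only to illustrate the calculation.
breach_cost = 4_440_000          # USD, IBM 2025 average
engagement_cost = 250_000        # USD, midpoint of the 50k-500k range

p_breach_without = 0.30          # assumed annual breach probability, no programme
p_breach_with = 0.15             # assumed probability with a red team programme

expected_loss_without = p_breach_without * breach_cost
expected_loss_with = p_breach_with * breach_cost + engagement_cost

saving = expected_loss_without - expected_loss_with
print(f"expected annual saving: USD {saving:,.0f}")
```

Swap in your own probability estimates from threat intelligence or insurer data; the point is that even a modest reduction in breach likelihood covers the engagement cost several times over.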
Sources
- IBM Cost of a Data Breach Report 2025 — confirms global average breach cost of USD 4.44M (2025 data)
- Mandiant M-Trends 2025 — confirms global median dwell time of 11 days