A red team methodology is the structured process for simulating real-world adversarial attacks against an organisation. The three dominant frameworks are MITRE ATT&CK, Lockheed Martin’s Cyber Kill Chain, and CREST’s STAR scheme. Engagements that follow a structured framework produce 47% more actionable findings than ad hoc assessments, and 91% of accredited providers use a formally documented methodology (CREST Red Teaming Quality Report, 2025).

This guide breaks down every phase of a professional engagement and maps it to the frameworks that matter. Whether you are an operator refining your process or a CISO evaluating providers, the methodology is what separates a credible red team from a team running automated scans with a fancy report template. Across 500+ RedTeam Partners engagements, the methodology described here has been field-tested against organisations of every size and maturity level.

What Are the Key Frameworks Behind Red Team Methodology?

Before examining the individual phases, it is important to understand the foundational frameworks that inform red team methodology.

MITRE ATT&CK

The MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) framework is the industry standard for categorizing and describing adversary behavior. As of 2025, it documents 14 tactics, 216 techniques, and 475 sub-techniques covering the full lifecycle of a cyberattack.

Red teams use ATT&CK to:

  • Select techniques for adversary emulation based on threat intelligence.
  • Log and categorize all activities during an engagement.
  • Map findings to specific technique IDs in reports, enabling defenders to implement targeted detections.
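
As a rough illustration of that mapping, an engagement activity log can carry a technique ID and parent tactic per action, then be rolled up by tactic for the report. This is a minimal sketch: the actions and account names are invented, though the technique IDs are real ATT&CK identifiers.

```python
from collections import defaultdict

# Illustrative activity log: each entry tags an action with an ATT&CK technique ID
# and its parent tactic (the actions are examples, the IDs are real identifiers).
activities = [
    {"action": "Kerberoasting against svc_sql", "technique": "T1558.003", "tactic": "Credential Access"},
    {"action": "Phishing email with ISO payload", "technique": "T1566.001", "tactic": "Initial Access"},
    {"action": "Pass-the-hash to FILESRV01", "technique": "T1550.002", "tactic": "Lateral Movement"},
]

def group_by_tactic(log):
    """Group logged technique IDs by tactic so reports mirror the ATT&CK matrix."""
    grouped = defaultdict(set)
    for entry in log:
        grouped[entry["tactic"]].add(entry["technique"])
    return {tactic: sorted(ids) for tactic, ids in grouped.items()}

coverage = group_by_tactic(activities)
```

Defenders can then diff this coverage map against their detection rules, tactic by tactic.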

The 14 ATT&CK tactics that structure adversary behavior are:

  1. Reconnaissance
  2. Resource Development
  3. Initial Access
  4. Execution
  5. Persistence
  6. Privilege Escalation
  7. Defense Evasion
  8. Credential Access
  9. Discovery
  10. Lateral Movement
  11. Collection
  12. Command and Control
  13. Exfiltration
  14. Impact

Lockheed Martin Cyber Kill Chain

The Cyber Kill Chain model describes seven sequential stages of an intrusion:

  1. Reconnaissance: Harvesting information about the target.
  2. Weaponization: Creating a deliverable payload (exploit + backdoor).
  3. Delivery: Transmitting the weapon to the target (email, web, USB).
  4. Exploitation: Triggering the payload to exploit a vulnerability.
  5. Installation: Installing malware or establishing persistence.
  6. Command and Control (C2): Establishing a communication channel for remote control.
  7. Actions on Objectives: Achieving the attacker’s goal (data theft, destruction, etc.).

While ATT&CK provides more granular technique-level detail, the Kill Chain is valuable for understanding the linear progression of an attack and identifying opportunities for defensive intervention at each stage.

CREST Red Teaming Standards

CREST (Council of Registered Ethical Security Testers) provides the most widely recognized professional standards for red teaming. CREST’s Simulated Targeted Attack and Response (STAR) methodology defines:

  • Pre-engagement scoping and planning requirements.
  • Threat intelligence integration standards.
  • Technical execution standards.
  • Reporting format and content requirements.
  • Quality assurance and review processes.

According to CREST, 73% of organisations in regulated industries require their red team providers to hold CREST accreditation (CREST Annual Report, 2025).

TIBER-EU Framework

The European Central Bank’s TIBER-EU framework provides a specific methodology for intelligence-led red teaming of financial institutions. It is unique in requiring a three-party model: the target organization, an independent threat intelligence provider, and a separate red team provider. This separation ensures that the red team operates based on realistic, independently developed threat intelligence rather than organizational assumptions.

For more on TIBER-EU, visit RedTeamPartner.com’s framework guide.

Phase 1: Planning and Scoping

The planning phase establishes the foundation for the entire engagement. Mistakes at this stage — unclear objectives, insufficient scoping, or poorly defined rules of engagement — undermine the value of the entire exercise.

Defining Objectives

Every red team engagement begins with the question: What are we trying to learn? Objectives should be specific, measurable, and aligned with the organization’s actual threat landscape.

Examples of well-defined red team objectives:

  • “Determine whether an external attacker can gain access to the SWIFT payment system environment from the internet.”
  • “Assess whether a compromised employee workstation can be used to access the board of directors’ email accounts.”
  • “Test whether an attacker can exfiltrate more than 10,000 customer records without triggering a security alert.”
  • “Evaluate whether physical access to the data center can be obtained through social engineering.”

Poor objectives, by contrast, are vague or unmeasurable:

  • “Test our security.” (Too broad)
  • “Find vulnerabilities.” (That is a penetration test, not a red team engagement)
  • “Hack into our systems.” (No defined success criteria)

Threat Intelligence Integration

Professional red team engagements are informed by threat intelligence that identifies the most relevant threat actors for the target organization. This intelligence shapes the red team’s approach, techniques, and objectives.

Key inputs include:

  • Industry threat landscape: Which threat actors target the organization’s industry?
  • Geographic threats: Which nation-state or regional threat groups are relevant?
  • Historical incidents: Has the organization or its peers experienced previous breaches?
  • Emerging threats: What new TTPs are being used by relevant threat actors?

In TIBER-EU engagements, a separate threat intelligence provider produces a Targeted Threat Intelligence (TTI) report that drives the red team’s adversary emulation plan. According to the European Central Bank’s 2025 TIBER-EU assessment, threat intelligence-led engagements identify 58% more high-severity findings than engagements without dedicated threat intelligence input.

“Threat intelligence is the compass that directs red team operations. Without it, you are simulating a generic attacker rather than the specific threat actors that pose the greatest risk to your organization.” — Dr. Markus Hoffmann, Director of Cyber Threat Intelligence, MITRE Engenuity Center for Threat-Informed Defense

Rules of Engagement (ROE)

The rules of engagement document is the legal and operational foundation of the red team engagement. It must be agreed upon and signed by authorized representatives of both the provider and the client organization before any testing begins.

A thorough ROE document covers:

Legal authorization:

  • Written authorization from an executive with the authority to approve the engagement (typically CEO, CIO, or CISO).
  • Specific statement that the red team is authorized to perform the defined activities.
  • Legal review and sign-off.

Scope definition:

  • IP ranges, domains, applications, and physical locations in scope.
  • Systems, environments, and third parties explicitly excluded from scope.
  • Attack vectors authorized (network, physical, social engineering).

Boundaries and restrictions:

  • Systems that must not be disrupted under any circumstances (e.g., production financial systems, medical equipment, safety systems).
  • Techniques that are prohibited (e.g., denial of service, destructive attacks, accessing specific sensitive data categories).
  • Time-of-day restrictions if applicable.

Safety mechanisms:

  • Emergency stop (“kill switch”) procedures.
  • Contact information for trusted agents and emergency escalation.
  • Procedure for the red team to report critical findings that require immediate remediation.

Communication protocols:

  • Secure communication channels between the red team lead and the trusted agent.
  • Check-in frequency and reporting cadence.
  • Incident deconfliction procedures (how to distinguish red team activity from actual attacks if a real incident occurs during the engagement).

Data handling:

  • How sensitive data encountered during the engagement will be handled.
  • Data destruction requirements after the engagement concludes.
  • Chain of custody requirements for evidence.
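
One way the scope and exclusion clauses above translate into tooling is a pre-flight authorization check that operators run before touching any host. This is a minimal sketch assuming a purely IP-based scope; the ranges are RFC 5737 documentation addresses, not real targets.

```python
import ipaddress

# Hypothetical ROE scope: in-scope CIDR ranges plus explicit exclusions.
IN_SCOPE = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/25"),
]
EXCLUDED = {ipaddress.ip_address("198.51.100.10")}  # e.g. a production payment host

def is_authorized(target: str) -> bool:
    """Return True only if the target is inside an in-scope range and not excluded."""
    addr = ipaddress.ip_address(target)
    if addr in EXCLUDED:
        return False
    return any(addr in net for net in IN_SCOPE)
```

In practice the scope file would be version-controlled alongside the signed ROE so that tooling and paperwork cannot drift apart.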

“Rules of engagement are not bureaucratic overhead — they are the guardrails that enable aggressive, realistic testing while protecting the organization from unintended consequences. Every minute spent on ROE development is repaid tenfold during execution.” — Christina Larsen, Managing Director, CREST International

Team Assembly and Preparation

The red team is assembled based on the engagement’s specific requirements:

  • Red Team Lead / Engagement Manager: Oversees the engagement, manages communication with the client, and ensures adherence to ROE.
  • Senior Operators: Experienced professionals responsible for the most complex aspects of the engagement (custom tool development, advanced evasion, critical exploitation).
  • Operators: Skilled professionals who execute the bulk of technical operations (reconnaissance, exploitation, lateral movement).
  • Social Engineering Specialist: (If applicable) Leads phishing, vishing, and physical intrusion activities.
  • Infrastructure Manager: Configures and maintains the red team’s attack infrastructure (C2 servers, redirectors, phishing infrastructure).

CREST recommends a minimum team size of 3 operators for a standard red team engagement, with larger engagements requiring 5–8+ team members.

Phase 2: Reconnaissance

Reconnaissance is the intelligence-gathering phase where the red team maps the target’s attack surface and identifies potential entry points. This phase mirrors the real-world behavior of sophisticated threat actors, who invest significant time in reconnaissance before launching attacks.

Passive Reconnaissance

Passive reconnaissance gathers information without directly interacting with the target’s systems, making it effectively invisible to the target:

  • Domain and DNS analysis: Enumerating subdomains, mail servers, name servers, and DNS records using tools like Amass, Subfinder, and SecurityTrails.
  • Certificate transparency logs: Identifying domains and subdomains from publicly available SSL/TLS certificate records.
  • Public code repositories: Searching GitHub, GitLab, and Bitbucket for accidentally exposed credentials, API keys, internal documentation, and infrastructure details.
  • Job postings: Analyzing job listings to identify technology stacks, security tools, and organizational structure.
  • Social media and LinkedIn: Profiling employees, identifying key personnel, understanding organizational hierarchy, and gathering information for social engineering pretexts.
  • Breach data analysis: Searching known breach databases for compromised credentials associated with the target organization’s email domains.
  • Financial and regulatory filings: Reviewing public filings for information about technology investments, vendors, and organizational structure.
  • WHOIS and domain registration: Identifying ownership, registration dates, and historical records.
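
For example, certificate transparency results can be reduced to a deduplicated subdomain list. The sketch below parses a canned response in the JSON shape used by services such as crt.sh; the hostnames are placeholders and no network request is made.

```python
import json

# Canned response imitating the JSON shape of a certificate-transparency search
# (crt.sh-style); "name_value" may contain several newline-separated hostnames.
raw = json.dumps([
    {"name_value": "vpn.example.com\nmail.example.com"},
    {"name_value": "*.example.com"},
    {"name_value": "mail.example.com"},
])

def extract_subdomains(payload: str, apex: str) -> list:
    """Deduplicate hostnames from CT entries, dropping wildcard certificates."""
    names = set()
    for entry in json.loads(payload):
        for name in entry["name_value"].splitlines():
            if name.endswith("." + apex) and not name.startswith("*"):
                names.add(name.lower())
    return sorted(names)

subdomains = extract_subdomains(raw, "example.com")  # ['mail.example.com', 'vpn.example.com']
```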

Active Reconnaissance

Active reconnaissance involves direct interaction with the target’s systems and may be detectable:

  • Port scanning and service enumeration: Using tools like Nmap and Masscan to identify open ports, running services, and software versions on external-facing systems.
  • Web application fingerprinting: Identifying web technologies, frameworks, and CMS platforms using tools like Wappalyzer, WhatWeb, and httpx.
  • Virtual host enumeration: Identifying additional websites and applications hosted on shared infrastructure.
  • Cloud infrastructure discovery: Enumerating cloud resources (S3 buckets, Azure blobs, GCP storage) associated with the target.
  • Email system analysis: Testing email security controls (SPF, DKIM, DMARC) and identifying email gateway products.
  • Wireless network discovery: (For physical engagements) Identifying wireless networks, signal strength, and authentication mechanisms from external locations.
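
The raw output of these scans is usually normalized before analysis. A hedged sketch of that step: the XML fragment below is hand-written to follow the structure of nmap’s -oX report, with invented host and service values.

```python
import xml.etree.ElementTree as ET

# Trimmed, hand-written fragment in the shape of nmap's XML (-oX) output.
NMAP_XML = """
<nmaprun>
  <host>
    <address addr="203.0.113.10" addrtype="ipv4"/>
    <ports>
      <port protocol="tcp" portid="443"><state state="open"/><service name="https" product="nginx"/></port>
      <port protocol="tcp" portid="25"><state state="filtered"/><service name="smtp"/></port>
    </ports>
  </host>
</nmaprun>
"""

def open_services(xml_text: str):
    """Return (address, port, service) for every open port in an nmap XML report."""
    results = []
    for host in ET.fromstring(xml_text).iter("host"):
        addr = host.find("address").get("addr")
        for port in host.iter("port"):
            if port.find("state").get("state") == "open":
                results.append((addr, int(port.get("portid")), port.find("service").get("name")))
    return results
```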

According to SANS Institute’s 2025 Red Team Operations Survey, the reconnaissance phase accounts for 30–40% of total engagement time for experienced red teams. This investment is critical: Mandiant’s 2025 data shows that 73% of successful red team initial access attempts use intelligence gathered during detailed reconnaissance.

Reconnaissance Deliverable

The output of the reconnaissance phase is a detailed intelligence package that includes:

  • Target network topology map (external-facing)
  • Employee list with roles and potential social engineering targets
  • Technology stack inventory
  • Identified attack surface (domains, IPs, applications, cloud resources)
  • Potential attack vectors prioritized by feasibility
  • Compromised credential inventory
  • Social engineering pretexts based on gathered intelligence

Phase 3: Initial Access

The initial access phase is where the red team attempts to gain a foothold in the target environment. This phase tests the organization’s perimeter defenses, email security, employee awareness, and physical security controls.

Social Engineering Attacks

Social engineering remains one of the most consistently successful initial access vectors in red team engagements, even though Mandiant’s M-Trends 2025 Report shows that in real-world intrusions exploits (33%) and stolen credentials (16%) now outrank email phishing (14%) as initial infection vectors.

Common social engineering techniques include:

  • Spear-phishing with malicious attachments: Crafting convincing emails with payloads designed to bypass email filters and endpoint protection. Modern payloads often use HTML smuggling, ISO/IMG containers, or OneNote files.
  • Spear-phishing with credential harvesting links: Directing targets to convincing replicas of legitimate login pages (Microsoft 365, VPN portals, internal applications) using tools like Evilginx2 to capture credentials and session tokens, bypassing multi-factor authentication.
  • Vishing (voice phishing): Calling employees while impersonating IT help desk, vendors, or executives to extract credentials or convince them to perform actions.
  • Pretexting: Creating elaborate scenarios (e.g., posing as a new employee, vendor, or auditor) to gain physical access or extract information.
  • USB drop attacks: Placing malicious USB devices in parking lots, lobbies, or common areas with labels designed to entice curiosity.

Technical Exploitation

When social engineering is not the primary vector or as a complementary approach:

  • Exploiting external-facing vulnerabilities: Targeting known CVEs or zero-day vulnerabilities in web applications, VPN gateways, email servers, or other perimeter systems.
  • Password attacks: Credential stuffing (using breached credentials), password spraying (trying common passwords against many accounts), and brute force attacks against exposed services.
  • Supply chain compromise: Targeting third-party vendors, software providers, or managed service providers (MSPs) that have access to the target’s environment.
  • Cloud service exploitation: Targeting misconfigurations in cloud environments, such as overly permissive IAM policies, exposed storage buckets, or insecure serverless functions.
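
Password spraying in particular is paced around the target’s lockout policy. The sketch below shows that arithmetic under an assumed, purely hypothetical policy of five failures per 30-minute window.

```python
import math
from datetime import timedelta

# Hypothetical lockout policy: 5 failures within a 30-minute observation window
# lock an account, so the spray stays safely below that per account per window.
LOCKOUT_THRESHOLD = 5
OBSERVATION_WINDOW = timedelta(minutes=30)

def spray_duration(num_passwords: int, safety_margin: int = 1) -> timedelta:
    """Minimum campaign length to try every candidate password against each account."""
    per_window = max(LOCKOUT_THRESHOLD - 1 - safety_margin, 1)  # guesses per account/window
    rounds = math.ceil(num_passwords / per_window)              # spraying rounds required
    return rounds * OBSERVATION_WINDOW

# Ten candidate passwords at 3 guesses per account per window -> a 2-hour campaign.
campaign = spray_duration(10)
```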

Physical Intrusion

For engagements that include physical testing:

  • Tailgating: Following authorized personnel through secure doors.
  • Badge cloning: Copying RFID/NFC access credentials using devices like Proxmark3 or Flipper Zero.
  • Lock bypass: Picking locks, bypassing electronic locks, or exploiting emergency release mechanisms.
  • Social engineering security staff: Convincing reception, guards, or employees to grant access through pretexting.
  • Rogue device placement: Installing network implants (such as a LAN Turtle or Raspberry Pi) on the internal network through physical access.

Initial Access Success Metrics

Professional red teams track detailed metrics during the initial access phase:

| Metric | What It Measures |
| --- | --- |
| Time to initial access | Hours/days from first access attempt to successful foothold |
| Vector success rate | Percentage of each attack vector that succeeded |
| Phishing click rate | Percentage of targeted employees who clicked phishing links |
| Credential capture rate | Percentage of targeted employees who submitted credentials |
| Detection rate | Percentage of initial access attempts detected by the blue team |
| Alert response time | Time between alert generation and blue team response |
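
Computed from raw engagement counts, these metrics reduce to a few ratios. The numbers in this sketch are invented for illustration.

```python
# Illustrative initial-access metrics derived from raw engagement counts.
def access_metrics(targeted: int, clicked: int, submitted: int,
                   attempts: int, detected: int) -> dict:
    """Return the phishing and detection rates (as percentages) tracked in Phase 3."""
    return {
        "click_rate": round(clicked / targeted * 100, 1),
        "credential_capture_rate": round(submitted / targeted * 100, 1),
        "detection_rate": round(detected / attempts * 100, 1),
    }

# 200 employees targeted, 46 clicks, 18 credential submissions,
# 12 distinct access attempts of which the blue team detected 3.
m = access_metrics(targeted=200, clicked=46, submitted=18, attempts=12, detected=3)
# -> 23.0% clicked, 9.0% submitted credentials, 25.0% of attempts detected
```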

Phase 4: Persistence

Once initial access is established, the red team implements persistence mechanisms to maintain access even if the initial entry point is discovered and remediated.

Persistence Techniques

Mapped to MITRE ATT&CK Tactic TA0003 (Persistence):

  • Scheduled tasks and cron jobs (T1053): Creating automated tasks that re-establish access at regular intervals.
  • Registry run keys and startup folder (T1547): Configuring malicious code to execute automatically on system startup.
  • Valid accounts (T1078): Creating new accounts or modifying existing ones to provide ongoing access.
  • Web shells (T1505.003): Deploying web-accessible backdoors on compromised web servers.
  • Implant deployment: Installing custom command-and-control implants designed to evade endpoint detection.
  • Golden/Silver Ticket attacks (T1558): Forging Kerberos tickets for persistent domain access.
  • Cloud persistence: Creating rogue service principals, OAuth applications, or federated identity providers in cloud environments.

Command and Control (C2)

The red team establishes command and control infrastructure to communicate with implants deployed in the target environment:

  • C2 framework selection: Choosing appropriate tools (Cobalt Strike, Brute Ratel C4, Sliver, Mythic) based on the engagement requirements and the target’s security controls.
  • Infrastructure setup: Configuring redirectors, domain fronting, and malleable C2 profiles to disguise C2 traffic as legitimate web browsing.
  • Communication protocols: Using HTTPS, DNS, or legitimate cloud services (Azure, AWS, Slack, Teams) for C2 communication to blend with normal traffic.
  • Fallback mechanisms: Establishing multiple independent C2 channels so that the loss of one channel does not terminate the engagement.
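
On the traffic-blending side, C2 configuration often comes down to beacon timing. This sketch mimics the sleep-and-jitter settings that frameworks such as Cobalt Strike and Sliver expose; the interval and jitter values are arbitrary examples.

```python
import random

# Illustrative beacon scheduler: a base callback interval with percentage jitter,
# so check-ins do not form a machine-regular (and easily detected) pattern.
def next_checkin(base_seconds: int, jitter_pct: int, rng: random.Random) -> float:
    """Pick the next beacon delay uniformly within +/- jitter_pct of the base interval."""
    spread = base_seconds * jitter_pct / 100
    return base_seconds + rng.uniform(-spread, spread)

rng = random.Random(7)  # seeded only so the example is reproducible
delays = [next_checkin(300, 20, rng) for _ in range(3)]  # each between 240 and 360 seconds
```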

According to SANS Institute’s 2025 Offensive Tools Survey, 78% of professional red teams maintain at least two independent C2 channels during an engagement, and 45% maintain three or more.

Phase 5: Privilege Escalation

Privilege escalation is the process of elevating the red team’s access level from an initial low-privilege foothold to higher-level access — ultimately domain administrator or root access.

Local Privilege Escalation

Techniques for elevating privileges on a single system:

  • Unquoted service paths (T1574.009): Exploiting Windows services with unquoted executable paths.
  • DLL hijacking (T1574.001): Placing malicious DLLs in locations where privileged processes will load them.
  • Kernel exploits: Exploiting vulnerabilities in the operating system kernel to gain root/SYSTEM access.
  • Credential harvesting: Using tools like Mimikatz to extract passwords, hashes, and Kerberos tickets from memory.
  • Misconfigured services: Exploiting services running with excessive privileges or writable configuration files.
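
For a concrete taste of one item in that list, the unquoted service path weakness (T1574.009) reduces to a string test over a service’s image path. This is a simplified heuristic with invented paths, not a complete scanner.

```python
# Simplified heuristic for T1574.009: an unquoted Windows service path with a space
# before the executable name lets an attacker plant e.g. C:\Program.exe, which the
# service control manager would try first.
def is_unquoted_vulnerable(image_path: str) -> bool:
    path = image_path.strip()
    if path.startswith('"'):        # properly quoted paths are not exploitable this way
        return False
    idx = path.lower().find(".exe")
    exe_part = path[:idx] if idx != -1 else path
    return " " in exe_part          # a space before the binary name is an interception point

vulnerable = is_unquoted_vulnerable(r"C:\Program Files\Acme Agent\agent.exe -svc")  # True
safe = is_unquoted_vulnerable(r'"C:\Program Files\Acme Agent\agent.exe" -svc')      # False
```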

Domain Privilege Escalation

Techniques for escalating privileges within an Active Directory environment:

  • Kerberoasting (T1558.003): Requesting Kerberos service tickets for accounts with SPNs and cracking them offline.
  • AS-REP Roasting (T1558.004): Targeting accounts that do not require Kerberos pre-authentication.
  • ACL abuse: Exploiting misconfigured Active Directory access control lists to modify user attributes, reset passwords, or add users to groups.
  • Group Policy abuse: Modifying Group Policy Objects to execute code on domain-joined systems.
  • Certificate Services abuse: Exploiting misconfigured Active Directory Certificate Services (AD CS) to issue certificates for privileged accounts — a technique category that CREST’s 2025 report identifies as present in 67% of red team engagements against Active Directory environments.
  • Delegation abuse: Exploiting constrained or unconstrained Kerberos delegation to impersonate privileged users.

Cloud Privilege Escalation

  • IAM policy exploitation: Identifying and exploiting overly permissive IAM policies.
  • Role chaining: Chaining multiple roles or service accounts to escalate to administrative access.
  • Metadata service abuse: Accessing cloud instance metadata services to retrieve credentials and tokens.
  • Cross-account trust exploitation: Exploiting misconfigured cross-account trust relationships.

“Privilege escalation is where the red team’s patience and expertise are most clearly demonstrated. The difference between a junior tester and an expert operator is the ability to chain together subtle misconfigurations into a complete escalation path that automated tools would never identify.” — James Wei, Principal Red Team Consultant, CREST Fellow

Phase 6: Lateral Movement

Lateral movement is the process of expanding access across the target network, moving from system to system toward the engagement objectives.

Lateral Movement Techniques

Mapped to MITRE ATT&CK Tactic TA0008 (Lateral Movement):

  • Remote services (T1021): Using legitimate remote access protocols (RDP, SSH, WinRM, SMB) with compromised credentials.
  • Pass-the-hash (T1550.002): Authenticating to remote systems using NTLM password hashes without knowing the plaintext password.
  • Pass-the-ticket (T1550.003): Using stolen Kerberos tickets to authenticate to remote services.
  • WMI and PowerShell remoting: Using Windows management tools for remote command execution.
  • PsExec and SMB: Using Sysinternals tools or SMB-based execution for lateral movement.
  • DCOM (T1021.003): Using Distributed COM for remote code execution.
  • Internal spear-phishing: Using compromised email accounts to phish other employees with internal, trusted messages.

Avoiding Detection During Lateral Movement

Sophisticated red teams employ evasion techniques during lateral movement:

  • Living off the land (LOLBins): Using legitimate system binaries and tools rather than custom malware to avoid triggering behavioral detection.
  • Timestomping: Modifying file timestamps to avoid detection by forensic analysis.
  • Log manipulation: Clearing or modifying security logs (when authorized by ROE) to test log integrity monitoring.
  • Traffic blending: Operating during business hours and matching normal traffic patterns.
  • Process injection: Injecting code into legitimate processes to hide execution.
  • Memory-only operations: Avoiding disk writes by operating entirely in memory to evade endpoint detection.

Network Segmentation Testing

A critical aspect of lateral movement is testing the effectiveness of network segmentation:

  • Can the red team move from user networks to server networks?
  • Can the red team cross from IT environments into OT/ICS networks?
  • Are VLANs and firewall rules enforced correctly?
  • Can cloud environments be reached from on-premises networks?
  • Are trust relationships between Active Directory domains or forests properly secured?
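
Those questions can be answered systematically by comparing reachability observed during lateral movement against the intended policy. A minimal sketch with invented zone names:

```python
# Intended segmentation policy: (source zone, destination zone) -> allowed?
POLICY = {
    ("user", "server"): False,
    ("user", "dmz"): True,
    ("it", "ot"): False,
}

def segmentation_findings(observed: dict) -> list:
    """Return every zone pair that was reachable despite the policy forbidding it."""
    return sorted(pair for pair, reachable in observed.items()
                  if reachable and not POLICY.get(pair, False))

# Reachability measured during the engagement (illustrative results).
observed = {("user", "server"): True, ("user", "dmz"): True, ("it", "ot"): False}
gaps = segmentation_findings(observed)  # [('user', 'server')]
```

Each entry in the resulting gap list is a segmentation finding for the report.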

According to Mandiant’s 2025 data, 68% of red team engagements successfully bypass at least one network segmentation control, demonstrating that segmentation implementation frequently does not match design intent.

Phase 7: Data Collection and Exfiltration

The data collection and exfiltration phase demonstrates the real-world impact of the red team’s access by identifying, collecting, and (where authorized) exfiltrating sensitive data.

Data Identification

The red team identifies high-value data aligned with engagement objectives:

  • Customer personal data (PII)
  • Financial records and payment card data
  • Intellectual property and trade secrets
  • Employee credentials and personal information
  • Strategic business documents
  • Healthcare records (PHI)
  • Source code and technical documentation

Exfiltration Techniques

Mapped to MITRE ATT&CK Tactic TA0010 (Exfiltration):

  • Exfiltration over C2 channel (T1041): Using the established command and control channel to transfer data.
  • Exfiltration over web services (T1567): Using legitimate cloud storage services (OneDrive, Google Drive, Dropbox) to stage and transfer data.
  • DNS exfiltration (T1048.003): Encoding data within DNS queries to bypass network monitoring.
  • Encrypted channels: Transferring data over encrypted protocols to evade DLP (Data Loss Prevention) inspection.
  • Physical exfiltration: Copying data to removable media during physical intrusion activities.
  • Steganography: Hiding data within image or document files to avoid detection.
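
As an illustration of the DNS technique (T1048.003), data is typically base32-encoded and split into DNS-safe labels (at most 63 bytes each, per RFC 1035) under an attacker-controlled domain. The domain and payload below are made up for the example.

```python
import base64

# Illustrative T1048.003 encoder: base32 keeps labels DNS-safe; a sequence-number
# label lets the receiving name server reassemble chunks in order.
def dns_exfil_queries(data: bytes, domain: str, chunk: int = 60) -> list:
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    return [f"{i}.{encoded[i * chunk:(i + 1) * chunk]}.{domain}"
            for i in range((len(encoded) + chunk - 1) // chunk)]

queries = dns_exfil_queries(b"card_number=4111111111111111", "cdn-metrics.example")
```

A defender-side takeaway: long, high-entropy subdomains toward a single external domain are exactly what DNS analytics should flag.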

Impact Demonstration

In some engagements, the red team may also demonstrate impact beyond data exfiltration:

  • Simulated ransomware deployment: Demonstrating the ability to deploy ransomware (without actually encrypting data) to test detection and response.
  • Business process disruption: Demonstrating the ability to modify, disrupt, or corrupt critical business processes.
  • Account manipulation: Demonstrating the ability to modify financial records, create fraudulent transactions, or manipulate business data.

All impact demonstrations are carefully controlled, documented, and aligned with the rules of engagement to prevent actual damage.

Phase 8: Reporting and Debrief

The reporting phase transforms the red team’s activities and findings into actionable intelligence that drives security improvement. The quality of the report determines the long-term value of the engagement.

Report Components

A professional red team report includes the following sections:

Executive summary (2–4 pages):

  • High-level narrative suitable for C-suite and board audiences.
  • Key findings and their business impact.
  • Overall risk assessment.
  • Strategic recommendations.

Attack narrative (10–20 pages):

  • Chronological account of the engagement from reconnaissance through objective achievement.
  • Detailed description of each phase, including techniques used, tools employed, and timestamps.
  • Screenshots, evidence, and proof-of-concept documentation.
  • Clear explanation of which activities were detected and which were not.

Detection and response assessment (5–10 pages):

  • Analysis of which red team activities triggered security alerts.
  • Evaluation of blue team response times, actions, and effectiveness.
  • Detection gap analysis mapped to MITRE ATT&CK techniques.
  • Recommendations for improving detection coverage.

Technical findings (variable length):

  • Each vulnerability and security gap identified, with:
    • Description and technical details
    • CVSS or risk severity rating
    • Business impact assessment
    • Evidence (screenshots, command output)
    • MITRE ATT&CK technique mapping
    • Specific remediation guidance
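
A finding record in that shape, with its severity band derived from the CVSS v3.1 qualitative scale, might be sketched as follows; the finding itself is illustrative.

```python
# CVSS v3.1 qualitative severity bands (per the FIRST specification), used to turn
# a numeric score into the rating attached to each finding.
def cvss_severity(score: float) -> str:
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

finding = {
    "title": "AD CS ESC1 misconfiguration",  # illustrative finding title
    "attack_technique": "T1649",             # Steal or Forge Authentication Certificates
    "cvss": 9.1,
}
finding["severity"] = cvss_severity(finding["cvss"])  # "Critical"
```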

Recommendations (5–10 pages):

  • Prioritized remediation guidance organized by criticality and effort.
  • Quick wins (immediately actionable improvements).
  • Strategic improvements (longer-term architecture and process changes).
  • Detection engineering recommendations with specific detection logic.

Appendices:

  • Complete list of tools used.
  • IOCs (Indicators of Compromise) generated during the engagement for blue team review.
  • Detailed technical evidence and logs.
  • Rules of engagement documentation.

Debrief Sessions

Professional red team providers conduct multiple debrief sessions tailored to different audiences:

  • Technical debrief: Detailed walkthrough with the security team, SOC analysts, and IT operations. Focuses on technical findings, detection gaps, and specific remediation steps.
  • Executive debrief: Presentation for C-suite and board members. Focuses on business risk, strategic implications, and investment recommendations.
  • Purple team replay: Collaborative session where the red team re-executes specific techniques while the blue team works to implement and validate new detections.

According to CREST’s 2025 Quality Standards, 94% of accredited providers deliver reports within 15 business days of engagement conclusion, with the debrief session conducted within 5 business days of report delivery.

Report Quality Metrics

What separates an excellent red team report from a mediocre one:

| Quality Indicator | Good Report | Poor Report |
| --- | --- | --- |
| Narrative | Tells a compelling story that non-technical decision-makers understand | Lists findings without context or narrative flow |
| Business impact | Connects every finding to specific business consequences | Uses only CVSS scores without business context |
| Recommendations | Provides specific, actionable remediation guidance | Offers generic advice (“patch your systems”) |
| ATT&CK mapping | Maps all activities and findings to specific ATT&CK technique IDs | Does not reference ATT&CK or uses it inconsistently |
| Detection analysis | Provides detailed analysis of what was and was not detected | Does not assess detection capabilities |
| Evidence | Includes full screenshots, logs, and proof-of-concept documentation | Minimal or missing evidence |

How Does Red Team Methodology Map to MITRE ATT&CK?

The following table maps the red team methodology phases described in this guide to the corresponding MITRE ATT&CK tactics:

| Red Team Phase | MITRE ATT&CK Tactics |
| --- | --- |
| Reconnaissance | Reconnaissance (TA0043) |
| Infrastructure Setup | Resource Development (TA0042) |
| Initial Access | Initial Access (TA0001), Execution (TA0002) |
| Persistence | Persistence (TA0003), Command and Control (TA0011) |
| Privilege Escalation | Privilege Escalation (TA0004), Credential Access (TA0006) |
| Lateral Movement | Lateral Movement (TA0008), Discovery (TA0007), Defense Evasion (TA0005) |
| Data Collection/Exfiltration | Collection (TA0009), Exfiltration (TA0010) |
| Impact Demonstration | Impact (TA0040) |

This mapping ensures that red team reporting can be directly consumed by blue teams and detection engineers who organize their detection capabilities around the ATT&CK framework.

What Are Rules of Engagement Best Practices?

Rules of engagement are the most critical non-technical element of red team methodology. Based on CREST and TIBER-EU guidance, best practices include:

  • Written and signed before testing begins: No exceptions. Verbal authorization is insufficient.
  • Specific and unambiguous: Clearly define what is in scope, what is out of scope, and what techniques are authorized.
  • Include emergency procedures: Define exactly how the red team can be stopped immediately if needed.
  • Address third-party systems: Explicitly state whether testing of third-party or shared services is authorized.
  • Define data handling requirements: Specify how sensitive data discovered during the engagement will be protected and destroyed.
  • Include deconfliction procedures: Establish how red team activity will be distinguished from real attacks if a genuine incident occurs during the engagement.
  • Legal review: Have the ROE reviewed by legal counsel on both sides.
  • Version control: Maintain version-controlled copies of the ROE, signed by authorized representatives.

How Does Methodology Differ by Engagement Type?

The core methodology applies across all red team engagements, but specific phases are adapted based on the engagement type:

| Phase | Standard Red Team | TIBER-EU / CBEST | Assumed Breach | Purple Team |
| --- | --- | --- | --- | --- |
| Planning | Full | Extended (includes TI provider selection) | Simplified | Collaborative |
| Reconnaissance | Full | Intelligence-led (separate TI provider) | Minimal | Not applicable |
| Initial Access | Full | Full, based on TTI report | Skipped (access provided) | Technique-specific |
| Persistence | Full | Full | Full | Technique-specific |
| Privilege Escalation | Full | Full | Full | Technique-specific |
| Lateral Movement | Full | Full | Full | Technique-specific |
| Exfiltration | Objective-dependent | Full | Objective-dependent | Not applicable |
| Reporting | Standard | Extended (regulatory format) | Standard | Detection coverage matrix |
| Duration | 4–12 weeks | 10–16 weeks | 2–6 weeks | 2–5 days |

For organisations considering their first red team engagement, an assumed breach approach is often the most cost-effective starting point, as it focuses testing resources on internal detection and response capabilities.

What Are the Key Statistics on Red Team Methodology?

  • 91% of CREST-accredited providers follow a formally documented methodology (CREST, 2025).
  • 47% more actionable findings from engagements using structured frameworks (CREST, 2025).
  • 73% of regulated organisations require CREST accreditation from providers (CREST, 2025).
  • 58% more high-severity findings from threat intelligence-led engagements vs. non-intelligence-led (ECB TIBER-EU Assessment, 2025).
  • 30–40% of engagement time is spent on reconnaissance by experienced teams (SANS, 2025).
  • 73% of successful initial access leverages intelligence from detailed reconnaissance (Mandiant, 2025).
  • 67% of engagements against Active Directory environments exploit certificate services misconfigurations (CREST, 2025).
  • 68% of engagements successfully bypass at least one network segmentation control (Mandiant, 2025).
  • 78% of red teams maintain at least two independent C2 channels (SANS, 2025).
  • 94% of CREST providers deliver reports within 15 business days (CREST, 2025).

Frequently Asked Questions

How long does each phase typically take?

Phase durations vary significantly based on engagement scope and complexity. As a general guideline for a standard 8-week engagement: Planning (1–2 weeks pre-engagement), Reconnaissance (2–3 weeks), Initial Access (1–2 weeks), Post-Exploitation (2–3 weeks including persistence, escalation, lateral movement, and exfiltration), Reporting (1–2 weeks). These phases often overlap — for example, reconnaissance may continue throughout the engagement.

What happens if the red team is detected?

If the blue team detects the red team, the engagement does not necessarily end. The red team may attempt to adapt — changing techniques, rotating infrastructure, or pivoting to different attack vectors — just as a real attacker would. Detection is documented as a blue team success, and the red team’s ability (or inability) to continue operating post-detection provides valuable insight.

How do you ensure safety during a red team engagement?

Safety is maintained through thorough rules of engagement, constant communication with the trusted agent, use of proven tools and techniques, careful testing against sensitive systems, and emergency stop procedures. Professional red teams conduct internal risk assessments before attempting any technique that could potentially impact system availability.

Can red team methodology be automated?

Elements of the methodology can be automated — particularly reconnaissance, vulnerability scanning, and some initial access techniques. Breach and Attack Simulation (BAS) platforms automate specific ATT&CK technique execution. However, the creative, adaptive aspects of red teaming — crafting convincing social engineering pretexts, chaining together unexpected attack paths, adapting to blue team responses — require human expertise. According to NIST SP 800-53 Rev. 5, automated testing should complement but not replace human-led red team exercises.
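For a sense of what the automatable portion looks like, here is a toy execute-and-record loop. It is not any real BAS platform's API; every name is hypothetical, and the technique callable is a benign stand-in for T1016 (System Network Configuration Discovery):

```python
import socket

def check_t1016():
    """Benign stand-in for T1016: query local network configuration."""
    return {"executed": True, "evidence": socket.gethostname()}

def run_plan(plan):
    """Execute each technique callable and collect results keyed by technique ID."""
    results = {}
    for technique_id, action in plan.items():
        try:
            results[technique_id] = action()
        except Exception as exc:
            results[technique_id] = {"executed": False, "error": str(exc)}
    return results

results = run_plan({"T1016": check_t1016})
print(results["T1016"]["executed"])
```

The loop is the easy part; the human work is deciding which techniques belong in the plan and interpreting what the blue team did (or did not) see.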

How does the methodology differ for cloud environments?

Cloud red teaming follows the same fundamental phases but emphasizes cloud-specific techniques: IAM policy analysis, service enumeration, metadata service exploitation, cross-account pivoting, serverless function abuse, and container escape. The reconnaissance phase focuses heavily on cloud infrastructure discovery using tools like ScoutSuite, Pacu, and ROADtools. Persistence mechanisms target cloud-native constructs like service principals, OAuth applications, and federation trusts.
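As one example of cloud-focused work, parts of an IAM policy review can be mechanised. This sketch (the helper name is hypothetical) flags policy statements that allow every action on every resource, one of the misconfigurations an IAM policy analysis looks for:

```python
import json

def flag_over_permissive(policy_json: str) -> list:
    """Return statements granting Action "*" on Resource "*" with Effect Allow."""
    policy = json.loads(policy_json)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # single-statement policies may omit the list
        statements = [statements]
    flagged = []
    for stmt in statements:
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if stmt.get("Effect") == "Allow" and "*" in actions and "*" in resources:
            flagged.append(stmt)
    return flagged

admin_policy = (
    '{"Version": "2012-10-17", '
    '"Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}]}'
)
print(len(flag_over_permissive(admin_policy)))
```

A real review also has to account for NotAction, conditions, permission boundaries, and cross-account trust, which is where tools like ScoutSuite and Pacu earn their keep.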

What certifications validate red team methodology expertise?

The most recognized certifications include CREST CCSAM (Certified Simulated Attack Manager) for engagement leads, CREST CCSAS (Certified Simulated Attack Specialist) for senior operators, OSEP (Offensive Security Experienced Penetration Tester), OSCE3 (Offensive Security Certified Expert 3), GXPN (GIAC Exploit Researcher and Advanced Penetration Tester), and CRTO (Certified Red Team Operator). For TIBER-EU engagements, providers must hold specific national accreditation.

How do you select techniques for a specific engagement?

Technique selection is driven by threat intelligence. The red team identifies the threat actors most relevant to the target organisation, analyses their documented TTPs (using MITRE ATT&CK's threat group profiles and threat intelligence reports), and builds an adversary emulation plan that replicates those specific techniques. This approach ensures the engagement tests the defences most relevant to the organisation's actual threat landscape.
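A simplified sketch of that selection step, using hypothetical threat intelligence data: intersect the techniques attributed to the chosen actors with the techniques the ROE authorises.

```python
# Hypothetical actor-to-TTP attributions; in practice these come from
# ATT&CK group profiles and the engagement's threat intelligence report.
actor_ttps = {
    "FIN7": {"T1566.001", "T1059.001", "T1105"},
    "APT29": {"T1566.002", "T1059.001", "T1078.004"},
}

# Techniques the ROE authorises for this engagement (hypothetical scope).
authorised = {"T1566.001", "T1566.002", "T1059.001"}

def build_emulation_plan(actors, actor_ttps, authorised):
    """Select the authorised techniques observed across the chosen actors."""
    observed = set().union(*(actor_ttps[a] for a in actors))
    return sorted(observed & authorised)

plan = build_emulation_plan(["FIN7", "APT29"], actor_ttps, authorised)
print(plan)  # -> ['T1059.001', 'T1566.001', 'T1566.002']
```

Real emulation plans also sequence techniques into an attack chain and attach expected detections, but the core selection logic is this intersection of relevance and authorisation.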

How is red team methodology evolving?

Key trends include AI-augmented operations (using LLMs for reconnaissance analysis, phishing content generation, and code development), increased focus on cloud and identity-based attack paths, integration of OT/ICS testing into standard methodology, continuous red teaming models that maintain persistent adversarial pressure, and the emergence of automated adversary emulation platforms that complement human-led operations.

The Bottom Line

Methodology separates professional red teaming from ad hoc hacking. MITRE ATT&CK provides the technique language. The Kill Chain provides the attack lifecycle model. CREST and TIBER-EU provide the professional standards. Without a structured approach, you get inconsistent results, missed attack paths, and reports that gather dust.

The 8-phase framework in this guide is what we follow across 500+ engagements. It is field-tested, framework-aligned, and designed to produce findings that your blue team can actually act on.