Social engineering is the manipulation of people into performing actions or revealing confidential information. 60% of breaches involve the human element (Verizon DBIR, 2025). In red team engagements, social engineering is not one tool among many; it is the primary entry point. When authorised as part of the scope, social engineering achieves initial access in 85-90% of engagements across the Swiss financial sector. Technical security investment means little while the human layer remains open.

What Is Social Engineering and Why Is It Central to Red Teaming?

Social engineering exploits the fundamental human tendencies of trust, helpfulness, fear, and urgency to bypass security controls that technology alone cannot enforce. While organisations invest billions annually in firewalls, endpoint detection, and encryption, the human element remains the weakest link in the security chain — a reality that both real-world attackers and professional red teams consistently exploit.

In the context of red teaming, social engineering serves several critical functions:

  • Initial Access: Gaining the first foothold in a target environment, often through phishing emails that deliver malware or harvest credentials
  • Intelligence Gathering: Extracting information from employees about internal systems, processes, and security controls through conversation and pretexting
  • Physical Access: Gaining entry to secure facilities through tailgating, impersonation, or manipulation of access control procedures
  • Privilege Escalation: Convincing employees with elevated access to perform actions or share credentials that enable deeper network penetration
  • Persistence: Establishing ongoing relationships with insiders that provide continued access over the engagement period

“The best firewall in the world cannot stop an employee from clicking a link in a carefully crafted email that appears to come from their CEO. Social engineering attacks target the one component of any security architecture that cannot be patched: human psychology.” — Kevin Mitnick, Former Hacker and Security Consultant

Key statistics that underscore the importance of social engineering in cybersecurity:

  • 60% of breaches involved the human element (Verizon DBIR, 2025)
  • 16% of breaches begin with phishing (Verizon DBIR, 2025)
  • $4.76 million: Average cost of a breach originating from social engineering (IBM Cost of a Data Breach Report, 2024)
  • 12 minutes: Median time for the first employee to click a phishing link in simulated campaigns (KnowBe4, 2025)
  • 350% increase in business email compromise (BEC) attacks between 2020 and 2025 (FBI IC3)
  • 68% of organisations experienced at least one successful social engineering attack in the past year (SANS, 2024)

What Are the Primary Social Engineering Techniques?

Red teams employ a diverse arsenal of social engineering techniques, each targeting different psychological vulnerabilities and communication channels. Understanding these techniques is essential for both offensive operators and defenders.

Phishing

Phishing is the most widespread social engineering technique, using deceptive emails to trick recipients into clicking malicious links, opening infected attachments, or entering credentials on spoofed websites. In red team engagements, phishing campaigns are meticulously crafted to target specific individuals or groups within the organization.

Spear Phishing: Highly targeted emails directed at specific individuals, using personal information gathered during reconnaissance to increase credibility. Red teams research targets’ LinkedIn profiles, social media activity, professional affiliations, and recent activities to craft convincing pretexts.

Whaling: Spear phishing directed specifically at senior executives and board members. These attacks typically impersonate trusted business partners, legal counsel, or regulatory bodies, and often involve urgent financial or compliance-related pretexts.

Clone Phishing: Replicating legitimate emails the target has previously received (such as delivery notifications, invoice reminders, or internal communications) with malicious modifications. This technique leverages existing trust and familiarity with the communication pattern.

Vishing (Voice Phishing)

Vishing uses telephone calls to manipulate targets. Red team operators often impersonate IT support staff, bank representatives, or senior management to extract information or convince targets to perform specific actions.

A typical red team vishing scenario might involve:

  1. Calling a target employee while impersonating IT helpdesk
  2. Referencing a “security incident” requiring immediate credential verification
  3. Directing the target to a spoofed authentication portal
  4. Capturing credentials as they are entered

Vishing is particularly effective because voice communication conveys urgency and authority more powerfully than email, and recipients have less time to evaluate the legitimacy of the request.

Smishing (SMS Phishing)

Smishing uses text messages as the attack vector. With the proliferation of mobile device use in corporate environments and the trend toward SMS-based multi-factor authentication, smishing has become increasingly relevant for red teams.

Red teams may send SMS messages impersonating:

  • Corporate IT systems requesting password resets
  • Delivery services requiring package confirmation
  • Banks alerting to suspicious activity
  • Corporate MFA prompts, capturing one-time codes through real-time phishing proxies

Pretexting

Pretexting involves creating a fabricated scenario (the pretext) that provides a plausible reason for the social engineer to interact with the target and request information or actions. Unlike phishing, which typically relies on a single interaction, pretexting often involves building a relationship over multiple interactions.

Pretext Scenario | Target Role | Objective | Success Rate*
New employee needing help | IT Support | System access, credentials | 70-80%
External auditor | Finance, Compliance | Document access, process info | 60-75%
Vendor support technician | IT Operations | Physical/remote system access | 55-70%
Executive assistant | Administrative staff | Schedule info, access badges | 65-75%
Building maintenance | Facilities, Reception | Physical access to secure areas | 60-70%

*Estimated success rates from aggregated red team engagement data

Baiting

Baiting involves offering something enticing to the target to encourage a specific action. In physical security testing, this often takes the form of USB drives loaded with malicious payloads left in parking lots, lobbies, or common areas. In digital contexts, baiting might involve offering free software, media files, or documents that contain malware.

According to a 2024 study by CompTIA, 45% of employees who found a USB drive in their workplace plugged it into a company computer — despite most having received security awareness training that specifically warned against this behavior.

Tailgating and Piggybacking

Tailgating (or piggybacking) involves following an authorized person through a secure door or access point without presenting valid credentials. This technique exploits social norms of politeness — most people will hold a door open for someone walking closely behind them, especially if that person appears to be a fellow employee.

Red teams often combine tailgating with pretexting, wearing appropriate attire (business suits, maintenance uniforms, delivery company clothing) and carrying props (boxes, clipboards, tool bags) that provide a visual pretext for their presence.

Quid Pro Quo

Quid pro quo attacks offer a service or benefit in exchange for information or access. A common red team scenario involves calling employees and offering to help with a technical problem (real or fabricated) in exchange for login credentials or remote access.

“In our red team engagements, we consistently find that quid pro quo attacks targeting helpdesk and IT support staff are among the most effective techniques. These professionals are trained to be helpful, and that training can be weaponized by a skilled social engineer who presents a convincing problem requiring their assistance.” — Chris Hadnagy, CEO, Social-Engineer, LLC

How Do Red Teams Plan Social Engineering Campaigns?

Professional red team social engineering campaigns follow a structured methodology that maximizes effectiveness while maintaining ethical boundaries and legal compliance. The planning process is as rigorous as any technical exploitation operation.

Phase 1: Reconnaissance and Target Research

The foundation of any social engineering campaign is thorough intelligence gathering. Red teams spend significant time (typically 2-4 weeks) researching the target organization and individual targets before launching any attacks.

Open Source Intelligence (OSINT) Collection:

  • Corporate Information: Organizational structure, recent news, mergers, office locations, technology stack (job postings are invaluable for this)
  • Employee Information: Names, roles, email formats, phone numbers, LinkedIn profiles, social media activity
  • Technical Intelligence: Email security infrastructure (SPF, DKIM, DMARC records), web filtering solutions, VPN technologies
  • Cultural Intelligence: Communication style, internal jargon, dress code, security culture indicators
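
As a small illustration of the technical-intelligence step, a target domain's published DMARC policy can be read straight from a DNS TXT record and tells an operator whether spoofed mail from the exact domain is likely to be rejected. The sketch below only parses the record string (the DNS lookup itself is out of scope here); the example record is hypothetical.

```python
def parse_dmarc(txt_record: str) -> dict:
    """Parse a DMARC TXT record into its tag/value pairs.

    A missing record or 'p=none' policy signals that spoofed mail
    from the exact domain is monitored at most, not blocked.
    """
    tags = {}
    for part in txt_record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# Hypothetical record, as it might be published at _dmarc.example.com
record = "v=DMARC1; p=none; rua=mailto:dmarc@example.com"
print(parse_dmarc(record).get("p", "absent"))  # prints "none"
```

A policy of `reject` (rather than `none` or `quarantine`) is what red teams hope not to find, since it forces them onto look-alike domains instead of direct spoofing.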

Pretext Development: Based on reconnaissance findings, the red team develops pretexts that align with the target organization’s context. A pretext targeting a Swiss bank will differ significantly from one targeting a manufacturing company — the language, urgency triggers, authority references, and communication norms must all be tailored.

Phase 2: Infrastructure Preparation

Red teams build sophisticated technical infrastructure to support their social engineering campaigns:

  • Domain Registration: Registering domains that closely mimic the target’s legitimate domains (typosquatting, homoglyph domains)
  • Email Infrastructure: Configuring mail servers with proper SPF, DKIM, and DMARC records to maximize email deliverability
  • Phishing Platforms: Setting up credential harvesting pages, payload delivery systems, and tracking infrastructure
  • Communication Systems: Preparing VoIP systems for vishing with spoofed caller IDs, burner phones for smishing
  • Physical Props: Creating fake badges, business cards, uniforms, and documentation for physical social engineering
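
The typosquatting step above is mechanical enough to sketch in code. The generator below (an illustrative simplification, not any specific tool) produces look-alike candidates via character omission, adjacent-character swaps, and common homoglyph substitutions; defenders can run the same logic to monitor new registrations of domains resembling their own.

```python
def typosquat_candidates(domain: str) -> set:
    """Generate simple look-alike candidates for a domain:
    omission, adjacent swap, and common homoglyph substitutions."""
    label, dot, tld = domain.partition(".")
    homoglyphs = {"o": "0", "l": "1", "i": "1", "e": "3", "a": "4"}
    candidates = set()
    for i in range(len(label)):
        # omission: "example" -> "exmple"
        candidates.add(label[:i] + label[i + 1:] + dot + tld)
        # homoglyph substitution: "example" -> "examp1e"
        ch = label[i]
        if ch in homoglyphs:
            candidates.add(label[:i] + homoglyphs[ch] + label[i + 1:] + dot + tld)
    for i in range(len(label) - 1):
        # adjacent swap: "example" -> "eaxmple"
        swapped = label[:i] + label[i + 1] + label[i] + label[i + 2:]
        candidates.add(swapped + dot + tld)
    candidates.discard(domain)  # drop candidates identical to the original
    return candidates

print(sorted(typosquat_candidates("example.com"))[:5])
```

Real campaigns extend this with TLD swaps (.com vs .ch), hyphenation, and Unicode homoglyphs, then filter candidates by registration availability.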

Phase 3: Campaign Execution

Campaign execution follows a phased approach, typically starting with broader, less targeted attacks and progressing to highly targeted operations:

Wave 1 — Mass Phishing (if authorized): Broad-based phishing emails sent to larger groups to identify susceptible individuals and test organizational detection capabilities.

Wave 2 — Targeted Spear Phishing: Highly customized emails targeting specific individuals identified during reconnaissance or Wave 1 as potentially susceptible.

Wave 3 — Vishing and Multi-Channel Attacks: Follow-up phone calls, SMS messages, and combined attacks that build on earlier interactions. For example, sending an email followed by a phone call “confirming” the email’s legitimacy.

Wave 4 — Physical Social Engineering (if in scope): On-site operations including tailgating, impersonation, and physical reconnaissance of secure areas.

Phase 4: Exploitation and Pivoting

When social engineering succeeds in obtaining credentials or deploying malware, the red team transitions to technical exploitation — using the access gained to move deeper into the network. This is where social engineering intersects with traditional red team tradecraft.

For organisations seeking professional social engineering testing as part of full-scope red team engagements, RedTeamPartner.com offers expertise in designing and executing realistic social engineering campaigns calibrated to Swiss and European business environments.

How Do You Measure Social Engineering Campaign Effectiveness?

Measuring the effectiveness of social engineering operations requires both quantitative metrics and qualitative analysis. Red teams track detailed metrics throughout their campaigns.

Quantitative Metrics

Metric | Description | Typical Benchmark
Email Open Rate | Percentage of targets who opened the phishing email | 50-70%
Click-Through Rate | Percentage who clicked the malicious link | 15-30%
Credential Submission Rate | Percentage who entered credentials on spoofed pages | 8-20%
Payload Execution Rate | Percentage who executed malicious payloads | 5-15%
Reporting Rate | Percentage who reported the phishing attempt | 5-15%
Time to First Click | Time from email delivery to first click | 1-16 minutes
Time to First Report | Time from email delivery to first report to security | 30 min - 4 hours
Vishing Success Rate | Percentage of calls achieving the objective | 30-50%
Physical Access Success Rate | Percentage of physical entry attempts succeeding | 40-70%
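
The rate metrics above are straightforward to derive from campaign event logs. A minimal sketch (field names are hypothetical, not from any particular platform):

```python
from dataclasses import dataclass

@dataclass
class CampaignStats:
    delivered: int
    opened: int
    clicked: int
    submitted_credentials: int
    reported: int

def rates(s: CampaignStats) -> dict:
    """Express each funnel stage as a percentage of delivered emails.
    (Some platforms instead normalize clicks against opens; pick one
    convention and keep it consistent across campaigns.)"""
    pct = lambda n: round(100 * n / s.delivered, 1) if s.delivered else 0.0
    return {
        "open_rate": pct(s.opened),
        "click_through_rate": pct(s.clicked),
        "credential_submission_rate": pct(s.submitted_credentials),
        "reporting_rate": pct(s.reported),
    }

stats = CampaignStats(delivered=200, opened=120, clicked=46,
                      submitted_credentials=21, reported=17)
print(rates(stats))
# click_through_rate of 23.0 sits inside the 15-30% benchmark band
```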

Qualitative Assessment

Beyond raw numbers, red teams evaluate:

  • Detection Capability: Did the security team detect the campaign? How quickly?
  • Response Effectiveness: Once detected, how effectively was the campaign contained?
  • Communication: Did employees report suspicious activity? Were reports acted upon?
  • Policy Adherence: Did employees follow established security procedures?
  • Cultural Factors: What organizational or cultural factors contributed to successes and failures?

What Legal and Ethical Considerations Apply to Social Engineering Testing?

Social engineering testing operates in a sensitive legal and ethical space that requires careful navigation. Unlike technical penetration testing, social engineering directly involves and affects real people, raising important considerations around consent, privacy, and psychological impact.

Authorization and Scope: All social engineering activities must be explicitly authorized in the engagement’s scope of work and rules of engagement. This authorization should specify:

  • Which techniques are permitted (phishing, vishing, physical, etc.)
  • Which individuals or groups can be targeted
  • Which facilities can be tested
  • What information can be collected and how it must be handled
  • What actions are explicitly prohibited

Data Protection: In Switzerland and the EU, social engineering testing must comply with data protection regulations including the Swiss Federal Act on Data Protection (FADP/DSG) and GDPR. Personal data collected during social engineering campaigns — including email addresses, phone numbers, and behavioral data (who clicked, who submitted credentials) — must be processed lawfully and deleted according to agreed timelines.

Employment Law: Social engineering test results that identify specific individuals as “failing” the test raise employment law considerations. Results should be used for organizational improvement, not individual punishment. Red teams typically anonymize individual results in their reports unless the client specifically requires and has legal grounds for individual identification.

Ethical Boundaries

Professional red teams adhere to strict ethical guidelines:

  • No exploitation of genuine personal crises: Do not use knowledge of an employee’s personal difficulties (health issues, family problems, financial stress) as pressure
  • No creation of lasting psychological distress: Campaigns should not cause undue fear or anxiety beyond the immediate interaction
  • No targeting of individuals known to be particularly vulnerable: Consider accommodations for employees with known anxiety disorders or similar conditions
  • Proportionality: The intensity of the social engineering must be proportionate to the engagement’s objectives and the organization’s maturity
  • Immediate disclosure for safety: If a social engineering interaction reveals a genuine safety or welfare concern, it should be reported immediately through appropriate channels

For guidance on how Swiss regulations intersect with social engineering testing requirements, CybersecuritySwitzerland.ch provides a detailed analysis of the legal landscape.

How Can Organizations Defend Against Social Engineering?

Defending against social engineering requires a multi-layered approach that combines technology, training, and organizational culture. No single control is sufficient; effective defense demands depth.

Security Awareness Training

Security awareness training remains the primary defensive measure against social engineering, but its effectiveness varies dramatically based on implementation quality.

What Works:

  • Continuous, regular training rather than annual compliance exercises — organisations running monthly micro-training sessions see 60% lower phishing click rates than those relying on annual training alone (KnowBe4, 2025)
  • Simulated phishing campaigns that provide immediate, constructive feedback when employees click
  • Role-specific training that addresses the particular social engineering risks relevant to each department
  • Positive reinforcement that rewards reporting behavior rather than punishing failures
  • Real-world examples that demonstrate social engineering in context, including recent attacks on similar organisations

What Does Not Work:

  • Annual “check-the-box” training with generic content
  • Punitive approaches that shame or discipline employees who fail simulated tests
  • Training that focuses exclusively on email phishing while ignoring vishing, physical access, and other vectors
  • One-size-fits-all programs that do not account for different roles and risk levels

“The most effective security awareness programs are those that create a genuine security culture where reporting suspicious activity is celebrated, not where failing a test is punished. When employees are afraid of repercussions, they stop reporting, and that is far more dangerous than any individual phishing click.” — Dr. Jessica Barker, Co-CEO, Cygenta

Technical Controls

While social engineering targets humans, technology can significantly reduce the attack surface and limit the impact of successful attacks:

Email Security:

  • Advanced email filtering with AI/ML-based phishing detection
  • DMARC, DKIM, and SPF enforcement to prevent domain spoofing
  • External email banners that flag messages from outside the organization
  • Link rewriting and sandboxed URL analysis
  • Attachment sandboxing and content disarm and reconstruction (CDR)
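
The DMARC, DKIM, and SPF enforcement mentioned above comes down to a handful of published DNS TXT records. A minimal example for a hypothetical example.com zone (the mail-service hostname and report address are placeholders):

```text
; SPF: only the named mail service may send for example.com; everything else hard-fails
example.com.        IN TXT "v=spf1 include:_spf.mailprovider.example -all"

; DMARC: reject mail failing authentication, request aggregate reports, strict alignment
_dmarc.example.com. IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; adkim=s; aspf=s"
```

With `p=reject` and strict alignment in place, direct spoofing of the exact domain is blocked at receiving mail servers, which pushes attackers toward the look-alike domains that external email banners and user training are meant to catch.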

Identity and Access:

  • Multi-factor authentication (MFA) that is resistant to real-time phishing (FIDO2/WebAuthn)
  • Conditional access policies that evaluate risk signals before granting access
  • Privileged access management (PAM) to limit the value of compromised credentials
  • Zero-trust network architecture that limits lateral movement even after successful social engineering
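
Why is FIDO2/WebAuthn resistant to the real-time phishing proxies described earlier? Because the browser binds every signed assertion to the origin it actually connected to, so an assertion produced on a look-alike domain fails verification at the real site. The toy simulation below illustrates only that binding; real WebAuthn uses per-site asymmetric key pairs and a defined wire format, with HMAC standing in for the signature here.

```python
import hashlib, hmac, json, secrets

# Toy model of WebAuthn origin binding: the authenticator signs over
# client data that includes the origin the browser actually talked to.
device_key = secrets.token_bytes(32)  # never leaves the user's authenticator

def sign_assertion(challenge: bytes, browser_origin: str) -> dict:
    client_data = json.dumps({"challenge": challenge.hex(),
                              "origin": browser_origin}).encode()
    sig = hmac.new(device_key, hashlib.sha256(client_data).digest(),
                   hashlib.sha256).hexdigest()
    return {"client_data": client_data, "sig": sig}

def verify(assertion: dict, expected_origin: str) -> bool:
    data = json.loads(assertion["client_data"])
    if data["origin"] != expected_origin:
        return False  # assertion was produced for a different site
    expected = hmac.new(device_key,
                        hashlib.sha256(assertion["client_data"]).digest(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(assertion["sig"], expected)

challenge = secrets.token_bytes(16)
# Victim on the genuine site: the assertion verifies.
good = sign_assertion(challenge, "https://bank.example")
# Victim lured through a proxy on a look-alike domain: the browser
# reports the phishing origin, so the relayed assertion is rejected.
phished = sign_assertion(challenge, "https://bank-example.com")
print(verify(good, "https://bank.example"),
      verify(phished, "https://bank.example"))  # True False
```

This is the property that one-time SMS or app codes lack: a code typed into a spoofed page carries no notion of where it was entered, so a proxy can relay it in real time.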

Physical Security:

  • Mantrap/airlock entry systems that prevent tailgating
  • Visitor management systems with identity verification
  • Security cameras and monitoring at access points
  • Clear desk policies to reduce information exposure
  • Badge-checking culture reinforced by security personnel

Organizational Culture

The strongest defense against social engineering is an organizational culture that prioritizes security awareness without creating a culture of fear:

  • Empowered Reporting: Make it easy and rewarding to report suspicious contacts. A dedicated “phish alert” button in email clients reduces friction and increases reporting rates by up to 300% (Cofense, 2024)
  • Leadership Example: When executives visibly practice and advocate for security behaviors, the entire organization follows
  • Blameless Post-Incidents: Treat successful social engineering as a learning opportunity, not a disciplinary matter
  • Regular Communication: Share (appropriately sanitized) examples of social engineering attempts that targeted the organization
  • Cross-Departmental Collaboration: Security teams should work closely with HR, communications, and facilities to create layered defence

For Swiss organisations seeking to build robust social engineering defense programs, AlpineExcellence.ch provides advisory services that integrate technical controls with organizational culture development tailored to Swiss business practices.

How Do Advanced Red Team Social Engineering Operations Work in Practice?

To illustrate the sophistication of professional social engineering operations, consider the following composite scenario drawn from real-world red team engagement patterns (details altered for confidentiality).

Scenario: Operation “Alpine Trust”

Objective: Gain access to a Swiss financial institution’s internal trading platform as part of a TIBER-CH engagement.

Phase 1 — Reconnaissance (Weeks 1-3): The red team identified a target institution employee (“Target A”) through LinkedIn — a mid-level IT administrator who had recently posted about completing a cloud migration project. OSINT revealed Target A regularly attended a local tech meetup group and had published a blog post about their experience with a specific cloud platform.

Phase 2 — Relationship Building (Weeks 3-5): A red team operator created a believable persona as a cloud consultant, connected with Target A on LinkedIn, and engaged in professional conversations about cloud security. The operator attended the same tech meetup, establishing face-to-face rapport.

Phase 3 — Pretext Execution (Week 6): The operator sent Target A an email from a spoofed domain resembling a well-known cloud platform provider, inviting them to a “private beta” of a new security tool relevant to their cloud migration work. The email contained a link to a convincing but malicious authentication portal.

Phase 4 — Credential Harvesting and Access (Week 6-7): Target A entered their corporate credentials on the spoofed portal. The red team used these credentials, combined with a real-time MFA interception technique, to access the corporate VPN. From there, the team used the IT administrator’s elevated privileges to move laterally toward the trading platform.

Outcome: The engagement demonstrated that despite strong technical controls (next-generation firewalls, EDR, network segmentation), a well-researched social engineering campaign could bypass the outer perimeter through a single trusted employee. The institution subsequently invested in phishing-resistant MFA (FIDO2 hardware tokens), enhanced their security awareness program with role-specific social engineering scenarios, and implemented stricter network segmentation around critical trading systems.

What Industry Statistics Define the Current Social Engineering Threat Landscape?

The following statistics from leading industry sources paint a comprehensive picture of the social engineering threat:

Source | Statistic | Year
Verizon DBIR | 60% of breaches involved the human element | 2025
Verizon DBIR | 16% of breaches begin with phishing | 2025
IBM | $4.76M average breach cost from social engineering | 2024
KnowBe4 | 12-minute median time to first phishing click | 2025
FBI IC3 | $2.9B in BEC losses reported in 2023 | 2024
SANS | 68% of organisations hit by social engineering in past year | 2024
Cofense | Only 11% of phishing emails are reported by employees | 2024
CompTIA | 45% of found USB drives are plugged into work computers | 2024
Tessian | Average employee receives 14 malicious emails per year | 2024
CrowdStrike | Social engineering is #1 initial access vector for eCrime | 2025

These statistics consistently confirm that social engineering remains the most prevalent and effective attack vector, and that organizational defenses remain inadequate despite significant awareness and investment.

Frequently Asked Questions About Social Engineering in Red Teaming

Is social engineering legal in a red team engagement? Yes, when properly authorized. Social engineering must be explicitly included in the engagement scope, rules of engagement, and legal agreements. The authorization should come from an appropriate authority within the target organization (typically the CISO or equivalent), and the engagement must comply with applicable data protection and employment laws.

How do red teams handle employees who become distressed during social engineering? Professional red teams are trained to de-escalate and disengage if a target becomes visibly distressed. The rules of engagement should include clear guidance on when to terminate a social engineering interaction. In vishing scenarios, operators are trained to recognize signs of distress and end the call with a plausible, non-alarming exit. Post-engagement debriefings should be handled sensitively.

Can social engineering testing results be used to fire employees? This is strongly discouraged and may raise legal issues in many jurisdictions, including Switzerland. Industry best practice and most engagement agreements specify that results should be used for organizational improvement, not individual disciplinary action. Aggregated, anonymized results are typically reported to demonstrate organizational vulnerability patterns rather than individual failures.

What is the difference between social engineering in red teaming versus phishing simulation? Phishing simulation programs (like those from KnowBe4, Proofpoint, or Cofense) are ongoing, typically automated programs designed to train employees through regular simulated phishing emails. Red team social engineering is a targeted, time-limited operation designed to test organizational defenses by simulating realistic attack scenarios. Red team social engineering is more sophisticated, uses multiple vectors (not just email), involves custom reconnaissance and pretexting, and aims to achieve specific access objectives rather than simply measuring click rates.

How often should social engineering testing be conducted? Full red team social engineering engagements should be conducted at least annually for high-risk organisations. Phishing simulation programs should run continuously with at least monthly simulated emails. The frequency should be balanced against “phishing fatigue” — testing too frequently with similar scenarios can desensitize employees and reduce the training value.

What role does AI play in modern social engineering? AI is increasingly used by both attackers and defenders. Attackers leverage large language models to generate more convincing phishing emails, deepfake audio for vishing, and automated OSINT collection. Defenders use AI for advanced email filtering, behavioral analysis to detect anomalous communications, and automated phishing simulation programs. Red teams are incorporating AI tools to increase the scale and sophistication of their campaigns while maintaining the bespoke, intelligence-led approach that distinguishes professional red teaming from commodity phishing simulation.

Social engineering remains the most human of cybersecurity challenges. Defending against it requires not just technology and training but a fundamental understanding of human psychology, organizational culture, and the creative determination of adversaries who view every person in an organization as a potential entry point.

Sources

  1. Verizon 2025 Data Breach Investigations Report — confirms ~60% of breaches involve the human element; phishing at ~16% of breaches
  2. FBI IC3 2023 Internet Crime Report — confirms $2.9B in BEC losses for 2023
  3. IBM Cost of a Data Breach Report 2024 — confirms $4.88M overall average breach cost for 2024