
Cloud Breach Kill Chains

Real attacks. Real post-mortems. Step-by-step attack progression mapped to MITRE ATT&CK Cloud – so you can understand exactly what happened and stop it next time.

MITRE ATT&CK Cloud Mapped · Full Attack Chain Per Incident · Official Post-Mortems Sourced
March–July 2019 · Critical · AWS

Capital One – SSRF → IMDSv1 → Over-Privileged IAM Role → 106M Record S3 Exfiltration

A former AWS engineer exploited a misconfigured WAF via server-side request forgery to reach the EC2 instance metadata service, stealing temporary IAM role credentials. An over-privileged role then granted access to 700+ S3 buckets containing 106 million customer records. The attack ran undetected for 77 days and directly caused AWS to build IMDSv2.

106M records exposed
77 days dwell time
$80M civil penalty
Threat actor: Paige Thompson, former AWS employee
📄 KrebsOnSecurity post-mortem ↗ · 📄 Appsecco technical analysis ↗ · 📄 AWS IMDSv2 response ↗
πŸ” Reconnaissance
01
Automated scanning for AWS-hosted apps vulnerable to SSRF
T1595 – Active Scanning

Thompson built a custom tool to scan the internet for EC2-hosted web applications that would relay requests to the AWS instance metadata service at 169.254.169.254. SSRF was not in ModSecurity's default detection rule set β€” it had to be explicitly configured.

Target: Public-facing ModSecurity WAF running on EC2
Why SSRF wasn't blocked: Not in default WAF rules β€” required manual configuration
Scanning approach: Automated β€” targeted multiple AWS-hosted organisations
SSRFEC2WAFModSecurity
⚡ Initial Access
02
SSRF exploitation – WAF tricked into relaying requests to IMDS
T1190 – Exploit Public-Facing App

The WAF was misconfigured – running in logging-only mode or bypassable – so Thompson sent crafted HTTP requests containing the IMDS endpoint as the target URL. The WAF relayed these server-side, making the EC2 instance itself issue the metadata request.

SSRF payload target: http://169.254.169.254/latest/meta-data/
IMDSv1 behaviour: No authentication – any GET request from the instance is served
WAF failure: Relayed SSRF payload rather than blocking it
SSRF · IMDSv1 · T1190 · 169.254.169.254
🔑 Credential Access
03
EC2 IAM role credentials retrieved from IMDS – no auth required
T1552.005 – Cloud Instance Metadata API

IMDSv1 returned the temporary AWS credentials (AccessKeyId, SecretAccessKey, SessionToken) for the "ISRM-WAF-Role" attached to the EC2 instance. No token, header, or authentication was required – just a GET request to the metadata path from within the instance (which the SSRF provided).

IMDS path: /latest/meta-data/iam/security-credentials/ISRM-WAF-Role
Credentials returned: AccessKeyId + SecretAccessKey + SessionToken
Key problem: The role had S3 permissions far beyond what a WAF needs
IMDSv1 · ISRM-WAF-Role · Temp Credentials · T1552.005
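The IMDSv1 weakness is mechanical and easy to demonstrate. A minimal sketch of the two request shapes, using the role name from this incident; the code only constructs the requests and sends nothing over the network:

```python
import urllib.request

IMDS = "http://169.254.169.254"
ROLE = "ISRM-WAF-Role"  # role name from this incident

# IMDSv1: one unauthenticated GET returns the role's temporary credentials.
# Any SSRF that can make the instance issue a GET can therefore steal them.
v1_req = urllib.request.Request(
    f"{IMDS}/latest/meta-data/iam/security-credentials/{ROLE}"
)

# IMDSv2: a session token must first be fetched with a PUT request carrying
# a TTL header, which a GET-only SSRF primitive cannot produce.
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)

# The credential request must then echo that token back in a header.
v2_req = urllib.request.Request(
    f"{IMDS}/latest/meta-data/iam/security-credentials/{ROLE}",
    headers={"X-aws-ec2-metadata-token": "<token from the PUT above>"},
)

assert v1_req.get_method() == "GET" and token_req.get_method() == "PUT"
```

The PUT-then-header dance is the entire defence: a WAF relaying attacker-supplied GET URLs can never complete it.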
🔼 Privilege Abuse – Over-Permissioned IAM Role
04
700+ S3 buckets enumerated using stolen role credentials
T1619 – Cloud Storage Object Discovery

With the stolen AWS credentials, Thompson used the CLI to list all S3 buckets accessible to the ISRM-WAF-Role. The role had been granted sweeping S3 list and read permissions – far beyond anything a web application firewall function ever needed – violating least privilege at the design level.

Command: aws s3 ls (authenticated with stolen session credentials)
Result: 700+ buckets listed, including Capital One customer data stores
Root failure: IAM role permissions never reviewed against the principle of least privilege
IAM · S3 · Least Privilege Violation · T1619
📤 Exfiltration
05
30GB bulk S3 exfiltration – 106M customer records
T1530 – Data from Cloud Storage

Thompson synced S3 bucket contents to external storage using aws s3 sync – approximately 30GB over multiple sessions: 100M US and 6M Canadian credit card application records, 140,000 SSNs, 80,000 bank account numbers, credit scores, and transaction history.

Tool: aws s3 sync
Data exfiltrated: 106M records, 140K SSNs, 80K bank accounts, credit/financial history
Detection gap: GuardDuty not enabled; S3 access logs not monitored for volume anomalies
aws s3 sync · PII · T1530 · Financial Data
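The missing volume-anomaly monitoring can be as simple as counting reads per principal. A sketch, with an invented threshold and simplified event tuples rather than the real CloudTrail schema:

```python
from collections import Counter

def flag_bulk_readers(events, threshold=1000):
    """Flag principals whose GetObject volume exceeds a baseline threshold.

    `events` is an iterable of (principal, event_name) pairs, e.g. parsed
    from CloudTrail data events or S3 server access logs. The threshold
    here is illustrative; a real baseline is per-role and per-time-window.
    """
    reads = Counter(p for p, e in events if e == "GetObject")
    return {p for p, n in reads.items() if n > threshold}

# Simulated log: the WAF role suddenly reads far more objects than usual.
log = [("ISRM-WAF-Role", "GetObject")] * 5000 + [("app-role", "GetObject")] * 40
assert flag_bulk_readers(log) == {"ISRM-WAF-Role"}
```

A WAF role pulling thousands of objects is exactly the signal that went unalerted for 77 days.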
🚨 Discovery
06
Discovered 77 days later – Thompson bragged about it on GitHub and IRC

Thompson bragged about the breach on GitHub and in Slack and IRC channels under the handle "erratic." A member of the public noticed the posts, reviewed the data, and filed a responsible disclosure with Capital One on July 17, 2019. No internal monitoring – not GuardDuty, not S3 access logs, not IAM anomaly detection – caught the breach during the 77-day dwell period.

Discovery method: External tipster via Capital One responsible disclosure program
Monitoring failures: No GuardDuty · No S3 volume alerts · No anomalous IAM activity detection
Arrest: Paige Thompson, July 29, 2019
77 Day Dwell · No Internal Detection · External Tipster

🛡️ How to Defend Against This Chain

Enforce IMDSv2 on all EC2 instances. IMDSv2 requires a session token obtained via a PUT request – SSRF attacks that can only issue GET requests cannot obtain credentials. AWS defaults new instances to IMDSv2, and you can enforce it via SCP across your organisation.
Apply least-privilege IAM to every role. A WAF role should only write WAF logs – not list or read S3. Use IAM Access Analyzer findings and Access Advisor unused-permission reports to identify and tighten over-privileged roles.
Enable Amazon GuardDuty. GuardDuty's UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration finding specifically detects role credentials being used from outside the EC2 instance that issued them.
Alert on anomalous S3 access volume. Large-scale GetObject or ListBucket calls from an unexpected IP or at an unusual rate should trigger a CloudWatch + SNS alert immediately.
Add SSRF rules to your WAF. Block requests containing 169.254.169.254 in the URL, body, or headers. This is not in default WAF configurations – it must be explicitly added.
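The SCP enforcement from the first point can be sketched as a deny on launching instances that do not require IMDSv2 tokens; `ec2:MetadataHttpTokens` is the condition key evaluated on RunInstances. The policy is shown as a Python dict purely for illustration:

```python
import json

# Deny launching any EC2 instance that does not require IMDSv2 session tokens.
# "ec2:MetadataHttpTokens" is the condition key evaluated on ec2:RunInstances.
enforce_imdsv2_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireImdsV2",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringNotEquals": {"ec2:MetadataHttpTokens": "required"}
        },
    }],
}

print(json.dumps(enforce_imdsv2_scp, indent=2))
```

Attached at the organisation root, this makes IMDSv1-only instances impossible to launch in any member account.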
September 15–16, 2022 · Critical · AWS · GCP

Uber – Dark Web Creds → MFA Push Fatigue → Hardcoded PAM Secret → Full AWS/GCP Admin

An 18-year-old attacker purchased an Uber contractor's VPN credentials from a dark web infostealer marketplace, then used MFA push-bombing combined with WhatsApp social engineering to bypass two-factor auth. Once inside the corporate network, they found a PowerShell script with hardcoded admin credentials for Thycotic – Uber's PAM system – unlocking full admin access to AWS, GCP, Slack, SentinelOne, HackerOne, and more within hours.

Full admin on AWS, GCP, Slack, SentinelOne, HackerOne
<24 hours initial access to full compromise
Attacker: 18-year-old, allegedly LAPSUS$-affiliated
📄 Uber official security update ↗ · 📄 GitGuardian technical analysis ↗
🛒 Initial Access – Purchased Credentials
01
Contractor VPN credentials purchased from dark web infostealer log
T1078 – Valid Accounts

The targeted contractor's device had previously been infected with infostealer malware, which exfiltrated saved browser credentials to a dark web marketplace. The attacker purchased the username and password – no zero-day or technical exploit required.

Source: Infostealer log purchased from dark web forum
Credential type: Uber contractor corporate VPN credentials
Defence gap: No monitoring for credential leakage · Third-party device not enrolled in MDM or health-checked before VPN access
Infostealer · Dark Web · T1078 · Third-Party Risk
📱 MFA Bypass – Push Fatigue + Social Engineering
02
MFA push-bombing campaign + WhatsApp impersonation of Uber IT
T1621 – MFA Request Generation

The attacker repeatedly attempted VPN login, flooding the contractor's phone with MFA push notifications. After approximately an hour of notifications, they contacted the contractor on WhatsApp claiming to be Uber IT support and stating the only way to stop the notifications was to approve one. The contractor complied.

Technique: MFA push fatigue ("push bombing") for ~1 hour
Social engineering: WhatsApp message: "I'm from Uber IT. Accept the push to stop the notifications."
MFA type: Push notification (not phishing-resistant FIDO2)
Why it worked: No number-matching · No limit on push attempt rate
MFA Fatigue · Push Bombing · WhatsApp SE · T1621
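The missing rate limit on push attempts is straightforward to model. A sketch of an IdP-side throttle; the limits (3 pushes per 5 minutes) are illustrative assumptions, not vendor defaults:

```python
import time
from collections import defaultdict, deque

class PushThrottle:
    """Deny further MFA push issuance once a user exceeds N pushes per window."""

    def __init__(self, max_pushes=3, window_s=300):
        self.max_pushes, self.window_s = max_pushes, window_s
        self.sent = defaultdict(deque)  # user -> timestamps of recent pushes

    def allow_push(self, user, now=None):
        now = time.monotonic() if now is None else now
        q = self.sent[user]
        # Drop pushes that have aged out of the sliding window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        if len(q) >= self.max_pushes:
            return False  # stop pushing; fall back to number-matching or FIDO2
        q.append(now)
        return True

throttle = PushThrottle()
# Ten rapid-fire login attempts against the same contractor account:
results = [throttle.allow_push("contractor", now=t) for t in range(10)]
assert results == [True, True, True] + [False] * 7
```

With a cap like this, an hour-long bombing campaign dies after the third push instead of wearing the victim down.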
🔌 Internal Recon
03
VPN established; contractor-level access used to enumerate internal network shares
T1046 – Network Service Discovery

Once the contractor approved the push, the attacker connected to Uber's corporate VPN and began scanning the internal network. Internal infrastructure had no micro-segmentation – a contractor VPN account could reach all internal file shares.

Access level at this point: Contractor VPN (low privilege)
Recon target: Internal file shares and intranet services
Defence gap: No east-west network segmentation · Contractor VPN had broad internal network visibility
VPN · Internal Recon · No Segmentation · T1046
💀 Critical Discovery – Hardcoded Secret
04
PowerShell script on internal share contained hardcoded Thycotic PAM admin credentials
T1552.001 – Credentials in Files

On an internal network share accessible via the contractor VPN, the attacker found a PowerShell script containing plaintext admin credentials for Thycotic – Uber's Privileged Access Management platform. This single file became the skeleton key to every system in the organisation.

Location: Internal network share (contractor VPN accessible)
Contents: Hardcoded Thycotic domain admin username + password in plaintext
Irony: Thycotic was the PAM system specifically designed to prevent hardcoded secrets
Root cause: Automation script needed PAM API access but used a static credential instead of a scoped service account
Hardcoded Credentials · PowerShell · T1552.001 · Secrets Sprawl
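Secrets like this are exactly what CI scanners catch. A toy version of the idea, with illustrative regex patterns and an invented PowerShell snippet; real tools such as Trufflehog or GitLeaks use far richer rule sets plus entropy checks:

```python
import re

# Illustrative patterns only: assignment of a quoted literal to a
# credential-looking variable name.
SECRET_PATTERNS = [
    re.compile(r'(?i)(password|passwd|pwd)\s*=\s*["\'][^"\']+["\']'),
    re.compile(r'(?i)(secret|api[_-]?key|token)\s*=\s*["\'][^"\']+["\']'),
]

def scan_script(text):
    """Return every substring that matches a secret pattern."""
    return [m.group(0) for pat in SECRET_PATTERNS for m in pat.finditer(text)]

# Invented stand-in for the kind of script found on the share:
script = '''
$ThycoticUrl = "https://pam.internal.example"   # hypothetical host
$Password = "Sup3rS3cret!"                      # hardcoded: should be fetched at runtime
'''
hits = scan_script(script)
assert any("Password" in h for h in hits)
```

Run on every commit, a check like this turns "plaintext PAM admin password on a share" into a failed pipeline instead of a breach.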
👑 Total Compromise – Keys to the Kingdom
05
Thycotic admin access → all secrets extracted → full cloud and SaaS compromise
T1078.004 – Cloud Accounts

Using the admin credentials, the attacker logged into Thycotic and extracted all stored secrets. Thycotic was the single source of truth for credentials across Uber's entire cloud and SaaS footprint.

Systems fully compromised:
→ AWS (cloud infrastructure admin)
→ GCP + Google Workspace (admin)
→ Slack workspace (admin – used to announce the breach to all Uber employees)
→ SentinelOne XDR (admin – ability to suppress security alerts)
→ HackerOne admin console (access to private vulnerability reports)
→ Duo, OneLogin, VMware vSphere, Uber internal dashboards
PAM Takeover · AWS Admin · GCP Admin · Full Compromise
🚨 Discovery
06
Attacker self-announces the breach in Uber's company Slack

The attacker used their Slack admin access to broadcast a message to all Uber employees announcing the breach, then posted screenshots on Twitter under the handle "teapotuberhacker." Uber's security team discovered the breach within hours – not through monitoring, but through the attacker's announcement.

Discovery method: Attacker self-announced in company-wide Slack
Key gap: No alert fired on new AWS admin account creation · No alert on PAM admin login from unknown device
SentinelOne access meant: The attacker could have suppressed EDR alerts to cover their tracks
Self-Announced · No Internal Detection · Slack Admin Abuse

🛡️ How to Defend Against This Chain

Deploy phishing-resistant MFA (FIDO2 / hardware security keys). Push notification MFA is defeated by fatigue attacks. Number-matching – requiring users to match a code shown on screen – stops automated bombing. FIDO2/passkeys are immune to both fatigue and phishing.
Never hardcode credentials in scripts – use dynamic secrets from your PAM. Scripts should authenticate to your PAM via a scoped, short-lived service account, or use cloud-native options (AWS Secrets Manager, GCP Secret Manager with Workload Identity). Run Trufflehog or GitLeaks in CI/CD.
Segment your network so contractor VPNs cannot reach all internal shares. Contractors should only access resources their role requires. A contractor VPN with visibility across all internal file shares is a single-hop lateral movement risk.
Alert on anomalous PAM admin logins immediately. PAM is the crown jewels. A PAM admin login from a new device, new IP, or at an unusual time should trigger an immediate alert – not a periodic review.
Enrol contractor devices in MDM and require health attestation before VPN admission. Device health checks can catch infostealer infections before they translate into credential theft.
April 2021 – June 2023 (2-year operation) · Critical · Azure AD / Exchange Online

Storm-0558 – Compromised Engineer → Crash Dump → Stolen MSA Signing Key → Forged Tokens → Government Email Espionage

Chinese nation-state actor Storm-0558 compromised a Microsoft engineer's corporate account, discovered a consumer MSA signing key that had accidentally been included in a crash dump in a debugging environment, and used it to forge authentication tokens. A token validation bug in Exchange Online accepted these consumer tokens as enterprise credentials, enabling access to ~25 organisations' email – including 60,000 US State Department emails – for weeks before discovery.

~25 organisations breached
60,000 State Dept emails exfiltrated
2 years from initial intrusion to exploitation
Threat actor: Storm-0558 (PRC state-affiliated espionage group)
📄 MSRC July 2023 ↗ · 📄 MSRC key acquisition investigation ↗ · 📄 Wiz Research analysis ↗
🎯 Initial Compromise – Engineer Account
01
Microsoft engineer's corporate account compromised (April 2021)
T1078 – Valid Accounts

Storm-0558 targeted an engineer whose device had been compromised prior to joining Microsoft (likely during a company acquisition). After the engineer joined Microsoft, the attackers used this foothold to access Microsoft's corporate network – where they would remain for approximately two years before exploiting the signing key.

Hypothesis: Pre-acquisition device compromise; credentials reused on Microsoft corporate account
Dwell time on Microsoft network: April 2021 – ~June 2023 (2 years)
Log retention gap: Microsoft could not confirm exfiltration due to log retention policy limits
Nation State APT · Corporate Account · Acquisition Risk · T1078
🔑 Key Acquisition – The Crash Dump
02
Consumer MSA signing key found in crash dump in engineering debug environment
T1552 – Unsecured Credentials

A 2021 system crash in Microsoft's signing infrastructure generated a crash dump that, due to a race condition bug, incorrectly included consumer MSA signing key material that should never leave the isolated signing environment. The dump was copied to a debugging environment accessible to engineering accounts. Storm-0558, using the compromised engineer's account, accessed and exfiltrated the key.

Key leaked: Consumer MSA signing key (2016 vintage, still active)
How it leaked: Race condition bug caused crash dump to include signing key material
How accessed: Crash dump in debug environment – accessible via compromised engineer account
Microsoft quote: "Operational errors resulted in key material leaving the secure token signing environment"
MSA Signing Key · Crash Dump · Race Condition · Debug Environment
🔐 Token Forgery – The Core Technique
03
Stolen MSA key used to mint forged authentication tokens for government targets
T1606.001 – Web Token Forgery

Starting May 15, 2023, Storm-0558 used the stolen MSA consumer signing key to forge OpenID v2.0 access tokens impersonating specific users at targeted government organisations. The tokens were correctly signed – any service validating them against Microsoft's published public keys would accept them as legitimate.

Token type: OpenID v2.0 access tokens (signed with MSA consumer key)
Blast radius (per Wiz): Could forge tokens for any Azure AD app supporting personal account auth – not just Exchange
Services potentially at risk: OneDrive, SharePoint, Teams, any app using "Login with Microsoft"
Token Forgery · MSA Key · OpenID v2.0 · T1606.001
🐛 Exploitation – Token Validation Bug
04
Exchange Online accepted consumer-signed tokens as enterprise credentials (SDK validation bug)
T1212 – Exploitation for Credential Access

Consumer and enterprise signing keys belong to separate systems and should only be valid for their respective scopes. However, the Exchange Online team had incorrectly assumed the Azure AD SDK validated token issuers by default – it didn't. Exchange Online therefore accepted the forged consumer-scoped tokens as valid enterprise credentials. An additional bug in the OWA GetAccessTokenForResource API let attackers generate fresh Exchange tokens from forged tokens.

The validation bug: Exchange Online assumed the Azure AD SDK performed issuer validation – it didn't
OWA additional bug: GetAccessTokenForResource API issued fresh tokens from already-issued forged tokens
Result: Consumer MSA token → accepted as an enterprise Exchange Online credential
Token Validation Bug · Exchange Online · OWA API · Azure AD SDK
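The fix for this class of bug is an explicit issuer allow-list in the application. A sketch of the check Exchange Online effectively skipped, using an invented tenant issuer and unsigned demo tokens; real code must verify the token signature before trusting any claim:

```python
import base64
import json

# Hypothetical enterprise tenant issuer; the consumer MSA issuer is NOT listed.
TRUSTED_ISSUERS = {
    "https://login.microsoftonline.com/contoso-tenant-id/v2.0",
}

def decode_claims(jwt):
    # Decode the (unverified) payload segment of a JWT for inspection.
    # This isolates only the issuer check the faulty SDK assumption skipped.
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def issuer_allowed(jwt):
    return decode_claims(jwt).get("iss") in TRUSTED_ISSUERS

def make_token(iss):
    # Unsigned demo token: "{}" header, claims payload, dummy signature.
    claims = base64.urlsafe_b64encode(
        json.dumps({"iss": iss}).encode()
    ).decode().rstrip("=")
    return f"e30.{claims}.sig"

# A correctly signed token minted under a consumer-scope issuer must still
# be rejected by an enterprise-only audience.
consumer = make_token("https://login.live.com")
assert not issuer_allowed(consumer)
```

The point is that signature validity and issuer scope are independent checks: Storm-0558's tokens passed the first, and nothing enforced the second.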
📬 Collection and Exfiltration
05
Email access and exfiltration across ~25 government organisations via OWA REST API
T1114.002 – Remote Email Collection

Using PowerShell and Python scripts against the OWA REST API with forged tokens, Storm-0558 read and exfiltrated email from ~25 organisations including senior US State Department and Commerce Department officials. Access ran for at least 6 weeks before discovery.

Method: OWA Exchange Store REST API calls using forged tokens
State Dept loss: ~60,000 emails including communications of the US Ambassador to China
Other victims: Commerce Secretary Raimondo + senior officials across ~25 organisations
OWA REST API · Email Exfiltration · Government Espionage · T1114.002
🚨 Discovery
06
US State Dept detects via custom MailItemsAccessed alert ("Big Yellow Taxi") – June 16, 2023

The State Dept detected the breach via a custom alert rule triggered by the MailItemsAccessed audit event β€” which was only available to organisations that had purchased Microsoft's E5 license tier. Organisations on lower tiers could not see this event and were unable to detect the breach independently. Following CISA pressure, Microsoft extended MailItemsAccessed to E3 customers in September 2023.

Detection event: MailItemsAccessed – unknown application ID accessing mailboxes
Critical licensing gap: MailItemsAccessed was E5-only at time of breach · Most victims couldn't see it
Dwell time: ~6 weeks of confirmed email access; potentially 2 months total
MailItemsAccessed · E5 Logging Gap · Custom Alert · 6 Week Dwell

🛡️ How to Defend Against This Chain

Enable MailItemsAccessed auditing in Exchange Online – now available to E3. Alert on unknown application IDs accessing mailboxes. This was the only control that detected Storm-0558.
Explicitly validate token issuers in your applications – don't assume the SDK does it. For Azure AD apps, confirm tokens are signed by the correct key type for your audience. Don't rely on library defaults.
Protect crash dump pipelines – scan for key material before copying to non-isolated environments. Crash dumps should be treated as potentially sensitive and scanned for credentials and key material before being moved to debugging environments.
Treat acquired companies' devices as untrusted until fully re-enrolled under your MDM. Devices from acquisitions should go through a full wipe-and-re-enrol process before receiving corporate credentials.
Maintain an approved OAuth application inventory and alert on deviations. Alert on any application accessing mailboxes that is not in your approved inventory. Conditional Access policies can enforce application restrictions.
October 2019 – December 2020 (14 months) · Critical · Azure AD · AWS

SolarWinds – Build System Compromise → SUNBURST Backdoor → On-Prem to Cloud Pivot → Golden SAML → US Government Espionage

Russian SVR (APT29 / Cozy Bear) breached SolarWinds' build pipeline and injected the SUNBURST backdoor into signed Orion software updates sent to 18,000+ customers. At high-value government targets, they used SUNBURST to achieve domain admin on-premises, then stole the ADFS token-signing certificate to forge Golden SAML tokens – bypassing MFA entirely to access Azure AD and Microsoft 365 environments for months. This was the first major nation-state supply chain attack that explicitly pivoted from on-premises to cloud identity.

18,000+ orgs received malicious update
~100 actively exploited
14 months from build compromise to discovery
Threat actor: APT29 / Cozy Bear (Russian SVR)
📄 MITRE Campaign C0024 ↗ · 📄 CISA remediation guidance ↗ · 📄 Palo Alto timeline ↗
🏗️ Supply Chain – Build System Compromise
01
SolarWinds build environment breached; SUNSPOT implant injected into Orion DLL at compile time (Oct 2019)
T1195.002 – Software Supply Chain

SVR gained access to SolarWinds' internal build system and installed SUNSPOT – a build-time implant that monitored the MSBuild.exe process and injected SUNBURST malicious code into Orion.Core.BusinessLayer.dll during compilation. The resulting DLL was then signed with SolarWinds' legitimate code-signing certificate, making it appear authentic.

Implant: SUNSPOT – intercepted MSBuild.exe and injected SUNBURST into the target DLL
Code signing: Trojanized DLL signed with SolarWinds' legitimate certificate (trusted by customers)
Dormancy: SUNBURST waited ~2 weeks post-installation before activating (to evade sandbox detection)
SUNSPOT · Build Tampering · Code Signing Abuse · T1195.002
πŸ“‘ Backdoor Activation and C2 Beaconing
02
SUNBURST distributed via Orion updates; C2 beacon to avsvmcloud[.]com via DNS (March 2020)
T1071.004 – DNS C2

From March 2020, trojanized Orion updates were installed by customers. SUNBURST beaconed to the attacker-controlled domain avsvmcloud[.]com using DNS subdomain queries that encoded victim environment information. SVR then selectively activated only high-value targets for further exploitation – the majority of the 18,000 infected organisations were never actively exploited.

C2 mechanism: DNS subdomain encoding – victim fingerprint data encoded in DNS query subdomains
Evasion: Traffic mimicked legitimate SolarWinds telemetry · Dormancy period bypassed sandbox detection
Selective exploitation: 18,000 infected · ~100 actively pursued by SVR
SUNBURST · DNS C2 · avsvmcloud.com · T1071.004
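Encoding victim data into subdomains leaves a statistical fingerprint: long, high-entropy leftmost labels. A detection sketch with illustrative thresholds and a made-up beacon-style hostname:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def suspicious_dns(query, min_len=20, min_entropy=3.5):
    """Flag DNS queries whose leftmost label is long and high-entropy,
    a common trait of data encoded into subdomains. Thresholds are
    illustrative, not tuned detection values."""
    label = query.split(".")[0]
    return len(label) >= min_len and shannon_entropy(label) >= min_entropy

# Invented hostname mimicking the SUNBURST beacon format vs. a benign lookup:
beacon = "6a57jk2ba1d9keg15cbg.appsync-api.eu-west-1.avsvmcloud.com"
normal = "www.solarwinds.com"
assert suspicious_dns(beacon) and not suspicious_dns(normal)
```

In practice this heuristic is combined with query volume, domain age, and allow-lists, since CDN and telemetry hostnames can also look random.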
🔼 On-Prem Privilege Escalation
03
TEARDROP dropper deploys Cobalt Strike BEACON; domain admin privileges obtained on-premises
T1078.002 – Domain Accounts

At selected high-value targets, SUNBURST delivered TEARDROP – a memory-resident dropper – which deployed Cobalt Strike BEACON for interactive C2 and lateral movement. SVR used BEACON to escalate to domain admin privileges on the victim's on-premises Active Directory, positioning themselves to attack cloud identity via the ADFS server.

Second-stage malware: TEARDROP (memory-resident dropper) → Cobalt Strike BEACON (C2)
Goal of on-prem access: Reach ADFS server to steal the SAML token-signing certificate
Evasion: All traffic masqueraded as legitimate SolarWinds API activity
TEARDROP · Cobalt Strike · Domain Admin · Lateral Movement
☁️ On-Prem to Cloud Pivot – Golden SAML
04
ADFS token-signing certificate stolen; Golden SAML tokens forged for Azure AD / M365 (bypasses MFA)
T1606.002 – SAML Token Forgery · T1550 – Use Alternate Auth Material

With domain admin privileges, SVR extracted the ADFS token-signing private key and certificate from the on-premises federation server. Using this key, they could forge SAML assertions impersonating any user – "Golden SAML." Forged SAML tokens bypass MFA entirely because the SAML assertion IS the proof of authentication – no second factor is requested when a valid SAML response is presented.

Golden SAML attack steps:
1. Extract ADFS private signing key + certificate (requires domain admin)
2. Forge SAML assertion claiming to be any privileged user (Global Admin, etc.)
3. Present to Azure AD / M365 – accepted as fully legitimate
4. MFA bypassed – the forged SAML IS the authentication proof
Persistence: SAML signing certs rarely rotated – access persisted indefinitely without re-exploitation
Golden SAML · ADFS · MFA Bypass · T1606.002 · On-Prem to Cloud
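Detecting Golden SAML hinges on correlating the two log sources: federated cloud sign-ins that the ADFS farm never actually issued. Real logs lack a clean shared token ID, so responders correlate on user, time window, and assertion ID; this sketch uses a simplified join key to show the shape of the check:

```python
def golden_saml_suspects(cloud_signins, adfs_issued):
    """Return federated cloud sign-ins with no matching ADFS issuance event.

    `cloud_signins`: (user, token_id) pairs from Azure AD sign-in logs.
    `adfs_issued`: token IDs seen in ADFS audit events (event IDs 1200/1202).
    A validly signed assertion the farm never issued points at a forged token.
    """
    issued = set(adfs_issued)
    return [(u, t) for u, t in cloud_signins if t not in issued]

# Simulated data: one legitimate sign-in, one forged Global Admin token.
signins = [("alice", "t-100"), ("globaladmin", "t-999")]
issued = ["t-100"]
assert golden_saml_suspects(signins, issued) == [("globaladmin", "t-999")]
```

The forged token never touches the federation server, so the absence of a matching issuance event is the tell.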
📬 Cloud Espionage and Persistence
05
Long-term M365 email access; Azure AD backdoors added to survive Orion removal
T1114.002 – Remote Email Collection · T1098 – Account Manipulation

SVR accessed M365 environments at multiple US government agencies including Treasury, Commerce, DHS, State Department, and DOJ. Critically, they also modified Azure AD to add trusted federated identity providers and OAuth application permissions – cloud-layer backdoors that persisted even after SolarWinds Orion was removed from victim networks.

Victims: US Treasury, Commerce, DHS, State Dept, DOJ, and ~95 other organisations
Cloud persistence mechanisms added:
→ New federated identity providers added to Azure AD
→ OAuth app permissions granted for API-based access
→ Service principal credentials added for ongoing access
Key lesson: Removing Orion did NOT remove cloud access – Azure AD had to be separately evicted
M365 Email Access · Azure AD Persistence · OAuth Backdoors · US Government
🚨 Discovery
06
FireEye discovers its own red team tools stolen; traces back to trojanized Orion – December 13, 2020

FireEye discovered the theft of its proprietary red team tools during an internal investigation and traced the intrusion to a trojanized SolarWinds Orion update. Its public disclosure on December 13, 2020 triggered a global incident response and CISA Emergency Directive 21-01, requiring all federal agencies to immediately disconnect Orion. Crucially, removing Orion did not remove cloud persistence – Azure AD backdoors required a separate, comprehensive eviction process.

Discovered by: FireEye (investigating their own breach, Dec 13 2020)
Time from build compromise to discovery: ~14 months
CISA ED 21-01: All federal agencies ordered to disconnect SolarWinds Orion immediately
Critical complication: Cloud-layer backdoors (Azure AD federation, OAuth apps) persisted after Orion removal
14 Month Dwell · FireEye Discovery · CISA ED 21-01 · Cloud Persistence Remained

🛡️ How to Defend Against This Chain

Treat your ADFS / identity provider as a Tier 0 asset equal to domain controllers. The ADFS server holds the keys to all federated services. Protect it with privileged access workstations, no internet exposure, and HSM-protected signing keys. Monitor it as you would your most critical production system.
Detect Golden SAML by correlating ADFS event logs with Azure AD sign-in logs. Legitimate ADFS authentications leave traces in both systems. An Azure AD sign-in with no corresponding ADFS authentication event (IDs 1202, 1200) is highly suspicious.
Audit Azure AD federated identity providers, OAuth app permissions, and service principals regularly. SVR's cloud persistence survived Orion removal. Use Microsoft Entra audit logs or Defender for Cloud Apps to detect new high-privilege applications.
Implement software supply chain integrity verification for build systems. Monitor build environments with endpoint security. Verify build artifact integrity. Implement reproducible builds. Treat your CI/CD pipeline as production infrastructure.
Plan for cloud-specific eviction as a separate step from on-prem remediation. Any on-prem compromise may have resulted in cloud-layer backdoors. Azure AD, OAuth app permissions, and federated IdPs must be independently reviewed and evicted.
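The audit of federated IdPs, OAuth grants, and service principals reduces to diffing live state against an approved inventory. A sketch with invented app names and Microsoft Graph-style permission strings; a real implementation would pull current grants from the Entra audit or Graph APIs:

```python
def unapproved_grants(current_grants, approved):
    """Diff live OAuth grants / service principals against an approved
    inventory; anything new that holds high-privilege mail or directory
    permissions warrants an immediate alert."""
    return {
        app: perms for app, perms in current_grants.items()
        if app not in approved and perms & {"Mail.Read", "Directory.ReadWrite.All"}
    }

# Invented inventory and live state:
approved = {"hr-sync-app", "backup-agent"}
current = {
    "hr-sync-app": {"User.Read"},
    "quiet-persistence-app": {"Mail.Read", "Directory.ReadWrite.All"},
}
alerts = unapproved_grants(current, approved)
assert set(alerts) == {"quiet-persistence-app"}
```

An SVR-style OAuth backdoor shows up as exactly this kind of drift: a privileged application nobody on the inventory list ever approved.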
July 2020 – June 2023 (3-year exposure) · Critical · Azure Blob Storage

Microsoft AI Research SAS Token – Over-Permissioned Token → Public GitHub → 38TB Internal Data Exposed for 3 Years

A Microsoft AI researcher shared a URL to open-source training data on a public GitHub repository. The URL contained an Azure Shared Access Signature token – but instead of being scoped to a specific file or container, it was an Account SAS with full-control permissions to the entire storage account, set to expire in 2051. Anyone who found the URL could read, modify, or delete 38TB of internal Microsoft data including employee workstation backups, private keys, saved passwords, and 30,000+ internal Teams messages. Discovered and responsibly disclosed by Wiz Research in June 2023 after ~3 years of exposure.

38 TB exposed
3 years exposure window
Full control permissions (read + write + delete)
Discovered by: Wiz Research (responsible disclosure)
πŸ“„ Wiz Research disclosure β†— πŸ“„ BleepingComputer β†—
🔧 Root Cause – Misconfigured SAS Token
01
Researcher creates Account SAS with full-control permissions, 30-year expiry
T1098.004 – SSH Authorized Keys (analogous: over-permissioned access token)

When sharing open-source AI training data publicly, the researcher used Azure's SAS token feature but chose the broadest option – an Account SAS – rather than a narrowly scoped Service SAS. They set permissions to "full control" (read, write, delete) and the expiry to October 2051. Azure does not audit SAS token generation, making this invisible to administrators.

SAS type: Account SAS (entire storage account) – should have been a Service SAS (single container)
Permissions set: Full control – read, write, delete, list everything
Expiry set: October 6, 2051 (30+ years)
Azure's own warning: "Not possible to audit generation of SAS tokens" – no admin visibility
Account SAS · Full Control · 30-Year Expiry · Misconfiguration
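The dangerous properties of the leaked token are all visible in the URL's query string (`sp` for granted permissions, `se` for expiry, per the Azure SAS query format). A sketch of a pre-share linting check, with an invented storage URL:

```python
from urllib.parse import urlparse, parse_qs
from datetime import datetime, timezone

def sas_risk_flags(url, max_days=1):
    """Inspect a SAS URL's query parameters for over-broad grants.

    `sp` holds the permission characters (r/a/c/w/d/l), `se` the expiry
    timestamp; both follow the Azure SAS query parameter format. The
    one-day lifetime limit is an illustrative policy, not an Azure default.
    """
    q = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
    flags = []
    perms = set(q.get("sp", ""))
    if perms & {"w", "d"}:  # write or delete on a link meant for sharing
        flags.append("write-or-delete")
    expiry = datetime.fromisoformat(q["se"].replace("Z", "+00:00"))
    if (expiry - datetime.now(timezone.utc)).days > max_days:
        flags.append("long-lived")
    return flags

# Invented URL shaped like the leaked token: full permissions, 2051 expiry.
leaked = ("https://example.blob.core.windows.net/models?"
          "sv=2020-08-04&sp=racwdl&se=2051-10-06T00:00:00Z&sig=...")
assert set(sas_risk_flags(leaked)) == {"write-or-delete", "long-lived"}
```

Either flag alone should block the share; the leaked token would have tripped both.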
📢 Exposure – Committed to Public GitHub
02
Full SAS token URL committed to public GitHub repository README (July 20, 2020)
T1552.004 – Private Keys

The researcher committed the complete SAS token URL to the public GitHub repository "robust-models-transfer" as download instructions. GitHub's secret scanning did not cover Account SAS token patterns at the time. The URL was publicly visible for nearly 3 years. In October 2021, the token was renewed – with the expiry extended to October 2051.

Repository: github.com/microsoft/robust-models-transfer (public)
Exposed from: July 20, 2020 to June 24, 2023 (2 years 11 months)
Token renewed: October 2021 – expiry extended to 2051 (30 more years)
Scanning gap: GitHub secret scanning did not cover Account SAS tokens until after this disclosure
GitHub · Public Repository · Secret in README · T1552.004
📂 Data Accessible – 38TB Including Credentials
03
Full storage account accessible – 38TB including employee backups, keys, Teams messages
T1530 – Data from Cloud Storage

Anyone with the URL had full access to an internal Azure Blob storage account – not just the intended training data folder. The account contained disk backups of two Microsoft employees' workstations with saved passwords, private keys, and an archive of 30,000+ internal Microsoft Teams messages. Full-control permissions also meant a malicious actor could have injected code into AI model files, creating a supply chain attack vector.

Exposed data:
→ Disk backups of 2 employee workstations (passwords, private keys, personal data)
→ 30,000+ Microsoft Teams messages from 359 employees
→ Internal credentials and secret keys
→ Intended open-source AI training data
Supply chain risk: Write access meant an attacker could have injected malicious code into AI model files
38TB · Employee Creds · Teams Messages · AI Supply Chain Risk · T1530
🔬 Discovery – Wiz Research Internet Scan
04
Wiz Research discovers token while scanning GitHub for misconfigured cloud storage – June 22, 2023

Wiz Research runs an ongoing project scanning the internet and public repositories for misconfigured cloud storage. While reviewing Microsoft's public AI GitHub repositories, they found the SAS token URL, followed it, and discovered the full scope of exposure. They reported it to Microsoft MSRC on June 22; the token was revoked on June 24, 2023 – two days later. Coordinated public disclosure followed on September 18, 2023.

Discovered by: Wiz Research (scanning public GitHub repos for cloud misconfigurations)
Reported: June 22, 2023 | Token revoked: June 24, 2023 (48 hours)
GitHub URL updated: July 7, 2023 | Public disclosure: September 18, 2023
No evidence: Microsoft found no evidence of malicious exfiltration beyond Wiz's research
Responsible Disclosure Β· Wiz Research Β· 3-Year Exposure Β· No Malicious Exfil Confirmed

πŸ›‘ How to Defend Against This Chain

Never use Account SAS for external sharing β€” always use Service SAS with a Stored Access Policy. Service SAS scopes access to a single container. A Stored Access Policy allows central management and instant revocation without rotating the account key.
Configure and enforce SAS expiration policies at the Azure storage account level. Azure allows you to set a maximum SAS token lifetime. A 30-year token should never be possible. Set limits (e.g., 24 hours for external sharing) and alert on violations.
Run secret scanning across all repositories including SAS token patterns. Trufflehog, GitLeaks, and GitHub Advanced Security can detect SAS tokens in code. Microsoft added Account SAS token patterns to GitHub's secret scanning service following this disclosure.
Separate internal data from public data at the storage account boundary. Open-source training data should live in a dedicated storage account with no internal data co-located. Misconfiguration then limits the blast radius to the public data account only.
Monitor SAS-authenticated access via Azure Monitor and Storage Analytics logs. Alert on access from unexpected IPs or at unusual times for storage accounts holding sensitive data. Enable SAS token expiration policies to catch long-lived tokens automatically.
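The secret-scanning control above can be sketched in a few lines. This is a minimal, illustrative Python checker for long-lived SAS token URLs in repository text; the regex, the one-day threshold, and the test URL are simplifications of the provider-maintained patterns that tools like Trufflehog, GitLeaks, and GitHub secret scanning actually use.

```python
import re
from datetime import datetime, timezone

# Illustrative pattern for an Azure Blob SAS URL: a query string carrying both
# a service version (sv=) and a signature (sig=). Production scanners use far
# more robust, provider-maintained patterns than this sketch.
SAS_PATTERN = re.compile(
    r"https://[\w.-]+\.blob\.core\.windows\.net/\S*\?\S*sv=[\d-]+\S*sig=[\w%+/=]+",
    re.IGNORECASE,
)
EXPIRY_PARAM = re.compile(r"[?&]se=(\d{4}-\d{2}-\d{2})")

def find_long_lived_sas(text: str, max_days: int = 1) -> list[str]:
    """Return SAS URLs whose 'se' (expiry) parameter is more than max_days away."""
    findings = []
    now = datetime.now(timezone.utc)
    for url in SAS_PATTERN.findall(text):
        m = EXPIRY_PARAM.search(url)
        if not m:
            continue
        expiry = datetime.strptime(m.group(1), "%Y-%m-%d").replace(tzinfo=timezone.utc)
        if (expiry - now).days > max_days:
            findings.append(url)
    return findings
```

A token like the one in this incident, with an expiry decades out, would trip the check immediately; wiring something like this into CI keeps a 30-year token from ever reaching a public README.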
1993–1995 (fugitive period) Critical On-Premises / Dial-Up

Kevin Mitnick / Novell – OSINT β†’ Pretexting β†’ Phone Social Engineering β†’ Dial-Up Access β†’ NetWare Source Code Theft

While a fugitive living under a false identity in Denver, Kevin Mitnick β€” the FBI's most wanted hacker β€” targeted Novell's technical support staff using a technique he called pretexting. By impersonating a Novell employee using authentic corporate lingo, internal knowledge, and manufactured urgency, he convinced support staff to provide credentials and system access. He then used dial-up connections to extract proprietary NetWare source code. Shawn Nunley, a Novell support analyst at the time, was directly targeted by Mitnick and later became the FBI's star witness β€” before becoming one of Mitnick's closest friends. This entry is notable as a foundational case study in social engineering before the term existed in mainstream security.

NetWare source code stolen
2.5 years as fugitive targeting multiple companies
25 counts in federal indictment
Threat actor: Kevin Mitnick ("Condor"), FBI's Most Wanted
πŸ“„ Wired β€” Mitnick Meets His Pigeon (Shawn Nunley) β†— πŸ“„ Federal indictment details β†— πŸ“„ Malicious Life β€” Mitnick Part 2 β†—
ℹ️ Shawn Nunley (CSOH founder) was Mitnick's target at Novell. For more on their remarkable story, read the Kevin Mitnick β€” In Memoriam tribute on this site.
πŸ” Reconnaissance β€” Open Source Intelligence
01
Mitnick researches Novell's internal org structure, employee names, and technical lingo
T1591 – Gather Victim Org Information Β· T1589 – Gather Victim Identity Info

Before making a single call, Mitnick invested significant time learning everything publicly available about his target. He gathered employee names from directory listings, understood Novell's internal team structures, and immersed himself in NetWare technical documentation so he could speak fluently about the product β€” a prerequisite for any convincing pretext. As he wrote in The Art of Deception: "When you know the lingo and terminology, it establishes credibility β€” you're legit, a coworker slogging in the trenches just like your targets."

Sources used: Phone directories, technical manuals, product documentation, prior calls to gather names
Goal: Build enough authentic detail to withstand scrutiny from a real Novell employee
Mitnick's method: "Pretext calls" β€” low-stakes calls to gather information for higher-stakes calls later
OSINT Β· Pretexting Prep Β· T1591 Β· Phone Phreaking
πŸ“ž Initial Contact β€” The Pretext Call
02
Mitnick calls Novell technical support impersonating an internal employee
T1566.004 – Phishing: Voice Β· T1656 – Impersonation

Mitnick called Novell's technical support line β€” the same line customers and employees used β€” and presented himself as a legitimate Novell employee or developer with a plausible reason for needing help. He used real employee names, correct internal terminology, and manufactured urgency to make the call feel routine. Shawn Nunley, a support analyst, took the call.

Impersonation type: Internal Novell employee / developer
Technique used: Pretexting β€” a fully constructed scenario with backstory, urgency, and technical credibility
Location: Mitnick was calling from Denver, living as "Eric Weiss" under a fabricated identity
Vishing Β· Impersonation Β· Pretexting Β· T1566.004 Β· T1656
🎭 Trust Building β€” The Human Exploit
03
Mitnick establishes rapport and credibility through technical knowledge and urgency
T1656 – Impersonation

Mitnick's genius was not technical β€” it was psychological. He assessed his target's willingness to cooperate in the first few seconds, adapting his approach in real time. He used Novell-specific technical language that only an insider would know, referenced real internal projects or colleagues, and framed his request as urgent but routine β€” something that needed to be resolved quickly to avoid a bigger problem. This is the core of social engineering: making the target feel that compliance is the safe, helpful, professional response.

Psychological levers used: Authority (internal employee), urgency (time pressure), likeability (charm), reciprocity (asking a reasonable favour)
Mitnick on reading targets: "I'm always on the watch for signs that give me a read on how cooperative a person is"
Why support staff were vulnerable: Helping people quickly was their job β€” suspicion felt like being unhelpful
Social Engineering Β· Pretexting Β· Authority Bias Β· Urgency Β· Human Exploit
πŸ”‘ Credential Access β€” Information Elicitation
04
Mitnick elicits credentials, dial-up numbers, or system access details from support staff
T1589.001 – Credentials Β· T1598 – Phishing for Information

Once trust was established, Mitnick steered the conversation toward his actual goal β€” obtaining credentials, a dial-up number, or system access that would let him connect to Novell's internal network remotely. The request was framed as something mundane: a password reset, a need for a dial-in number to work remotely, or a request to verify account details. The target had no reason to suspect anything unusual.

Documented outcome: Mitnick obtained access credentials or dial-up access to Novell internal systems
Federal indictment: Mitnick and DePayne "stole and copied proprietary computer software from Novell" including NetWare source code
Credential Elicitation Β· Dial-Up Access Β· T1589.001 Β· T1598
πŸ’» System Access β€” Dial-Up Intrusion
05
Mitnick dials into Novell's network using obtained credentials β€” from a cloned cell phone
T1078 – Valid Accounts Β· T1036 – Masquerading

Using the credentials or dial-up access obtained from the call, Mitnick connected to Novell's internal systems remotely from his Denver apartment β€” at night, while working a day job at a law firm under a false identity. To hide his location from both the FBI and the phone company, he used cloned cellular phones, cycling through cloned numbers to avoid detection through call records.

Connection method: Dial-up modem (pre-internet era remote access)
Credentials used: Obtained via social engineering call to support staff
Location obfuscation: Cloned cellular phones β€” using stolen ESN/MIN pairs to masquerade as other subscribers
When: Nights, while working as "Eric Weiss" at a Denver law firm during the day
Dial-Up Β· Cloned Cell Phone Β· Valid Credentials Β· T1078 Β· False Identity
πŸ“€ Exfiltration β€” NetWare Source Code
06
Mitnick copies proprietary NetWare source code over authenticated dial-up access
T1048 – Exfiltration Over Alternative Protocol

With authenticated access to Novell's internal systems, Mitnick copied proprietary NetWare source code β€” some of the most valuable intellectual property the company owned. The federal indictment confirmed that Mitnick and co-conspirator Lewis DePayne stole and copied this software. Mitnick's motivation, as he repeatedly stated, was not financial β€” it was intellectual curiosity and the challenge of accessing systems that were supposed to be inaccessible.

Data stolen: Proprietary Novell NetWare source code (confirmed in 25-count federal indictment)
Co-conspirator: Lewis DePayne (charged alongside Mitnick)
Motivation: Intellectual curiosity β€” Mitnick: "simple crimes of trespass... I wanted to know how these systems worked"
No financial use: No evidence source code was ever sold or used commercially
Source Code Theft Β· NetWare Β· T1048 Β· Intellectual Property Β· No Financial Motive
🚨 Discovery and Aftermath
07
FBI investigation β€” Shawn Nunley becomes star witness, then Mitnick's closest friend

The FBI built their case against Mitnick in part through witness testimony from support staff he had targeted. Shawn Nunley, who had taken Mitnick's call at Novell, became the government's star witness. But the story didn't end there β€” Shawn grew disillusioned with the government's handling of the case, contacted Mitnick's defence team, and ultimately became one of Mitnick's dearest friends. It's one of the most extraordinary victim-to-friend trajectories in the history of computer crime.

Arrest: February 15, 1995 β€” Raleigh, North Carolina apartment
Found with: Cloned cellular phones, 100+ cloned phone codes, multiple pieces of false identification
Sentence: 46 months + 22 months for a supervised release violation (about five years served in total, including 8 months in solitary confinement)
Shawn Nunley: FBI star witness β†’ disillusioned with prosecution β†’ contacted defence β†’ lifelong friend of Mitnick
FBI Arrest 1995 Β· Star Witness Β· False Identity Unravelled Β· Cloned Phones

πŸ›‘ How to Defend Against This Chain

Implement a call-back verification procedure for any credential or access request by phone. Never provide passwords, dial-up numbers, or system access to an inbound caller β€” regardless of how convincing they sound. Hang up and call back on a number you independently verify from your internal directory.
Train support staff to recognise the three pressure levers: authority, urgency, and likeability. Mitnick used all three in every call. When someone is very charming, very knowledgeable, and very urgent all at once β€” that combination itself is a red flag. Slow down, verify, never let urgency override procedure.
Restrict what information support staff can provide and to whom. Credentials, dial-up numbers, and system access details should never be distributed by phone without a formal verification workflow. The support desk should have a written procedure and authority to refuse without penalty.
Monitor dial-up and remote access connections for unusual times or locations. Mitnick connected at night from Denver. Anomalous remote access β€” unusual hours, unknown caller ID, high volume of data transferred β€” should trigger a review.
Security awareness training is not optional β€” it is the primary control against social engineering. Technical controls stopped none of Mitnick's Novell attack. The only defence was a human one. Regular training that uses realistic scenarios β€” not just policy documents β€” is the difference between a staff member who pauses and verifies and one who helps an attacker.
This attack still works today. Vishing (voice phishing) remains one of the top two attack vectors in 2024. The tools have changed β€” attackers now use AI voice cloning, LinkedIn for OSINT, and SMS as a follow-up β€” but the psychology is identical to what Mitnick did in 1994. The defence is also identical: verify independently, never let urgency override process.
September 2023 Critical Okta Azure AD

Scattered Spider / MGM Resorts – LinkedIn OSINT β†’ Vishing Help Desk β†’ Okta Super Admin β†’ Azure AD β†’ 100 ESXi Servers Encrypted

Scattered Spider (UNC3944) compromised MGM Resorts International in September 2023 using a single 10-minute phone call to the IT help desk. Attackers researched an MGM employee on LinkedIn, impersonated them to a help desk agent, obtained an MFA reset, and gained initial access. From there they escalated to Okta Super Administrator, claimed Azure AD tenant-level control, moved laterally across the network, and encrypted over 100 ESXi hypervisors using ALPHV/BlackCat ransomware β€” causing $100M in losses and a 10-day outage. The entire initial access chain required no technical exploit whatsoever.

$100M+ estimated losses
10 days operational disruption
100+ ESXi hypervisors encrypted
Threat actor: Scattered Spider (UNC3944) + ALPHV/BlackCat RaaS
πŸ“„ FBI/CISA Joint Advisory AA23-320A β†— πŸ“„ MITRE ATT&CK G1015 β†— πŸ“„ GuidePoint GRIT Analysis β†— πŸ“„ CrowdStrike β€” Not a SIMulation β†—
πŸ” Reconnaissance β€” LinkedIn OSINT
01
Attacker researches MGM employee identity on LinkedIn to build convincing pretext
T1591 – Gather Victim Org Info Β· T1589.002 – Email Addresses

Before making any call, the attacker used LinkedIn to identify an MGM Resorts employee β€” gathering their full name, job title, and enough personal and professional detail to convincingly impersonate them to an IT help desk agent. Mandiant confirmed from forensic recordings of these call center attacks that the threat actors already possessed PII on their victims before calling β€” including SSN last four digits, dates of birth, and manager names β€” to pass standard help desk identity verification. Scattered Spider are native English speakers, removing any accent barrier that typically flags social engineering attempts from non-Western threat actors.

Primary OSINT source: LinkedIn β€” full name, job title, department, manager name
PII used to pass verification (Mandiant confirmed): Last 4 digits of SSN, date of birth, manager name and job title
Why it worked: Help desks are trained to be helpful β€” suspicion of an "employee" feels obstructive
Mandiant: "The level of sophistication in these social engineering attacks is evident in both the extensive research performed on potential victims and the high success rate"
OSINT Β· LinkedIn Β· T1591 Β· Native English Speaker
πŸ“ž Initial Access β€” Vishing the Help Desk
02
Single phone call to MGM IT help desk β€” attacker impersonates employee and requests MFA reset
T1566.004 – Phishing: Voice (Vishing) Β· T1656 – Impersonation

The attacker called MGM's IT help desk, impersonated the employee identified on LinkedIn, and requested a multi-factor authentication reset. Mandiant confirmed from forensic recordings that the consistent pretext used was claiming to be receiving a new phone β€” a routine scenario that naturally requires an MFA reset. The agent had no way to verify the caller's true identity beyond the PII provided, which matched what the attacker had gathered. The call lasted approximately 10 minutes.

Attack vector: Phone call (vishing) β€” zero technical skill required for this step
Pretext used (Mandiant confirmed): "I'm receiving a new phone and need my MFA reset" β€” a routine, unsuspicious request
Verification bypassed with: SSN last 4 digits, date of birth, manager name β€” all pre-researched
Verification failure: Help desk had no phishing-resistant out-of-band identity verification
ALPHV statement: "All SCATTERED SPIDER did to get into MGM was hop on LinkedIn, find an employee, then call the help desk"
Vishing Β· MFA Reset Β· New Phone Pretext Β· Help Desk Abuse Β· T1566.004 Β· T1656
πŸ” Internal Reconnaissance β€” SharePoint Documentation Mining
03
Internal SharePoint searched for VPN, VDI, and remote access documentation
T1213.002 – Sharepoint Β· T1046 – Network Service Discovery

With initial account access, the attacker's first move was not to escalate immediately β€” it was to read. Mandiant confirmed that UNC3944 consistently searched victims' internal SharePoint sites for help guides and documentation covering VPNs, virtual desktop infrastructure (VDI), and remote telework utilities. This gave them a roadmap of the environment drawn entirely from the victim's own internal documentation, dramatically accelerating lateral movement planning without triggering any security tooling.

Platform searched: Microsoft SharePoint β€” internal intranet and documentation portal
Content targeted (Mandiant confirmed): VPN setup guides, VDI connection instructions, remote telework utilities documentation
Why effective: Internal IT docs contain exactly the information an attacker needs β€” network topology, tool names, access paths
Detection gap: SharePoint search activity by a recently-reset account is virtually indistinguishable from legitimate onboarding
SharePoint Β· Internal Recon Β· VPN Docs Β· VDI Docs Β· T1213.002
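The detection gap noted above (post-reset SharePoint searching looks like onboarding) is still detectable statistically: correlate recent MFA resets with a burst of documentation searches by the same account. A hedged Python sketch follows; the record field names, the 48-hour window, and the search threshold are illustrative assumptions rather than fields from any specific product.

```python
from datetime import datetime, timedelta

def flag_post_reset_recon(resets: list[dict], searches: list[dict],
                          window_hours: int = 48, min_searches: int = 20) -> set[str]:
    """Return accounts with >= min_searches SharePoint searches within
    window_hours of an MFA reset. Input dicts are a hypothetical
    simplification of identity-provider and SharePoint audit events:
    {"user": ..., "time": "<ISO 8601 timestamp>"}."""
    reset_times = {r["user"]: datetime.fromisoformat(r["time"]) for r in resets}
    counts: dict[str, int] = {}
    for s in searches:
        t0 = reset_times.get(s["user"])
        if t0 and t0 <= datetime.fromisoformat(s["time"]) <= t0 + timedelta(hours=window_hours):
            counts[s["user"]] = counts.get(s["user"], 0) + 1
    return {user for user, n in counts.items() if n >= min_searches}
```

The point of the correlation is that neither signal alarms on its own: MFA resets are routine, and so is intranet search. The combination, in a tight window, is the UNC3944 pattern.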
πŸ”‘ Credential & Identity Access β€” Okta Super Admin
04
Okta Super Administrator access obtained via compromised account β€” MFA removed from admin accounts
T1078.004 – Cloud Accounts Β· T1098 – Account Manipulation

With initial account access, the attacker escalated to Okta Super Administrator. Mandiant additionally confirmed a technique not widely reported: UNC3944 used Okta's self-assignment feature to assign the compromised account to every application in the Okta instance β€” giving them SSO access to every federated application simultaneously, and a visual inventory of every app tile available in the Okta portal. They also configured a second Identity Provider as an impersonation app and stripped MFA from targeted admin accounts.

Platform abused: Okta β€” identity provider for MGM's entire enterprise application estate
Privilege achieved: Super Administrator β€” full control over all identity for downstream applications
Mandiant confirmed technique: Okta self-assignment to every app in the instance β€” instant access to all SSO-protected applications
IdP abuse: Second Identity Provider configured as "impersonation app" β€” could act as any user in the org
MFA stripped: Second-factor requirements removed from authentication policies for targeted accounts
Okta Super Admin Β· Self-Assignment All Apps Β· IdP Impersonation Β· MFA Stripped Β· T1078.004 Β· T1098
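Every manipulation in this step (privilege grants, a new IdP, stripped MFA, mass app self-assignment) emits an event in the Okta System Log. The sketch below shows what a minimal SOC filter over those parsed events might look like; the eventType strings are assumptions drawn from Okta's event catalogue and should be verified against your own tenant before relying on them.

```python
# Hypothetical detection sketch: given parsed Okta System Log events
# (dicts as returned by GET /api/v1/logs), surface identity-plane changes
# like those used in the MGM escalation. Event type names are assumptions.
HIGH_RISK_EVENTS = {
    "user.account.privilege.grant",      # admin role granted to a user
    "system.idp.lifecycle.create",       # new Identity Provider added
    "user.mfa.factor.deactivate",        # MFA factor removed from an account
    "application.user_membership.add",   # user assigned to an application
}

def flag_identity_plane_changes(events: list[dict]) -> list[dict]:
    """Return high-risk events, most recent first, for immediate SOC review."""
    hits = [e for e in events if e.get("eventType") in HIGH_RISK_EVENTS]
    return sorted(hits, key=lambda e: e.get("published", ""), reverse=True)
```

A burst of these events against admin accounts in a short window is exactly the Okta takeover signature; in isolation each one can be legitimate, which is why change-control correlation matters.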
☁️ Cloud Privilege Escalation β€” Azure AD + New VM Persistence
05
Azure AD super administrator access claimed β€” new virtual machines created in vSphere/Azure for persistent foothold
T1078.004 – Cloud Accounts Β· T1578.002 – Create Cloud Instance

Having compromised Okta, the attacker pivoted to MGM's Azure AD tenant and claimed super administrator privileges including Tenant Root Group management permissions. Mandiant additionally confirmed a persistence technique specific to this group: UNC3944 accessed vSphere and Azure through SSO applications to create entirely new virtual machines, from which all follow-on activities were conducted. These attacker-controlled VMs had Microsoft Defender and Windows telemetry disabled, making forensic investigation significantly harder.

Cloud platform: Microsoft Azure AD β€” highest possible tenant permissions claimed
Mandiant confirmed persistence: New VMs created in vSphere and Azure via SSO β€” used as clean base for all further activity
VM hardening by attacker: MAS_AIO and privacy-script.bat used to remove Microsoft Defender and Windows telemetry
PCUnlocker ISO: Attached to existing VMs via vCenter to reset local admin passwords, bypassing domain controls
Impact: Cloud activity sourced from inside the environment β€” malicious traffic indistinguishable from legitimate traffic
Azure AD Tenant Root Β· New VM Persistence Β· Defender Disabled Β· PCUnlocker Β· T1078.004 Β· T1578.002
πŸ•΅οΈ Lateral Movement & Persistence β€” LOTL + SaaS Abuse
06
Living-off-the-land lateral movement β€” RDP, CrowdStrike RTR abuse, Mimikatz, IMPACKET, multiple tunnelling tools
T1021.001 – Remote Desktop Protocol Β· T1562 – Impair Defenses

With domain-level cloud access, the attacker moved laterally using legitimate tools already present in the environment. Mandiant confirmed several techniques not widely reported: UNC3944 created API keys inside CrowdStrike's external console to run commands (whoami, quser) via the Real Time Response module β€” effectively using the victim's own EDR as a remote access tool. They also used Mimikatz, ADRecon, and IMPACKET from attacker-controlled VMs, along with multiple tunnelling tools for persistent C2.

CrowdStrike RTR abuse (Mandiant confirmed): API keys created in CrowdStrike Falcon console β€” RTR module used to run whoami and quser
Credential theft: Mimikatz, "SecretServerSecretStealer" PowerShell script, ADRecon
Tunnelling tools (Mandiant confirmed): NGROK, RSOCX, Localtonet, Tailscale, Remmina
Python libraries: IMPACKET installed on attacker VMs
EDR evasion: BYOVD β€” CVE-2015-2291 Intel driver used to disable endpoint security agents
SaaS accessed (Mandiant confirmed): vCenter, CyberArk, Salesforce, Azure, CrowdStrike, AWS, GCP β€” all via Okta SSO
CrowdStrike RTR Abuse Β· Mimikatz Β· IMPACKET Β· NGROK Β· RSOCX Β· BYOVD Β· T1021.001 Β· T1562
πŸ“€ Exfiltration β€” Data Theft Before Encryption
07
Sensitive data exfiltrated ahead of ransomware deployment β€” double extortion strategy
T1657 – Financial Theft / Extortion Β· T1530 – Data from Cloud Storage

Before deploying ransomware, the attacker exfiltrated sensitive data from MGM's environment β€” establishing the leverage needed for double extortion. They threatened to publish the stolen data unless the ransom was paid, independent of whether MGM could recover from encryption using backups. Caesars Entertainment, hit in a similar attack at the same time, paid approximately $15 million ransom to prevent data publication.

Strategy: Double extortion β€” encrypt AND threaten to leak, maximising pressure
Caesars parallel: Caesars paid ~$15M ransom; MGM refused and incurred ~$100M in losses instead
ALPHV statement: Claimed to still have access to MGM infrastructure and threatened further attacks
Data targeted: Customer PII, loyalty programme data, internal credentials
Exfil method: Legitimate cloud storage and remote access tools β€” no custom malware required
Double Extortion Β· Data Theft Β· T1657 Β· T1530 Β· Ransomware-as-a-Service
πŸ’₯ Impact β€” 100+ ESXi Hypervisors Encrypted
08
ALPHV/BlackCat ransomware deployed against 100+ ESXi hypervisors β€” 10-day outage
T1486 – Data Encrypted for Impact Β· T1490 – Inhibit System Recovery

On September 11, 2023 β€” after MGM failed to respond to the attacker's contact attempts β€” ALPHV/BlackCat ransomware was deployed against over 100 ESXi hypervisors across MGM's Las Vegas properties. The rapid encryption of 100+ VMware ESXi servers caused a 36+ hour initial outage and disrupted casino floor operations, hotel check-ins, digital room keys, ATMs, and slot machines for 10 days across multiple Las Vegas properties. MGM refused to pay the ransom.

Ransomware: ALPHV/BlackCat β€” deployed via RaaS affiliate relationship with Scattered Spider
Targets: 100+ VMware ESXi hypervisors running MGM's production VMs
Timeline: Deployed Sept 11, 2023 β€” after MGM ignored attacker contact attempts for 24hrs
Impact: Casino floors, hotel check-ins, digital room keys, ATMs, slot machines β€” all disrupted
Financial impact: ~$100M losses + $45M class-action lawsuit settlement
MGM decision: Refused to pay ransom β€” incurred full remediation cost instead
ALPHV/BlackCat Β· ESXi Encryption Β· 100+ Hypervisors Β· $100M Loss Β· T1486 Β· T1490

πŸ›‘ How to Defend Against This Chain

Implement phishing-resistant MFA (FIDO2/passkeys) and never allow help desk agents to reset MFA via phone. This single control breaks the entire initial access chain. The attack required no technical exploit β€” it required one help desk agent following standard procedure. Move MFA resets to an out-of-band workflow requiring manager approval and visual identity verification (video call with government ID).
Treat Okta as a Tier 0 asset β€” the same way you treat Active Directory. Super Administrator access in Okta gives an attacker control over every application federated to it. Require hardware security keys for all Okta admin access, enable Okta ThreatInsight, and alert immediately on new Identity Provider configuration or authenticator resets for admin accounts.
Monitor and alert on Okta Org2Org federation changes and new IdP configurations. The attacker configured a second Identity Provider to impersonate any user in the organisation. New IdP additions should require change control approval and trigger immediate SOC review β€” they are almost never legitimate outside of planned migrations.
Segment your ESXi environment from the corporate identity plane. ESXi hypervisors should not be reachable from identities that live in the same plane as corporate Okta and Azure AD. Separate management networks, dedicated credentials not linked to SSO, and jump hosts with hardware MFA are the minimum bar for hypervisor access.
Detect LOTL tools in unusual contexts. ngrok, Tailscale, and Remmina are legitimate tools β€” but their presence on server infrastructure or in a SOC alert at 2am is not. Build detection rules for tunnelling tools on non-developer endpoints and alert on new VPN mesh clients enrolled outside your MDM.
Have an offline, immutable backup of ESXi VM configurations and snapshots. The rapid encryption of 100+ hypervisors succeeded partly because MGM lacked good backup and restoration practices. Offline backups that can't be reached via domain credentials are the last line of defence when ransomware hits the hypervisor layer.
Run tabletop exercises specifically for the "help desk social engineering β†’ Okta β†’ cloud" scenario. This is now the most documented and replicated attack chain in enterprise security. CISA and FBI have both published guidance on it. If your IR plan doesn't include a playbook for identity provider compromise as an initial step, update it today.
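The "LOTL tools in unusual contexts" rule above is simple to operationalise: maintain a list of tunnelling and mesh-VPN binaries and flag them on any host role where they have no business running. This Python sketch uses the Mandiant-confirmed tool set from this incident; the process-record shape and the role labels are hypothetical simplifications of real EDR telemetry.

```python
# Tunnelling/mesh-VPN tools confirmed in this incident; extend per your estate.
TUNNELLING_TOOLS = {"ngrok", "rsocx", "localtonet", "tailscale", "tailscaled", "remmina"}
# Host roles where these tools may legitimately appear (assumption: your CMDB
# or MDM tags endpoints with a role).
ALLOWED_ROLES = {"developer-workstation"}

def flag_tunnelling(processes: list[dict]) -> list[dict]:
    """Return process records for tunnelling tools seen outside allowed roles.

    Each record is a hypothetical EDR inventory row:
    {"host": ..., "process": ..., "role": ...}
    """
    return [
        p for p in processes
        if p["process"].lower().removesuffix(".exe") in TUNNELLING_TOOLS
        and p.get("role") not in ALLOWED_ROLES
    ]
```

A hit on server infrastructure, a domain controller, or an ESXi management host is the 2am alert this rule exists for; a hit on a developer laptop is triaged, not paged.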

// Know a breach with a detailed post-mortem?

This is a community resource. Submit a PR to add a new kill chain β€” include MITRE technique IDs and link to primary sources.

β†’ Contribute on GitHub