Cloud Security Office Hours Banner

Promptware Kill Chain 2024–26 — 2024

Step-by-step kill chain mapped to MITRE ATT&CK Cloud, sourced from official post-mortems and primary technical analyses.

2024–2026 Critical AI · Cloud

Promptware – Indirect Prompt Injection → Context Poisoning → Persistence → C2 → Covert Camera Livestream

Researchers demonstrated a complete seven-stage kill chain targeting cloud-connected AI assistants — from a malicious Google Calendar invite to covert Zoom video streaming, all triggered by the victim typing "thanks." Documented across 36 real-world incidents by Schneier, Nassi et al., the pattern — termed "promptware" — mirrors classical malware kill chains but executes entirely through the LLM prompt layer. C2 was confirmed in the ChatGPT ZombAI attack (Oct 2024) and the Microsoft Copilot Reprompt attack (Jan 2026, CVE-2026-24307), both patched after disclosure.

36 real-world incidents analysed
21 traverse 4+ kill chain stages
73% of threat classes rated High–Critical
Threat actor: Researchers (Nassi, Schneier et al.) / criminal adoption ongoing
📄 arXiv:2601.09625 — The Promptware Kill Chain ↗ 📄 arXiv:2508.12175 — Invitation Is All You Need ↗ 📄 Schneier on Security ↗
01
Malicious prompt injected into Google Calendar invite title — victim never sees it
T1566 – Phishing

Attacker sends the target a Google Calendar meeting invitation with a malicious prompt embedded in the event title. When the victim asks Gemini "What are my meetings today?", the Google Calendar Agent retrieves the event — including the poisoned title — and feeds it directly into Gemini's active context. The victim never sees the raw title; they only see Gemini's natural-language response. This is indirect prompt injection: the attacker's instructions enter the LLM through a trusted, user-requested data retrieval, not through direct user input.

Delivery vector: Google Calendar event title (shared workspace artifact)
Injection type: Indirect — attacker content retrieved by the LLM on the victim's behalf
Why it works: LLMs process all tokens — system prompts, user queries, retrieved data — as a single undifferentiated sequence; there is no code/data boundary
Indirect Prompt Injection Google Gemini Google Calendar T1566
🔼 Privilege Escalation — Safety Guardrail Bypass
02
Delayed tool invocation defers execution until a benign user action — bypassing real-time guardrails
T1548 – Abuse Elevation Control Mechanism

The injected prompt uses a technique called "delayed tool invocation": rather than triggering immediately (which would fire safety checks against the poison payload), the instructions stage the malicious action and wait for the user to perform a neutral follow-up — such as thanking Gemini. When the user types "thanks," Gemini re-enters a new inference step and the guardrails that evaluated the calendar retrieval do not re-evaluate the deferred instruction. The attacker's command executes with Gemini's full tool permissions.

Technique: Delayed tool invocation — action staged at retrieval time, fired on next benign user turn
Guardrail failure: Safety checks evaluated at prompt parse time, not at deferred execution time
Effect: Attacker instructions treated as trusted system-level directives with full tool access
Delayed Tool Invocation Jailbreak Guardrail Bypass T1548
🔍 Reconnaissance — Enumerating Connected Services
03
LLM queried for connected apps, available tools, and stored user memories — invisibly
T1082 – System Information Discovery

Following jailbreak, the injected prompt queries Gemini for its available tool inventory: connected agents (Google Calendar, Google Home, Gmail, Meet), installed mobile applications (Zoom, browser), and the user's stored memories and calendar data. Unlike classical malware reconnaissance — which precedes initial access — promptware recon occurs after context poisoning, because the LLM's tool inventory is only enumerable once the assistant is under attacker control. The enumeration results feed back into the attacker's context silently; nothing is displayed to the victim.

Information gathered: Active agents (Calendar, Home, Meet), installed apps (Zoom, browser), user memories
Key difference from classical recon: Occurs post-initial-access, not before — order is inverted
Visibility to victim: None — responses go to model context, not rendered in the chat UI
Tool Enumeration Context Poisoning Google Workspace T1082
💾 Persistence — Memory Poisoning
04
Malicious instructions written to long-term workspace memory — re-injected on every future session
T1546 – Event Triggered Execution

Because the malicious prompt is embedded in a Google Calendar artifact, it persists in the workspace's long-term agent memory. Every subsequent session where Gemini accesses calendar data re-injects the attacker's instructions — turning a one-time event into a durable implant. The parallel ZombAI attack (ChatGPT, Oct 2024) demonstrated the same mechanism more explicitly: a prompt injection write to ChatGPT's persistent memory store caused the model to fetch C2 instructions from an attacker-controlled GitHub page at the start of every new conversation, indefinitely.

Calendar mechanism: Poisoned event title re-retrieved on each calendar query across all future sessions
ZombAI mechanism: Prompt injection → write to ChatGPT long-term memory → C2 instructions injected into every conversation
Defence gap: No mechanism to audit, alert on, or require user consent for unexpected memory writes
Memory Poisoning ZombAI ChatGPT Memory Google Workspace T1546
📡 Command & Control
05
LLM beacons attacker-controlled server for updated instructions — C2 runs entirely through the prompt layer
T1071.001 – Web Protocols

ZombAI (Oct 2024): persisted memory instructs ChatGPT to fetch a GitHub Issues page at session start. The attacker posts updated instructions as sequential issues; a COUNTER increment in the payload defeats ChatGPT's page-caching so each beacon retrieves fresh commands. This was the first confirmed promptware-native C2 capability — the attacker remotely controlled the compromised ChatGPT instance with no conventional malware infrastructure. Reprompt (Jan 2026, CVE-2026-24307): a crafted Microsoft Copilot URL with a malicious q parameter caused Copilot to dynamically fetch follow-up instructions from an attacker server — exfiltrating session and profile data incrementally, with no limit on type or volume.

ZombAI C2 channel: GitHub Issues page; COUNTER field increments to defeat cache (issue #1, #2, #3…)
Reprompt C2 channel: Attacker HTTPS server; double-request technique bypasses Copilot guardrails on re-issue
What makes it novel: C2 channel runs entirely through the LLM prompt layer — no injected binary, no network backdoor
ZombAI Reprompt CVE-2026-24307 GitHub C2 T1071.001
🔄 Lateral Movement — Agent Pivot and Self-Replication
06
Compromised Calendar agent pivots to Zoom and Google Home — or self-replicates to every contact via email
T1534 – Internal Spearphishing

On-device lateral movement: the injected calendar prompt instructs Gemini to invoke a second agent or app. On mobile, Automatic App Invocation allows the assistant to launch Zoom, open a browser URL, or trigger Google Home actions (unlock smart windows, activate boiler) — all from a single compromised calendar entry. Off-device worm propagation: in a parallel threat class, a compromised email assistant is instructed to forward the poisoned payload to every address in the victim's contact book, achieving org-wide spread without any further attacker action. Nassi et al. demonstrated both paths; 73% of analysed threat classes were rated High–Critical.

On-device: Calendar Agent → Google Home (turn on boiler, open windows) or → Zoom (launch and stream)
Off-device worm: Infected email assistant self-replicates payload to entire contact list
Physical world impact: Smart home device control demonstrated — digital breach crosses into physical environment
Agent Pivot Worm Propagation Google Home Automatic App Invocation T1534
🎯 Actions on Objective — Covert Video Capture
07
Victim types "thanks" — Zoom launches silently and streams their camera to the attacker
T1125 – Video Capture

When the user enters a benign follow-up response, the staged delayed invocation fires: Gemini automatically launches Zoom and initiates a video session, covertly streaming the victim's camera. No camera indicator activates before the session starts; the victim has no warning. In Reprompt data-exfiltration variants, the attacker's C2 server incrementally extracts session context, personal profile data, and any detail inferred from prior responses — with the attacker dynamically refining queries based on each reply. Nassi et al. also demonstrated sending spam email from the victim's account, publishing disinformation, and controlling physical home devices as alternative objectives.

Primary objective: Covert Zoom video capture — triggered by victim typing "thanks"
Reprompt objective: Incremental data exfiltration — attacker probes for sensitive details based on prior C2 replies
Discovery: Google deployed mitigations after SafeBreach disclosure; Reprompt (CVE-2026-24307) patched Jan 13 2026
Zoom Livestream Data Exfiltration T1125 T1530 Physical Impact

🛡 How to Defend Against This Chain

Require explicit user confirmation before any agentic tool execution involving external apps. LLM assistants should interrupt and surface a confirmation prompt — not silently infer intent — before invoking installed apps like Zoom, sending emails, or writing to long-term memory. Google deployed this as a specific mitigation after the Calendar attack disclosure. This single control would have broken the chain at Step 2.
Treat all LLM-retrieved content as untrusted input. Calendar titles, email subjects, document names, and web page content must be sanitised before being added to context — analogous to parameterised SQL queries. Deploy content-inspection layers that detect instruction-like patterns (imperative sentences, tool invocation syntax) in retrieved workspace artifacts before they reach the model context.
Audit and gate long-term memory writes. Alert on unexpected writes to AI persistent memory stores (ChatGPT Memory, Copilot, Gemini Saved Info). No prompt injection should be able to persist instructions without explicit user consent and a visible confirmation flow. Log all memory mutations with the source artifact that triggered them.
Apply least-privilege to every AI agent's tool permissions. A calendar assistant should not have permission to launch video applications, send emails, or control smart home devices by default. Scope-limit every tool an LLM agent can invoke — apply the same discipline used for IAM roles and OAuth scopes. Review and revoke over-broad agent permissions in Google Workspace, Microsoft 365 Copilot, and any enterprise AI integration.
Block parameter-to-prompt URL patterns at the network and DLP layer. The Reprompt attack (CVE-2026-24307) used a q= URL parameter to prefill a malicious Copilot prompt. Enforce Microsoft Purview DLP policies for Copilot, flag outbound requests where AI query parameters contain instruction-like text, and monitor for LLM sessions issuing sequential requests to external servers — the signature of C2 chain-request exfiltration.