# data-exfiltration
18 articles tagged with "data-exfiltration"
## Exploiting Agent Tool Use
How to manipulate AI agents into calling tools with attacker-controlled parameters, abusing tool capabilities for data exfiltration, privilege escalation, and unauthorized actions.
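One common mitigation for attacker-controlled tool parameters is an allowlist check on any network-reaching values before the runtime executes the call. The sketch below is illustrative only; the function names, domain list, and guard logic are assumptions, not an API from any real agent framework.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of hosts the agent's tools may contact.
ALLOWED_DOMAINS = {"api.internal.example.com", "docs.example.com"}

def url_is_allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host in ALLOWED_DOMAINS

def vet_tool_call(tool_name: str, params: dict) -> dict:
    """Reject tool calls whose string parameters smuggle external URLs."""
    url_pattern = re.compile(r"https?://\S+")
    for key, value in params.items():
        if not isinstance(value, str):
            continue
        for url in url_pattern.findall(value):
            if not url_is_allowed(url):
                raise ValueError(
                    f"blocked {tool_name}: param {key!r} targets "
                    f"unapproved host {urlparse(url).hostname}"
                )
    return params
```

Checking every string parameter, not just ones named `url`, matters because injected instructions often hide exfiltration endpoints in free-text fields.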
## Email Agent Exploitation
Techniques for exploiting AI agents that process, summarize, draft, and act on emails, including injection through email content, attachment-based attacks, and workflow manipulation.
## File System Agent Risks
Security risks of AI agents with file system access, including path traversal exploitation, symlink attacks, file content injection, data exfiltration through file operations, and privilege escalation via file system manipulation.
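The path traversal and symlink risks this article covers are usually countered by canonicalizing every requested path and confining it to one root. A minimal sketch, assuming the agent is sandboxed to a single workspace directory (`AGENT_ROOT` is a placeholder):

```python
from pathlib import Path

# Placeholder sandbox root for the agent's file operations.
AGENT_ROOT = Path("/srv/agent-workspace").resolve()

def safe_path(requested: str) -> Path:
    """Resolve a requested path and refuse anything outside the root.

    Path.resolve() canonicalizes ".." segments and follows symlinks,
    so both plain traversal and symlink escapes are caught by the
    same containment check.
    """
    candidate = (AGENT_ROOT / requested).resolve()
    if not candidate.is_relative_to(AGENT_ROOT):
        raise PermissionError(f"path escapes sandbox: {requested}")
    return candidate
```

`Path.is_relative_to` requires Python 3.9+; the key point is that the check runs on the fully resolved path, never on the raw user-supplied string.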
## Data Exfiltration Incident Response Playbook
Playbook for AI-mediated data exfiltration: identifying exposed data, determining exfiltration scope, data classification, breach notification procedures, and remediation.
## Case Study: ChatGPT Plugin Security Vulnerabilities
Analysis of security vulnerabilities discovered in the ChatGPT plugin ecosystem, including OAuth hijacking, cross-plugin data exfiltration, and prompt injection through plugin responses.
## Case Study: Indirect Prompt Injection in Email AI Assistants
Analysis of indirect prompt injection attacks targeting AI-powered email assistants, where adversarial instructions embedded in emails hijack the assistant's behavior to exfiltrate data, send unauthorized messages, or manipulate user actions.
## Cross-Cloud Attack Scenarios
Red team attack scenarios spanning multiple cloud providers: credential pivoting between AWS, Azure, and GCP, data exfiltration across cloud boundaries, and model portability risks.
## Fine-Tuning API Abuse
How fine-tuning APIs are abused to create uncensored models, circumvent content policies, and attempt training data exfiltration, exposing the gap between acceptable use policies and technical enforcement.
## Attacking Experiment Tracking Systems
Techniques for exploiting experiment tracking platforms like MLflow, Weights & Biases, Neptune, and CometML, including data exfiltration, metric manipulation, experiment injection, and leveraging tracking metadata for reconnaissance.
## Exfiltrating Data Through AI Telemetry and Logging
Using AI system telemetry, logging pipelines, and observability infrastructure as covert channels for data exfiltration.
## Basic Data Exfiltration Techniques
Extract sensitive information from LLM applications using social engineering and misdirection.
## Lab: Markdown Injection
Inject images, links, and formatting into LLM responses that exfiltrate data or alter display rendering in chat interfaces.
## Lab: Data Exfiltration Channels
Hands-on lab for extracting data from AI systems through markdown image rendering, invisible links, tool call parameters, and other covert exfiltration channels.
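The markdown-image channel this lab exercises is typically closed on the output side by stripping images that render from untrusted hosts, so stolen context cannot ride out in an image URL. A minimal sketch; the trusted-host list and function names are illustrative assumptions:

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of hosts images may render from.
TRUSTED_IMAGE_HOSTS = {"cdn.example.com"}

# Matches markdown images: ![alt](url ...), capturing the URL.
MD_IMAGE = re.compile(r"!\[[^\]]*\]\(([^)\s]+)[^)]*\)")

def scrub_markdown_images(llm_output: str) -> str:
    """Drop markdown images that point at untrusted hosts."""
    def replace(match: re.Match) -> str:
        host = urlparse(match.group(1)).hostname or ""
        if host in TRUSTED_IMAGE_HOSTS:
            return match.group(0)
        return "[image removed: untrusted host]"
    return MD_IMAGE.sub(replace, llm_output)
```

The same allowlist approach extends to plain links, though links need a click while images exfiltrate on render, which is why images are the higher-priority channel to scrub.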
## Lab: Data Exfiltration Channels (Intermediate Lab)
Extract sensitive information from AI systems through various exfiltration channels including crafted links, image tags, tool calls, and side-channel leakage.
## Lab: Data Exfiltration Techniques
Hands-on lab for extracting sensitive data from AI systems including system prompt extraction, context leakage via markdown rendering, and URL-based data exfiltration.
## Simulation: Enterprise Chatbot Engagement
Full red team engagement simulation targeting a customer-facing chatbot deployed by a fictional e-commerce company, covering reconnaissance, prompt injection, data exfiltration, and PII harvesting.
## Data Harvesting via Injection
Using injection techniques to extract training data, system prompts, user data, and other sensitive information from LLM applications.
## RAG System Red Team Engagement
Complete walkthrough for testing RAG applications: document injection, cross-scope retrieval exploitation, embedding manipulation, data exfiltration through retrieval, and chunk boundary attacks.