# model-extraction
19 articles tagged with “model-extraction”
## Model Extraction & Privacy Assessment
A 9-question quiz testing advanced knowledge of model extraction, model stealing, membership inference, and intellectual property theft attacks against AI systems.
## Case Study: Real-World Model Extraction
Analysis of documented model extraction attacks against commercial ML APIs.
## Data & Training Security
Security vulnerabilities in the AI data pipeline, covering RAG exploitation, training data attacks, model extraction and intellectual property theft, and privacy attacks against deployed models.
## API-Based Model Extraction
Deep dive into extracting proprietary model capabilities through systematic API querying, active learning strategies, logprob exploitation, soft-label distillation, and evasion of query anomaly detection systems.
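The soft-label distillation loop this article covers can be sketched in a few lines. The snippet below is a toy illustration, not the article's code: the "API" is a stand-in function hiding a logistic model, and the attacker fits a student only from queried probabilities, then measures fidelity (agreement with the target) on held-out queries.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_api(x):
    # Stand-in for a proprietary API returning soft-label probabilities.
    # The hidden weights are a hypothetical target the attacker never sees.
    w_hidden = np.array([1.5, -2.0, 0.7, 3.1, -1.2])
    return 1.0 / (1.0 + np.exp(-x @ w_hidden))

# 1. Query the black box over a synthetic input distribution.
X = rng.normal(size=(2000, 5))
soft_labels = target_api(X)  # soft-label / logprob exploitation

# 2. Distill a student by minimizing cross-entropy against the soft labels.
w_student = np.zeros(5)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w_student))
    w_student -= 0.5 * X.T @ (p - soft_labels) / len(X)

# 3. Fidelity: fraction of held-out queries where student and target agree.
X_test = rng.normal(size=(1000, 5))
fidelity = np.mean((target_api(X_test) > 0.5) == ((X_test @ w_student) > 0))
print(f"fidelity: {fidelity:.3f}")
```

Soft labels carry far more information per query than hard labels, which is why many real extraction attacks target logprob or probability endpoints first.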
## Model Extraction & IP Theft
Methodology for black-box model extraction, API-based distillation, side-channel extraction, watermark removal, and model fingerprinting bypass targeting deployed AI systems.
## Side-Channel Model Attacks
Deep dive into inferring model architecture, size, and deployment details through timing analysis, cache-based attacks, power/electromagnetic side channels, embedding endpoint exploitation, and architecture fingerprinting.
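The timing-analysis idea reduces to a closest-profile test: collect latency samples from the target and attribute it to whichever candidate deployment they best match. The sketch below simulates this with made-up per-token latency profiles; real profiles would be measured against reference deployments, and real attacks need far more careful statistics.

```python
import random
import statistics

random.seed(1)

# Hypothetical per-token latency profiles (mean, stddev) in seconds for two
# candidate model sizes; these numbers are illustrative assumptions only.
CANDIDATES = {"7B": (0.012, 0.002), "70B": (0.095, 0.010)}

def observe_target(n=50):
    # Stand-in for timing n requests against the victim endpoint.
    # In this simulation the victim secretly follows the 70B profile.
    mu, sigma = CANDIDATES["70B"]
    return [random.gauss(mu, sigma) for _ in range(n)]

samples = observe_target()
mean_latency = statistics.fmean(samples)

# Fingerprint: attribute the target to the nearest candidate profile.
guess = min(CANDIDATES, key=lambda k: abs(CANDIDATES[k][0] - mean_latency))
print(guess)
```

Even this crude mean-latency test separates well-spaced size classes; the article's cache and power/EM channels follow the same collect-then-attribute pattern with richer features.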
## Watermark & Fingerprint Evasion
Deep dive into detecting and removing output watermarks, degrading weight watermarks, evading model fingerprinting, building provenance-stripping pipelines, and understanding the legal landscape of model ownership verification.
## Embedding Model Extraction
Extracting embedding model behavior through systematic API querying.
## Algorithmic Trading AI Attacks
Attack techniques for AI-powered trading systems, including market manipulation via adversarial inputs, model extraction from trading APIs, flash crash induction, and sentiment analysis poisoning.
## Medical Imaging AI Attacks
Adversarial attacks on medical imaging AI systems, including perturbations on X-rays, CT scans, and MRIs, GAN-based fake medical image generation, and model extraction from diagnostic imaging APIs.
## Advanced Rate Limiting Strategies for LLM API Endpoints
Designing, attacking, and defending rate limiting systems for LLM inference APIs to prevent abuse, model extraction, and resource exhaustion.
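A token bucket is the workhorse behind most of the rate-limiting designs that article discusses. The minimal sketch below uses names of our own choosing (not any particular gateway's API) and takes an injectable clock so the refill behavior can be demonstrated deterministically.

```python
import time

class TokenBucket:
    """Illustrative token-bucket limiter: `capacity` tokens, refilled at
    `rate` tokens/second; each request spends one token."""

    def __init__(self, capacity, rate, clock=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.clock = clock
        self.tokens = float(capacity)
        self.last = clock()

    def allow(self):
        # Refill proportionally to elapsed time, capped at capacity.
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Deterministic demo with a fake clock: a burst of 3 is allowed, the 4th
# request is denied, and one token refills after a simulated second.
t = [0.0]
bucket = TokenBucket(capacity=3, rate=1.0, clock=lambda: t[0])
burst = [bucket.allow() for _ in range(4)]
t[0] = 1.0
after_refill = bucket.allow()
print(burst, after_refill)
```

Against extraction specifically, per-key buckets are usually paired with longer-horizon query budgets, since slow-and-steady querying sails under any burst limiter.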
## Model Extraction via API Access
Extract a functionally equivalent model using only API query access.
## Model Extraction via Knowledge Distillation
Extract a functionally equivalent model from a commercial API using systematic distillation queries.
## Lab: Basic Model Extraction
Hands-on lab for API-based model extraction attacks, querying a target model to approximate its behavior, measuring fidelity, and understanding query budgets.
## Model Extraction from Multimodal Systems
Techniques for extracting model capabilities, weights, and architecture details from multimodal AI systems through visual, audio, and cross-modal query strategies.
## Extracting Training Data
Techniques for extracting memorized training data, system prompts, and private information from LLMs through targeted querying and membership inference attacks.
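The membership inference component reduces, in its simplest form, to a loss-threshold test (in the style of Yeom et al.): examples the model memorized tend to have lower loss. The snippet below simulates this with assumed loss distributions, purely for illustration; real attacks calibrate the threshold against shadow models.

```python
import random

random.seed(7)

# Simulated per-example losses: members (seen in training) tend to score
# lower than non-members. These distributions are illustrative assumptions,
# not measurements from any real model.
members = [random.gauss(0.5, 0.3) for _ in range(500)]
nonmembers = [random.gauss(1.5, 0.3) for _ in range(500)]

# Threshold attack: predict "member" whenever loss falls below a cutoff.
threshold = 1.0
tp = sum(loss < threshold for loss in members)
tn = sum(loss >= threshold for loss in nonmembers)
attack_accuracy = (tp + tn) / (len(members) + len(nonmembers))
print(f"attack accuracy: {attack_accuracy:.2f}")
```

An attack accuracy well above 0.5 signals memorization, which is exactly the leakage that makes verbatim training-data extraction possible.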
## Distillation-Based Model Extraction
Using knowledge distillation for model theft: student-teacher extraction attacks, API-based distillation, task-specific extraction, and defending against distillation-based model stealing.
## Model Extraction Attack Walkthrough
Walkthrough of extracting model weights/behavior through systematic API querying.
## AWS SageMaker Red Teaming
End-to-end walkthrough for red teaming ML models deployed on AWS SageMaker: endpoint enumeration, IAM policy analysis, model extraction testing, inference pipeline exploitation, and CloudTrail log review.