# embeddings
43 articles tagged "embeddings"
Embedding & Vector Security Assessment
Test your understanding of embedding inversion attacks, vector database security, similarity search manipulation, and privacy risks of stored embeddings with 10 questions.
RAG & Data Attack Assessment
Test your knowledge of Retrieval-Augmented Generation attack vectors, knowledge base poisoning, embedding manipulation, and data exfiltration through RAG systems with 10 intermediate-level questions.
Privacy Attacks on Embeddings
Recovering sensitive information from embedding vectors through inversion attacks, attribute inference, and reconstruction techniques.
RAG Pipeline Exploitation
Methodology for attacking Retrieval-Augmented Generation pipelines: knowledge poisoning, chunk boundary manipulation, retrieval score gaming, cross-tenant leakage, GraphRAG attacks, and metadata injection.
Knowledge Base Poisoning
Techniques for injecting adversarial documents into RAG knowledge bases: ingestion path analysis, embedding space attacks, SEO-style ranking manipulation, staged poisoning, and effectiveness measurement.
Semantic Similarity-Based Defense
Using embedding similarity analysis to detect adversarial inputs that are semantically close to known attack patterns.
Semantic Similarity-Based Defense (Defense Mitigation)
Using semantic similarity to detect prompt injection by comparing user inputs against known attack patterns.
Embedding Model Security Comparison
Security comparison of major embedding models — OpenAI, Cohere, sentence-transformers — covering vulnerability profiles, adversarial robustness, and privacy characteristics.
Embedding Privacy
What embeddings reveal about source data — covering embedding inversion attacks, membership inference, attribute inference, privacy-preserving embedding techniques, and regulatory implications.
Embedding & Vector Security
How embeddings create a hidden attack surface in AI systems: vector database security boundaries, embedding-level attacks, and RAG retrieval manipulation.
Embeddings & Vector Spaces for Red Teamers
Understand how embeddings encode semantic meaning, how vector operations work, and why red teamers need to understand embedding spaces for RAG attacks and similarity exploitation.
Lab: Exploring Embedding Spaces
Hands-on lab using Python to visualize embedding spaces, measure semantic similarity, and demonstrate how adversarial documents can be crafted to match target queries.
Foundations
Essential building blocks for AI red teaming, covering red team methodology, the AI landscape, how LLMs work, embeddings and vector systems, AI system architecture, and adversarial machine learning concepts.
Embedding Space Exploitation
Techniques for exploiting embedding geometry, performing inversion attacks, crafting adversarial perturbations, and poisoning RAG systems via nearest-neighbor manipulation.
LLM Internals
Deep technical exploration of LLM internal mechanisms for exploit development, covering activation analysis, alignment bypass primitives, and embedding space exploitation.
Lab: Embedding Fundamentals for Red Teamers
Learn embedding fundamentals including vector similarity, semantic search, and how embeddings enable RAG systems.
Lab: Embedding Space Manipulation
Hands-on lab for crafting documents optimized to be retrieved for specific queries through embedding collision attacks using sentence-transformers.
Embedding Similarity Attacks
Manipulate text to achieve target embedding similarity scores for retrieval poisoning.
Semantic Search Poisoning
Craft adversarial documents that rank highly in semantic search for targeted queries in RAG systems.
Simulation: RAG Pipeline Poisoning
Red team engagement simulation targeting a RAG-based knowledge management system, covering embedding injection, document poisoning, retrieval manipulation, and knowledge base exfiltration.
Embedding Space Attacks
Techniques for attacking the embedding layer of LLMs, including adversarial perturbations, embedding inversion, and semantic space manipulation.
RAG Retrieval Poisoning (RAG Data Attacks)
Techniques for poisoning RAG knowledge bases to inject malicious content into LLM context, including embedding manipulation, document crafting, and retrieval hijacking.
Semantic Similarity Detection
Step-by-step walkthrough for using text embeddings to detect semantically similar prompt injection attempts, covering embedding model selection, vector database setup, similarity threshold tuning, and production deployment.
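Several of the entries above (similarity-based defense, embedding similarity attacks, semantic search poisoning) hinge on the same primitive: cosine similarity between embedding vectors. A minimal NumPy sketch with toy vectors standing in for real model output (the labs referenced here use sentence-transformers, but no specific model or dimensionality is assumed):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" standing in for embedding-model output.
known_attack = np.array([0.9, 0.1, 0.0, 0.2])  # stored pattern of a known injection
user_input   = np.array([0.8, 0.2, 0.1, 0.3])  # semantically close to the attack
benign_input = np.array([0.0, 0.9, 0.8, 0.1])  # semantically unrelated

print(cosine_similarity(known_attack, user_input))    # high: flag for review
print(cosine_similarity(known_attack, benign_input))  # low: passes the threshold
```

The same comparison drives both sides of the coin: defenders threshold the score against known attack patterns, while attackers craft documents that maximize it against anticipated queries.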
Chapter Assessment: Embeddings
A 15-question calibrated assessment testing your understanding of embeddings and vector security.
Privacy Attacks on Embeddings
Recovering sensitive information from embedding vectors through inversion attacks, attribute inference, and reconstruction techniques.
RAG Pipeline Exploitation
Methodology for attacking Retrieval-Augmented Generation pipelines: knowledge poisoning, chunk boundary manipulation, retrieval score gaming, cross-tenant leakage, GraphRAG attacks, and metadata injection.
Knowledge Base Poisoning
Techniques for injecting adversarial documents into RAG knowledge bases: ingestion path analysis, embedding space attacks, SEO-style ranking manipulation, staged poisoning, and effectiveness measurement.
Semantic Similarity-Based Defense
Using embedding similarity analysis to detect adversarial inputs that are semantically close to known attack patterns.
Semantic Similarity-Based Defense (Defense Mitigation)
Using semantic similarity to detect prompt injection by comparing user inputs against known attack patterns.
Embedding Model Security Comparison
Comparison of security properties across embedding models, covering inversion resistance, privacy properties, and adversarial robustness for OpenAI, Cohere, Voyage, and open-source models.
Embedding & Vector Security
How embeddings create a hidden attack surface in AI systems: vector database security boundaries, embedding-level attacks, and RAG retrieval manipulation.
Embeddings & Vector Spaces for Red Teamers
Understand how embeddings encode semantic meaning, how vector operations work, and why red teamers need to understand embedding spaces for RAG attacks and similarity exploitation.
Lab: Exploring Embedding Spaces
Hands-on lab using Python to visualize embedding spaces, measure semantic similarity, and demonstrate how adversarial documents can be crafted to match target queries.
Foundations
Essential building blocks for AI red teaming, covering red team methodology, the AI landscape, how LLMs work, embeddings and vector systems, AI system architecture, and adversarial machine learning concepts.
Embedding Space Exploitation
Techniques for exploiting embedding geometry, performing inversion attacks, crafting adversarial perturbations, and poisoning RAG systems via nearest-neighbor manipulation.
LLM Internals
Deep technical exploration of LLM internal mechanisms for exploit development, covering activation analysis, alignment bypass primitives, and embedding space exploitation.
Lab: Embedding Fundamentals for Red Teamers
Learn embedding fundamentals including vector similarity, semantic search, and how embeddings enable RAG systems.
Lab: Embedding Space Manipulation
Hands-on lab for crafting documents optimized to be retrieved for specific queries through embedding collision attacks using sentence-transformers.
Embedding Similarity Attacks
Manipulate text to achieve target embedding similarity scores for retrieval poisoning.
Semantic Search Poisoning
Craft adversarial documents that rank highly in semantic search for targeted queries in RAG systems.
Simulation: RAG Pipeline Poisoning
Red team engagement simulation targeting a RAG-based knowledge management system, covering embedding injection, document poisoning, retrieval manipulation, and knowledge base exfiltration.
Embedding Manipulation
Attacking the vector spaces where models represent meaning, covering adversarial embedding crafting, embedding space poisoning, and semantic collision attacks.
Semantic Similarity Detection
Step-by-step walkthrough for using text embeddings to detect semantically similar prompt injection attempts, covering embedding model selection, vector database setup, similarity threshold tuning, and production deployment.