# embeddings
23 articlestagged with “embeddings”
Embedding & Vector Security Assessment (Assessment)
Test your understanding of embedding inversion attacks, vector database security, similarity search manipulation, and privacy risks of stored embeddings with 10 questions.
RAG & Data Attack Assessment
Test your knowledge of Retrieval-Augmented Generation attack vectors, knowledge base poisoning, embedding manipulation, and data exfiltration through RAG systems with 10 intermediate-level questions.
Privacy Attacks on Embeddings
Recovering sensitive information from embedding vectors through inversion attacks, attribute inference, and reconstruction techniques.
RAG Pipeline Exploitation
Methodology for attacking Retrieval-Augmented Generation pipelines: knowledge poisoning, chunk boundary manipulation, retrieval score gaming, cross-tenant leakage, GraphRAG attacks, and metadata injection.
Knowledge Base Poisoning
Techniques for injecting adversarial documents into RAG knowledge bases: ingestion path analysis, embedding space attacks, SEO-style ranking manipulation, staged poisoning, and effectiveness measurement.
Semantic Similarity-Based Defense
Using embedding similarity analysis to detect adversarial inputs that are semantically close to known attack patterns.
Semantic Similarity-Based Defense (Defense Mitigation)
Using semantic similarity to detect prompt injection by comparing user inputs against known attack patterns.
Embedding Model Security Comparison
Security comparison of major embedding models — OpenAI, Cohere, sentence-transformers — covering vulnerability profiles, adversarial robustness, and privacy characteristics.
Embedding Privacy
What embeddings reveal about source data — covering embedding inversion attacks, membership inference, attribute inference, privacy-preserving embedding techniques, and regulatory implications.
Embedding & Vector Security
How embeddings create a hidden attack surface in AI systems: vector database security boundaries, embedding-level attacks, and RAG retrieval manipulation.
Embeddings & Vector Spaces for Red Teamers
Understand how embeddings encode semantic meaning, how vector operations work, and why red teamers need to understand embedding spaces for RAG attacks and similarity exploitation.
Lab: Exploring Embedding Spaces
Hands-on lab using Python to visualize embedding spaces, measure semantic similarity, and demonstrate how adversarial documents can be crafted to match target queries.
Foundations
Essential building blocks for AI red teaming, covering red team methodology, the AI landscape, how LLMs work, embeddings and vector systems, AI system architecture, and adversarial machine learning concepts.
Embedding Space Exploitation
Techniques for exploiting embedding geometry, performing inversion attacks, crafting adversarial perturbations, and poisoning RAG systems via nearest-neighbor manipulation.
LLM Internals
Deep technical exploration of LLM internal mechanisms for exploit development, covering activation analysis, alignment bypass primitives, and embedding space exploitation.
Lab: Embedding Fundamentals for Red Teamers
Learn embedding fundamentals including vector similarity, semantic search, and how embeddings enable RAG systems.
Lab: Embedding Space Manipulation
Hands-on lab for crafting documents optimized to be retrieved for specific queries through embedding collision attacks using sentence-transformers.
Embedding Similarity Attacks
Manipulate text to achieve target embedding similarity scores for retrieval poisoning.
Semantic Search Poisoning
Craft adversarial documents that rank highly in semantic search for targeted queries in RAG systems.
Simulation: RAG Pipeline Poisoning
Red team engagement simulation targeting a RAG-based knowledge management system, covering embedding injection, document poisoning, retrieval manipulation, and knowledge base exfiltration.
Embedding Space Attacks
Techniques for attacking the embedding layer of LLMs, including adversarial perturbations, embedding inversion, and semantic space manipulation.
RAG Retrieval Poisoning (Rag Data Attacks)
Techniques for poisoning RAG knowledge bases to inject malicious content into LLM context, including embedding manipulation, document crafting, and retrieval hijacking.
Semantic Similarity Detection
Step-by-step walkthrough for using text embeddings to detect semantically similar prompt injection attempts, covering embedding model selection, vector database setup, similarity threshold tuning, and production deployment.