# persistence
41 articles tagged "persistence"
Agent Memory Poisoning
Techniques for injecting malicious content into agent memory systems -- conversation history, RAG stores, and vector databases -- to achieve persistent cross-session compromise.
Rogue and Shadow Agents
How compromised, misaligned, or unauthorized AI agents operate within systems -- rogue agents that act harmfully while appearing legitimate, and shadow agents deployed without security review.
Agent Memory Poisoning
Techniques for poisoning AI agent short-term and long-term memory systems to achieve persistent compromise, inject behavioral backdoors, and survive conversation resets.
Cross-Session Attack Persistence
Achieving attack persistence across separate agent sessions through memory manipulation.
Memory Compaction Exploitation
Exploiting memory summarization and compaction processes to persist adversarial instructions across compression cycles.
Memory Deletion Prevention Attacks
Techniques for making adversarial memories resistant to cleanup, deletion, and purging operations.
Memory Poisoning Techniques
Advanced techniques for injecting persistent instructions into AI agent memory systems, including semantic trojans, self-reinforcing payloads, dormant backdoors, and cross-session persistence mechanisms.
Backdoor Trigger Design
Methodology for designing effective backdoor triggers for LLMs, covering trigger taxonomy, poison rate optimization, trigger-target mapping, multi-trigger systems, evaluation evasion, and persistence through fine-tuning.
Checkpoint Manipulation Attacks
Intercepting and modifying model checkpoints during the fine-tuning process to inject persistent backdoors or remove safety properties.
Agent Memory Injection for Persistent Access
Inject persistent instructions into agent memory systems that survive across conversation sessions.
Lab: Agent Memory Manipulation
Hands-on lab for injecting persistent instructions into an agent's memory and context that affect future interactions and conversations.
Lab: Multi-Turn Attack Campaigns
Hands-on lab for executing multi-turn crescendo attacks against LLMs, measuring safety degradation over conversation length, and building persistent attack campaigns.
Conversation Steering
Techniques for gradually redirecting conversation context toward attack objectives without triggering safety mechanisms.
Cross-Context Injection (Prompt Injection)
Prompt injection techniques that persist across context boundaries: surviving conversation resets, session switches, memory boundaries, and multi-agent handoffs.
Persona Establishment
Creating persistent alternate identities that survive across conversation turns, including character locking, identity anchoring, and progressive persona building.
Adversarial Persistence Mechanisms
Techniques for maintaining persistent access to AI systems including conversation memory manipulation, cached response poisoning, and model weight persistence.
Persistence in AI Systems
Achieving persistent access and influence in AI systems through memory, fine-tuning, and context manipulation.
Memory Persistence Attack Walkthrough
Walkthrough of achieving persistent memory manipulation in agent systems for cross-session influence.
Memory Poisoning Step by Step
Walkthrough of persisting injection payloads in agent memory systems to achieve long-term compromise of LLM-based agents.
Memory Injection and Persistence Walkthrough
Walkthrough of injecting persistent instructions into agent memory systems that survive across sessions.
Agent Memory Poisoning
Techniques for injecting malicious content into agent memory systems -- conversation history, RAG stores, and vector databases -- to achieve persistent cross-session compromise.
Rogue and Shadow Agents
How compromised, misaligned, or unauthorized AI agents operate within systems -- rogue agents that act harmfully while appearing legitimate, and shadow agents deployed without security review.
Agent Memory Poisoning
Techniques for poisoning AI agent short-term and long-term memory systems to achieve persistent compromise, inject behavioral backdoors, and survive conversation resets.
Cross-Session Attack Persistence
Achieving attack persistence across separate agent sessions through memory manipulation.
Agent Memory System Security
Attacks against persistent agent memory stores -- covering memory poisoning, context manipulation, exfiltration attacks, and self-reinforcing memory payloads.
Memory Compaction Exploitation
Exploiting memory summarization and compaction processes to persist adversarial instructions across compression cycles.
Memory Deletion Prevention Attacks
Techniques for making adversarial memories resistant to cleanup, deletion, and purging operations.
Memory Poisoning Techniques
Advanced techniques for injecting persistent instructions into AI agent memory systems, including semantic trojans, self-reinforcing payloads, dormant backdoors, and cross-session persistence mechanisms.
Backdoor Trigger Design
Methodology for designing effective backdoor triggers for LLMs, covering trigger taxonomy, poison rate optimization, trigger-target mapping, multi-trigger systems, evaluation evasion, and persistence through fine-tuning.
Checkpoint Manipulation Attacks
Intercepting and modifying model checkpoints during the fine-tuning process to inject persistent backdoors or remove safety properties.
Agent Memory Injection for Persistent Access
Inject persistent instructions into agent memory systems that survive across conversation sessions.
Lab: Agent Memory Manipulation
Hands-on lab for injecting persistent instructions into an agent's memory and context that affect future interactions and conversations.
Lab: Multi-Turn Attack Campaigns
Hands-on lab for executing multi-turn crescendo attacks against LLMs, measuring safety degradation over conversation length, and building persistent attack campaigns.
Conversation Steering
Techniques for gradually redirecting conversation context toward attack objectives without triggering safety mechanisms.
Cross-Context Injection (Prompt Injection)
Prompt injection techniques that persist across context boundaries: surviving conversation resets, session switches, memory boundaries, and multi-agent handoffs.
Persona Establishment
Creating persistent alternate identities that survive across conversation turns and resist reversion to default behavior, including character locking, identity anchoring, and progressive persona building.
Adversarial Persistence Mechanisms
Techniques for maintaining persistent access to AI systems including conversation memory manipulation, cached response poisoning, and model weight persistence.
Persistence in AI Systems
Achieving persistent access and influence in AI systems through memory, fine-tuning, and context manipulation.
Memory Persistence Attack Walkthrough
Walkthrough of achieving persistent memory manipulation in agent systems for cross-session influence.
Memory Poisoning Step by Step
Walkthrough of persisting injection payloads in agent memory systems to achieve long-term compromise of LLM-based agents.
Memory Injection and Persistence Walkthrough
Walkthrough of injecting persistent instructions into agent memory systems that survive across sessions.