# backdoors
4 articles tagged with “backdoors”
## Agent Memory Poisoning
Techniques for poisoning AI agent short-term and long-term memory systems to achieve persistent compromise, inject behavioral backdoors, and survive conversation resets.
## Memory Poisoning Techniques
Advanced techniques for injecting persistent instructions into AI agent memory systems, including semantic trojans, self-reinforcing payloads, dormant backdoors, and cross-session persistence mechanisms.
## Repository Poisoning for Code Models
Techniques for poisoning code repositories to influence code generation models, including training data poisoning through popular repositories, backdoor injection in open-source dependencies, and supply chain attacks targeting code model training pipelines.
## Training Data Manipulation
Attacks that corrupt model behavior by poisoning training data, fine-tuning datasets, or RLHF preference data, including backdoor installation and safety alignment removal.