# code-models
4 articles tagged with “code-models”
## Training Data Attacks on Code Models
Poisoning training data for code generation models: inserting vulnerable patterns into popular repositories, exploiting dependency confusion via model suggestions, and embedding trojan code patterns.
## Training Data Extraction from Code Models
Techniques for recovering proprietary code from code generation model weights — covering memorization detection, targeted extraction, membership inference, and defensive countermeasures.
## Repository Poisoning for Code Models
Techniques for poisoning code repositories to influence code generation models: seeding training data through popular repositories, injecting backdoors into open-source dependencies, and supply chain attacks targeting code model training pipelines.
## Frontier Research
Cutting-edge AI security research covering reasoning model attacks, code generation security, computer use agents, AI-powered red teaming, robotics and embodied AI, and alignment faking.