# code-generation
12 articles tagged with “code-generation”
## Code Agent Manipulation
Techniques for manipulating AI agents that generate, execute, and review code, including injection through code context, repository poisoning, execution environment attacks, and code review manipulation.
## Code Generation Security Assessment
Assessment covering code assistant exploitation, insecure code generation, and attacks on AI code review.
## Code Generation Security Assessment (Assessment)
Test your knowledge of AI code generation security across 15 questions covering coding assistant risks, suggestion poisoning, IDE integration threats, and secure AI-assisted development.
## Advanced Code Generation Security Assessment
Advanced assessment on autonomous coding agents, sandbox escapes, and supply chain attacks.
## Case Study: GitHub Copilot Generating Vulnerable Code
Analysis of research findings demonstrating that GitHub Copilot and similar AI code assistants systematically generate code containing security vulnerabilities, and the implications for software supply chain security.
## Case Study: Training Data Poisoning in Code Generation Models
Analysis of training data poisoning attacks targeting code generation models like GitHub Copilot and OpenAI Codex, where adversarial code patterns in training data cause models to suggest vulnerable or malicious code.
## Code Generation Security
How AI coding assistants introduce security vulnerabilities through suggestion poisoning, training data extraction, insecure code generation, and insecure IDE extensions.
## Code Suggestion Poisoning
Overview of attacks that manipulate AI coding assistant suggestions through training data poisoning and inference-time context manipulation.
## Model Types and Their Attack Surfaces
How text, vision, multimodal, embedding, and code generation models each present unique vulnerabilities and attack surfaces for red teamers.
## Code Generation Model Attacks
Overview of security risks in AI-powered code generation: Copilot, Cursor, code completion models, IDE integration attack surfaces, and code-specific exploitation techniques.
## CTF: Code Gen Exploit
Manipulate AI code generation to produce vulnerable, backdoored, or malicious code. Explore how prompt manipulation influences code security, from subtle vulnerability injection to full backdoor insertion.
## Lab: Code Generation Security Testing
Test LLM code generation for insecure patterns, injection vulnerabilities, and code execution safety issues.