# code-gen
22 articlestagged with “code-gen”
AI Code Review Bypass Techniques
Techniques for crafting code changes that evade AI-powered security review tools while introducing vulnerabilities or backdoors.
AI-Generated Dependency Confusion
Exploiting LLM tendency to hallucinate package names for dependency confusion attacks.
AI Pair Programming Attacks
Attack vectors specific to AI pair programming workflows including suggestion manipulation, context injection, and trust exploitation.
CI/CD Code Generation Risks (Code Gen Security)
Security risks of AI-generated code in CI/CD pipelines including automated merge attacks, test generation manipulation, and pipeline injection.
Code Agent Sandbox Escape
Techniques for escaping sandboxed code execution environments in AI code agents.
Code Assistant Exploitation Techniques
Techniques for exploiting AI code assistants to generate insecure code or leak repository information.
Code Completion Data Exfiltration
Using code completion interfaces to exfiltrate sensitive data from development environments including secrets, API keys, and proprietary code.
Code Review AI Manipulation
Manipulating AI code review systems to approve vulnerable code or miss security issues.
Code Translation Attack Vectors
Exploiting AI code translation to introduce vulnerabilities during language migration.
Codebase Context Poisoning
Poisoning repository files that AI coding assistants use for context to influence code suggestions across the entire development team.
Commit Message Injection Attacks
Using crafted commit messages to inject adversarial instructions into AI code review tools that process git history for context.
Copilot Injection Attacks
Prompt injection through repository context that influences code generation suggestions.
Dependency Suggestion Attacks
Manipulating AI coding assistants to suggest malicious dependencies, typosquatted packages, or vulnerable library versions.
Autonomous Coding Agent Security
Security analysis of autonomous coding agents like Devin, including scope creep and unintended actions.
Documentation-Based Code Injection
Embedding adversarial instructions in code comments, docstrings, and documentation files that influence AI code generation.
IDE Plugin Injection Attacks
Exploiting IDE-integrated AI coding assistants through workspace context poisoning, configuration manipulation, and extension-based injection vectors.
Insecure Code Generation Patterns
Common patterns of insecure code generated by LLMs including injection, authentication, and crypto flaws.
Multi-File Context Attacks
Exploiting how AI coding assistants process multi-file context to create distributed injection payloads across repository files.
PR Review AI Manipulation
Techniques for manipulating AI-powered code review tools to approve malicious changes or miss security vulnerabilities.
Repository Context Poisoning
Poisoning repository context (README, comments, issues) to influence code generation behavior.
Test Generation Exploitation
Manipulating AI test generation to produce tests that pass but miss critical vulnerabilities.
Advanced Test Generation Manipulation
Advanced techniques for manipulating AI-generated tests to create false assurance by generating tests that pass but don't verify security properties.