# code-generation
12 articles tagged "code-generation"
Code Agent Manipulation
Techniques for manipulating AI agents that generate, execute, and review code, including injection through code context, repository poisoning, execution environment attacks, and code review manipulation.
Code Generation Security Assessment
Assessment covering code assistant exploitation, insecure code generation, and attacks on AI code review.
Code Generation Security Assessment (Assessment)
Test your knowledge of AI code generation security including coding assistant risks, suggestion poisoning, IDE integration threats, and secure AI-assisted development with 15 questions.
Advanced Code Generation Security Assessment
Advanced assessment on autonomous coding agents, sandbox escapes, and supply chain attacks.
Case Study: GitHub Copilot Generating Vulnerable Code
Analysis of research findings demonstrating that GitHub Copilot and similar AI code assistants systematically generate code containing security vulnerabilities, and the implications for software supply chain security.
Case Study: Training Data Poisoning in Code Generation Models
Analysis of training data poisoning attacks targeting code generation models like GitHub Copilot and OpenAI Codex, where adversarial code patterns in training data cause models to suggest vulnerable or malicious code.
Code Generation Security
How AI coding assistants introduce security vulnerabilities through suggestion poisoning, training data extraction, insecure code generation, and IDE extension risks.
Code Suggestion Poisoning
Overview of attacks that manipulate AI coding assistant suggestions through training data poisoning and inference-time context manipulation.
Model Types and Their Attack Surfaces
How text, vision, multimodal, embedding, and code generation models each present unique vulnerabilities and attack surfaces for red teamers.
Code Generation Model Attacks
Overview of security risks in AI-powered code generation: Copilot, Cursor, code completion models, IDE integration attack surfaces, and code-specific exploitation techniques.
CTF: Code Gen Exploit
Manipulate AI code generation to produce vulnerable, backdoored, or malicious code. Explore how prompt manipulation influences code security, from subtle vulnerability injection to full backdoor insertion.
Lab: Code Generation Security Testing
Test LLM code generation for insecure patterns, injection vulnerabilities, and code execution safety issues.