# suggestion-poisoning
6 articles tagged "suggestion-poisoning"
Code Generation Security Assessment (Assessment)
Test your knowledge of AI code generation security with 15 questions covering coding assistant risks, suggestion poisoning, IDE integration threats, and secure AI-assisted development.
assessment, code-generation, coding-assistants, suggestion-poisoning, secure-development
GitHub Copilot Attacks
Attack techniques targeting GitHub Copilot: suggestion manipulation via repository poisoning, context window injection, training data extraction, and proxy-based interception.
copilot, github, suggestion-poisoning, context-injection, training-data-extraction, proxy-interception
Code Suggestion Poisoning
Overview of attacks that manipulate AI coding assistant suggestions through training data poisoning and inference-time context manipulation.
suggestion-poisoning, training-data, context-manipulation, supply-chain, code-generation
GitHub Copilot Attacks
Attack techniques targeting GitHub Copilot: suggestion manipulation via repository poisoning, context window injection, training data extraction, and proxy-based interception.
copilot, github, suggestion-poisoning, context-injection, training-data-extraction, proxy-interception
Code Suggestion Poisoning
Overview of attacks that manipulate AI coding assistant suggestions through training data poisoning and inference-time context manipulation.
suggestion-poisoning, training-data, context-manipulation, supply-chain, code-generation
Code Generation Model Security Research
Frontier security research on code generation models, covering Copilot exploitation, suggestion poisoning, repository poisoning, and the security of AI-driven development tools.
code-models, copilot, suggestion-poisoning, research