# copilot
標記為「copilot」的 28 篇文章
Case Study: GitHub Copilot Code Injection
Analysis of prompt injection vulnerabilities in GitHub Copilot through malicious repository content.
Case Study: GitHub Copilot Generating Vulnerable Code
Analysis of research findings demonstrating that GitHub Copilot and similar AI code assistants systematically generate code containing security vulnerabilities, and the implications for software supply chain security.
Case Study: Training Data Poisoning in Code Generation Models
Analysis of training data poisoning attacks targeting code generation models like GitHub Copilot and OpenAI Codex, where adversarial code patterns in training data cause models to suggest vulnerable or malicious code.
AI Pair Programming Attacks
Attack vectors specific to AI pair programming workflows including suggestion manipulation, context injection, and trust exploitation.
GitHub Copilot Attacks
Attack techniques targeting GitHub Copilot: suggestion manipulation via repository poisoning, context window injection, training data extraction, and proxy-based interception.
AI Coding Assistant Landscape
Overview of major AI coding assistants including GitHub Copilot, Cursor, Claude Code, Windsurf, and Cody, with analysis of their architectures and attack surfaces.
Copilot Injection Attacks
Prompt injection through repository context that influences code generation suggestions.
Copilot Workspace Security Analysis
Security evaluation of GitHub Copilot Workspace, analyzing attack surfaces in AI-driven multi-file code generation and planning.
Code Generation Security
How AI coding assistants introduce security vulnerabilities through suggestion poisoning, training data extraction, insecure code generation, and IDE extension risks.
Copilot/Cursor IDE Exploitation
Exploiting IDE-integrated AI code assistants: repository context poisoning, malicious comments that steer suggestions, data exfiltration through code completions, and prompt injection via file content.
Code Generation Model Attacks
Overview of security risks in AI-powered code generation: Copilot, Cursor, code completion models, IDE integration attack surfaces, and code-specific exploitation techniques.
Simulation: Code Assistant Security Review
Red team simulation targeting an AI code assistant, testing for code injection, credential leakage, supply chain poisoning, and unsafe code generation.
Data Analytics Copilot Assessment
Red team a data analytics copilot with SQL generation capabilities and access to enterprise databases.
Full Engagement: AI Security Copilot
Red team engagement of an AI security copilot with access to SIEM, vulnerability scanners, and threat intelligence.
Case Study: GitHub Copilot Code Injection
Analysis of prompt injection vulnerabilities in GitHub Copilot through malicious repository content.
Case Study: GitHub Copilot Generating Vulnerable Code
Analysis of research findings demonstrating that GitHub Copilot and similar AI code assistants systematically generate code containing security vulnerabilities, and the implications for software supply chain security.
Case Study: 訓練 Data 投毒 in Code Generation 模型s
Analysis of training data poisoning attacks targeting code generation models like GitHub Copilot and OpenAI Codex, where adversarial code patterns in training data cause models to suggest vulnerable or malicious code.
AI Pair Programming 攻擊s
攻擊 vectors specific to AI pair programming workflows including suggestion manipulation, context injection, and trust exploitation.
GitHub Copilot 攻擊
針對 GitHub Copilot 之攻擊技術:經儲存庫投毒之建議操弄、脈絡視窗注入、訓練資料提取,與以代理為本之攔截。
AI 程式設計助理
AI 程式設計助理的安全版圖——涵蓋 GitHub Copilot、Cursor、Claude Code 與其他工具的攻擊面、供應鏈風險與安全評估方法。
Copilot Injection 攻擊s
Prompt injection through repository context that influences code generation suggestions.
Copilot Workspace 安全 Analysis
安全 evaluation of GitHub Copilot Workspace, analyzing attack surfaces in AI-driven multi-file code generation and planning.
程式碼生成安全
AI 程式設計助理如何透過建議投毒、訓練資料萃取、不安全程式碼生成與 IDE 擴充功能風險引入安全漏洞。
Copilot/Cursor IDE 利用ation
利用ing IDE-integrated AI code assistants: repository context poisoning, malicious comments that steer suggestions, data exfiltration through code completions, and prompt injection via file content.
程式碼生成模型安全研究
程式碼生成模型的前沿安全研究——涵蓋 Copilot 利用、建議投毒、儲存庫投毒與 AI 驅動開發工具安全。
模擬:程式碼助理安全審查
針對 AI 程式碼助理的紅隊模擬,測試程式碼注入、憑證洩漏、供應鏈投毒與不安全程式碼生成。
Data Analytics Copilot 評量
Red team a data analytics copilot with SQL generation capabilities and access to enterprise databases.
Full Engagement: AI 安全 Copilot
Red team engagement of an AI security copilot with access to SIEM, vulnerability scanners, and threat intelligence.