# copilot
14 articles tagged with “copilot”
Case Study: GitHub Copilot Code Injection
Analysis of prompt injection vulnerabilities in GitHub Copilot through malicious repository content.
Case Study: GitHub Copilot Generating Vulnerable Code
Analysis of research findings demonstrating that GitHub Copilot and similar AI code assistants systematically generate code containing security vulnerabilities, and the implications for software supply chain security.
Case Study: Training Data Poisoning in Code Generation Models
Analysis of training data poisoning attacks targeting code generation models like GitHub Copilot and OpenAI Codex, where adversarial code patterns in training data cause models to suggest vulnerable or malicious code.
AI Pair Programming Attacks
Attack vectors specific to AI pair programming workflows including suggestion manipulation, context injection, and trust exploitation.
GitHub Copilot Attacks
Attack techniques targeting GitHub Copilot: suggestion manipulation via repository poisoning, context window injection, training data extraction, and proxy-based interception.
AI Coding Assistant Landscape
Overview of major AI coding assistants including GitHub Copilot, Cursor, Claude Code, Windsurf, and Cody, with analysis of their architectures and attack surfaces.
Copilot Injection Attacks
Prompt injection through repository context that influences code generation suggestions.
Copilot Workspace Security Analysis
Security evaluation of GitHub Copilot Workspace, analyzing attack surfaces in AI-driven multi-file code generation and planning.
Code Generation Security
How AI coding assistants introduce security vulnerabilities through suggestion poisoning, training data extraction, insecure code generation, and IDE extension risks.
Copilot/Cursor IDE Exploitation
Exploiting IDE-integrated AI code assistants: repository context poisoning, malicious comments that steer suggestions, data exfiltration through code completions, and prompt injection via file content.
Code Generation Model Attacks
Overview of security risks in AI-powered code generation: Copilot, Cursor, code completion models, IDE integration attack surfaces, and code-specific exploitation techniques.
Simulation: Code Assistant Security Review
Red team simulation targeting an AI code assistant, testing for code injection, credential leakage, supply chain poisoning, and unsafe code generation.
Data Analytics Copilot Assessment
Red team assessment of a data analytics copilot with SQL generation capabilities and access to enterprise databases.
Full Engagement: AI Security Copilot
Red team engagement of an AI security copilot with access to SIEM, vulnerability scanners, and threat intelligence.