# vulnerability
Articles tagged "vulnerability"
Case Study: GitHub Copilot Generating Vulnerable Code
Analysis of research findings demonstrating that GitHub Copilot and similar AI code assistants systematically generate code containing security vulnerabilities, and the implications for software supply chain security.
Case Study: MCP Security Vulnerability Disclosure
Analysis of early MCP security vulnerability discoveries including tool poisoning and transport security issues.
AI Bug Bounty Programs
Comprehensive guide to active AI bug bounty programs from OpenAI, Anthropic, Google, and platform providers. Scope definitions, reward ranges, submission best practices, and AI-specific vulnerability categories.
Automated Vulnerability Discovery
Building automated tools for discovering novel vulnerabilities in LLM applications.
Garak: LLM Vulnerability Scanner
Deep dive into NVIDIA's Garak LLM vulnerability scanner: architecture, probes, generators, evaluators, custom probe development, and CI/CD integration for automated security scanning.
PEFT Vulnerability Analysis
Security analysis of Parameter-Efficient Fine-Tuning methods beyond LoRA.
Ethics & Responsible Disclosure
Ethical frameworks for AI red teaming, responsible disclosure processes for AI vulnerabilities, when and how to report findings, and navigating bug bounty programs.
Dependency Scanning for AI/ML
Defense-focused guide to scanning AI/ML dependencies for vulnerabilities, covering AI-specific dependency risks, malicious package detection, automated scanning pipelines, and policy enforcement for ML toolchains.
CTF: Code Gen Exploit
Manipulate AI code generation to produce vulnerable, backdoored, or malicious code. Explore how prompt manipulation influences code security, from subtle vulnerability injection to full backdoor insertion.
Technical Findings Documentation
How to document AI-specific vulnerabilities: reproduction steps, severity assessment with AI-adapted frameworks, remediation recommendations, and finding templates.
AI Vulnerability Classification System
Structured system for classifying AI-specific vulnerabilities by type, impact, and exploitability.
Classifying AI Vulnerability Severity
Framework for consistently classifying the severity of AI and LLM vulnerabilities, with scoring criteria, impact assessment, and examples across common finding categories.