# open-source
Articles tagged "open-source"
Capstone: Security Audit of an Open-Source LLM
Conduct a comprehensive security audit of an open-source large language model, covering model weights integrity, safety alignment evaluation, supply chain verification, and adversarial robustness testing.
Capstone: Open Source Contribution
Contribute to an open-source AI security project such as garak, PyRIT, or MITRE ATLAS, producing a merged PR or submitted issue with proof of concept.
Case Study: Open-Source Model Jailbreak Campaign
Analysis of coordinated jailbreak campaigns against open-source models and community response patterns.
Community Tool Spotlight Series
Monthly spotlight on community-developed AI red teaming tools and their usage.
Open vs Closed Models: Security Tradeoffs
Security implications of open-weight vs closed-source AI models — weight access, responsible deployment, fine-tuning risks, and the impact on red teaming strategy.
Repository Poisoning for Code Models
Techniques for poisoning code repositories to influence code generation models, including training data poisoning through popular repositories, backdoor injection in open-source dependencies, and supply chain attacks targeting code model training pipelines.
Model Merging Security Implications
Security analysis of model merging techniques and potential for backdoor propagation through merged models.
Open-Source Model Governance
Governance frameworks for organizations using open-source AI models including security vetting and supply chain risks.
Simulation: Open Source AI Project Audit
Security audit simulation for an open-source AI application, covering code review, dependency analysis, model supply chain verification, and deployment configuration review.
Embedding Model Security Comparison
A comparison of security properties across embedding models, covering inversion resistance, privacy properties, and adversarial robustness for OpenAI, Cohere, Voyage, and open-source models.