# open-source
9 articles tagged with “open-source”
## Capstone: Security Audit of an Open-Source LLM

Conduct a comprehensive security audit of an open-source large language model, covering model weight integrity, safety alignment evaluation, supply chain verification, and adversarial robustness testing.

## Capstone: Open Source Contribution

Contribute to an open-source AI security project such as garak, PyRIT, or MITRE ATLAS, producing a merged PR or a submitted issue with a proof of concept.

## Case Study: Open-Source Model Jailbreak Campaign

Analysis of coordinated jailbreak campaigns against open-source models and community response patterns.

## Community Tool Spotlight Series

Monthly spotlight on community-developed AI red teaming tools and their usage.

## Open vs. Closed Models: Security Tradeoffs

Security implications of open-weight vs. closed-source AI models: weight access, responsible deployment, fine-tuning risks, and the impact on red teaming strategy.

## Repository Poisoning for Code Models

Techniques for poisoning code repositories to influence code generation models, including training data poisoning through popular repositories, backdoor injection in open-source dependencies, and supply chain attacks targeting code model training pipelines.

## Model Merging Security Implications

Security analysis of model merging techniques and the potential for backdoor propagation through merged models.

## Open-Source Model Governance

Governance frameworks for organizations using open-source AI models, including security vetting and supply chain risk management.

## Simulation: Open Source AI Project Audit

Security audit simulation for an open-source AI application, covering code review, dependency analysis, model supply chain verification, and deployment configuration review.