# cross-model
14 articles tagged with “cross-model”
Cross-Model Transfer Assessment
Assessment of attack transferability across model families, versions, and providers.
August 2026: Multi-Model Boss Rush
Chain attacks across GPT-4, Claude, and Gemini in a complex multi-model system, exploiting trust boundaries and handoff points between models.
Developing Transferable Attacks
Cross-model attack techniques, measuring transferability, ensemble optimization, and practical transfer testing methodologies for AI red teams.
Injection Transferability Research
Research on how prompt injection techniques transfer across different model families and sizes.
Cross-Model Transfer Attacks
Develop attacks on open-source models that transfer to closed-source commercial APIs.
Lab: Cross-Model Transfer Attacks
Test whether jailbreaks discovered on one language model transfer effectively to others, building a systematic methodology for cross-model vulnerability research.
Differential Testing Across Models
Use differential testing to find behavior inconsistencies across model providers.
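The idea behind differential testing can be sketched in a few lines: send the same probe to several backends and flag cases where their refusal behavior disagrees. This is a minimal illustration with stand-in callables; the backend functions and the refusal heuristic are assumptions for the sketch, not any provider's real SDK.

```python
# Minimal differential-testing harness: run one prompt against several
# model backends and flag disagreement in refusal behavior. The backends
# here are stub callables; in practice each would wrap a provider client.

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm unable", "i won't")

def is_refusal(response: str) -> bool:
    """Heuristic refusal check on the first line of a response."""
    stripped = response.strip()
    first_line = stripped.splitlines()[0].lower() if stripped else ""
    return any(marker in first_line for marker in REFUSAL_MARKERS)

def differential_test(prompt, backends):
    """Run one prompt against every backend and report inconsistencies."""
    verdicts = {name: is_refusal(fn(prompt)) for name, fn in backends.items()}
    return {
        "prompt": prompt,
        "verdicts": verdicts,
        # A mix of refusals and compliances is the interesting signal.
        "inconsistent": len(set(verdicts.values())) > 1,
    }

if __name__ == "__main__":
    # Stub backends simulating divergent provider behavior.
    backends = {
        "model_a": lambda p: "I can't help with that request.",
        "model_b": lambda p: "Sure, here is an overview...",
    }
    print(differential_test("probe prompt", backends)["inconsistent"])
```

A real harness would replace the stubs with API clients and log every inconsistent case for manual triage, since disagreement between providers often marks a defense-coverage gap on one side.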
Lab: Transfer Attack Development
Hands-on lab for crafting adversarial prompts on open-weight models like Llama that transfer to closed-source models like Claude and GPT-4, using iterative refinement and cross-model evaluation.
Lab: Transfer Attack Development (Advanced)
Develop adversarial attacks on open-source models that transfer to closed-source models, leveraging weight access for black-box exploitation.
Cross-Model GCG Transfer Attacks
Generate adversarial suffixes on open-source models and test their transferability to commercial APIs.
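The evaluation half of that workflow reduces to a transfer-rate measurement: given a suffix optimized white-box (e.g. with GCG on an open-weight model), score how often appending it flips a target model from refusal to compliance. This sketch assumes the suffix already exists and uses stand-in callables for the target model and refusal check.

```python
# Score cross-model transfer of a pre-computed adversarial suffix.
# `target_model` and `refused` are stand-ins: in practice the former
# wraps a commercial API call and the latter a refusal classifier.

def transfer_rate(suffix, prompts, target_model, refused) -> float:
    """Fraction of prompts where appending `suffix` evades refusal.

    Counts only prompts the target refuses in their plain form, so the
    rate measures what the suffix changes rather than baseline leniency.
    """
    successes = sum(
        1
        for p in prompts
        if refused(target_model(p)) and not refused(target_model(p + " " + suffix))
    )
    return successes / len(prompts)
```

In practice the rate would be computed per suffix and per target, giving a transfer matrix across commercial APIs for suffixes found on each open-source source model.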
Cross-Model Comparison
Methodology for systematically comparing LLM security across model families, including standardized evaluation frameworks, architectural difference analysis, and comparative testing approaches.
Jailbreak Portability
Analysis of which jailbreaks transfer across models and why, including universal vs model-specific techniques, transfer attack methodology, and factors that determine portability.
Safety Comparison Across Models
Comparing safety across GPT-4, Claude, Gemini, and open-weight models using standardized test suites, failure mode analysis, and defense coverage gap identification.
Tokenizer Vulnerabilities Across Models
Comprehensive analysis of tokenizer vulnerabilities across major model families.
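One cheap cross-family signal in tokenizer testing is segmentation divergence: the same input tokenizing very differently across models. This toy sketch uses two stand-in tokenizers (whitespace and a crude fixed-width chunker, both assumptions for illustration); a real harness would load each family's actual tokenizer.

```python
# Toy illustration of tokenizer divergence as a fuzzing signal: the same
# string segments differently under two stand-in tokenizers, which is
# the kind of inconsistency tokenizer-level attacks exploit.

def whitespace_tokenize(text):
    return text.split()

def char_pair_tokenize(text):
    # Crude BPE-like stand-in: split into fixed two-character chunks.
    return [text[i:i + 2] for i in range(0, len(text), 2)]

def divergence(text, tokenizers):
    """Return token counts per tokenizer and whether they disagree."""
    counts = {name: len(fn(text)) for name, fn in tokenizers.items()}
    return counts, len(set(counts.values())) > 1
```

Inputs flagged as divergent are natural candidates for deeper per-family probing, since filters trained on one segmentation may miss payloads that another tokenizer splits apart.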