# exploit-primitives
9 articles tagged "exploit-primitives"
LLM Internals for Exploit Developers
Transformer architecture, tokenizer internals, logit pipelines, and trust boundaries from an offensive security perspective.
Exploiting Attention Mechanisms
How the self-attention mechanism in transformers can be leveraged to steer model behavior, hijack information routing, and bypass safety instructions.
Embedding Space Attacks
Techniques for attacking the embedding layer of LLMs, including adversarial perturbations, embedding inversion, and semantic space manipulation.
LLM Internals & Exploit Primitives
An overview of large language model architecture from a security researcher's perspective, covering the key components that create exploitable attack surfaces.
Tokenization-Based Attacks
How tokenizer behavior creates exploitable gaps between human-readable text and model-internal representations, enabling filter bypass and payload obfuscation.
Attention Exploitation
Using the transformer attention mechanism to steer model behavior, covering attention dilution, positional bias exploitation, attention hijacking, and context window manipulation.