# ollama
20 articles tagged "ollama"
AI Infrastructure Exploitation
Methodology for exploiting GPU clusters, model serving frameworks (Triton, vLLM, Ollama), Kubernetes ML platforms, cloud AI services, and cost amplification attacks.
Lab: Your First Jailbreak
Try basic jailbreak techniques against a local model using Ollama, learning the difference between prompt injection and jailbreaking through hands-on experimentation.
Lab: Setting Up Ollama for Local LLM Testing
Install and configure Ollama for local LLM red teaming, download models, perform basic interactions, and compare safety behavior between local and API-hosted models.
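As a minimal sketch of the API-based interaction that lab covers, the snippet below builds a non-streaming request to Ollama's default `/api/generate` endpoint using only the standard library (the model name `llama3` and the local URL are assumptions; adjust them to your setup):

```python
import json
import urllib.request

# Ollama's default REST endpoint on a local install
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # Non-streaming request body per the Ollama REST API
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    # Sends one prompt and returns the model's text reply;
    # requires a running Ollama server with the model pulled.
    body = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `generate("llama3", "...")` against a local and an API-hosted model with the same prompt is one simple way to compare safety behavior side by side.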
Lab Setup: Ollama, vLLM & Docker Compose
Complete lab setup guide for AI red teaming: local model serving with Ollama and vLLM, GPU configuration, Docker Compose for multi-service testing environments.
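For orientation, a minimal Compose file for such a lab might start from something like the fragment below (service and volume names are assumptions; Ollama's API listens on 11434 by default):

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"   # Ollama's default API port
    volumes:
      - ollama:/root/.ollama   # persist pulled models between restarts
volumes:
  ollama:
```

GPU passthrough and additional services (e.g. a vLLM container) would be layered on top of this per the guide.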
Ollama Security Testing Walkthrough
Complete walkthrough for security testing locally hosted models with Ollama: comparing safety across models, testing system prompt extraction, API security assessment, and Modelfile configuration hardening.
Testing Ollama Local Deployments
Security testing guide for locally deployed models via Ollama including network exposure and API security.
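A first network-exposure check can be sketched with nothing but the standard library: probe whether Ollama's default port answers on a given interface (the host list is an assumption; add the test host's LAN addresses to see whether the service is bound beyond loopback):

```python
import socket

def ollama_port_open(host: str, port: int = 11434, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Ollama listens on 11434 by default; a deployment bound to
    0.0.0.0 rather than 127.0.0.1 is reachable from the network.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in ("127.0.0.1",):  # add LAN addresses of the host under test
        state = "open" if ollama_port_open(host) else "closed"
        print(f"{host}:11434 {state}")
```

An open port on a non-loopback address is the cue to move on to the API-security checks the guide describes.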
Testing Ollama Local Deployments (Platform Walkthrough)
Red team testing guide for models deployed locally via Ollama including API endpoints and model management.
Tool Walkthroughs
End-to-end practical walkthroughs for essential AI red teaming tools, covering installation, configuration, execution, and result interpretation.
Local Model Analysis and Testing with Ollama
Walkthrough for using Ollama to run, analyze, and security-test local LLMs, covering model configuration, safety boundary testing, system prompt extraction, fine-tuning vulnerability assessment, and building a local red team lab.
Ollama for Local Red Teaming
Using Ollama as a local red teaming environment: model selection, running uncensored models, API-based testing, comparing safety across model families, and building a cost-free testing lab.