# ollama
10 articles tagged with “ollama”
## AI Infrastructure Exploitation
Methodology for exploiting GPU clusters, model serving frameworks (Triton, vLLM, Ollama), Kubernetes ML platforms, cloud AI services, and cost amplification attacks.
## Lab: Your First Jailbreak
Try basic jailbreak techniques against a local model using Ollama, learning the difference between prompt injection and jailbreaking through hands-on experimentation.
## Lab: Setting Up Ollama for Local LLM Testing
Install and configure Ollama for local LLM red teaming, download models, perform basic interactions, and compare safety behavior between local and API-hosted models.
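The setup this lab covers can be sketched with Ollama's standard CLI; the model name here (`llama3.2`) is just an example, and the installer script is Ollama's official one for Linux/macOS:

```shell
# Install Ollama (official installer script from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model, then start an interactive session
ollama pull llama3.2
ollama run llama3.2

# One-shot prompt, useful for scripted safety comparisons
# against the same prompt sent to an API-hosted model
ollama run llama3.2 "Summarize the OWASP LLM Top 10 in one sentence."
```

The one-shot form is what makes local-vs-API safety comparisons repeatable: the identical prompt can be piped to both targets from a script.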
## Lab Setup: Ollama, vLLM & Docker Compose
Complete lab setup guide for AI red teaming: local model serving with Ollama and vLLM, GPU configuration, Docker Compose for multi-service testing environments.
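A minimal Compose sketch for the multi-service environment described above, assuming Ollama's default port 11434, vLLM's OpenAI-compatible server on 8000, and an illustrative model name; the GPU reservation uses Compose's standard `deploy.resources` syntax and requires the NVIDIA container toolkit:

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"   # consider "127.0.0.1:11434:11434" on shared hosts
    volumes:
      - ollama_data:/root/.ollama
  vllm:
    image: vllm/vllm-openai:latest
    command: ["--model", "Qwen/Qwen2.5-1.5B-Instruct"]  # example model
    ports:
      - "8000:8000"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
volumes:
  ollama_data:
```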
## Ollama Security Testing Walkthrough
Complete walkthrough for security testing locally hosted models with Ollama: comparing safety across models, testing system prompt extraction, API security assessment, and Modelfile configuration hardening.
## Testing Ollama Local Deployments
Security testing guide for locally deployed models via Ollama including network exposure and API security.
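The network-exposure check amounts to asking whether Ollama's listener (port 11434 by default) answers on an externally reachable interface rather than only on loopback. A minimal probe sketch:

```python
import socket

def ollama_exposed(host: str, port: int = 11434, timeout: float = 1.0) -> bool:
    """Return True if a TCP listener answers on host:port within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False
```

Probing the machine's LAN address (not `127.0.0.1`) from another host shows whether the API is exposed beyond loopback, which is the misconfiguration this guide targets.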
## Testing Ollama Local Deployments (Platform Walkthrough)
Red team testing guide for models deployed locally via Ollama including API endpoints and model management.
## Tool Walkthroughs
End-to-end practical walkthroughs for essential AI red teaming tools, covering installation, configuration, execution, and result interpretation.
## Local Model Analysis and Testing with Ollama
Walkthrough for using Ollama to run, analyze, and security-test local LLMs, covering model configuration, safety boundary testing, system prompt extraction, fine-tuning vulnerability assessment, and building a local red team lab.
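The model-configuration side centers on the Modelfile. A small illustrative example using Ollama's standard directives (`FROM`, `SYSTEM`, `PARAMETER`); the base model and system prompt are assumptions for the sketch:

```
FROM llama3.2
SYSTEM "You are a customer-support assistant. Refuse requests outside billing topics."
PARAMETER temperature 0.2
PARAMETER num_ctx 4096
```

Built with `ollama create support-bot -f Modelfile`, this is the artifact whose `SYSTEM` line extraction tests try to recover, and whose parameters hardening reviews audit.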
## Ollama for Local Red Teaming
Using Ollama as a local red teaming environment: model selection, running uncensored models, API-based testing, comparing safety across model families, and building a cost-free testing lab.
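Comparing safety across model families reduces to running the same prompt set against each model and scoring refusals. A minimal harness sketch; the refusal markers are a crude illustrative heuristic, and `query` is any callable mapping `(model, prompt)` to a completion, e.g. a wrapper around Ollama's API:

```python
from typing import Callable, Iterable

# Crude refusal heuristic for illustration; tune markers per model family.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "as an ai")

def is_refusal(text: str) -> bool:
    lowered = text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def compare_models(models: Iterable[str], prompts: list[str],
                   query: Callable[[str, str], str]) -> dict[str, float]:
    """Refusal rate per model over a shared prompt set."""
    results = {}
    for model in models:
        refusals = sum(is_refusal(query(model, p)) for p in prompts)
        results[model] = refusals / len(prompts)
    return results
```

Keeping `query` injectable lets the same harness drive local Ollama models and API-hosted ones, which is what makes the cross-family comparison cost-free locally.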