# vllm
7 articles tagged with “vllm”
- **AI Infrastructure Exploitation**: Methodology for exploiting GPU clusters, model serving frameworks (Triton, vLLM, Ollama), Kubernetes ML platforms, cloud AI services, and cost amplification attacks.
- **Security Comparison of Model Serving Frameworks**: In-depth security analysis of TorchServe, TensorFlow Serving, Triton Inference Server, and vLLM for production AI deployments.
- **vLLM Security Configuration**: Security hardening for vLLM serving deployments, including API authentication, resource limits, and input validation.
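As a minimal illustration of the hardening steps that article covers, vLLM's OpenAI-compatible server can be launched with an API key and a bounded context length. The flags below exist in vLLM's CLI; the model name, port, and limit values are placeholders:

```shell
# Serve a model behind vLLM's OpenAI-compatible API with basic hardening.
# --api-key makes the server reject requests without a matching
# "Authorization: Bearer <key>" header; --max-model-len caps per-request
# context (and hence KV-cache growth); binding to 127.0.0.1 keeps the
# endpoint off external interfaces until a reverse proxy fronts it.
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --host 127.0.0.1 \
  --port 8000 \
  --api-key "$VLLM_API_KEY" \
  --max-model-len 8192
```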
- **Lab: Inference Server Exploitation**: Attack vLLM, TGI, and Triton inference servers to discover information-disclosure vulnerabilities, denial-of-service vectors, and configuration weaknesses in model serving infrastructure.
- **Model Serving Security**: Security hardening for model serving infrastructure, covering vLLM, TGI, and Triton Inference Server configuration, API security, resource isolation, and deployment best practices.
- **Lab Setup: Ollama, vLLM & Docker Compose**: Complete lab setup guide for AI red teaming: local model serving with Ollama and vLLM, GPU configuration, and Docker Compose for multi-service testing environments.
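A sketch of what that multi-service lab can look like, using the public `ollama/ollama` and `vllm/vllm-openai` images; the model name, volume name, and GPU count are illustrative assumptions, not values from the article:

```yaml
# docker-compose.yml sketch: Ollama and vLLM side by side for testing.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"          # Ollama HTTP API
    volumes:
      - ollama-data:/root/.ollama  # persist pulled models
  vllm:
    image: vllm/vllm-openai:latest
    command: ["--model", "Qwen/Qwen2.5-0.5B-Instruct"]  # placeholder model
    ports:
      - "8000:8000"            # OpenAI-compatible API
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia   # requires the NVIDIA Container Toolkit
              count: 1
              capabilities: [gpu]
volumes:
  ollama-data:
```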
- **Testing vLLM Inference Deployments**: Red team testing guide for models served via vLLM, including batching, KV cache, and speculative decoding.
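One concrete reason KV-cache behaviour matters for red teaming: every in-flight request pins KV-cache memory proportional to its context length, so unbounded request sizes become a denial-of-service vector. A back-of-the-envelope sketch using the standard per-token formula (2 × layers × KV heads × head dim × dtype bytes); the model shape below is an illustrative 8B-class configuration, not taken from the article:

```python
def kv_cache_bytes_per_token(num_layers, num_kv_heads, head_dim, dtype_bytes=2):
    # Keys and values are both cached per layer, hence the factor of 2.
    # dtype_bytes=2 assumes fp16/bf16 cache entries.
    return 2 * num_layers * num_kv_heads * head_dim * dtype_bytes

# Assumed 8B-class shape: 32 layers, 8 KV heads (GQA), head dim 128.
per_token = kv_cache_bytes_per_token(num_layers=32, num_kv_heads=8, head_dim=128)
print(per_token)                      # 131072 bytes (128 KiB) per token

# A single request at a 32k-token context pins roughly:
per_request = per_token * 32_768
print(per_request / 2**30)            # 4.0 GiB of GPU memory
```

A handful of concurrent maxed-out requests can therefore exhaust GPU memory on their own, which is why capping context length and concurrency is part of hardening the serving layer.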