# gpu
22 articles tagged "gpu"
- **Attacking AI Deployments** — Security assessment of AI deployment infrastructure, including container escapes, GPU side channels, inference server vulnerabilities, and resource exhaustion attacks.
- **Attacking GPU Compute Clusters** — Expert-level analysis of attacks against GPU compute clusters used for ML training and inference, including side-channel attacks on GPU memory, CUDA runtime exploitation, multi-tenant isolation failures, and RDMA network attacks.
- **GPU Cluster Security** — Securing GPU clusters used for model training and inference against unauthorized access and data leakage.
- **GPU Memory Side-Channel Attacks** — Side-channel attacks exploiting GPU memory allocation, timing, and electromagnetic emanation to extract sensitive data from AI workloads.
- **GPU Sharing and Isolation Security** — Security implications of GPU sharing in multi-tenant AI infrastructure, and isolation strategies.
- **AI Infrastructure Exploitation** — Methodology for exploiting GPU clusters, model serving frameworks (Triton, vLLM, Ollama), Kubernetes ML platforms, cloud AI services, and cost amplification attacks.
- **Kubernetes Security for ML Workloads** — Comprehensive analysis of Kubernetes attack surfaces specific to machine learning workloads, including GPU operator exploitation, model serving namespace attacks, and cluster-level privilege escalation through ML components.
- **Lab: GPU Side-Channel Attacks** — Demonstrate information leakage through GPU memory residuals and timing side channels, showing how shared GPU infrastructure can expose sensitive model data.
- **GPU Security for AI** — GPU security risks in AI workloads, covering memory isolation failures, side-channel attacks, multi-tenant GPU risks, GPU firmware vulnerabilities, and secure GPU configuration.
- **GPU Side Channel Basics** — GPU-based side-channel attacks on ML inference, exploiting timing, power consumption, and memory access patterns to extract information about models and data.
- **RunPod Serverless GPU Endpoint Testing** — End-to-end walkthrough for security testing RunPod serverless GPU endpoints: endpoint enumeration, handler exploitation, webhook security, Docker template assessment, and cost abuse prevention.