# huggingface
Articles tagged "huggingface"
smolagents Security Analysis
Security analysis of Hugging Face smolagents including code execution risks and tool trust boundaries.
Hugging Face Inference Endpoints Security
Security analysis of Hugging Face Inference Endpoints including model isolation and API security.
Model Supply Chain Risks
Attack vectors in the AI model supply chain, including malicious model files, pickle exploits, compromised model registries, and dependency vulnerabilities.
AI Supply Chain Exploitation
Methodology for exploiting the AI/ML supply chain: model serialization RCE, dependency confusion, dataset poisoning, CI/CD injection, and container escape.
AI Supply Chain Deep Dive
Deep analysis of AI supply chain security threats including sleeper agents, slopsquatting, malicious model uploads, pickle deserialization exploits, and model provenance verification challenges.
Hugging Face Hub Security
Attack surface analysis of Hugging Face Hub: malicious model uploads, pickle deserialization exploits, model card manipulation, trust signal limitations, gated model bypass, and community-driven trust exploitation.
Model Hub Supply Chain Attack
Attacking the ML model supply chain through hub repositories like Hugging Face, including typosquatting, model poisoning, and repository manipulation techniques.
Hugging Face Security Audit Walkthrough
Step-by-step walkthrough for auditing Hugging Face models: scanning for malicious model files, verifying model provenance, assessing model card completeness, and testing Spaces and Inference API security.
Hugging Face Spaces Security Testing
End-to-end walkthrough for security testing Hugging Face Spaces applications: Space enumeration, Gradio/Streamlit exploitation, API endpoint testing, secret management review, and model access control assessment.
Hugging Face Hub Red Team Walkthrough
Walkthrough for assessing AI models on Hugging Face Hub: model security assessment, scanning for malicious models, Transformers library testing, and Spaces application evaluation.
Cloud AI Platform Walkthroughs
Hands-on walkthroughs for red teaming AI systems deployed on major cloud platforms: AWS Bedrock, Azure OpenAI, Google Vertex AI, and Hugging Face Hub.
Testing Hugging Face Hosted Models
Red team testing guide for models hosted on Hugging Face including Inference API and Spaces.