# azure
64 articles tagged "azure"
Cloud AI Forensics: Azure
Forensic investigation techniques for Azure AI services including Azure OpenAI, Azure ML, and Cognitive Services with diagnostic logging and evidence collection.
Cloud AI Security Practice Exam 1
Practice exam covering AWS Bedrock, Azure OpenAI, and GCP Vertex AI security assessments.
Advanced Cloud AI Security Assessment
15-question advanced assessment covering cloud AI attack surfaces across AWS, Azure, and GCP: guardrail bypass, knowledge base exploitation, managed identity abuse, model customization risks, and multi-cloud attack paths.
Cloud AI Security Assessment
Test your knowledge of AWS, Azure, and GCP AI service security with 15 intermediate-level questions covering cloud-specific attack surfaces and misconfigurations.
Capstone: Cloud AI Security Assessment
Assess AI deployment security across AWS, Azure, and GCP cloud platforms, producing a comprehensive cloud AI security assessment report.
Cloud ML Platform Security (AWS/Azure/GCP)
Security comparison of cloud ML platforms including AWS SageMaker, Azure Machine Learning, and Google Vertex AI. IAM configuration, data security, model serving, and platform-specific attack surfaces.
Azure ML Exploitation
Red team attack methodology for Azure Machine Learning: workspace security, compute instance attacks, pipeline poisoning, model registry tampering, and data store exploitation.
Azure OpenAI Attack Surface
Red team methodology for Azure OpenAI Service: content filtering bypass, PTU security, deployment misconfiguration, managed identity abuse, and prompt flow exploitation.
Defender for AI Bypass
Red team techniques for understanding and bypassing Microsoft Defender for AI: detection capabilities, alert analysis, bypass strategies, coverage gaps, and alert fatigue exploitation.
Azure AI Services Security Overview
Red team methodology for Azure AI services including Azure OpenAI, Azure ML, AI Studio, and Cognitive Services: service enumeration, managed identity abuse, and attack surface mapping.
Azure AI Foundry Security Guide
Comprehensive security guide for Azure AI Foundry including model deployment, prompt flow, and content safety.
Azure AI Studio Security Assessment
Security assessment of Azure AI Studio including prompt flow, model catalog, and deployment security.
Azure AI Content Safety Testing
Testing Azure AI Content Safety service for bypass vulnerabilities and configuration weaknesses.
Azure OpenAI Security Guide
Security guide for Azure OpenAI Service including content filtering, managed identity, and network isolation.
Hardening Azure OpenAI Service
Comprehensive hardening guide for Azure OpenAI Service covering network isolation, content filtering, managed identity configuration, and threat detection for GPT and DALL-E deployments.
IAM Best Practices for Cloud AI Services
Cross-cloud IAM best practices for securing AI services on AWS, Azure, and GCP, covering least privilege, service identity management, cross-account access, and policy automation.
Shared Responsibility Model for Cloud AI Security
Understanding the division of security responsibilities between cloud providers and customers for AI/ML workloads across AWS, Azure, and GCP, with specific guidance for LLM deployments.
Cloud AI Security
Comprehensive overview of cloud AI security for red teamers: shared responsibility models, attack surfaces across AWS, Azure, and GCP AI services, threat models for model APIs, data pipelines, and inference endpoints.
Security Controls Comparison Matrix
Side-by-side comparison of AWS, Azure, and GCP AI security controls: IAM patterns, content filtering, guardrails, network isolation, logging, and threat detection across cloud providers.
Content Safety APIs (Azure, OpenAI, Google)
Detailed comparison of Azure Content Safety, OpenAI Moderation API, and Google Cloud safety offerings, including API structures, category taxonomies, severity levels, testing methodology, and common gaps.
Prompt Shields & Injection Detection
How Azure Prompt Shield and dedicated injection detection models work, their detection patterns based on fine-tuned classifiers, and systematic approaches to bypassing them.
Azure ML Attack Surface
Security assessment of Azure Machine Learning -- managed identity exploitation, workspace security, compute instance attacks, and endpoint vulnerabilities.
Cloud AI Infrastructure Attacks
Security assessment of cloud-hosted AI/ML platforms including AWS SageMaker, Azure ML, and GCP Vertex AI -- IAM misconfigurations, model theft, and data exposure.
Lab: Azure Content Filter Evasion
Hands-on lab for mapping and testing Azure OpenAI Service content filtering categories, severity levels, and bypass techniques.
Cloud AI Security Cheat Sheet
Quick reference comparing AI security controls across AWS, Azure, and GCP -- covering IAM, networking, encryption, monitoring, and AI-specific services.
Azure ML Security Testing
End-to-end walkthrough for security testing Azure Machine Learning endpoints: workspace enumeration, managed online endpoint exploitation, compute instance assessment, data store access review, and Azure Monitor analysis.
Azure OpenAI Red Team Walkthrough
Complete red team walkthrough for Azure OpenAI deployments: testing content filters, managed identity exploitation, prompt flow injection, data integration attacks, and Azure Monitor evasion.
Azure OpenAI Red Team Walkthrough (Platform Walkthrough)
End-to-end walkthrough for red teaming Azure OpenAI deployments: deployment configuration review, content filtering bypass testing, managed identity exploitation, prompt flow assessment, and diagnostic log analysis.
Cloud AI Platform Walkthroughs
Hands-on walkthroughs for red teaming AI systems deployed on major cloud platforms: AWS Bedrock, Azure OpenAI, Google Vertex AI, and Hugging Face Hub.
Microsoft Semantic Kernel Security Testing
End-to-end walkthrough for security testing Semantic Kernel applications: kernel enumeration, plugin exploitation, planner manipulation, memory and RAG assessment, and Azure integration security review.
Testing Azure OpenAI Service
Red team testing guide for Azure OpenAI including content filtering, managed identity, and network controls.
Integrating PyRIT with Azure OpenAI and Content Safety
Intermediate walkthrough on integrating PyRIT with Azure OpenAI Service and Azure AI Content Safety for enterprise red teaming, including managed identity authentication, content filtering analysis, and compliance reporting.
Red Teaming Cloud AI Services: A Practical Guide
Practical guide to red teaming AI services on AWS, Azure, and GCP, covering shared responsibility boundaries, service-specific attack surfaces, and cloud-native security controls.