# azure
32 articles tagged with “azure”
- **Cloud AI Forensics: Azure**
  Forensic investigation techniques for Azure AI services including Azure OpenAI, Azure ML, and Cognitive Services, with diagnostic logging and evidence collection.
- **Cloud AI Security Practice Exam 1**
  Practice exam covering AWS Bedrock, Azure OpenAI, and GCP Vertex AI security assessments.
- **Advanced Cloud AI Security Assessment**
  15-question advanced assessment covering cloud AI attack surfaces across AWS, Azure, and GCP: guardrail bypass, knowledge base exploitation, managed identity abuse, model customization risks, and multi-cloud attack paths.
- **Cloud AI Security Assessment**
  Test your knowledge of AWS, Azure, and GCP AI service security with 15 intermediate-level questions covering cloud-specific attack surfaces and misconfigurations.
- **Capstone: Cloud AI Security Assessment**
  Assess AI deployment security across the AWS, Azure, and GCP cloud platforms, producing a comprehensive cloud AI security assessment report.
- **Cloud ML Platform Security (AWS/Azure/GCP)**
  Security comparison of cloud ML platforms including AWS SageMaker, Azure Machine Learning, and Google Vertex AI: IAM configuration, data security, model serving, and platform-specific attack surfaces.
- **Azure ML Exploitation**
  Red team attack methodology for Azure Machine Learning: workspace security, compute instance attacks, pipeline poisoning, model registry tampering, and data store exploitation.
- **Azure OpenAI Attack Surface**
  Red team methodology for Azure OpenAI Service: content filtering bypass, PTU security, deployment misconfiguration, managed identity abuse, and prompt flow exploitation.
- **Defender for AI Bypass**
  Red team techniques for understanding and bypassing Microsoft Defender for AI: detection capabilities, alert analysis, bypass strategies, coverage gaps, and alert fatigue exploitation.
- **Azure AI Services Security Overview**
  Red team methodology for Azure AI services including Azure OpenAI, Azure ML, AI Studio, and Cognitive Services: service enumeration, managed identity abuse, and attack surface mapping.
- **Azure AI Foundry Security Guide**
  Comprehensive security guide for Azure AI Foundry, including model deployment, prompt flow, and content safety.
- **Azure AI Studio Security Assessment**
  Security assessment of Azure AI Studio, including prompt flow, model catalog, and deployment security.
- **Azure AI Content Safety Testing**
  Testing the Azure AI Content Safety service for bypass vulnerabilities and configuration weaknesses.
- **Azure OpenAI Security Guide**
  Security guide for Azure OpenAI Service, including content filtering, managed identity, and network isolation.
- **Hardening Azure OpenAI Service**
  Comprehensive hardening guide for Azure OpenAI Service covering network isolation, content filtering, managed identity configuration, and threat detection for GPT and DALL-E deployments.
- **IAM Best Practices for Cloud AI Services**
  Cross-cloud IAM best practices for securing AI services on AWS, Azure, and GCP, covering least privilege, service identity management, cross-account access, and policy automation.
- **Shared Responsibility Model for Cloud AI Security**
  Understanding the division of security responsibilities between cloud providers and customers for AI/ML workloads across AWS, Azure, and GCP, with specific guidance for LLM deployments.
- **Cloud AI Security**
  Comprehensive overview of cloud AI security for red teamers: shared responsibility models, attack surfaces across AWS, Azure, and GCP AI services, and threat models for model APIs, data pipelines, and inference endpoints.
- **Security Controls Comparison Matrix**
  Side-by-side comparison of AWS, Azure, and GCP AI security controls: IAM patterns, content filtering, guardrails, network isolation, logging, and threat detection across cloud providers.
- **Content Safety APIs (Azure, OpenAI, Google)**
  Detailed comparison of Azure Content Safety, the OpenAI Moderation API, and Google Cloud safety offerings, including API structures, category taxonomies, severity levels, testing methodology, and common gaps.
- **Prompt Shields & Injection Detection**
  How Azure Prompt Shields and dedicated injection detection models work, their detection patterns based on fine-tuned classifiers, and systematic approaches to bypassing them.
- **Azure ML Attack Surface**
  Security assessment of Azure Machine Learning: managed identity exploitation, workspace security, compute instance attacks, and endpoint vulnerabilities.
- **Cloud AI Infrastructure Attacks**
  Security assessment of cloud-hosted AI/ML platforms including AWS SageMaker, Azure ML, and GCP Vertex AI: IAM misconfigurations, model theft, and data exposure.
- **Lab: Azure Content Filter Evasion**
  Hands-on lab for mapping and testing Azure OpenAI Service content filtering categories, severity levels, and bypass techniques.
- **Cloud AI Security Cheat Sheet**
  Quick reference comparing AI security controls across AWS, Azure, and GCP, covering IAM, networking, encryption, monitoring, and AI-specific services.
- **Azure ML Security Testing**
  End-to-end walkthrough for security testing Azure Machine Learning endpoints: workspace enumeration, managed online endpoint exploitation, compute instance assessment, data store access review, and Azure Monitor analysis.
- **Azure OpenAI Red Team Walkthrough**
  Complete red team walkthrough for Azure OpenAI deployments: testing content filters, managed identity exploitation, prompt flow injection, data integration attacks, and Azure Monitor evasion.
- **Azure OpenAI Red Team Walkthrough (Platform Walkthrough)**
  End-to-end walkthrough for red teaming Azure OpenAI deployments: deployment configuration review, content filtering bypass testing, managed identity exploitation, prompt flow assessment, and diagnostic log analysis.
- **Cloud AI Platform Walkthroughs**
  Hands-on walkthroughs for red teaming AI systems deployed on major cloud platforms: AWS Bedrock, Azure OpenAI, Google Vertex AI, and Hugging Face Hub.
- **Microsoft Semantic Kernel Security Testing**
  End-to-end walkthrough for security testing Semantic Kernel applications: kernel enumeration, plugin exploitation, planner manipulation, memory and RAG assessment, and Azure integration security review.
- **Testing Azure OpenAI Service**
  Red team testing guide for Azure OpenAI, including content filtering, managed identity, and network controls.
- **Integrating PyRIT with Azure OpenAI and Content Safety**
  Intermediate walkthrough on integrating PyRIT with Azure OpenAI Service and Azure AI Content Safety for enterprise red teaming, including managed identity authentication, content filtering analysis, and compliance reporting.