Cloud AI Forensics: Azure
Forensic investigation techniques for Azure AI services including Azure OpenAI, Azure ML, and Cognitive Services with diagnostic logging and evidence collection.
Overview
Microsoft Azure hosts AI workloads through Azure OpenAI Service (managed GPT and DALL-E models), Azure Machine Learning (custom model training and deployment), and Cognitive Services (pre-built AI models for vision, speech, language, and decision). Azure's AI services are tightly integrated with the broader Azure ecosystem: Entra ID for authentication, Azure Monitor for observability, Key Vault for secrets management, and Log Analytics for centralized log analysis.
This integration is a forensic advantage. Unlike standalone AI deployments where logging must be explicitly configured for each component, Azure's diagnostic logging infrastructure provides a unified framework for capturing AI service activity. When properly configured, Azure can log every Azure OpenAI API call with full prompt and response content, every Azure ML experiment and deployment action, and every Cognitive Services request with input/output data.
The forensic challenge on Azure is that this logging is not enabled by default for most AI services. Organizations that have not configured diagnostic settings before an incident will find critical evidence missing. Additionally, Azure's role-based access control (RBAC) model means that the investigator must understand both the data plane (who called the AI API) and the control plane (who configured the AI service) to build a complete picture.
This article covers forensic artifact collection across Azure's AI services, analysis techniques for common incident scenarios, and forensic readiness configuration to ensure evidence is available when needed.
Azure AI Forensic Artifact Sources
Azure Activity Log and Diagnostic Logs
Azure's logging architecture has two primary layers. The Activity Log captures control plane operations (creating, modifying, or deleting resources). Diagnostic Logs capture data plane operations (actual API calls to the AI services). Both are essential for AI forensics.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.monitor.query import LogsQueryClient, LogsQueryStatus
from datetime import datetime, timedelta, timezone
from dataclasses import dataclass, field
from typing import Optional
import json
@dataclass
class AzureAIForensicEvent:
"""A parsed Azure AI forensic event."""
timestamp: str
operation_name: str
resource_type: str
resource_name: str
caller_identity: str
caller_ip: str
status: str
properties: dict
correlation_id: str = ""
category: str = ""
class AzureAIForensicCollector:
"""Collect and analyze forensic artifacts from Azure AI services."""
def __init__(
self,
subscription_id: str,
resource_group: str,
credential: Optional[DefaultAzureCredential] = None,
):
self.subscription_id = subscription_id
self.resource_group = resource_group
self.credential = credential or DefaultAzureCredential()
self.monitor_client = MonitorManagementClient(
self.credential, subscription_id
)
self.logs_client = LogsQueryClient(self.credential)
def collect_activity_log_events(
self,
start_time: datetime,
end_time: datetime,
resource_types: Optional[list[str]] = None,
) -> list[AzureAIForensicEvent]:
"""
Collect Azure Activity Log events for AI resources.
Args:
start_time: Start of investigation window.
end_time: End of investigation window.
resource_types: Azure resource types to filter.
Defaults to common AI resource types.
Returns:
List of parsed forensic events.
"""
if resource_types is None:
resource_types = [
"Microsoft.CognitiveServices/accounts",
"Microsoft.MachineLearningServices/workspaces",
]
# Build OData filter for the Activity Log query
time_filter = (
f"eventTimestamp ge '{start_time.isoformat()}' "
f"and eventTimestamp le '{end_time.isoformat()}'"
)
rg_filter = f"resourceGroupName eq '{self.resource_group}'"
odata_filter = f"{time_filter} and {rg_filter}"
events = []
try:
activity_logs = self.monitor_client.activity_logs.list(
filter=odata_filter
)
for log in activity_logs:
# Filter by resource type
resource_type = getattr(log, "resource_type", {})
rt_value = getattr(resource_type, "value", "") if resource_type else ""
if resource_types and rt_value not in resource_types:
continue
caller = getattr(log, "caller", "unknown")
claims = getattr(log, "claims", {}) or {}
                ip_address = claims.get("ipaddr", "unknown")
status_obj = getattr(log, "status", None)
status_value = getattr(status_obj, "value", "unknown") if status_obj else "unknown"
events.append(AzureAIForensicEvent(
timestamp=str(getattr(log, "event_timestamp", "")),
operation_name=getattr(
getattr(log, "operation_name", None), "value", "unknown"
),
resource_type=rt_value,
resource_name=getattr(log, "resource_id", "unknown"),
caller_identity=caller,
caller_ip=str(ip_address),
status=status_value,
properties=dict(getattr(log, "properties", {}) or {}),
correlation_id=str(getattr(log, "correlation_id", "")),
category=str(getattr(
getattr(log, "category", None), "value", ""
)),
))
except Exception as e:
print(f"Error collecting activity logs: {e}")
return events
def query_log_analytics(
self,
workspace_id: str,
query: str,
timespan: Optional[timedelta] = None,
) -> list[dict]:
"""
Execute a KQL query against a Log Analytics workspace.
Args:
workspace_id: The Log Analytics workspace ID.
query: KQL query string.
timespan: Time range for the query.
Returns:
List of result rows as dicts.
"""
if timespan is None:
timespan = timedelta(days=7)
try:
response = self.logs_client.query_workspace(
workspace_id=workspace_id,
query=query,
timespan=timespan,
)
if response.status == LogsQueryStatus.SUCCESS:
results = []
for table in response.tables:
columns = [col.name for col in table.columns]
for row in table.rows:
results.append(dict(zip(columns, row)))
return results
else:
return [{"error": f"Query failed: {response.status}"}]
except Exception as e:
            return [{"error": str(e)}]

Azure OpenAI Service Forensics
Azure OpenAI Service is the most commonly investigated Azure AI service because it processes natural language prompts that may contain sensitive data, and its outputs can reveal successful attacks. When diagnostic logging is enabled, Azure OpenAI logs full request and response content to Log Analytics.
class AzureOpenAIForensicAnalyzer:
"""Forensic analysis specific to Azure OpenAI Service."""
def __init__(self, collector: AzureAIForensicCollector):
self.collector = collector
def investigate_api_usage(
self,
workspace_id: str,
resource_name: str,
start_time: datetime,
end_time: datetime,
) -> dict:
"""
Investigate Azure OpenAI API usage patterns.
Args:
workspace_id: Log Analytics workspace ID.
resource_name: Azure OpenAI resource name.
start_time: Start of investigation window.
end_time: End of investigation window.
Returns:
Investigation results dict.
"""
results = {
"resource": resource_name,
"period": {
"start": start_time.isoformat(),
"end": end_time.isoformat(),
},
"usage_summary": {},
"suspicious_requests": [],
"content_policy_violations": [],
"error_analysis": {},
}
# Query usage summary by model and caller
usage_query = f"""
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
| where Resource =~ "{resource_name}"
| where TimeGenerated between (
datetime({start_time.strftime('%Y-%m-%dT%H:%M:%S')})
.. datetime({end_time.strftime('%Y-%m-%dT%H:%M:%S')})
)
| where OperationName == "ChatCompletions_Create"
or OperationName == "Completions_Create"
or OperationName == "Embeddings_Create"
            | summarize
                RequestCount = count(),
                // Token counts live inside the JSON payload; adjust the
                // path below to match the schema in your workspace.
                TotalTokens = sum(toint(todynamic(properties_s).usage.total_tokens)),
                AvgDurationMs = avg(DurationMs)
by OperationName, CallerIPAddress, identity_claim_appid_s
| order by RequestCount desc
"""
timespan = end_time - start_time
usage_data = self.collector.query_log_analytics(
workspace_id, usage_query, timespan
)
results["usage_summary"] = usage_data
# Query for content policy violations (HTTP 400 with
# content_filter reasons)
violation_query = f"""
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
| where Resource =~ "{resource_name}"
| where TimeGenerated between (
datetime({start_time.strftime('%Y-%m-%dT%H:%M:%S')})
.. datetime({end_time.strftime('%Y-%m-%dT%H:%M:%S')})
)
| where ResultSignature == "400"
| where properties_s contains "content_filter"
| project
TimeGenerated,
OperationName,
CallerIPAddress,
ResultSignature,
properties_s,
identity_claim_appid_s
| order by TimeGenerated asc
"""
violations = self.collector.query_log_analytics(
workspace_id, violation_query, timespan
)
results["content_policy_violations"] = violations
# Query for request patterns that suggest prompt injection
# or jailbreak attempts (high error rates followed by successes)
pattern_query = f"""
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
| where Resource =~ "{resource_name}"
| where TimeGenerated between (
datetime({start_time.strftime('%Y-%m-%dT%H:%M:%S')})
.. datetime({end_time.strftime('%Y-%m-%dT%H:%M:%S')})
)
            | sort by CallerIPAddress asc, TimeGenerated asc
            // prev() requires serialized input; sort provides it.
            | extend PrevResult = prev(ResultSignature)
            | summarize
                TotalRequests = count(),
                FailedRequests = countif(ResultSignature != "200"),
                SuccessAfterFailure = countif(
                    ResultSignature == "200" and PrevResult != "200"
                )
                by bin(TimeGenerated, 5m), CallerIPAddress
| where FailedRequests > 5
| order by TimeGenerated asc
"""
pattern_data = self.collector.query_log_analytics(
workspace_id, pattern_query, timespan
)
results["suspicious_requests"] = pattern_data
return results
def collect_prompt_response_logs(
self,
workspace_id: str,
resource_name: str,
start_time: datetime,
end_time: datetime,
caller_ip: Optional[str] = None,
max_results: int = 1000,
) -> list[dict]:
"""
Collect full prompt and response logs from Azure OpenAI.
Requires that diagnostic logging with request/response
body logging is enabled.
Args:
workspace_id: Log Analytics workspace ID.
resource_name: Azure OpenAI resource name.
start_time: Start of collection window.
end_time: End of collection window.
caller_ip: Optional filter for a specific caller IP.
max_results: Maximum number of results.
Returns:
List of prompt/response log entries.
"""
ip_filter = ""
if caller_ip:
ip_filter = f'| where CallerIPAddress == "{caller_ip}"'
query = f"""
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
| where Resource =~ "{resource_name}"
| where TimeGenerated between (
datetime({start_time.strftime('%Y-%m-%dT%H:%M:%S')})
.. datetime({end_time.strftime('%Y-%m-%dT%H:%M:%S')})
)
| where OperationName == "ChatCompletions_Create"
{ip_filter}
| project
TimeGenerated,
OperationName,
CallerIPAddress,
DurationMs,
ResultSignature,
properties_s,
identity_claim_appid_s
| order by TimeGenerated asc
| take {max_results}
"""
timespan = end_time - start_time
return self.collector.query_log_analytics(
workspace_id, query, timespan
        )

Azure Machine Learning Forensics
Azure ML workspaces generate forensic artifacts through experiments, model registrations, endpoint deployments, and compute operations. The workspace's Activity Log captures control plane actions, while the workspace's diagnostic logs capture data plane operations.
class AzureMLForensicAnalyzer:
"""Forensic analysis specific to Azure Machine Learning."""
def __init__(self, collector: AzureAIForensicCollector):
self.collector = collector
def investigate_workspace_activity(
self,
workspace_id: str,
log_analytics_workspace_id: str,
start_time: datetime,
end_time: datetime,
) -> dict:
"""
Investigate activity in an Azure ML workspace.
Args:
workspace_id: The Azure ML workspace resource ID.
log_analytics_workspace_id: Log Analytics workspace ID.
start_time: Start of investigation window.
end_time: End of investigation window.
Returns:
Investigation results dict.
"""
results = {
"workspace_id": workspace_id,
"model_registry_changes": [],
"endpoint_modifications": [],
"compute_activity": [],
"data_access_events": [],
}
timespan = end_time - start_time
        # Collect compute cluster and run status-change events
        compute_query = f"""
        AmlComputeClusterEvent
        | union AmlRunStatusChangedEvent
        | where TimeGenerated between (
            datetime({start_time.strftime('%Y-%m-%dT%H:%M:%S')})
            .. datetime({end_time.strftime('%Y-%m-%dT%H:%M:%S')})
        )
        | project TimeGenerated, OperationName, Identity, Properties=properties_s
        | order by TimeGenerated asc
        """
        results["compute_activity"] = self.collector.query_log_analytics(
            log_analytics_workspace_id, compute_query, timespan
        )
# Check for model registration and deployment events
deployment_query = f"""
AzureActivity
| where TimeGenerated between (
datetime({start_time.strftime('%Y-%m-%dT%H:%M:%S')})
.. datetime({end_time.strftime('%Y-%m-%dT%H:%M:%S')})
)
| where ResourceProvider == "MICROSOFT.MACHINELEARNINGSERVICES"
| where OperationNameValue contains "model"
or OperationNameValue contains "endpoint"
or OperationNameValue contains "deployment"
| project
TimeGenerated,
OperationNameValue,
Caller,
CallerIpAddress,
ActivityStatusValue,
Properties
| order by TimeGenerated asc
"""
deployment_data = self.collector.query_log_analytics(
log_analytics_workspace_id, deployment_query, timespan
)
results["endpoint_modifications"] = deployment_data
return results
def check_model_registry_integrity(
self,
workspace_id: str,
log_analytics_workspace_id: str,
model_name: str,
start_time: datetime,
end_time: datetime,
) -> dict:
"""
Check the integrity of a model in the Azure ML registry
by examining its version history and access patterns.
Args:
workspace_id: Azure ML workspace resource ID.
log_analytics_workspace_id: Log Analytics workspace ID.
model_name: Name of the model to investigate.
start_time: Start of investigation window.
end_time: End of investigation window.
Returns:
Model integrity assessment dict.
"""
timespan = end_time - start_time
query = f"""
AzureActivity
| where TimeGenerated between (
datetime({start_time.strftime('%Y-%m-%dT%H:%M:%S')})
.. datetime({end_time.strftime('%Y-%m-%dT%H:%M:%S')})
)
| where ResourceProvider == "MICROSOFT.MACHINELEARNINGSERVICES"
| where Properties contains "{model_name}"
| project
TimeGenerated,
OperationNameValue,
Caller,
CallerIpAddress,
ActivityStatusValue,
Properties
| order by TimeGenerated asc
"""
events = self.collector.query_log_analytics(
log_analytics_workspace_id, query, timespan
)
findings = []
for event in events:
op_name = event.get("OperationNameValue", "")
if "write" in op_name.lower() or "delete" in op_name.lower():
findings.append({
"type": "model_modification",
"severity": "high",
"operation": op_name,
"timestamp": str(event.get("TimeGenerated")),
"caller": event.get("Caller"),
"ip": event.get("CallerIpAddress"),
})
return {
"model_name": model_name,
"events_found": len(events),
"modification_events": len(findings),
"findings": findings,
        }

Entra ID Integration for Identity Forensics
Azure AI services authenticate through Entra ID (formerly Azure Active Directory), which provides a rich audit trail of authentication events. For forensic investigations, Entra ID sign-in logs reveal who authenticated, from what device and location, at what time, and whether any conditional access policies were applied or bypassed.
When investigating unauthorized access to Azure AI services, correlate the AI service diagnostic logs with Entra ID sign-in logs. The application ID (appId claim) in the AI service logs matches the application registration in Entra ID. Query the sign-in logs for that application to identify all authentication events, including failed attempts that may indicate credential stuffing or brute force attacks.
Entra ID also captures service principal sign-in logs, which are critical for investigating compromised service accounts. Many Azure AI deployments use service principals for automated access. If a service principal's credentials are compromised, the attacker gains the same access as the automated system. Service principal sign-in logs show the IP addresses and timestamps of all authentication events for the principal.
def query_entra_signin_logs(
collector: AzureAIForensicCollector,
workspace_id: str,
app_id: str,
start_time: datetime,
end_time: datetime,
) -> list[dict]:
"""
Query Entra ID sign-in logs for a specific application.
Args:
collector: The forensic collector instance.
workspace_id: Log Analytics workspace ID.
app_id: The application (client) ID to investigate.
start_time: Start of investigation window.
end_time: End of investigation window.
Returns:
List of sign-in events.
"""
timespan = end_time - start_time
query = f"""
SigninLogs
| where TimeGenerated between (
datetime({start_time.strftime('%Y-%m-%dT%H:%M:%S')})
.. datetime({end_time.strftime('%Y-%m-%dT%H:%M:%S')})
)
| where AppId == "{app_id}"
| project
TimeGenerated,
UserPrincipalName,
AppDisplayName,
IPAddress,
Location,
ResultType,
ResultDescription,
ClientAppUsed,
ConditionalAccessStatus,
RiskLevelDuringSignIn,
DeviceDetail
| order by TimeGenerated asc
"""
    return collector.query_log_analytics(workspace_id, query, timespan)

Network-Level Forensics for Azure AI
Azure Network Watcher and NSG Flow Logs provide network-level visibility into AI service traffic. For Azure ML compute instances and endpoints deployed within a VNet, NSG Flow Logs record all inbound and outbound network connections. This data is valuable for detecting data exfiltration (large outbound transfers to unexpected destinations), unauthorized access attempts (inbound connections from unexpected sources), and lateral movement (connections between AI resources and other internal systems that should not be communicating).
Enable NSG Flow Logs version 2, which includes byte counts and flow state information in addition to the basic five-tuple (source IP, destination IP, source port, destination port, protocol). Send flow logs to a Storage Account and Log Analytics workspace for analysis. For real-time detection, configure Traffic Analytics, which provides aggregated flow analysis and anomaly detection.
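The flow-log analysis described above can be sketched as a KQL query builder. This is a minimal sketch that assumes Traffic Analytics is enabled and writes to the `AzureNetworkAnalytics_CL` table; the field names (`SubType_s`, `FlowDirection_s`, `SrcIP_s`, `OutboundBytes_d`) follow the Traffic Analytics schema and should be verified against your workspace before relying on the results.

```python
def build_outbound_flow_query(
    source_prefix: str,
    min_outbound_mb: float = 100.0,
) -> str:
    """Build a KQL query that surfaces large outbound flows from
    AI compute subnets, a common data-exfiltration indicator.

    Field names assume the Traffic Analytics schema; adjust to
    match your workspace.
    """
    return f"""
AzureNetworkAnalytics_CL
| where SubType_s == "FlowLog" and FlowDirection_s == "O"
| where SrcIP_s startswith "{source_prefix}"
| summarize OutboundMB = sum(OutboundBytes_d) / (1024.0 * 1024.0)
    by SrcIP_s, DestIP_s, DestPort_d
| where OutboundMB > {min_outbound_mb}
| order by OutboundMB desc
"""
```

The resulting string can be passed to `AzureAIForensicCollector.query_log_analytics` like the queries earlier in this article.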
Investigating Common Azure AI Incident Scenarios
Key Vault Forensics for AI API Keys
Many Azure AI deployments store API keys and connection strings in Azure Key Vault. When investigating potential credential compromise, Key Vault audit logs are essential. Key Vault diagnostic logs record every secret read, write, and delete operation, along with the caller identity and IP address.
Query Key Vault logs to determine: when AI service keys were last accessed, which identities accessed them, whether any secrets were created or modified unexpectedly, and whether secret backup or restore operations were performed (which could indicate an attacker exporting keys for use outside the environment).
Enable Key Vault's purge protection and soft delete features to prevent permanent deletion of secrets during an incident. An attacker who gains Key Vault access might attempt to delete secrets to cover their tracks. With soft delete enabled, deleted secrets remain recoverable for the configured retention period.
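The Key Vault checks above can be expressed as a reusable query builder. This is a sketch; the `AzureDiagnostics` column names (`identity_claim_appid_g`, `requestUri_s`) reflect the Key Vault AuditEvent schema and should be confirmed in your own workspace.

```python
# Secret operations most relevant to AI key compromise.
KEY_VAULT_SECRET_OPS = (
    "SecretGet", "SecretSet", "SecretDelete",
    "SecretBackup", "SecretRestore",
)

def build_key_vault_audit_query(
    vault_name: str,
    operations: tuple = KEY_VAULT_SECRET_OPS,
) -> str:
    """Build a KQL query over Key Vault audit logs covering the
    secret operations named above."""
    op_list = ", ".join(f'"{op}"' for op in operations)
    return f"""
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| where Resource =~ "{vault_name}"
| where OperationName in ({op_list})
| project TimeGenerated, OperationName, CallerIPAddress,
    identity_claim_appid_g, requestUri_s, ResultSignature
| order by TimeGenerated asc
"""
```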
Scenario 1: Azure OpenAI Content Policy Bypass
An attacker discovers a way to bypass Azure OpenAI's content filtering to generate prohibited content. Investigation steps:
- Query diagnostic logs for content filter violations (HTTP 400 responses with content_filter error codes) to see the attack attempts.
- Identify successful requests from the same caller IP or application ID that follow the failed attempts, as these may represent successful bypasses.
- Collect the full prompt/response content from successful requests to determine what was generated.
- Correlate the application ID with Entra ID to identify the application and its owners.
- Check whether the content filtering configuration was modified: query Activity Log for Write operations on the Azure OpenAI resource's content policy settings.
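Step 2 of the list above (linking successes to preceding failures) can be done client-side on the rows returned by the violation and prompt/response queries. A sketch; the `TimeGenerated` and `CallerIPAddress` keys match the Log Analytics projections used earlier, and the 30-minute window is an illustrative default, not a standard.

```python
from datetime import datetime, timedelta

def find_bypass_candidates(
    violations: list,
    successes: list,
    window_minutes: int = 30,
) -> list:
    """Flag successful requests arriving shortly after a
    content-filter violation from the same caller IP."""
    def parse_ts(row: dict) -> datetime:
        # Log Analytics returns ISO 8601; normalise a trailing "Z".
        return datetime.fromisoformat(
            str(row["TimeGenerated"]).replace("Z", "+00:00")
        )

    window = timedelta(minutes=window_minutes)
    candidates = []
    for success in successes:
        for violation in violations:
            if success["CallerIPAddress"] != violation["CallerIPAddress"]:
                continue
            gap = parse_ts(success) - parse_ts(violation)
            if timedelta(0) < gap <= window:
                candidates.append({
                    "success_time": str(success["TimeGenerated"]),
                    "violation_time": str(violation["TimeGenerated"]),
                    "caller_ip": success["CallerIPAddress"],
                    "gap_seconds": gap.total_seconds(),
                })
                break
    return candidates
```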
Scenario 2: Unauthorized Azure ML Model Deployment
An attacker deploys a malicious model to an Azure ML endpoint, replacing the legitimate model. Investigation steps:
- Query Azure Activity Log for endpoint Write operations (model deployment, endpoint configuration changes).
- Identify the caller identity and verify whether the deployment was authorized through your change management process.
- Check the model registry version history to determine if a new model version was registered or if an existing version was modified.
- Compare the deployed model hash against the expected hash from your CI/CD pipeline.
- Collect Azure ML compute logs to check for any unauthorized training jobs that may have produced the malicious model.
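Step 4 (hash comparison) can be sketched as follows, assuming the deployed model artifacts have been downloaded locally and the expected hashes were recorded at registration time by your pipeline:

```python
import hashlib
from pathlib import Path

def hash_model_artifacts(model_dir: str) -> dict:
    """SHA-256 every file under a downloaded model directory,
    keyed by path relative to the directory root."""
    root = Path(model_dir)
    return {
        str(path.relative_to(root)): hashlib.sha256(
            path.read_bytes()
        ).hexdigest()
        for path in sorted(root.rglob("*"))
        if path.is_file()
    }

def diff_model_hashes(expected: dict, actual: dict) -> list:
    """Compare recorded hashes against the deployed artifacts."""
    findings = []
    for name, digest in actual.items():
        if name not in expected:
            findings.append(("unexpected_file", name))
        elif expected[name] != digest:
            findings.append(("hash_mismatch", name))
    findings.extend(
        ("missing_file", name) for name in expected if name not in actual
    )
    return findings
```

Any `hash_mismatch` or `unexpected_file` finding on a production endpoint warrants treating the deployment as compromised until proven otherwise.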
Scenario 3: Data Exfiltration Through Cognitive Services
An insider uses Cognitive Services to process and exfiltrate sensitive documents (e.g., using Document Intelligence to OCR confidential documents and send the text to an external endpoint). Investigate by collecting diagnostic logs for the Cognitive Services resource, identifying the caller and the volume of processed documents, and cross-referencing with network logs to determine where the processed output was sent.
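Identifying the caller and processing volume can start from a query like the following sketch, which summarises per-caller request counts for the Cognitive Services resource (field names follow the shared `AzureDiagnostics` schema and should be verified in your workspace):

```python
def build_cognitive_usage_query(
    resource_name: str, bin_size: str = "1h"
) -> str:
    """Build a KQL query summarising per-caller request volume for
    a Cognitive Services resource, to spot bulk document
    processing consistent with exfiltration."""
    return f"""
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
| where Resource =~ "{resource_name}"
| summarize RequestCount = count()
    by bin(TimeGenerated, {bin_size}), OperationName, CallerIPAddress
| order by RequestCount desc
"""
```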
Forensic Readiness Configuration for Azure AI
Essential Diagnostic Settings
Configure the following diagnostic settings before an incident occurs:
def configure_azure_ai_forensic_logging(
subscription_id: str,
resource_group: str,
resource_name: str,
resource_type: str,
log_analytics_workspace_id: str,
storage_account_id: str,
) -> dict:
"""
Configure diagnostic settings for an Azure AI resource
to ensure forensic readiness.
Args:
subscription_id: Azure subscription ID.
resource_group: Resource group name.
resource_name: AI resource name.
resource_type: Resource type (e.g., "Microsoft.CognitiveServices/accounts").
log_analytics_workspace_id: Log Analytics workspace resource ID.
storage_account_id: Storage account for long-term log retention.
Returns:
Configuration result dict.
"""
credential = DefaultAzureCredential()
monitor = MonitorManagementClient(credential, subscription_id)
resource_id = (
f"/subscriptions/{subscription_id}"
f"/resourceGroups/{resource_group}"
f"/providers/{resource_type}/{resource_name}"
)
# Define diagnostic setting with all relevant log categories
diagnostic_settings = {
"storage_account_id": storage_account_id,
"workspace_id": log_analytics_workspace_id,
"logs": [
{
"category": "Audit",
"enabled": True,
"retention_policy": {
"enabled": True,
"days": 365,
},
},
{
"category": "RequestResponse",
"enabled": True,
"retention_policy": {
"enabled": True,
"days": 90,
},
},
{
"category": "Trace",
"enabled": True,
"retention_policy": {
"enabled": True,
"days": 90,
},
},
],
"metrics": [
{
"category": "AllMetrics",
"enabled": True,
"retention_policy": {
"enabled": True,
"days": 90,
},
},
],
}
    try:
        monitor.diagnostic_settings.create_or_update(
            resource_uri=resource_id,
            name=f"{resource_name}-forensic-logging",
            parameters=diagnostic_settings,
        )
return {
"status": "configured",
"resource": resource_name,
"setting_name": f"{resource_name}-forensic-logging",
}
except Exception as e:
        return {"status": "error", "error": str(e)}

Evidence Preservation on Azure
When preserving evidence from Azure AI services, leverage Azure's built-in capabilities for immutable storage and long-term retention. Configure immutable blob storage policies on the storage accounts used for diagnostic log archival. This prevents deletion or modification of log data even by administrators, providing a tamper-resistant evidence store.
For Azure OpenAI investigations, the most critical evidence is the prompt and response content stored in Log Analytics. Log Analytics has a default retention of 30 days for most tiers, which is insufficient for forensic investigations. Configure data export rules to continuously export AI-related log data to a Storage Account with immutable storage enabled and a retention period of at least one year.
For Azure ML workspace investigations, preserve the workspace's run history, model registry state, and compute logs. Use the Azure ML SDK to programmatically export run details, model version metadata, and environment configurations. Store these exports alongside the activity logs for a complete evidence package.
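The export described above can be sketched with the azure-ai-ml SDK. The flattening helper is illustrative; the attribute names follow the SDK's `Job` object but should be checked against the SDK version in use.

```python
import json

def job_to_evidence(job) -> dict:
    """Flatten an Azure ML job object into a JSON-serialisable
    evidence record (attribute names per the azure-ai-ml SDK)."""
    ctx = getattr(job, "creation_context", None)
    return {
        "name": getattr(job, "name", None),
        "status": getattr(job, "status", None),
        "type": getattr(job, "type", None),
        "created_by": getattr(ctx, "created_by", None),
        "created_at": str(getattr(ctx, "created_at", "")),
    }

def export_workspace_evidence(ml_client, model_name: str, out_path: str) -> dict:
    """Export run history and model version metadata from an
    azure.ai.ml MLClient into a single evidence file."""
    evidence = {
        "jobs": [job_to_evidence(j) for j in ml_client.jobs.list()],
        "model_versions": [
            {"name": m.name, "version": m.version}
            for m in ml_client.models.list(name=model_name)
        ],
    }
    with open(out_path, "w") as f:
        json.dump(evidence, f, indent=2, default=str)
    return evidence
```

Store the resulting JSON alongside the activity logs, and hash it on write so its integrity can be demonstrated later.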
When working in regulated industries, consider using Azure Confidential Computing resources for forensic analysis to ensure that evidence processing meets data protection requirements. Customer Lockbox can be enabled to require explicit approval before Microsoft support can access your AI resources, which is relevant when the investigation involves potential insider threats.
Cross-Service Correlation on Azure
Azure's correlation ID system provides a powerful mechanism for tracking requests across service boundaries. Every operation in Azure generates a correlation ID that links related events across Activity Log, diagnostic logs, and Entra ID sign-in logs. When investigating a chain of actions (e.g., an attacker authenticates through Entra ID, accesses Key Vault to retrieve an API key, then calls Azure OpenAI), the correlation ID connects these events even though they span different services.
Use Log Analytics workbook queries that join across log tables using correlation IDs to build a unified timeline. The Activity Log table (AzureActivity), Entra ID sign-in logs (SigninLogs), Key Vault diagnostic logs (AzureDiagnostics where ResourceProvider == "MICROSOFT.KEYVAULT"), and Cognitive Services diagnostic logs can all be correlated this way.
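A unified-timeline query along these lines can be built programmatically. This is a sketch; whether a given `AzureDiagnostics` row carries a `CorrelationId` depends on the emitting service, so verify column availability per table before relying on the join.

```python
def build_correlation_timeline_query(correlation_id: str) -> str:
    """Union the main log tables on a shared correlation ID and
    normalise the columns into one cross-service timeline."""
    return f"""
AzureActivity
| where CorrelationId == "{correlation_id}"
| project TimeGenerated, Source = "ActivityLog",
    Operation = OperationNameValue, Actor = Caller, IP = CallerIpAddress
| union (
    SigninLogs
    | where CorrelationId == "{correlation_id}"
    | project TimeGenerated, Source = "EntraSignIn",
        Operation = strcat("SignIn:", ResultType),
        Actor = UserPrincipalName, IP = IPAddress
)
| union (
    AzureDiagnostics
    | where CorrelationId == "{correlation_id}"
    | project TimeGenerated, Source = strcat("Diag:", ResourceProvider),
        Operation = OperationName, Actor = identity_claim_appid_g,
        IP = CallerIPAddress
)
| order by TimeGenerated asc
"""
```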
Additionally, Azure Sentinel (now Microsoft Sentinel) can automatically correlate these signals using its built-in UEBA (User and Entity Behavior Analytics) capabilities. If your organization uses Sentinel, configure data connectors for all AI-related services so that AI security incidents benefit from Sentinel's correlation and investigation tools.
Key Recommendations
- Enable diagnostic logging on all Azure OpenAI resources with RequestResponse logs sent to both Log Analytics (for real-time queries) and a Storage Account (for long-term retention).
- Configure Azure ML workspace diagnostic settings to capture AmlComputeClusterEvent, AmlRunStatusChangedEvent, and AmlEnvironmentEvent categories.
- Enable Microsoft Entra ID sign-in and audit logs to correlate AI service access with authentication events.
- Set Log Analytics retention to at least 90 days for operational data and archive to storage for longer periods.
- Configure Azure Alerts on high-risk operations: model deployments, content policy modifications, and unusual API call volumes.
- Use Managed Identities instead of API keys where possible. Managed identities create richer audit trails in Entra ID.
Microsoft Defender for Cloud Integration
Microsoft Defender for Cloud provides AI workload protection capabilities that complement manual forensic investigation. Defender for Cloud monitors Azure resources for security misconfigurations, detects threats in near-real-time, and generates actionable security recommendations. For Azure AI services, Defender can detect exposed API keys, misconfigured network access (AI services accessible from the public internet without restrictions), and anomalous service usage patterns.
When investigating an Azure AI incident, check Defender for Cloud alerts for the affected resources. Defender's alert timeline may reveal precursor activity that was detected but not acted upon, such as failed authentication attempts or reconnaissance-pattern API calls that preceded the main attack. Defender's attack path analysis can show how the attacker could have moved from an initial foothold to the AI service, which is valuable for identifying the root cause.
Configure Defender for Cloud's continuous export to send security alerts to a Log Analytics workspace or Event Hub. This ensures that Defender findings are available for forensic analysis alongside the diagnostic logs from AI services, enabling correlation between security alerts and AI service activity in a single query environment.
References
- Microsoft (2025). "Monitor Azure OpenAI Service." https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/monitoring
- Microsoft (2025). "Monitor Azure Machine Learning." https://learn.microsoft.com/en-us/azure/machine-learning/monitor-azure-machine-learning
- Microsoft (2025). "Azure diagnostic settings." https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/diagnostic-settings