Service Mesh Security for AI Microservices
Securing inter-service communication in AI systems using Istio, Linkerd, and Envoy, with a focus on inference pipelines and model serving architectures
Overview
Modern AI systems are increasingly deployed as microservice architectures in which individual components — model servers, feature stores, preprocessing services, postprocessing logic, routing layers, and monitoring collectors — communicate over the network. A single inference request might traverse a chain of services: an API gateway authenticates the request, a preprocessor normalizes the input, a router selects the appropriate model version, the model server performs inference, a postprocessor formats the output, and a logging service records the interaction. Each network hop is a potential point of interception, manipulation, or failure.
Service meshes such as Istio, Linkerd, and raw Envoy proxy configurations provide a transparent infrastructure layer that handles mutual TLS (mTLS) encryption, authentication, authorization, traffic management, and observability for service-to-service communication. For AI deployments, a properly configured service mesh can enforce that only authorized services communicate with model servers, encrypt inference data in transit, provide fine-grained access control for model management operations, and detect anomalous traffic patterns that might indicate model extraction attempts.
However, AI workloads introduce unique challenges for service mesh deployments. The latency overhead of sidecar proxies can be unacceptable for real-time inference endpoints where milliseconds matter. GPU-accelerated services may use non-HTTP protocols (gRPC with large binary payloads, custom TCP for model loading) that require careful mesh configuration. High-throughput inference pipelines generate enormous volumes of telemetry data that can overwhelm mesh observability systems. And the performance sensitivity of AI workloads often leads operators to create mesh exceptions and bypasses that undermine the security benefits.
This article examines service mesh security in the specific context of AI microservices, covering both the defensive value of meshes and the attack surface they introduce when deployed alongside ML workloads.
Service Mesh Architecture for AI Systems
AI Inference Pipeline Topology
A typical AI inference pipeline deployed with a service mesh looks like this:
Client -> [Ingress Gateway] -> [Preprocessor] -> [Model Router]
                                                      |
                                +---------------------+---------------------+
                                |                     |                     |
                           [Model v1]            [Model v2]      [Ensemble Controller]
                                |                     |                     |
                                +---------------------+---------------------+
                                                      |
                          [Postprocessor] -> [Response Cache] -> Client
Each arrow represents a network call that the service mesh intercepts through sidecar proxies (Envoy in Istio, linkerd2-proxy in Linkerd). The mesh provides:
- mTLS between all services: Prevents inference data interception.
- Authorization policies: Controls which services can call which (e.g., only the router can call model servers).
- Traffic management: Canary deployments for new model versions, A/B testing, circuit breaking for failing models.
- Observability: Request tracing across the inference chain, latency metrics per hop, error rates.
Sidecar Injection and AI Workloads
The service mesh sidecar proxy runs alongside each application container in a pod. For AI workloads, this creates specific considerations:
"""
Audit service mesh sidecar deployment in AI namespaces.
Identifies pods without sidecars, misconfigured injection,
and performance-related bypasses that create 安全 gaps.
"""
import subprocess
import json
from typing import Any
class ServiceMeshAIAuditor:
"""Audit service mesh configuration for AI workloads."""
def __init__(self, namespace: str = "ai-推論"):
self.namespace = namespace
self.findings: list[dict] = []
def _kubectl_json(self, *args: str) -> dict[str, Any]:
"""Execute kubectl and return parsed JSON."""
cmd = ["kubectl", "-n", self.namespace] + list(args) + ["-o", "json"]
result = subprocess.run(
cmd, capture_output=True, text=True, timeout=30,
)
if result.returncode != 0:
return {}
return json.loads(result.stdout)
def check_sidecar_coverage(self) -> None:
"""
Verify all AI pods have service mesh sidecars injected.
Missing sidecars mean unencrypted, unauthorized communication.
"""
pods = self._kubectl_json("get", "pods")
for pod in pods.get("items", []):
name = pod["metadata"]["name"]
containers = [
c["name"]
for c in pod.get("spec", {}).get("containers", [])
]
# Check for known sidecar container names
sidecar_names = {"istio-proxy", "linkerd-proxy", "envoy-sidecar"}
has_sidecar = bool(sidecar_names & set(containers))
# Check for sidecar injection annotation
annotations = pod.get("metadata", {}).get("annotations", {})
injection_disabled = (
annotations.get("sidecar.istio.io/inject") == "false"
or annotations.get("linkerd.io/inject") == "disabled"
)
if not has_sidecar:
severity = "HIGH"
detail = (
f"Pod {name} has no service mesh sidecar. "
f"Containers: {containers}. "
)
if injection_disabled:
detail += (
"Injection is explicitly disabled via annotation. "
"This was likely intentional but creates a 安全 gap."
)
severity = "CRITICAL"
else:
detail += (
"Injection may have failed or the namespace is not "
"labeled for injection."
)
# Check if 這是 a GPU/model-serving pod
for container in pod.get("spec", {}).get("containers", []):
limits = container.get("resources", {}).get("limits", {})
if any("gpu" in k.lower() for k in limits):
detail += " THIS IS A GPU POD — model 推論 "
detail += "traffic is unprotected."
severity = "CRITICAL"
self.findings.append({
"severity": severity,
"title": f"Missing sidecar: {name}",
"detail": detail,
})
def check_mtls_policy(self) -> None:
"""Verify mTLS is enforced in STRICT mode."""
# Check Istio PeerAuthentication
result = subprocess.run(
[
"kubectl", "get", "peerauthentication",
"-n", self.namespace, "-o", "json",
],
capture_output=True, text=True, timeout=30,
)
if result.returncode == 0:
policies = json.loads(result.stdout)
has_strict = False
for policy in policies.get("items", []):
mode = (
policy.get("spec", {})
.get("mtls", {})
.get("mode", "UNSET")
)
name = policy["metadata"]["name"]
if mode == "STRICT":
has_strict = True
elif mode == "PERMISSIVE":
self.findings.append({
"severity": "HIGH",
"title": f"PERMISSIVE mTLS: {name}",
"detail": (
"PERMISSIVE mode allows plaintext connections. "
"攻擊者 in the network can intercept AI "
"推論 requests and responses without TLS."
),
})
elif mode == "DISABLE":
self.findings.append({
"severity": "CRITICAL",
"title": f"mTLS DISABLED: {name}",
"detail": (
"mTLS is explicitly disabled. All service-to-"
"service communication is unencrypted."
),
})
# Check for port-level exceptions
port_mtls = (
policy.get("spec", {}).get("portLevelMtls", {})
)
for port, config in port_mtls.items():
if config.get("mode") != "STRICT":
self.findings.append({
"severity": "HIGH",
"title": (
f"mTLS exception on port {port}: {name}"
),
"detail": (
f"Port {port} has mTLS mode "
f"{config.get('mode', 'UNSET')}. "
f"If 這是 a model serving port, "
f"推論 traffic is unprotected."
),
})
if not has_strict:
self.findings.append({
"severity": "HIGH",
"title": "No STRICT mTLS policy in namespace",
"detail": (
"Without a STRICT PeerAuthentication policy, "
"services can accept plaintext connections."
),
})
def check_authorization_policies(self) -> None:
"""
Verify that AuthorizationPolicies restrict which services
can communicate in the AI 推論 pipeline.
"""
result = subprocess.run(
[
"kubectl", "get", "authorizationpolicies",
"-n", self.namespace, "-o", "json",
],
capture_output=True, text=True, timeout=30,
)
if result.returncode != 0:
self.findings.append({
"severity": "HIGH",
"title": "No AuthorizationPolicies found",
"detail": (
"Without 授權 policies, any service in the "
"mesh can call any other service. 這意味著 a "
"compromised preprocessor can directly access model "
"servers, bypassing the routing layer."
),
})
return
policies = json.loads(result.stdout)
if not policies.get("items"):
self.findings.append({
"severity": "HIGH",
"title": "No AuthorizationPolicies in namespace",
"detail": (
"No 授權 policies restrict service-to-service "
"communication. 實作 least-privilege policies."
),
})
return
# Check for overly broad ALLOW policies
for policy in policies.get("items", []):
name = policy["metadata"]["name"]
rules = policy.get("spec", {}).get("rules", [])
for rule in rules:
from_sources = rule.get("from", [])
if not from_sources:
self.findings.append({
"severity": "MEDIUM",
"title": f"No source restriction: {name}",
"detail": (
"Rule has no 'from' clause, allowing any "
"source to match. 考慮 restricting to "
"specific service accounts."
),
})
def check_envoy_filter_injection(self) -> None:
"""
Check for EnvoyFilter resources that modify proxy behavior.
These can be used to bypass 安全 policies.
"""
result = subprocess.run(
[
"kubectl", "get", "envoyfilters",
"-n", self.namespace, "-o", "json",
],
capture_output=True, text=True, timeout=30,
)
if result.returncode == 0:
filters = json.loads(result.stdout)
for ef in filters.get("items", []):
name = ef["metadata"]["name"]
patches = (
ef.get("spec", {}).get("configPatches", [])
)
for patch in patches:
context = patch.get("match", {}).get("context", "")
operation = patch.get("patch", {}).get("operation", "")
self.findings.append({
"severity": "MEDIUM",
"title": f"EnvoyFilter found: {name}",
"detail": (
f"Context: {context}, Operation: {operation}. "
f"EnvoyFilters can modify proxy behavior to "
f"bypass mTLS, 授權, or rate limiting. "
f"Verify this filter is authorized."
),
})
def run_audit(self) -> list[dict]:
"""Run complete service mesh audit for AI namespace."""
self.findings = []
self.check_sidecar_coverage()
self.check_mtls_policy()
self.check_authorization_policies()
self.check_envoy_filter_injection()
return self.findings
if __name__ == "__main__":
import sys
ns = sys.argv[1] if len(sys.argv) > 1 else "ai-推論"
auditor = ServiceMeshAIAuditor(namespace=ns)
findings = auditor.run_audit()
for f in findings:
print(f"[{f['severity']}] {f['title']}")
print(f" {f['detail']}\n")Attacking Service Mesh in AI Deployments
Sidecar Bypass Techniques
The service mesh sidecar intercepts traffic through iptables rules that redirect all inbound and outbound traffic through the proxy. However, several techniques can bypass this interception:
1. UID-based bypass: Istio's iptables rules exclude traffic from the proxy's own UID (1337 by default). If an attacker can run processes as UID 1337 inside a container, their traffic bypasses the proxy entirely.
2. Init container race condition: During pod startup, there is a brief window between the init container setting up iptables rules and the sidecar becoming ready. Containers that start network operations during this window may communicate without mesh protection.
3. Host networking: Pods with hostNetwork: true bypass Kubernetes networking and the service mesh entirely. GPU workloads sometimes use host networking for direct RDMA access to InfiniBand devices.
4. Excluded ports and IP ranges: Istio allows excluding specific ports and IP ranges from interception via annotations (traffic.sidecar.istio.io/excludeOutboundPorts). AI teams may exclude high-throughput data ports to reduce latency, inadvertently creating security gaps.
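The first and fourth bypasses leave visible traces in a pod's nat table. As a rough illustration, the sketch below scans `iptables-save -t nat` style output for UID-owner and destination-port RETURN rules in the redirect chain; the sample rules are illustrative, not captured from a real cluster, and real rule layouts vary by mesh version.

```python
"""Sketch: spot mesh-interception gaps in iptables-save output (sample data)."""
import re


def find_interception_gaps(iptables_save: str) -> list[str]:
    """Return human-readable notes about traffic that skips the proxy."""
    gaps = []
    for line in iptables_save.splitlines():
        # The proxy's own UID is excluded so it does not loop its traffic
        # back into itself; any process running as that UID also escapes.
        m = re.search(r"--uid-owner (\d+) -j RETURN", line)
        if m:
            gaps.append(f"UID {m.group(1)} traffic bypasses interception")
        # RETURN rules keyed on a destination port are port exclusions.
        m = re.search(r"--dport (\d+) -j RETURN", line)
        if m:
            gaps.append(f"port {m.group(1)} excluded from interception")
    return gaps


sample = """
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -p tcp -m tcp --dport 8001 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
"""
for gap in find_interception_gaps(sample):
    print(gap)
```

Running this inside a pod's network namespace (via `nsenter` or a debug container) shows exactly which traffic the mesh never sees.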
"""
Detect service mesh sidecar bypass opportunities in AI pods.
"""
import subprocess
import json
from typing import Any
def detect_sidecar_bypasses(namespace: str = "ai-推論") -> list[dict]:
"""
識別 pods and configurations that allow bypassing
the service mesh sidecar proxy.
"""
findings = []
result = subprocess.run(
["kubectl", "get", "pods", "-n", namespace, "-o", "json"],
capture_output=True, text=True, timeout=30,
)
if result.returncode != 0:
return findings
pods = json.loads(result.stdout)
for pod in pods.get("items", []):
name = pod["metadata"]["name"]
annotations = pod.get("metadata", {}).get("annotations", {})
spec = pod.get("spec", {})
# Check for excluded ports
excluded_out = annotations.get(
"traffic.sidecar.istio.io/excludeOutboundPorts", ""
)
excluded_in = annotations.get(
"traffic.sidecar.istio.io/excludeInboundPorts", ""
)
if excluded_out:
findings.append({
"severity": "HIGH",
"title": f"Outbound port exclusion: {name}",
"detail": (
f"Excluded outbound ports: {excluded_out}. "
f"Traffic on these ports bypasses mTLS and "
f"授權 policies."
),
})
if excluded_in:
findings.append({
"severity": "HIGH",
"title": f"Inbound port exclusion: {name}",
"detail": (
f"Excluded inbound ports: {excluded_in}. "
f"Inbound traffic on these ports is not authenticated."
),
})
# Check for excluded IP ranges
excluded_ips = annotations.get(
"traffic.sidecar.istio.io/excludeOutboundIPRanges", ""
)
if excluded_ips:
findings.append({
"severity": "MEDIUM",
"title": f"Excluded IP ranges: {name}",
"detail": (
f"Excluded ranges: {excluded_ips}. Traffic to these "
f"IPs bypasses the mesh."
),
})
# Check for host networking
if spec.get("hostNetwork", False):
findings.append({
"severity": "CRITICAL",
"title": f"Host networking: {name}",
"detail": (
"Pod uses host networking, completely bypassing the "
"service mesh. All traffic is unencrypted and "
"unauthenticated."
),
})
# Check for containers running as istio-proxy UID (1337)
for container in spec.get("containers", []):
sec_ctx = container.get("securityContext", {})
run_as_user = sec_ctx.get("runAsUser")
if run_as_user == 1337:
findings.append({
"severity": "CRITICAL",
"title": (
f"Container runs as mesh UID: "
f"{name}/{container['name']}"
),
"detail": (
"Container runs as UID 1337 (Istio proxy UID). "
"Traffic from this container bypasses iptables "
"interception and the mesh entirely."
),
})
# Check for privileged init containers that could modify iptables
for init in spec.get("initContainers", []):
sec = init.get("securityContext", {})
caps = sec.get("capabilities", {}).get("add", [])
if "NET_ADMIN" in caps or sec.get("privileged", False):
# 這是 expected for istio-init, check if it's not
if init["name"] not in ("istio-init", "linkerd-init"):
findings.append({
"severity": "HIGH",
"title": (
f"NET_ADMIN init container: "
f"{name}/{init['name']}"
),
"detail": (
"Non-mesh init container with NET_ADMIN can "
"modify iptables rules to bypass sidecar "
"interception."
),
})
return findingsmTLS Downgrade 攻擊
In PERMISSIVE mTLS mode, the sidecar accepts both TLS and plaintext connections. An attacker who can send traffic directly to a service (bypassing their own sidecar) can communicate in plaintext, defeating the encryption and authentication that mTLS provides.
The attack works when:
- The target service is in PERMISSIVE mode (common during mesh migration).
- The attacker can bypass their own sidecar (using any of the techniques above).
- The attacker sends a plaintext request to the target's sidecar, which accepts it.
This is particularly dangerous for model serving endpoints, because an attacker can intercept or forge inference requests without the source identity verification that mTLS provides.
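Whether a port accepts plaintext is easy to test empirically. The sketch below sends a bare HTTP request over a raw TCP connection; if a readable HTTP status line comes back, the peer spoke plaintext (PERMISSIVE or unmeshed), while a STRICT-mode sidecar resets or closes the connection. The host, port, and path are hypothetical; run it from a pod that bypasses its own sidecar.

```python
"""Sketch: probe a service port for plaintext acceptance (mTLS downgrade check)."""
import socket


def classify_reply(reply: bytes) -> str:
    """A readable HTTP status line means the peer accepted plaintext."""
    if reply.startswith(b"HTTP/"):
        return "plaintext-accepted"  # PERMISSIVE or no mesh: downgrade possible
    return "rejected"


def probe_plaintext(host: str, port: int, timeout: float = 3.0) -> str:
    """Return 'plaintext-accepted', 'rejected', or 'unreachable' for host:port."""
    request = b"GET /healthz HTTP/1.1\r\nHost: probe\r\nConnection: close\r\n\r\n"
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(request)
            reply = sock.recv(128)
    except OSError:
        # Connection refused/reset or timeout: STRICT mode typically lands here
        return "unreachable"
    return classify_reply(reply)
```

A red team would sweep this probe across every service port in the namespace; any "plaintext-accepted" result on a model serving port confirms the downgrade path described above.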
Traffic Manipulation in Inference Chains
In AI microservice architectures, an attacker who can intercept traffic between services (by bypassing the mesh or compromising a service in the chain) can manipulate inference results at multiple points:
Preprocessing manipulation: By intercepting traffic between the API gateway and the preprocessor, an attacker can modify the input before it reaches the model. For example, altering token IDs or embedding vectors in transit can cause targeted misclassification while the original request appears legitimate.
Model routing manipulation: Intercepting traffic between the preprocessor and the model router allows an attacker to redirect requests to a specific model version. If an older model version has known vulnerabilities or biases, an attacker can force all requests through that version.
Response manipulation: By intercepting the model server's response before it reaches the postprocessor or the client, an attacker can replace model outputs with attacker-chosen values. This is equivalent to a man-in-the-middle attack on the inference pipeline and can be used to insert misinformation, alter classification results, or inject malicious content into LLM responses.
Ensemble poisoning: In ensemble architectures where multiple models contribute to a final prediction, an attacker who can modify the output of just one model in the ensemble can influence the final result. If the ensemble uses simple averaging or voting, controlling one model's output provides partial control over the final output.
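The averaging case is easy to quantify: replacing one of n member outputs shifts the ensemble mean by (forged - honest) / n, which can be enough to cross a decision threshold. The scores below are illustrative, not taken from a real system.

```python
"""Sketch: leverage gained by rewriting one member of an averaged ensemble."""


def ensemble_average(scores: list[float]) -> float:
    """Simple mean-of-members ensemble combiner."""
    return sum(scores) / len(scores)


honest = [0.30, 0.35, 0.25]    # three members agree: low score
poisoned = [0.30, 0.35, 1.0]   # one member's output replaced in transit

clean = ensemble_average(honest)       # ~0.30, below a 0.5 threshold
attacked = ensemble_average(poisoned)  # ~0.55, crosses the threshold
# Shift = (1.0 - 0.25) / 3 = 0.25: one forged member flips the decision.
```

Median or trimmed-mean combiners blunt this single-member leverage, which is one reason robust aggregation is worth the extra cost in adversarial settings.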
"""
Demonstrate traffic manipulation 偵測 in AI 推論 pipelines.
This 監控 tool compares model server responses with what
downstream services report to detect in-transit manipulation.
"""
import hashlib
import json
import time
from dataclasses import dataclass
from typing import Optional
@dataclass
class InferenceAuditRecord:
"""Records model 輸出 at 模型 server for later comparison."""
request_id: str
model_name: str
output_hash: str
timestamp: float
output_size: int
class InferenceIntegrityMonitor:
"""
Detect manipulation of 推論 results in transit
by comparing outputs at different points in the pipeline.
"""
def __init__(self):
self.model_records: dict[str, InferenceAuditRecord] = {}
self.discrepancies: list[dict] = []
def record_model_output(
self,
request_id: str,
model_name: str,
output_data: bytes,
) -> InferenceAuditRecord:
"""
Record 模型's actual 輸出 at 模型 server.
This runs as a sidecar or interceptor at 模型 pod.
"""
output_hash = hashlib.sha256(output_data).hexdigest()
record = InferenceAuditRecord(
request_id=request_id,
model_name=model_name,
output_hash=output_hash,
timestamp=time.time(),
output_size=len(output_data),
)
self.model_records[request_id] = record
return record
def verify_downstream_output(
self,
request_id: str,
received_data: bytes,
verification_point: str = "postprocessor",
) -> Optional[dict]:
"""
Verify that the 輸出 received by a downstream service
matches what 模型 actually produced.
"""
record = self.model_records.get(request_id)
if record is None:
return {
"severity": "MEDIUM",
"request_id": request_id,
"detail": (
f"No model 輸出 record for request {request_id}. "
f"Cannot verify integrity at {verification_point}."
),
}
received_hash = hashlib.sha256(received_data).hexdigest()
if received_hash != record.output_hash:
discrepancy = {
"severity": "CRITICAL",
"request_id": request_id,
"model": record.model_name,
"verification_point": verification_point,
"expected_hash": record.output_hash[:16],
"received_hash": received_hash[:16],
"expected_size": record.output_size,
"received_size": len(received_data),
"detail": (
"Model 輸出 was modified in transit between "
f"模型 server and {verification_point}. "
"Possible man-in-the-middle attack on 推論 pipeline."
),
}
self.discrepancies.append(discrepancy)
return discrepancy
return None # 輸出 matchesLatency-Based Service Identification
Even when mTLS is enforced, an attacker within the mesh can use latency measurements to fingerprint services and infer the AI pipeline topology. By measuring response times from different service endpoints, an attacker can:
- Identify which services use GPU inference (GPU services have characteristic latency distributions with higher variance)
- Determine model complexity from inference latency (larger models take longer)
- Map the inference chain by measuring end-to-end latency and subtracting individual service latencies
- Detect when models are being updated (latency spikes during model loading)
This information is valuable for planning targeted attacks against specific pipeline components.
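A crude version of this fingerprinting can be sketched from latency samples alone. The thresholds below, and the heuristic that GPU inference shows slow, high-variance latency, are assumptions drawn from the text rather than calibrated values; a real attacker would profile known services first.

```python
"""Sketch: classify a mesh endpoint from response-time samples (heuristic)."""
import statistics


def fingerprint(latencies_ms: list[float]) -> str:
    """Guess the service class from latency mean and relative spread."""
    mean = statistics.mean(latencies_ms)
    spread = statistics.pstdev(latencies_ms) / mean if mean else 0.0
    if mean > 20 and spread > 0.3:
        # Batching and queueing on GPUs produce wide latency distributions
        return "likely GPU inference (slow, high variance)"
    if mean > 20:
        return "likely large-model inference (slow, steady)"
    return "likely lightweight service (pre/postprocessing, cache)"


print(fingerprint([50.0, 90.0, 30.0, 110.0]))
print(fingerprint([2.0, 3.0, 2.5]))
```

Because this works on timing alone, mTLS does not prevent it; padding responses or adding jitter are the usual (costly) countermeasures.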
Practical Examples
Service Mesh Security Configuration for AI
#!/usr/bin/env bash
# Apply hardened service mesh configuration for the AI inference namespace
# This script configures Istio security policies

NAMESPACE="${1:-ai-inference}"

echo "=== Applying Service Mesh Hardening for $NAMESPACE ==="

# 1. Enforce STRICT mTLS
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: strict-mtls
  namespace: $NAMESPACE
spec:
  mtls:
    mode: STRICT
EOF
echo "[+] STRICT mTLS PeerAuthentication applied"

# 2. Default deny all traffic
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: $NAMESPACE
spec: {}
EOF
echo "[+] Default deny AuthorizationPolicy applied"

# 3. Allow only specific inference pipeline traffic
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-inference-pipeline
  namespace: $NAMESPACE
spec:
  selector:
    matchLabels:
      app: model-server
  rules:
  - from:
    - source:
        principals:
        - "cluster.local/ns/$NAMESPACE/sa/model-router"
    to:
    - operation:
        methods: ["POST"]
        paths: ["/v1/completions", "/v2/models/*/infer"]
EOF
echo "[+] Model server access restricted to model-router only"

# 4. Allow router access from preprocessor and gateway only
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-router-access
  namespace: $NAMESPACE
spec:
  selector:
    matchLabels:
      app: model-router
  rules:
  - from:
    - source:
        principals:
        - "cluster.local/ns/$NAMESPACE/sa/preprocessor"
        - "cluster.local/ns/$NAMESPACE/sa/api-gateway"
EOF
echo "[+] Model router access restricted to preprocessor and gateway"

echo ""
echo "=== Configuration Complete ==="
echo "Verify with: istioctl analyze -n $NAMESPACE"

Performance vs. Security Trade-offs in AI Meshes
The fundamental tension in AI service mesh deployments is latency. Each hop traverses two sidecar proxies (the caller's and the callee's), and each proxy typically adds 0.5-2ms, so a hop costs roughly 1-4ms and the overhead accumulates through the inference chain. For a pipeline with 5 services, the mesh adds roughly 5-20ms of overhead — potentially doubling the latency of a fast inference endpoint.
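The overhead arithmetic can be sketched as a quick budget calculation; the per-proxy figures are the typical values quoted in the text, not measurements.

```python
"""Sketch: mesh latency budget, assuming two sidecar traversals per hop."""


def mesh_overhead_ms(hops: int, per_proxy_ms: float) -> float:
    """Total added latency: each hop crosses the caller's and callee's proxy."""
    return hops * 2 * per_proxy_ms


low = mesh_overhead_ms(5, 0.5)   # best case for a 5-hop pipeline: 5.0 ms
high = mesh_overhead_ms(5, 2.0)  # worst case: 20.0 ms
print(f"5-hop pipeline overhead: {low:.1f}-{high:.1f} ms")
```

Profiling real per-proxy latency under production load and plugging it into a budget like this makes the trade-off explicit before anyone reaches for an exclusion annotation.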
This leads to predictable compromises that red teamers should look for:
gRPC bypass for model serving: Some teams exclude model serving gRPC ports from sidecar interception because the proxy's HTTP/2 handling adds measurable latency to large tensor transfers. This creates an unencrypted, unauthenticated channel for the most sensitive traffic — the actual model inputs and outputs.
Metrics and health check exclusions: Health check and metrics endpoints are frequently excluded from mTLS to simplify monitoring integration. While these endpoints seem low-risk, they often expose information about model versions, GPU utilization, request counts, and error rates that aids reconnaissance.
Init container resource allocation: The istio-init container requires NET_ADMIN capability to set up iptables rules. In GPU clusters with tight resource budgets, the additional resource overhead of sidecar containers (CPU and memory) reduces the resources available for GPU workloads, creating pressure to use smaller sidecars with less logging and monitoring capability.
Connection pooling and persistence: Model serving typically uses persistent gRPC connections for efficiency. The sidecar proxy must maintain these connection pools, and misconfigured connection limits can cause dropped requests under load. This operational pain point leads teams to bypass the proxy for model-to-model communication in ensemble architectures.
Defenses and Mitigations
Enforce STRICT mTLS everywhere: Never leave AI namespaces in PERMISSIVE mode except during active migration. PERMISSIVE should be temporary and monitored. Set up alerts for any PeerAuthentication policy that is not STRICT.
Implement least-privilege AuthorizationPolicies: Define explicit allow rules for each service-to-service communication path in the inference pipeline. Use service account principals for fine-grained identity matching. Start with a default-deny policy and add allows incrementally.
Minimize sidecar bypass annotations: Audit all port and IP exclusion annotations. For AI workloads that require port exclusions for performance, implement application-layer TLS on those ports as compensation. Document every exclusion with a security review.
Monitor mesh telemetry for anomalies: Use the mesh's observability data to detect unauthorized communication patterns, such as a preprocessing service directly contacting the model server (bypassing the router). Set up Kiali or similar mesh visualization tools to identify unexpected communication graphs.
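One lightweight way to operationalize this is to diff observed caller/callee pairs against the intended topology. In the sketch below, the edge tuples mimic the source and destination workload fields available in mesh access logs or telemetry; the service names and allowlist are illustrative.

```python
"""Sketch: flag service-to-service edges outside the intended AI pipeline."""

# Intended topology for the inference pipeline (illustrative)
ALLOWED_EDGES = {
    ("api-gateway", "preprocessor"),
    ("preprocessor", "model-router"),
    ("model-router", "model-server"),
    ("model-server", "postprocessor"),
}


def unexpected_edges(observed: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return observed caller->callee pairs not in the intended topology."""
    return [edge for edge in observed if edge not in ALLOWED_EDGES]


observed = [
    ("api-gateway", "preprocessor"),
    ("preprocessor", "model-server"),  # router bypassed: investigate
]
for src, dst in unexpected_edges(observed):
    print(f"ALERT: unexpected edge {src} -> {dst}")
```

Feeding this comparison from Prometheus-style mesh metrics or access logs on a schedule turns the communication graph itself into an alerting signal.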
Use Kubernetes NetworkPolicies as defense-in-depth: NetworkPolicies operate at L3/L4 independently of the service mesh. Even if the mesh is bypassed, NetworkPolicies provide a second layer of network segmentation. These are especially important for GPU pods that may bypass the mesh for performance reasons.
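As a minimal sketch of this second layer, the helper below builds a NetworkPolicy manifest that admits ingress to model-server pods only from model-router pods, mirroring the mesh-level AuthorizationPolicy at L3/L4. The label keys, policy name, and namespace are hypothetical; the emitted JSON (valid YAML) can be applied with kubectl.

```python
"""Sketch: mesh-independent NetworkPolicy for model-server pods (assumed labels)."""
import json


def model_server_netpol(namespace: str = "ai-inference") -> dict:
    """Allow ingress to app=model-server pods only from app=model-router pods."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "model-server-ingress", "namespace": namespace},
        "spec": {
            "podSelector": {"matchLabels": {"app": "model-server"}},
            # Listing Ingress makes this a default-deny for all other sources
            "policyTypes": ["Ingress"],
            "ingress": [{
                "from": [
                    {"podSelector": {"matchLabels": {"app": "model-router"}}}
                ],
            }],
        },
    }


print(json.dumps(model_server_netpol(), indent=2))
```

Because NetworkPolicies are enforced by the CNI rather than the sidecar, they still hold when a pod runs as UID 1337 or carries exclusion annotations.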
Audit EnvoyFilters and custom mesh configurations: EnvoyFilters are powerful and can silently disable security features. Restrict EnvoyFilter creation through RBAC and audit all existing filters. Consider using OPA/Gatekeeper to enforce policies on EnvoyFilter resources.
Right-size sidecar resources: Allocate sufficient CPU and memory to sidecar proxies in AI namespaces so that resource pressure does not become a justification for bypassing the mesh. Profile the actual resource consumption of sidecars under realistic inference load and provision accordingly.
Consider ambient mesh architectures: Newer service mesh implementations like Istio Ambient mode remove the sidecar proxy entirely, instead using a shared per-node ztunnel for L4 mTLS and optional per-namespace waypoint proxies for L7 policy. This significantly reduces the resource overhead and eliminates many sidecar-specific bypass vectors. For latency-sensitive AI workloads, ambient mesh can provide mTLS encryption without the per-hop latency penalty of sidecar proxies, making it easier to justify full mesh coverage for all AI services, including performance-critical inference endpoints.
References
- Istio. (2024). "Security Best Practices." https://istio.io/latest/docs/ops/best-practices/security/
- NIST. (2022). "SP 800-204A: Building Secure Microservices-based Applications Using Service-Mesh Architecture." https://doi.org/10.6028/NIST.SP.800-204A
- Linkerd. (2024). "Automatic mTLS." https://linkerd.io/2/features/automatic-mtls/
- MITRE ATLAS. "Lateral Movement in ML Infrastructure." https://atlas.mitre.org/