Transportation AI Security
Attacking AI systems in transportation: autonomous vehicle perception manipulation, traffic management AI exploitation, rail control system attacks, and aviation AI security testing.
Transportation AI encompasses some of the most safety-critical AI deployments in existence. Autonomous vehicles make split-second decisions that determine whether passengers, pedestrians, and other road users live or die. Air traffic management AI coordinates thousands of aircraft in shared airspace. Rail control AI manages train movements at speeds where seconds of miscalculation cause collisions. The safety stakes in transportation AI are immediate, direct, and irreversible.
This page covers attack techniques for the major transportation AI domains. Each domain has distinct regulatory requirements, safety architectures, and operational constraints that shape the attack surface.
Autonomous Vehicle Perception Attacks
Sensor Fusion Attack Surface
Autonomous vehicles rely on multiple sensor modalities — cameras, LiDAR, radar, ultrasonic sensors, and GPS — that are fused by AI to create a unified model of the environment. Each sensor modality has distinct vulnerabilities:
| Sensor | Attack Vector | Example | Detection Difficulty |
|---|---|---|---|
| Camera | Adversarial patches, projections, modified signs | Stop sign with adversarial sticker misclassified as speed limit | High — patches look benign to humans |
| LiDAR | Spoofed point clouds, laser injection | Phantom objects injected into LiDAR field of view | Medium — requires specialized equipment |
| Radar | RF spoofing, jamming | False vehicle detection at phantom distances | Medium — detectable by signal analysis |
| GPS | GPS spoofing, denial | Vehicle believes it is on a different road | Low — GPS spoofing is well-characterized |
| Ultrasonic | Acoustic injection | False proximity alerts during parking | Low — limited range and impact |
Physical-World Adversarial Attacks
The most studied transportation AI attack category is physical-world adversarial examples — modifications to the physical environment that cause the vehicle's AI to misclassify objects.
# Adversarial traffic sign testing framework
# NOTE: Simulation environment only
def test_sign_classification_robustness(perception_model,
                                        sign_images,
                                        perturbation_types):
    """
    Test traffic sign classification against adversarial
    perturbations in a simulated environment.
    """
    results = []
    for sign in sign_images:
        baseline = perception_model.classify(sign.image)
        for perturbation in perturbation_types:
            modified = apply_perturbation(
                sign.image,
                perturbation_type=perturbation,
                constraint="physical_world",  # L-inf bounded,
                                              # printable colors
            )
            adversarial_result = perception_model.classify(modified)
            results.append({
                "sign_type": sign.label,
                "perturbation": perturbation,
                "original_classification": baseline.label,
                "adversarial_classification": adversarial_result.label,
                "confidence_original": baseline.confidence,
                "confidence_adversarial": adversarial_result.confidence,
                "misclassified": adversarial_result.label != sign.label,
                "safety_critical": is_safety_critical_misclassification(
                    sign.label, adversarial_result.label
                ),
            })
    return results

def is_safety_critical_misclassification(true_label, predicted_label):
    """
    Determine if a misclassification could cause a safety hazard.
    """
    critical_pairs = {
        ("stop_sign", "speed_limit_60"),
        ("red_light", "green_light"),
        ("pedestrian_crossing", "no_sign"),
        ("yield", "speed_limit_80"),
        ("do_not_enter", "one_way"),
    }
    return (true_label, predicted_label) in critical_pairs

Sensor Fusion Exploitation
Attacking individual sensors is often insufficient because sensor fusion cross-validates information across modalities. A more sophisticated attack targets the fusion algorithm itself:
- Inconsistency exploitation. Present conflicting information to different sensors and observe how the fusion algorithm resolves conflicts. Some fusion algorithms default to specific sensors in conflict situations, creating predictable override behavior.
- Confidence manipulation. If the fusion algorithm weights sensor inputs by confidence scores, attackers can inject high-confidence false data on one sensor to override correct low-confidence data from other sensors.
- Temporal desynchronization. Introduce timing delays in one sensor's data stream so that the fusion algorithm combines stale data from the delayed sensor with current data from other sensors. The resulting fused perception may contain ghost objects or miss real objects.
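The confidence-manipulation point above can be made concrete with a toy model. The sketch below is a hypothetical confidence-weighted fusion (not any production algorithm): a single high-confidence false reading drags the fused distance estimate far away from the agreement of the honest sensors.

```python
# Toy confidence-weighted fusion (illustrative model only).
# Each reading is (estimated_distance_to_obstacle_m, confidence in [0, 1]).
def fuse_by_confidence(readings):
    """Fuse distance estimates as a confidence-weighted average."""
    total_weight = sum(conf for _, conf in readings)
    if total_weight == 0:
        raise ValueError("no usable readings")
    return sum(dist * conf for dist, conf in readings) / total_weight

# Honest readings: camera and LiDAR agree an obstacle is ~20 m away.
honest = [(20.0, 0.8), (21.0, 0.9)]

# Attacker injects one high-confidence ghost reading at 120 m.
attacked = honest + [(120.0, 0.99)]

print(fuse_by_confidence(honest))    # ~20.5 m
print(fuse_by_confidence(attacked))  # ~57 m — far beyond the real obstacle
```

A fusion design that clips per-sensor influence, or escalates to a safe state when modalities disagree beyond a threshold, removes this predictable override behavior.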
Traffic Management AI
Signal Control Manipulation
AI-powered traffic signal systems (adaptive signal control) adjust signal timing based on real-time traffic conditions measured by cameras, loop detectors, and connected vehicle data. An adversary who can influence the AI's traffic perception can manipulate signal timing.
# Traffic signal AI manipulation scenarios
traffic_ai_attacks = {
"congestion_creation": {
"description": "Cause the AI to create congestion by "
"manipulating signal timing",
"technique": "Inject false detector data showing heavy "
"traffic on low-priority approaches, causing "
"the AI to allocate green time away from the "
"main corridor",
"impact": "Gridlock on major arterials",
},
"emergency_vehicle_disruption": {
"description": "Interfere with AI-managed emergency vehicle "
"preemption (EVP)",
"technique": "Spoof EVP signals (optical or radio) from "
"multiple directions simultaneously, creating "
"conflicting preemption requests",
"impact": "Emergency vehicles delayed or forced to stop",
"severity": "Critical — directly endangers public 安全",
},
"pedestrian_safety_compromise": {
"description": "Cause the AI to shorten pedestrian crossing "
"phases or eliminate pedestrian-only phases",
"technique": "Inject detector data showing no pedestrian "
"activity, causing the AI to skip pedestrian "
"phases to optimize vehicular throughput",
"impact": "Pedestrians forced to cross during vehicle phases",
},
"corridor_manipulation": {
"description": "Manipulate coordinated signal timing along "
"a corridor to break the green wave",
"technique": "Alter vehicle count data at specific signals "
"to desynchronize the coordination pattern",
"impact": "Stop-and-go traffic, increased emissions, "
"driver frustration leading to dangerous behavior",
},
}

Connected Vehicle Data Exploitation
As vehicles increasingly broadcast Basic Safety Messages (BSMs) via V2X (Vehicle-to-Everything) communication, traffic management AI incorporates this data. BSMs include vehicle position, speed, heading, and acceleration. Spoofed BSMs can create phantom traffic:
- Phantom congestion. Broadcast BSMs from many nonexistent vehicles in a specific area to make the AI believe heavy traffic exists
- Speed manipulation. Broadcast BSMs with false speed data to make the AI miscalculate signal timing
- Incident simulation. Broadcast BSMs indicating a stopped vehicle or sudden deceleration to trigger the AI's incident detection and response
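One practical countermeasure against the spoofing patterns above is kinematic plausibility checking on incoming BSMs. The sketch below uses a hypothetical, simplified BSM structure (field names are assumptions, not the SAE J2735 encoding): phantom vehicles often "teleport" or change speed in physically impossible ways between consecutive messages.

```python
import math
from dataclasses import dataclass

@dataclass
class BSM:
    # Hypothetical subset of Basic Safety Message fields.
    vehicle_id: str
    timestamp_s: float
    lat: float
    lon: float
    speed_mps: float

def kinematically_plausible(prev: BSM, curr: BSM,
                            max_accel_mps2: float = 8.0) -> bool:
    """Flag BSM pairs whose implied motion exceeds physical limits."""
    dt = curr.timestamp_s - prev.timestamp_s
    if dt <= 0:
        return False
    # Rough equirectangular distance in meters (adequate at city scale).
    dlat = math.radians(curr.lat - prev.lat) * 6_371_000
    dlon = (math.radians(curr.lon - prev.lon)
            * 6_371_000 * math.cos(math.radians(prev.lat)))
    dist = math.hypot(dlat, dlon)
    implied_speed = dist / dt
    # Position must be reachable at the reported speeds, and the speed
    # change must be achievable at plausible acceleration.
    speed_ok = implied_speed <= max(prev.speed_mps, curr.speed_mps) + 5.0
    accel_ok = abs(curr.speed_mps - prev.speed_mps) / dt <= max_accel_mps2
    return speed_ok and accel_ok
```

A phantom vehicle that jumps ~550 m between two messages one second apart while reporting 15 m/s fails the check; consistent honest traffic passes.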
Rail Control AI
Scheduling and Dispatch AI
Rail systems increasingly use AI for train scheduling, speed optimization, and dispatch decisions. The safety constraints are absolute — two trains cannot occupy the same track segment at the same time, and stopping distances at high speed are measured in kilometers.
# Rail AI safety testing scenarios
rail_ai_tests = {
"schedule_conflict_injection": {
"description": "測試 whether the AI can be manipulated to "
"create scheduling conflicts between trains",
"technique": "Modify train position or speed data to cause "
"the AI to allow conflicting track reservations",
"safety_system": "Interlocking system should prevent "
"conflicting routes regardless of AI",
"test_verifies": "Independence of 安全 interlocking from AI",
},
"speed_profile_manipulation": {
"description": "Modify the AI's speed profile calculations "
"to recommend excessive speed for track conditions",
"technique": "Alter grade, curvature, or weather data inputs "
"to the speed optimization AI",
"safety_system": "Automatic Train Protection (ATP) should "
"enforce speed limits independently",
"test_verifies": "ATP independence from AI speed recommendations",
},
"maintenance_window_shrinkage": {
"description": "Cause the AI to schedule trains during "
"maintenance windows when workers are on track",
"technique": "Manipulate maintenance schedule data or "
"track occupancy information",
"severity": "Critical — worker 安全",
},
}

Safety System Independence
The most critical finding in rail AI testing is whether safety systems (interlocking, ATP, axle counters) operate independently of AI components. If the AI can influence or override safety systems, that is a critical vulnerability regardless of the AI's accuracy.
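The independence property can be expressed as a simple fault-injection test: even when the dispatch AI requests conflicting routes, the interlocking must refuse. The sketch below is a minimal illustrative model (class and method names are assumptions, not a real rail control API).

```python
# Minimal interlocking independence check (illustrative only).
# The interlocking grants a route only if every segment it needs is
# free, no matter what the AI dispatcher requests.
class Interlocking:
    def __init__(self):
        self.locked_segments = set()

    def request_route(self, segments):
        """Grant the route only if no segment is already reserved."""
        if self.locked_segments & set(segments):
            return False  # conflicting reservation refused
        self.locked_segments.update(segments)
        return True

# Simulate a compromised dispatch AI asking for two conflicting routes.
ixl = Interlocking()
assert ixl.request_route(["S1", "S2", "S3"])   # first route granted
assert not ixl.request_route(["S3", "S4"])     # conflict must be refused
```

The red-team question is whether the production interlocking behaves like this last assertion under every input the AI layer can produce, including malformed or adversarial route requests.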
Aviation AI
Regulatory Framework
Aviation AI is subject to the most rigorous safety certification requirements of any domain. Software used in aircraft systems must be certified to DO-178C, and AI/ML components face additional scrutiny because traditional DO-178C assumes deterministic software behavior.
The FAA and EASA are developing guidance for AI in aviation through:
- FAA AI/ML Roadmap — Phased approach to certifying AI systems in aviation
- EASA AI Concept Paper — European framework for AI assurance in aviation
- SAE AIR 6987 — Industry guidance on AI in aeronautical systems
Testable Attack Surfaces
Given the certification constraints, aviation AI red teaming focuses on:
- Ground-based AI systems. Air traffic flow management, airport operations, and maintenance AI run on ground infrastructure and are more accessible for testing than airborne systems.
- Training and simulation AI. AI used in pilot training simulators and crew resource management tools can be tested without affecting flight operations.
- Data integrity. Weather data, NOTAM processing, and flight planning AI consume data from multiple sources that can be poisoned.
# Aviation AI testing — ground systems only
aviation_ai_test_areas = {
"atfm_manipulation": {
"description": "測試 AI-assisted Air Traffic Flow Management "
"for manipulation that creates unsafe spacing",
"scope": "Ground-based ATFM decision support only",
"exclusions": "No 測試 of airborne systems or ATC "
"communication systems",
},
"maintenance_ai": {
"description": "測試 AI predictive maintenance for aircraft "
"for manipulation that delays critical maintenance",
"scope": "Maintenance prediction system in 測試 environment",
"regulatory": "Findings reported per 14 CFR Part 43",
},
"weather_data_integrity": {
"description": "測試 whether weather 資料投毒 can "
"affect AI-assisted flight planning",
"scope": "Flight planning tools in 測試 environment",
"impact": "Incorrect fuel calculations, routing through "
"hazardous weather",
},
}

Defensive Recommendations
Safety system independence verification
Regularly verify that safety systems (ATP, interlocking, collision avoidance) operate independently of AI components. The safety system must function correctly even if the AI is completely compromised.
Multi-modal sensor cross-validation
Implement cross-validation across sensor modalities and flag inconsistencies for human review. No single sensor modality should be able to override the consensus of other modalities.
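A minimal sketch of this cross-validation idea, under an assumed detection format (each modality reports the set of object IDs it perceives in a shared region — a simplification of real association logic): objects confirmed by a quorum of modalities are trusted, single-modality detections are flagged rather than acted on.

```python
# Sketch of cross-modal consistency checking (hypothetical format).
def cross_validate(detections_by_modality, quorum=2):
    """Return objects confirmed by at least `quorum` modalities,
    plus under-confirmed detections flagged for human review."""
    votes = {}
    for modality, objects in detections_by_modality.items():
        for obj in objects:
            votes.setdefault(obj, set()).add(modality)
    confirmed = {o for o, m in votes.items() if len(m) >= quorum}
    flagged = {o: sorted(m) for o, m in votes.items() if len(m) < quorum}
    return confirmed, flagged

confirmed, flagged = cross_validate({
    "camera": {"car_12", "ped_3"},
    "lidar":  {"car_12", "ped_3", "ghost_99"},  # injected phantom
    "radar":  {"car_12"},
})
# ghost_99 appears only on LiDAR, so it is flagged rather than trusted.
```

A LiDAR-injected phantom object fails the quorum and lands in the flagged set, which is exactly the behavior the recommendation asks for.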
Operational design domain enforcement
Enforce strict Operational Design Domain (ODD) boundaries for autonomous systems. When conditions exceed the ODD (weather, traffic, road conditions), the AI must hand control to a human operator or enter a safe state.
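In code, ODD enforcement reduces to an explicit, auditable boundary check that runs before the autonomy stack acts. The sketch below is illustrative only — the limit values are assumptions, not figures from any certified ODD specification.

```python
# ODD boundary check sketch (threshold values are illustrative
# assumptions, not from any certified system).
ODD_LIMITS = {
    "max_rain_mm_per_h": 10.0,
    "min_visibility_m": 200.0,
    "max_speed_limit_kph": 110,
}

def within_odd(conditions):
    """Return (ok, violations) for the current operating conditions."""
    violations = []
    if conditions["rain_mm_per_h"] > ODD_LIMITS["max_rain_mm_per_h"]:
        violations.append("precipitation")
    if conditions["visibility_m"] < ODD_LIMITS["min_visibility_m"]:
        violations.append("visibility")
    if conditions["speed_limit_kph"] > ODD_LIMITS["max_speed_limit_kph"]:
        violations.append("road_class")
    return (not violations, violations)

ok, why = within_odd({"rain_mm_per_h": 14.0,
                      "visibility_m": 350.0,
                      "speed_limit_kph": 100})
# ok is False here -> hand over control or enter a safe state.
```

A red-team test of ODD enforcement probes whether sensor spoofing can make `conditions` misreport the environment, silently keeping the vehicle "inside" an ODD it has actually left.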
Adversarial robustness testing in certification
Incorporate adversarial robustness testing into safety certification processes. DO-178C and ISO 26262 equivalence classes should include adversarial inputs alongside boundary values and error conditions.
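For simple model families, the "adversarial equivalence class" can even be checked exactly rather than by sampling. As a toy illustration (a linear score, not a real perception model): the worst case over an L-inf ball of radius eps has a closed form, so a certification test can assert the decision survives every perturbation in the ball.

```python
# Sketch: worst-case L-inf bounded input for a linear score w.x + b.
# Within |delta_i| <= eps, the minimum achievable score is analytic:
# subtract eps * sum(|w_i|) from the nominal score.
def worst_case_score(w, b, x, eps):
    """Lowest achievable score over all perturbations |delta_i| <= eps."""
    nominal = sum(wi * xi for wi, xi in zip(w, x)) + b
    return nominal - eps * sum(abs(wi) for wi in w)

w, b = [0.5, -1.2, 2.0], 0.1
x = [1.0, 0.2, 0.8]   # nominal input, classifies positive
eps = 0.15            # certification-style robustness bound

print(sum(wi * xi for wi, xi in zip(w, x)) + b)  # nominal score ~1.96
print(worst_case_score(w, b, x, eps))            # ~1.405, still positive
```

Deep networks have no such closed form, which is why certification-oriented robustness testing leans on bounded attack searches and formal verification tools instead; the test-suite structure, though, is the same: assert the safety-relevant decision holds over the whole perturbation class.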
Further Reading
- Critical Infrastructure AI Security Overview — Broader critical infrastructure context
- SCADA/ICS + AI Attacks — Foundational SCADA attack techniques
- Power Grid AI — Energy sector AI security