Transportation AI Security
Attacking AI systems in transportation: autonomous vehicle perception manipulation, traffic management AI exploitation, rail control system attacks, and aviation AI security testing.
Transportation AI encompasses some of the most safety-critical AI deployments in existence. Autonomous vehicles make split-second decisions that determine whether passengers, pedestrians, and other road users live or die. Air traffic management AI coordinates thousands of aircraft in shared airspace. Rail control AI manages train movements at speeds where seconds of miscalculation cause collisions. The safety stakes in transportation AI are immediate, direct, and irreversible.
This page covers attack techniques for the major transportation AI domains. Each domain has distinct regulatory requirements, safety architectures, and operational constraints that shape the attack surface.
Autonomous Vehicle Perception Attacks
Sensor Fusion Attack Surface
Autonomous vehicles rely on multiple sensor modalities — cameras, LiDAR, radar, ultrasonic sensors, and GPS — that are fused by AI to create a unified model of the environment. Each sensor modality has distinct vulnerabilities:
| Sensor | Attack Vector | Example | Detection Difficulty |
|---|---|---|---|
| Camera | Adversarial patches, projections, modified signs | Stop sign with adversarial sticker misclassified as speed limit | High — patches look benign to humans |
| LiDAR | Spoofed point clouds, laser injection | Phantom objects injected into LiDAR field of view | Medium — requires specialized equipment |
| Radar | RF spoofing, jamming | False vehicle detection at phantom distances | Medium — detectable by signal analysis |
| GPS | GPS spoofing, denial | Vehicle believes it is on a different road | Low — GPS spoofing is well-characterized |
| Ultrasonic | Acoustic injection | False proximity alerts during parking | Low — limited range and impact |
Physical-World Adversarial Attacks
The most studied transportation AI attack category is physical-world adversarial examples — modifications to the physical environment that cause the vehicle's AI to misclassify objects.
```python
# Adversarial traffic sign testing framework
# NOTE: Simulation environment only
def test_sign_classification_robustness(perception_model,
                                        sign_images,
                                        perturbation_types):
    """
    Test traffic sign classification against adversarial
    perturbations in a simulated environment.
    """
    results = []
    for sign in sign_images:
        baseline = perception_model.classify(sign.image)
        for perturbation in perturbation_types:
            modified = apply_perturbation(
                sign.image,
                perturbation_type=perturbation,
                constraint="physical_world",  # L-inf bounded,
                                              # printable colors
            )
            adversarial_result = perception_model.classify(modified)
            results.append({
                "sign_type": sign.label,
                "perturbation": perturbation,
                "original_classification": baseline.label,
                "adversarial_classification": adversarial_result.label,
                "confidence_original": baseline.confidence,
                "confidence_adversarial": adversarial_result.confidence,
                "misclassified": adversarial_result.label != sign.label,
                "safety_critical": is_safety_critical_misclassification(
                    sign.label, adversarial_result.label
                ),
            })
    return results

def is_safety_critical_misclassification(true_label, predicted_label):
    """
    Determine if a misclassification could cause a safety hazard.
    """
    critical_pairs = {
        ("stop_sign", "speed_limit_60"),
        ("red_light", "green_light"),
        ("pedestrian_crossing", "no_sign"),
        ("yield", "speed_limit_80"),
        ("do_not_enter", "one_way"),
    }
    return (true_label, predicted_label) in critical_pairs
```
Sensor Fusion Exploitation
Attacking individual sensors is often insufficient because sensor fusion cross-validates information across modalities. A more sophisticated attack targets the fusion algorithm itself:
- Inconsistency exploitation. Present conflicting information to different sensors and observe how the fusion algorithm resolves conflicts. Some fusion algorithms default to specific sensors in conflict situations, creating predictable override behavior.
- Confidence manipulation. If the fusion algorithm weights sensor inputs by confidence scores, an attacker can inject high-confidence false data on one sensor to override correct low-confidence data from other sensors.
- Temporal desynchronization. Introduce timing delays in one sensor's data stream so that the fusion algorithm combines stale data from the delayed sensor with current data from other sensors. The resulting fused perception may contain ghost objects or miss real objects.
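The confidence manipulation failure mode above can be demonstrated with a toy model. This is a minimal sketch, not a real AV fusion stack: `SensorReading`, `fuse_by_confidence`, and the confidence values are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    modality: str
    object_present: bool
    confidence: float  # 0.0 to 1.0

def fuse_by_confidence(readings):
    """Naive fusion rule: the single highest-confidence reading wins.
    Real fusion stacks are usually more robust, but this models the
    'confidence manipulation' override behavior described above."""
    return max(readings, key=lambda r: r.confidence)

def test_confidence_override():
    readings = [
        SensorReading("camera", object_present=True, confidence=0.70),
        SensorReading("lidar", object_present=True, confidence=0.65),
        # Injected radar reading claiming the road ahead is clear
        SensorReading("radar", object_present=False, confidence=0.99),
    ]
    winner = fuse_by_confidence(readings)
    return winner.modality, winner.object_present

print(test_confidence_override())  # -> ('radar', False)
```

A single spoofed high-confidence sensor overrides two agreeing modalities, which is exactly the vulnerability a fusion red team probes for.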
Traffic Management AI
Signal Control Manipulation
AI-powered traffic signal systems (adaptive signal control) adjust signal timing based on real-time traffic conditions measured by cameras, loop detectors, and connected vehicle data. An adversary who can influence the AI's traffic perception can manipulate signal timing.
```python
# Traffic signal AI manipulation scenarios
traffic_ai_attacks = {
    "congestion_creation": {
        "description": "Cause the AI to create congestion by "
                       "manipulating signal timing",
        "technique": "Inject false detector data showing heavy "
                     "traffic on low-priority approaches, causing "
                     "the AI to allocate green time away from the "
                     "main corridor",
        "impact": "Gridlock on major arterials",
    },
    "emergency_vehicle_disruption": {
        "description": "Interfere with AI-managed emergency vehicle "
                       "preemption (EVP)",
        "technique": "Spoof EVP signals (optical or radio) from "
                     "multiple directions simultaneously, creating "
                     "conflicting preemption requests",
        "impact": "Emergency vehicles delayed or forced to stop",
        "severity": "Critical — directly endangers public safety",
    },
    "pedestrian_safety_compromise": {
        "description": "Cause the AI to shorten pedestrian crossing "
                       "phases or eliminate pedestrian-only phases",
        "technique": "Inject detector data showing no pedestrian "
                     "activity, causing the AI to skip pedestrian "
                     "phases to optimize vehicular throughput",
        "impact": "Pedestrians forced to cross during vehicle phases",
    },
    "corridor_manipulation": {
        "description": "Manipulate coordinated signal timing along "
                       "a corridor to break the green wave",
        "technique": "Alter vehicle count data at specific signals "
                     "to desynchronize the coordination pattern",
        "impact": "Stop-and-go traffic, increased emissions, "
                  "driver frustration leading to dangerous behavior",
    },
}
```
Connected Vehicle Data Exploitation
As vehicles increasingly broadcast Basic Safety Messages (BSMs) via V2X (Vehicle-to-Everything) communication, traffic management AI incorporates this data. BSMs include vehicle position, speed, heading, and acceleration. Spoofed BSMs can create phantom traffic:
- Phantom congestion. Broadcast BSMs from many nonexistent vehicles in a specific area to make the AI believe there is heavy traffic
- Speed manipulation. Broadcast BSMs with false speed data to make the AI miscalculate signal timing
- Incident simulation. Broadcast BSMs indicating a stopped vehicle or sudden deceleration to trigger the AI's incident detection and response
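A basic defense against all three spoofing patterns is kinematic plausibility filtering of incoming BSMs. The sketch below is illustrative: field names and thresholds are assumptions, not the exact SAE J2735 encoding.

```python
def bsm_is_plausible(prev, curr, max_accel_mps2=8.0):
    """Flag BSMs whose implied kinematics are physically impossible:
    position jumps inconsistent with the reported speed, or speed
    changes exceeding a plausible acceleration bound.
    Messages are dicts with timestamp (s), x/y (m), speed (m/s)."""
    dt = curr["timestamp"] - prev["timestamp"]
    if dt <= 0:
        return False  # out-of-order or duplicated timestamps
    # Speed implied by successive positions (flat-plane approximation)
    dx = curr["x"] - prev["x"]
    dy = curr["y"] - prev["y"]
    implied_speed = (dx**2 + dy**2) ** 0.5 / dt
    avg_reported = (prev["speed"] + curr["speed"]) / 2
    if abs(implied_speed - avg_reported) > 5.0:  # 5 m/s tolerance
        return False  # reported speed disagrees with position track
    if abs(curr["speed"] - prev["speed"]) / dt > max_accel_mps2:
        return False  # implausible acceleration
    return True

prev = {"timestamp": 0.0, "x": 0.0, "y": 0.0, "speed": 10.0}
spoofed = {"timestamp": 1.0, "x": 100.0, "y": 0.0, "speed": 10.0}
print(bsm_is_plausible(prev, spoofed))  # -> False (100 m jump at 10 m/s)
```

A phantom vehicle that "teleports" to create congestion fails the position-versus-speed consistency check; a coordinated attacker must instead broadcast fully self-consistent trajectories, which raises the cost of the attack.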
Rail Control AI
Scheduling and Dispatch AI
Rail systems increasingly use AI for train scheduling, speed optimization, and dispatch decisions. The safety constraints are absolute — two trains cannot occupy the same track segment at the same time, and stopping distances at high speed are measured in kilometers.
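The kilometer-scale stopping distance follows directly from the kinematic relation d = v² / (2a). The deceleration value below is an illustrative assumption for a service brake application, not a figure for any specific rolling stock:

```python
def braking_distance_m(speed_kmh, decel_mps2=0.7):
    """Back-of-envelope stopping distance from d = v^2 / (2a)."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v ** 2 / (2 * decel_mps2)

print(round(braking_distance_m(300)))  # -> 4960 (meters, roughly 5 km)
```

At 300 km/h a train needs on the order of five kilometers to stop, which is why speed-profile manipulation is dangerous long before any collision point is visible to the driver.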
```python
# Rail AI safety testing scenarios
rail_ai_tests = {
    "schedule_conflict_injection": {
        "description": "Test whether the AI can be manipulated to "
                       "create scheduling conflicts between trains",
        "technique": "Modify train position or speed data to cause "
                     "the AI to allow conflicting track reservations",
        "safety_system": "Interlocking system should prevent "
                         "conflicting routes regardless of AI",
        "test_verifies": "Independence of safety interlocking from AI",
    },
    "speed_profile_manipulation": {
        "description": "Modify the AI's speed profile calculations "
                       "to recommend excessive speed for track conditions",
        "technique": "Alter grade, curvature, or weather data inputs "
                     "to the speed optimization AI",
        "safety_system": "Automatic Train Protection (ATP) should "
                         "enforce speed limits independently",
        "test_verifies": "ATP independence from AI speed recommendations",
    },
    "maintenance_window_shrinkage": {
        "description": "Cause the AI to schedule trains during "
                       "maintenance windows when workers are on track",
        "technique": "Manipulate maintenance schedule data or "
                     "track occupancy information",
        "severity": "Critical — worker safety",
    },
}
```
Safety System Independence
The most critical finding in rail AI testing is whether safety systems (interlocking, ATP, axle counters) operate independently of AI components. If the AI can influence or override safety systems, it is a critical vulnerability regardless of the AI's accuracy.
Aviation AI
Regulatory Framework
Aviation AI is subject to the most rigorous safety certification requirements of any domain. Software used in aircraft systems must be certified to DO-178C, and AI/ML components face additional scrutiny because traditional DO-178C assumes deterministic software behavior.
The FAA and EASA are developing guidance for AI in aviation through:
- FAA AI/ML Roadmap — Phased approach to certifying AI systems in aviation
- EASA AI Concept Paper — European framework for AI assurance in aviation
- SAE AIR 6987 — Industry guidance on AI in aeronautical systems
Testable Attack Surfaces
Given the certification constraints, aviation AI red team testing focuses on:
- Ground-based AI systems. Air traffic flow management, airport operations, and maintenance AI run on ground infrastructure and are more accessible for testing than airborne systems.
- Training and simulation AI. AI used in pilot training simulators and crew resource management tools can be tested without affecting flight operations.
- Data integrity. Weather data, NOTAM processing, and flight planning AI consume data from multiple sources that can be poisoned.
```python
# Aviation AI testing — ground systems only
aviation_ai_test_areas = {
    "atfm_manipulation": {
        "description": "Test AI-assisted Air Traffic Flow Management "
                       "for manipulation that creates unsafe spacing",
        "scope": "Ground-based ATFM decision support only",
        "exclusions": "No testing of airborne systems or ATC "
                      "communication systems",
    },
    "maintenance_ai": {
        "description": "Test AI predictive maintenance for aircraft "
                       "for manipulation that delays critical maintenance",
        "scope": "Maintenance prediction system in test environment",
        "regulatory": "Findings reported per 14 CFR Part 43",
    },
    "weather_data_integrity": {
        "description": "Test whether weather data poisoning can "
                       "affect AI-assisted flight planning",
        "scope": "Flight planning tools in test environment",
        "impact": "Incorrect fuel calculations, routing through "
                  "hazardous weather",
    },
}
```
Defensive Recommendations
Safety system independence verification
Regularly verify that safety systems (ATP, interlocking, collision avoidance) operate independently of AI components. The safety system must function correctly even if the AI is completely compromised.
Multi-modal sensor cross-validation
Implement cross-validation across sensor modalities and flag inconsistencies for human review. No single sensor modality should be able to override the consensus of other modalities.
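A minimal sketch of that rule, assuming a simple vote across modalities (the function and threshold are illustrative, not a production fusion policy):

```python
def cross_validate(detections):
    """detections: dict mapping modality -> bool (object detected?).
    Accept an object only when at least two modalities agree, and
    flag any disagreement for human review."""
    votes = sum(detections.values())
    accepted = votes >= 2            # no single sensor decides alone
    unanimous = votes in (0, len(detections))
    return accepted, not unanimous   # (accepted?, flag_for_review?)

print(cross_validate({"camera": True, "lidar": True, "radar": False}))
# -> (True, True): accepted, but the radar disagreement is flagged
```

Under this policy the confidence-manipulation attack described earlier fails: one spoofed high-confidence sensor can trigger a review flag but cannot override the consensus of the other modalities.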
Operational design domain enforcement
Enforce strict Operational Design Domain (ODD) boundaries for autonomous systems. When conditions exceed the ODD (weather, traffic, road conditions), the AI must hand control to a human operator or enter a safe state.
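Hard ODD enforcement can be expressed as a gate that must pass before autonomous operation continues. The condition fields and limits below are illustrative assumptions, not values from any published ODD:

```python
ODD_LIMITS = {
    "max_rain_mm_per_h": 10.0,
    "min_visibility_m": 150.0,
    "allowed_road_types": {"highway", "arterial"},
}

def odd_check(conditions):
    """Return 'autonomous' only while every ODD limit holds;
    otherwise demand a handover or minimal-risk maneuver."""
    if conditions["rain_mm_per_h"] > ODD_LIMITS["max_rain_mm_per_h"]:
        return "minimal_risk_maneuver"
    if conditions["visibility_m"] < ODD_LIMITS["min_visibility_m"]:
        return "minimal_risk_maneuver"
    if conditions["road_type"] not in ODD_LIMITS["allowed_road_types"]:
        return "minimal_risk_maneuver"
    return "autonomous"

print(odd_check({"rain_mm_per_h": 2.0, "visibility_m": 400.0,
                 "road_type": "highway"}))  # -> autonomous
```

The key design choice is that every exit path is a safe state: there is no code path where an out-of-ODD condition leaves the vehicle in autonomous mode.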
Adversarial robustness testing in certification
Incorporate adversarial robustness testing into safety certification processes. Test suites developed under DO-178C and ISO 26262 should treat adversarial inputs as an equivalence class in their own right, alongside the traditional boundary values and error conditions.
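As a sketch of what that could look like, the hypothetical test-class registry below places an adversarial equivalence class next to the nominal and boundary classes; every class and case name here is illustrative:

```python
CERT_TEST_CLASSES = {
    "nominal": ["clean_sign_daylight", "clean_sign_night"],
    "boundary": ["partially_occluded", "low_contrast", "motion_blur"],
    # Adversarial inputs sit alongside the traditional classes,
    # not outside the certification test plan.
    "adversarial": ["patch_attack", "projection_attack",
                    "printable_perturbation"],
}

def coverage_report(executed_cases):
    """Report which certification equivalence classes still have
    untested cases, so adversarial coverage is tracked the same
    way as boundary-value coverage."""
    executed = set(executed_cases)
    return {cls: sorted(set(cases) - executed)
            for cls, cases in CERT_TEST_CLASSES.items()}
```

Tracking adversarial cases in the same coverage report as conventional cases keeps them from being waived as "out of scope" late in certification.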
Further Reading
- Critical Infrastructure AI Security Overview — Broader critical infrastructure context
- SCADA/ICS + AI Attacks — Foundational SCADA attack techniques
- Power Grid AI — Energy sector AI security