April 11, 2026 Newsletter
Industrial and critical-infrastructure organizations are being pushed to “add AI” to operational technology (OT) environments: plants, production lines, substations, water systems, and building controls.
The problem: OT was built to be stable and predictable. Many modern AI systems are not. That mismatch creates real security risk, and in OT, security risk quickly becomes safety risk.
A recent Dark Reading feature frames it clearly: AI in OT can trigger a cascade of challenges—trust in data, model drift, explainability gaps, lifecycle issues, cloud dependencies, and a bigger attack surface.
From InfoSight’s perspective, the right path is not “AI-first.” It’s “trust-first.” You don’t automate decisions in a system you can’t verify.
What “AI in OT” Actually Means (in plain language)
In OT, “AI” usually shows up in a few ways:
Recommendations for operators (alerts, suggested actions, prioritization)
Anomaly detection (spotting unusual behavior in network traffic, sensors, or processes)
Predictive maintenance (failure likelihood, replacement timing)
Optimization (throughput, energy, dosing, torque, pressure tuning)
Agent-style automation (AI that takes actions—changes configs, opens/adjusts controls, triggers workflows)
The further you move from “advisory” to “autonomous action,” the higher the risk.
Why OT and Modern AI Clash
1) OT demands predictability; many AI systems are nondeterministic
A lot of current AI (especially LLM-based systems and “agents”) can generate different outputs from the same input. That’s normal in AI. In OT, it’s a problem because operations rely on repeatability and tight change control.
Translation: OT wants “same input, same outcome.” Some AI can’t guarantee that.
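To make that concrete, here is a toy sketch (illustrative only, not any vendor’s behavior): a sampled policy can return different actions for identical input, while a deterministic interlock cannot. The function names and thresholds are hypothetical.

```python
import random

def sampled_policy(pressure_kpa: float) -> str:
    """Toy stand-in for a sampled AI policy: same input, possibly different output."""
    # Sampling (like an LLM running with temperature > 0) adds run-to-run variance.
    if pressure_kpa > 480:
        return random.choice(["hold", "reduce_setpoint"])
    return "hold"

def interlock(pressure_kpa: float) -> str:
    """Deterministic OT-style rule: same input, same output, every run."""
    return "reduce_setpoint" if pressure_kpa > 500 else "hold"

for _ in range(3):
    print(sampled_policy(490.0), interlock(490.0))  # left column may vary; right never does
```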
2) AI is only as good as the data feeding it — and OT data often isn’t trustworthy by default
Many OT environments still struggle with basics like strong device identity, authenticated telemetry, firmware integrity, and controlled updates. When the data source can be spoofed or tampered with, the AI’s conclusion is untrustworthy—even if the model is “accurate.”
Translation: If a sensor reading can be faked, the AI can be manipulated.
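One practical building block is authenticated telemetry. Below is a minimal sketch using an HMAC with a pre-shared per-device key; the key store and function names are hypothetical, and a real deployment would also need key distribution, rotation, and replay protection.

```python
import hmac, hashlib, json

DEVICE_KEYS = {"sensor-42": b"per-device-secret"}  # hypothetical key store

def sign_reading(device_id: str, payload: dict) -> str:
    """Sensor side: attach an HMAC tag so tampering or spoofing is detectable."""
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(DEVICE_KEYS[device_id], msg, hashlib.sha256).hexdigest()

def verify_reading(device_id: str, payload: dict, tag: str) -> bool:
    """Ingest side: reject readings whose tag does not verify."""
    return hmac.compare_digest(sign_reading(device_id, payload), tag)

reading = {"device": "sensor-42", "ts": 1760000000, "temp_c": 71.3}
tag = sign_reading("sensor-42", reading)
assert verify_reading("sensor-42", reading, tag)                          # authentic
assert not verify_reading("sensor-42", {**reading, "temp_c": 5.0}, tag)   # tampered
```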
3) Drift and lifecycle reality: OT runs for decades; AI needs continuous care
OT assets have long lifecycles. AI models often need periodic validation, retraining, and monitoring to avoid “drift” (slow loss of accuracy as conditions change). That ongoing lifecycle doesn’t match how many OT environments operate today.
Translation: “Set it and forget it” doesn’t exist for AI—especially not in OT.
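Drift monitoring doesn’t have to be exotic. A minimal sketch: compare a model input’s recent window against the distribution captured at validation time, and alert when they diverge. The data, threshold, and window size here are placeholders you would tune per process.

```python
from statistics import mean, stdev

def drift_score(baseline: list[float], recent: list[float]) -> float:
    """Crude drift signal: shift of the recent mean, in baseline standard deviations."""
    return abs(mean(recent) - mean(baseline)) / (stdev(baseline) or 1.0)

baseline = [70.1, 70.4, 69.8, 70.0, 70.3, 69.9, 70.2, 70.1]  # captured at validation
recent   = [72.8, 73.1, 72.5, 73.0, 72.9, 73.2, 72.7, 73.0]  # live window

if drift_score(baseline, recent) > 3.0:  # placeholder threshold
    print("Input drift detected: revalidate the model before trusting its outputs.")
```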
4) Cloud dependence collides with OT connectivity constraints
Many OT networks are intentionally limited in outbound connectivity. AI vendors often rely on cloud update paths, telemetry pipelines, and remote management. That dependency conflicts with OT security and reliability constraints.
Translation: OT can’t always “phone home,” and often shouldn’t.
5) Explainability and accountability: “Why did it do that?” must have an answer
In OT, operators need to understand why something is happening before acting—because acting can change physical outcomes. If the AI can’t explain itself clearly, it can increase workload and risk instead of reducing it.
Translation: If the output isn’t explainable, it’s not operationally usable.
6) Attackers will target the AI layer and the humans relying on it
AI expands the attack surface: models, pipelines, integrations, update mechanisms, and the UI operators trust. Dark Reading highlights the concern that attackers can mask malicious activity so operator views appear normal—an old tactic that becomes more scalable with AI.
Translation: If defenders rely on AI, attackers will try to blind or mislead it.
What Government Guidance Gets Right (and what it implies)
In December 2025, CISA, NSA, ASD’s ACSC, and partner agencies released joint guidance outlining four principles for integrating AI into OT: understand AI, consider AI use in the OT domain, establish AI governance and assurance frameworks, and embed safety and security practices.
InfoSight’s read of the message: treat AI in OT like a high-consequence engineering change, not like a normal IT tool rollout.
InfoSight’s Practical Approach: “Trust-First AI” for OT
Step 1 — Classify AI use cases by blast radius
Tier 1 (Lowest risk): passive anomaly detection / monitoring (no control changes)
Tier 2: decision support (recommendations with operator approval)
Tier 3 (Highest risk): autonomous action (control changes, orchestration, closed-loop tuning)
Keep Tier 3 off-limits until Tiers 1 and 2 are stable, governed, and validated.
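This tiering works best when it is enforced in integration code, not just in a policy document. A minimal sketch of such a gate (the tier names mirror the list above; the gate itself is hypothetical):

```python
from enum import IntEnum

class Tier(IntEnum):
    PASSIVE_MONITORING = 1   # no control changes
    DECISION_SUPPORT = 2     # recommendations, operator approves
    AUTONOMOUS_ACTION = 3    # control changes, closed-loop tuning

MAX_APPROVED_TIER = Tier.DECISION_SUPPORT  # Tier 3 stays off-limits for now

def gate(use_case: str, tier: Tier) -> None:
    """Refuse to deploy any AI use case above the site's approved tier."""
    if tier > MAX_APPROVED_TIER:
        raise PermissionError(f"{use_case}: Tier {tier.value} not approved for this site")

gate("network anomaly detection", Tier.PASSIVE_MONITORING)  # passes
try:
    gate("auto valve tuning", Tier.AUTONOMOUS_ACTION)       # blocked
except PermissionError as e:
    print(e)
```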
Step 2 — Build the trust layer before you build automation
Minimum OT trust controls before serious AI:
Asset inventory you can defend (what’s on the network, what firmware and software it runs, and who owns it)
Network segmentation that limits lateral movement and contains failures
Secure remote access (MFA, PAM, jump hosts, strict vendor access governance)
Integrity controls where feasible (signed firmware/updates, validated configs, controlled change windows)
Continuous monitoring (OT network visibility + alert triage that operators can use)
If you can’t trust identity, telemetry, and change control, you can’t trust AI outputs.
Step 3 — Lock down the AI data pipeline as a security boundary
Treat the AI pipeline like production control logic:
Authenticate and protect data sources
Minimize and scope data flows
Validate inputs (sanity checks, ranges, rate-of-change constraints; a sketch follows this list)
Log decisions and inputs for forensic replay
Plan for rollback and fail-safe behavior
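As a concrete illustration of the input-validation and logging items above, here is a minimal sketch assuming a temperature signal with known physical limits; the bounds, rate constraint, and function name are illustrative placeholders.

```python
import json, sys, time

TEMP_RANGE_C = (-20.0, 150.0)  # illustrative physical plausibility bounds
MAX_STEP_C_PER_S = 5.0         # illustrative rate-of-change constraint

_last = {"value": None, "ts": None}

def validate_and_log(value_c: float, ts: float, log) -> bool:
    """Reject implausible inputs before they reach the model; log everything for replay."""
    ok = TEMP_RANGE_C[0] <= value_c <= TEMP_RANGE_C[1]
    if ok and _last["value"] is not None:
        rate = abs(value_c - _last["value"]) / max(ts - _last["ts"], 1e-6)
        ok = rate <= MAX_STEP_C_PER_S
    log.write(json.dumps({"ts": ts, "value_c": value_c, "accepted": ok}) + "\n")
    if ok:
        _last.update(value=value_c, ts=ts)
    return ok

validate_and_log(71.2, time.time(), sys.stdout)   # accepted
validate_and_log(400.0, time.time(), sys.stdout)  # rejected: outside physical range
```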
Step 4 — Make “human-in-the-loop” mandatory for consequential actions
AI can speed up detection and prioritization. It should not bypass operator judgment where safety or uptime is at stake.
Design requirements:
Clear explanations operators can act on
Confidence indicators tied to measurable signals
“Hold points” where automation stops and a human approves (sketched after this list)
Manual override that’s simple and tested
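Here is a minimal sketch of a hold point: the automation proposes, the human disposes. The request_operator_approval function stands in for whatever HMI or workflow integration a site actually uses; the confidence floor is a placeholder.

```python
def request_operator_approval(action: str, reason: str, confidence: float) -> bool:
    """Stand-in for an HMI prompt: present the proposal, wait for an explicit yes/no."""
    print(f"PROPOSED: {action}\nWHY: {reason}\nCONFIDENCE: {confidence:.0%}")
    return input("Approve? [y/N] ").strip().lower() == "y"

def apply_recommendation(action: str, reason: str, confidence: float) -> None:
    # Hold point: nothing consequential happens without a human decision.
    if confidence < 0.8:  # placeholder confidence floor
        print("Below confidence floor; routed to engineering review instead.")
        return
    if request_operator_approval(action, reason, confidence):
        print(f"Executing {action} (logged with operator identity).")
    else:
        print("Operator declined; no change made.")
```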
Step 5 — Start where AI is most valuable and least invasive
Even critics of agentic AI in OT often agree that passive monitoring and anomaly detection using traditional ML can add value with lower operational risk when implemented correctly.
That’s the right starting point: improve visibility, reduce alert fatigue, and detect abnormal behavior—without touching control.
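Passive detection can start simply. A minimal sketch, assuming you already export per-minute counts of OT protocol connections from a TAP/SPAN feed: learn a baseline, flag statistical outliers, and never touch control. Real deployments would use OT-aware parsers and far richer features.

```python
from statistics import mean, stdev

baseline = [12, 14, 13, 12, 15, 13, 14, 12, 13, 14]  # connections/min during normal ops

def is_anomalous(count: int, history: list[int], z_threshold: float = 4.0) -> bool:
    """Flag counts far outside the learned baseline; detection only, no control changes."""
    mu, sigma = mean(history), stdev(history) or 1.0
    return abs(count - mu) / sigma > z_threshold

print(is_anomalous(13, baseline))  # False: within normal range
print(is_anomalous(90, baseline))  # True: e.g., a scan or a misconfigured device
```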
Easy-to-execute “Do Now” actions (AI-readiness without the hype)
Document the top 5 OT processes where a bad decision causes safety or outage impact.
Validate segmentation (zones/conduits) and remove unnecessary pathways.
Standardize remote access (eliminate ad-hoc vendor tunnels; enforce MFA + PAM).
Establish a maintenance-window playbook for patching and changes (including rollback).
Deploy passive OT monitoring (TAP/SPAN + OT-aware detection) and baseline normal operations.
Run an incident tabletop that includes “AI failure” and “AI deception” scenarios (bad data, wrong recommendation, masked operator view).
Bottom Line
AI in OT can help—but only after you have defensible trust controls, governance, and lifecycle discipline. In OT, the goal is not “more automation.” The goal is safer, more reliable operations with less exposure. The fastest route there is passive monitoring and measurable improvements in integrity, segmentation, and response.
InfoSight can deliver an OT security assessment and AI-readiness review that maps current OT hygiene, trust controls, and monitoring maturity to a phased, low-risk adoption plan.