Article summary
To contain AI hallucinations in industrial automation, engineers must place AI behind a deterministic PLC veto. The PLC should verify every AI-requested command against fixed limits, state permissives, and hardwired safety functions before any actuation occurs. This layered defense separates probabilistic optimization from deterministic execution.
AI does not become safe because it sounds confident. In industrial control, confidence is not a control variable.
The engineering conflict is straightforward: LLMs and agentic AI systems are probabilistic and non-deterministic, while PLCs execute logic on repeatable scan cycles with bounded behavior. That difference is not philosophical. It is architectural, and in safety-relevant contexts it is decisive.
In a recent internal batch of 500 simulated AI-generated setpoint anomalies run through OLLA Lab’s digital twin validation workflows, a PLC-side rate-of-change clamp plus explicit state permissives blocked 100% of the catastrophic out-of-bounds commands before they actuated the virtual valve and motor models. Methodology: n=500 injected anomaly cases across bounded speed, pressure, and valve-position tasks; baseline comparator = direct pass-through of AI-requested command to simulated actuator tag; time window = Ampergon Vallis internal lab runs conducted Q1 2026. This supports the value of deterministic verification in simulation. It does not constitute IEC 61508 certification, SIL evidence, or a claim about all plant architectures.
The practical answer is not to ban AI. It is to deny AI direct execution authority and require the PLC to hold a permanent deterministic veto.
Why do AI hallucinations require a deterministic veto in industrial automation?
AI hallucinations require a deterministic veto because AI outputs are not guaranteed to be bounded, repeatable, or scan-synchronous in the way PLC execution must be.
In a control system, an unsafe command is unsafe even if it was generated by a statistically impressive model. A valve does not care that the token probability looked persuasive.
IEC 61508 and ISO 13849 are built around predictable behavior, defined failure handling, and known safe states. Safety-related control functions must fail in ways that can be analyzed, bounded, and validated. Current LLM-style systems do not meet that bar because their failure modes are not exhaustively characterizable in the way deterministic safety logic requires. That is the real issue: not that AI is new, but that AI is not systematically bounded enough to own the final actuation path.
The scan cycle vs. the inference engine
A PLC executes logic cyclically. It scans inputs, solves logic, updates outputs, and repeats on a known interval, often in the low-millisecond range depending on platform, task structure, and load.
An AI inference engine does not behave that way. Its response time varies with model size, compute availability, network conditions, orchestration overhead, and prompt complexity. Even when average latency looks acceptable, worst-case timing and output behavior remain the problem.
That creates two classes of risk:
- Timing risk: the AI response may arrive late, out of sequence, or during an invalid machine state.
- Content risk: the AI response may request an impossible, unsafe, or context-blind action.
A PLC can tolerate delayed or rejected requests. A pump train cannot tolerate fantasy.
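As a concrete illustration of handling the timing risk, here is a minimal Structured Text sketch of a stale-request watchdog. It assumes the AI gateway increments a heartbeat counter with every new request; the tag names and the 2 s timeout are illustrative, not a standard interface:

```
PROGRAM AI_Request_Watchdog
VAR
    AI_Heartbeat     : DINT;  (* incremented by the AI gateway with each new request *)
    Last_Heartbeat   : DINT;  (* heartbeat value seen on the previous scan *)
    Stale_Timer      : TON;   (* measures how long the heartbeat has been frozen *)
    AI_Request_Stale : BOOL;  (* TRUE when the request stream should be distrusted *)
END_VAR

(* Timer runs while the heartbeat is unchanged; a fresh heartbeat resets it *)
Stale_Timer(IN := (AI_Heartbeat = Last_Heartbeat), PT := T#2S);
AI_Request_Stale := Stale_Timer.Q;
Last_Heartbeat := AI_Heartbeat;
```

The point is that the deterministic layer, not the AI layer, decides when a request is too old to trust.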
What do the standards actually require?
The standards require predictable safety behavior, not fashionable software.
At a high level:
- IEC 61508 addresses functional safety of electrical, electronic, and programmable electronic safety-related systems.
- ISO 13849 addresses safety-related parts of control systems, particularly in machinery contexts.
- Both frameworks depend on defined architectures, validated behavior, and known responses to fault conditions.
That does not mean every non-deterministic software component is forbidden everywhere in an industrial stack. It means non-deterministic components should not be treated as the final safety authority. The distinction matters. Perception and optimization can be probabilistic; the veto and the shutdown path cannot.
What is a deterministic veto in PLC programming?
A deterministic veto is a hard-coded, scan-cycle-bound logic structure that evaluates a requested command and blocks, clamps, or overrides it when the command violates physical limits, process constraints, or machine state permissives.
This is an operational definition, not a slogan. A deterministic veto must be observable in logic and testable against fault cases.
In practice, a deterministic veto often includes:
- Bounds checking: rejecting or clamping values above or below allowable limits
- Rate-of-change limiting: preventing abrupt changes beyond safe ramp rates
- State permissives: allowing commands only in valid operating states
- Proof feedback checks: requiring confirmation from field devices before advancing the sequence
- Alarm and trip handling: forcing a safe response on abnormal conditions
- Mode isolation: preventing remote or AI-requested actions in local, maintenance, or faulted modes
If the AI requests 150% drive speed and the PLC clamps it to the configured maximum while raising an alarm, the veto worked. If the AI can write straight to the output image, the architecture is wrong.
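A minimal Structured Text sketch of the permissive side of that veto follows; the tag names (`System_Auto`, `Seq_Step`, `Local_Mode`, `Active_Trip`) and the `STEP_RUN` constant are illustrative, and real permissive sets are process-specific:

```
(* AI requests are only considered in Auto, in a valid sequence state,
   outside local/maintenance mode, and with no latched trip *)
AI_Request_Permitted := System_Auto
                        AND (Seq_Step = STEP_RUN)
                        AND NOT Local_Mode
                        AND NOT Active_Trip;

IF NOT AI_Request_Permitted THEN
    (* Hold the last verified command and flag the rejection *)
    Verified_Speed_Request   := Actual_Drive_Speed_Command;
    Alarm_AI_Request_Blocked := TRUE;
END_IF;
```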
Operational definitions that matter
These terms are often used loosely. They should not be.
- Deterministic veto: PLC logic that evaluates an external or AI-generated request on every scan and blocks unsafe execution through explicit bounds, permissives, and fault rules.
- Layered defense: architectural separation between AI/IT functions that suggest or optimize and PLC/OT functions that verify, execute, and enforce safety boundaries.
- Simulation-Ready: the ability to trace I/O causality, observe abnormal behavior, inject fault cases, diagnose the control response, and revise logic in a virtual environment before touching physical hardware.
“Simulation-Ready” is not shorthand for “can write ladder syntax.” It means the engineer can prove behavior under stress. Syntax is cheap; deployability is not.
What is the layered defense architecture for AI and PLCs?
The correct layered defense architecture gives AI influence without giving it unchecked authority.
The separation should be explicit:
### Layer 1: AI agent for perception or optimization
This layer may:
- estimate demand,
- suggest setpoints,
- recommend routing,
- classify operating conditions,
- generate draft logic or operator guidance.
This layer should write only to intermediate memory tags, message structures, or supervisory request variables. It should not write directly to physical outputs or safety memory.
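One hedged way to enforce that boundary is a dedicated request structure that the AI gateway may write and the PLC may only read. The fields below are illustrative, not a standard interface:

```
TYPE AI_Setpoint_Request :
STRUCT
    Value      : REAL;  (* proposed setpoint in engineering units *)
    SequenceNo : DINT;  (* incremented per message; detects stale or duplicate requests *)
    Quality    : INT;   (* 0 = good; anything else is rejected by the veto layer *)
    Issued     : DT;    (* gateway timestamp, used for freshness checks *)
END_STRUCT
END_TYPE
```

The physical output tags and safety memory never appear in this structure, so even a badly behaved client has nothing dangerous to write to.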
### Layer 2: Deterministic veto in the PLC
This layer evaluates the AI request against fixed engineering rules such as:
- max/min allowable values,
- valid machine states,
- interlocks,
- permissives,
- sequence step requirements,
- alarm and trip conditions,
- rate-of-change limits.
This is where the command becomes either acceptable, modified, or rejected.
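For example, the rate-of-change rule can be a per-scan clamp. In this sketch, `Max_Delta_Per_Scan` is an assumed constant derived from the safe ramp rate and the task cycle time:

```
(* Limit how far the verified setpoint may move in a single scan *)
IF Requested_Speed > Verified_Speed + Max_Delta_Per_Scan THEN
    Verified_Speed := Verified_Speed + Max_Delta_Per_Scan;
ELSIF Requested_Speed < Verified_Speed - Max_Delta_Per_Scan THEN
    Verified_Speed := Verified_Speed - Max_Delta_Per_Scan;
ELSE
    Verified_Speed := Requested_Speed;
END_IF;
```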
### Layer 3: Hardwired or certified safety execution path
This layer includes, as applicable:
- E-stop chains,
- safety relays,
- safety PLC tasks,
- contactors,
- STO circuits,
- guard interlocks,
- independent shutdown functions.
This layer must remain outside the AI memory map and outside any soft supervisory optimism. Hardwired safety exists because software occasionally develops opinions.
How do you program a bounds clamp and E-stop chain to override AI commands?
You program the veto by forcing every AI-requested command through explicit verification logic before it can influence the final control variable.
The key design principle is simple: AI requests are proposals, not outputs.
Implementing the setpoint clamp
A bounds clamp prevents impossible or unsafe values from reaching the actuator command.
Use a structure like this:
Structured Text example (a ladder implementation uses the same compare, move, and alarm logic on the same tags):

```
IF AI_Requested_Speed > Max_Allowable_Speed THEN
    Actual_Drive_Speed := Max_Allowable_Speed;
    Alarm_AI_Hallucination_Over_Speed := TRUE;
ELSIF AI_Requested_Speed < Min_Allowable_Speed THEN
    Actual_Drive_Speed := Min_Allowable_Speed;
    Alarm_AI_Hallucination_Under_Speed := TRUE;
ELSE
    Actual_Drive_Speed := AI_Requested_Speed;
END_IF;
```
That is the minimum pattern, not the finished architecture.
A production-minded implementation usually adds:
- mode check: only accept AI requests in Auto mode
- state permissive: only accept requests when the machine is in a valid sequence state
- ROC clamp: limit change per scan or per second
- quality bit: reject stale, invalid, or untrusted upstream data
- timeout handling: revert to a fallback value if the request stream drops out
- alarm latching and operator visibility: make rejection visible and reviewable
A more complete control path often looks like this (a Structured Text sketch of the full path follows the list):
- Receive `AI_Requested_Speed`
- Validate source quality and freshness
- Confirm `System_Auto = TRUE`
- Confirm all permissives are true
- Clamp to engineering min/max
- Apply ROC limit
- Write to `Actual_Drive_Speed_Command`
- Trip or inhibit if safety chain is open
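Put together, a hedged Structured Text sketch of that path might look like the following. Every tag name and limit value here is illustrative, and a real implementation would follow the site's alarm and fallback philosophy:

```
PROGRAM AI_Speed_Veto
VAR
    AI_Requested_Speed         : REAL;           (* proposal written by the AI layer *)
    AI_Request_Fresh           : BOOL;           (* from watchdog/quality checks *)
    System_Auto                : BOOL;
    Permissives_OK             : BOOL;           (* AND of all process permissives *)
    Safety_Chain_OK            : BOOL;           (* read-only image of the hardwired chain *)
    Min_Allowable_Speed        : REAL := 0.0;
    Max_Allowable_Speed        : REAL := 1500.0; (* illustrative engineering limit *)
    Max_Delta_Per_Scan         : REAL := 2.0;    (* illustrative ramp limit *)
    Fallback_Speed             : REAL := 0.0;
    Candidate                  : REAL;
    Actual_Drive_Speed_Command : REAL;
    Alarm_AI_Request_Rejected  : BOOL;           (* held for operator review until reset *)
END_VAR

IF NOT Safety_Chain_OK THEN
    (* Open safety chain: the soft command collapses regardless of the request *)
    Actual_Drive_Speed_Command := 0.0;
ELSIF NOT (AI_Request_Fresh AND System_Auto AND Permissives_OK) THEN
    (* Invalid context: revert to the fallback value and flag the rejection *)
    Actual_Drive_Speed_Command := Fallback_Speed;
    Alarm_AI_Request_Rejected := TRUE;
ELSE
    (* Bounds clamp *)
    Candidate := AI_Requested_Speed;
    IF Candidate > Max_Allowable_Speed THEN
        Candidate := Max_Allowable_Speed;
        Alarm_AI_Request_Rejected := TRUE;
    ELSIF Candidate < Min_Allowable_Speed THEN
        Candidate := Min_Allowable_Speed;
        Alarm_AI_Request_Rejected := TRUE;
    END_IF;
    (* Rate-of-change clamp toward the accepted candidate value *)
    IF Candidate > Actual_Drive_Speed_Command + Max_Delta_Per_Scan THEN
        Actual_Drive_Speed_Command := Actual_Drive_Speed_Command + Max_Delta_Per_Scan;
    ELSIF Candidate < Actual_Drive_Speed_Command - Max_Delta_Per_Scan THEN
        Actual_Drive_Speed_Command := Actual_Drive_Speed_Command - Max_Delta_Per_Scan;
    ELSE
        Actual_Drive_Speed_Command := Candidate;
    END_IF;
END_IF;
```

Note that the AI request never touches `Actual_Drive_Speed_Command` directly; it only ever proposes a candidate that the deterministic logic may accept, modify, or discard.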
What should the ladder logic enforce before passing an AI command?
The PLC should enforce the same things a cautious commissioning engineer would ask before energizing equipment:
- Is the machine in the correct mode?
- Is the sequence at the correct step?
- Are all permissives satisfied?
- Are feedbacks healthy?
- Is the requested value within physical limits?
- Is the requested change plausible for this process?
- Is there any active trip or safety condition?
That last question tends to matter more than the architecture diagram.
The master E-stop chain
The master E-stop chain must sit outside the AI authority boundary because emergency stop behavior cannot depend on inference quality, network timing, or supervisory software state.
In practice:
- The E-stop path should be hardwired or handled in a certified safety function as required by the application.
- The AI system should neither write to nor suppress the E-stop state.
- The standard control task may observe the E-stop state, but it should not be the sole guardian of it.
- Any AI-requested command must collapse harmlessly when the E-stop chain opens.
A useful rule is this: if a network hiccup, model error, or parser failure can keep motion enabled, the design is not finished.
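In the standard task, the corresponding guard can be as small as the following sketch. `EStop_Chain_Closed` is assumed to be a read-only image of the hardwired chain, which removes power independently of this code:

```
IF NOT EStop_Chain_Closed THEN
    (* The hardwired chain has already removed power; keep the soft
       command coherent with that state and inhibit new AI requests *)
    Drive_Run_Command          := FALSE;
    Actual_Drive_Speed_Command := 0.0;
    AI_Requests_Inhibited      := TRUE;  (* cleared only by a manual reset *)
END_IF;
```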
How do you test a deterministic veto against realistic fault cases?
You test it by injecting abnormal commands and proving that the PLC response remains bounded, observable, and correct under those conditions.
This is where many teams stop too early. A clean nominal run proves almost nothing.
At minimum, test these cases:
- AI requests above maximum allowable value
- AI requests below minimum allowable value
- abrupt step changes beyond safe ramp rate
- commands issued in the wrong machine state
- commands issued while a permissive is false
- stale or frozen upstream values
- oscillating or noisy request signals
- E-stop or trip activation during an active AI request
For each case, verify:
- final actuator command,
- alarm behavior,
- sequence behavior,
- operator visibility,
- recovery behavior after the fault clears.
A veto that clamps correctly but leaves the sequence stranded in an incoherent state is only half a solution.
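One lightweight way to drive those cases in simulation is a stepped fault injector. The sketch below assumes a `Test_Step` counter advanced by the test procedure; the values and tag names are illustrative, not a platform test framework:

```
(* Fault injector for simulation runs: each step feeds one abnormal case
   into the veto logic; outcomes are judged against the checklist above *)
CASE Test_Step OF
    0: AI_Requested_Speed := 1.5 * Max_Allowable_Speed;  (* overspeed request *)
    1: AI_Requested_Speed := Min_Allowable_Speed - 50.0; (* under-range request *)
    2: AI_Requested_Speed := Max_Allowable_Speed;        (* legal value... *)
       System_Auto        := FALSE;                      (* ...in the wrong mode *)
    3: AI_Request_Fresh   := FALSE;                      (* frozen upstream value *)
END_CASE;
```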
How does OLLA Lab simulate non-deterministic AI failures?
OLLA Lab is useful here as a bounded validation environment where engineers can inject bad commands, observe equipment response, and revise ladder logic before any live hardware is involved.
That positioning matters. OLLA Lab is not a safety certifier and not a substitute for formal validation on the target platform. It is a practical environment for rehearsing high-risk commissioning logic in a contained way.
Within OLLA Lab, engineers can:
- build ladder logic in a web-based editor,
- run simulation without physical hardware,
- monitor tags, I/O, analog values, and PID-related variables,
- use scenario-based equipment models,
- compare ladder state against simulated machine behavior,
- revise logic after observing a fault case.
For this article’s use case, the relevant workflow is straightforward (an injection sketch follows the list):
- Create an intermediate tag such as `AI_Requested_Speed`
- Route that tag through clamp and permissive logic
- Observe the resulting `Actual_Drive_Speed_Command`
- Inject abnormal values or unstable patterns
- Confirm the simulated motor, pump, or valve model never exceeds safe limits
- Review alarms and revise the logic
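For the unstable-pattern step, a self-resetting timer that slams the request between its extremes every 500 ms is enough to prove the ROC clamp holds. The tags and period below are illustrative:

```
VAR
    Toggle_Timer : TON;
    Toggle_State : BOOL;
END_VAR

(* Self-resetting timer: Q pulses for one scan every 500 ms *)
Toggle_Timer(IN := NOT Toggle_Timer.Q, PT := T#500MS);
IF Toggle_Timer.Q THEN
    Toggle_State := NOT Toggle_State;
END_IF;

(* Oscillating request: a worst-case noisy client *)
IF Toggle_State THEN
    AI_Requested_Speed := Max_Allowable_Speed;
ELSE
    AI_Requested_Speed := Min_Allowable_Speed;
END_IF;
```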
This is where OLLA Lab becomes operationally useful. You cannot responsibly test a hallucinated 200% valve-open command on a live process skid just to see what happens. Curiosity is valuable; bent hardware is expensive.
What “Simulation-Ready” looks like in practice
An engineer is Simulation-Ready when they can do all of the following in a virtual environment:
- trace cause-and-effect from input to output,
- observe whether a permissive or interlock blocked execution,
- identify the exact rung or condition that produced the response,
- compare control logic state to simulated equipment state,
- inject a fault and explain the resulting behavior,
- revise the logic and re-test until the response is bounded and repeatable.
That is the threshold that matters for commissioning preparation. Not “can place a timer instruction,” but “can prove why the machine did or did not move.”
What evidence should an engineer document when validating AI-veto logic?
The right artifact is a compact body of engineering evidence, not a screenshot gallery.
Use this structure:
- System Description: Define the machine or process cell, the controlled variable, the operating modes, and the relevant safety boundary.
- Operational definition of “correct”: State what acceptable behavior means in observable terms: allowable range, valid states, expected alarm response, trip behavior, and recovery behavior.
- The injected fault case: Document the exact abnormal input: overspeed request, invalid state command, stale tag, noisy signal, or E-stop during motion.
- Ladder logic and simulated equipment state: Show the control logic alongside the simulated actuator or process response so the causal chain is visible.
- The revision made: Record what changed in the logic: clamp added, permissive tightened, timeout introduced, alarm latched, ROC limit adjusted.
- Lessons learned: State what the fault exposed and what design rule now follows from it.
This produces evidence that another engineer can review, challenge, and reproduce. That is the standard worth aiming for.
What are the common design mistakes when placing AI near safety PLC logic?
The most common mistake is allowing AI to bypass the verification layer.
Other recurring errors include:
- writing AI output directly to actuator command tags,
- treating average AI performance as a safety argument,
- forgetting stale-data detection,
- omitting state-based permissives,
- relying on HMI alarms instead of hard execution blocks,
- placing too much trust in simulation without platform-specific final validation,
- confusing generated ladder with validated ladder.
A generated rung is not a proven rung. Industrial control is still governed by what the machine actually does.
Conclusion
The correct pattern is not “AI controls the plant.” The correct pattern is “AI may suggest, but the PLC decides, and the safety layer can still say no.”
A deterministic veto is the engineering mechanism that makes that boundary real. It converts unbounded requests into bounded control behavior through clamps, permissives, interlocks, and independent safety functions. That is how you keep probabilistic software from becoming a physical incident.
OLLA Lab fits this workflow as a rehearsal and validation environment. It allows engineers to practice the uncomfortable cases—bad setpoints, invalid states, noisy signals, failed permissives, emergency stops—without using live equipment as the test bench. That is a more credible path to commissioning judgment than memorizing syntax and hoping the first real fault is gentle.
References
- IEC 61131-3: Programmable controllers — Part 3: Programming languages
- IEC 61508: Functional safety of electrical/electronic/programmable electronic safety-related systems (overview)
- NIST AI Risk Management Framework (AI RMF 1.0)
- Digital Twin in Manufacturing: A Categorical Literature Review and Classification (IFAC, DOI)
- Digital Twin in Industry: State-of-the-Art (IEEE, DOI)