How to Separate AI Perception from PLC Safety: The “Medulla Oblongata” Architecture

This article explains why AI should remain upstream of deterministic PLC control, and how watchdogs, clamps, permissives, and fallback logic help validate AI-originated requests before equipment acts.

Direct answer

The “Medulla Oblongata” architecture separates non-deterministic AI perception from deterministic PLC control. In this model, AI may propose setpoints or classifications, but a PLC remains the hard real-time validation and execution layer that enforces limits, permissives, watchdogs, and safe-state behavior before any command reaches equipment.

AI is not a safety controller, and treating it like one is an architectural error before it becomes a commissioning error. The useful distinction is simple: AI is good at perception, estimation, and optimization; PLCs are good at deterministic execution, interlocks, and bounded control.

That distinction matters because industrial control systems do not fail in the abstract. They fail as missed heartbeats, stale values, race conditions, and outputs that move when they should not. The machine is rarely impressed by a clever model.

A recent internal Ampergon Vallis bench test supports the practical value of this separation. During a 100-hour OLLA Lab simulation exercise, asynchronous external setpoint updates injected into a closed-loop control scenario produced a 14% increase in integral windup fault events relative to a clamped and rate-limited PLC validation path, while the PLC-governed path held response variance to a deterministic 3 ms execution envelope for the validation rung sequence. [Methodology: 100-hour simulated test run; task = external setpoint handoff into a bounded control scenario; baseline comparator = direct asynchronous setpoint injection without clamp-and-watchdog validation; time window = single continuous 100-hour run.] This metric supports the value of defensive PLC validation in simulation. It does not support any broad claim about all AI systems, all plants, or formal safety certification.

Why is deterministic PLC execution required for AI-driven automation?

Deterministic execution is required because safety and primary control decisions must occur within known time bounds. AI inference engines, whether local or remote, do not guarantee that property.

A PLC executes its program in a bounded scan cycle. Inputs are read, logic is solved, outputs are written, and the cycle repeats at a defined interval. That interval may vary slightly by platform and program load, but it is engineered to remain predictable and observable. Predictability is the point.

AI systems operate differently. Their response time can vary with processor load, memory pressure, scheduler behavior, model size, thermal throttling, middleware delays, or network latency. Even when average inference is fast, worst-case timing is what matters when equipment can move, collide, overfill, overheat, or fail to trip.

The math of the scan cycle versus asynchronous inference

The engineering distinction is not philosophical. It is temporal.

  • PLC control path
      • Executes in a repeated scan cycle
      • Supports bounded logic evaluation
      • Is designed for deterministic I/O handling
      • Can be audited against timing and fault behavior
  • AI compute path
      • Executes asynchronously
      • May return results at irregular intervals
      • Can stall, time out, jitter, or produce stale outputs
      • Often depends on software stacks not designed as primary safety logic

In practical terms, a PLC can be trusted to evaluate a permissive every scan. An AI service can be useful, but not trusted in the same way. Useful and trustworthy are not synonyms. Plants learn that distinction expensively when they forget it.
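That contrast can be made concrete with a small sketch. The following Python fragment is purely illustrative (the names `AiMailbox`, `scan`, and the scan counts are assumptions, not any vendor API): it models a scan-style loop that evaluates its fallback every cycle while treating the AI value as an asynchronous mailbox that may be absent or stale on any given scan.

```python
# Illustrative sketch: a PLC-style scan evaluates its decision every cycle;
# the AI value arrives asynchronously and may be stale when the scan runs.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AiMailbox:
    value: Optional[float] = None   # last value received from the AI service
    age_scans: int = 0              # scans since the value last changed

    def tick(self) -> None:
        self.age_scans += 1

    def update(self, value: float) -> None:
        self.value = value
        self.age_scans = 0

def scan(mailbox: AiMailbox, max_age_scans: int, default: float) -> float:
    """One bounded scan: use the AI value only if fresh, else fall back."""
    mailbox.tick()
    if mailbox.value is None or mailbox.age_scans > max_age_scans:
        return default          # deterministic fallback, evaluated every scan
    return mailbox.value

box = AiMailbox()
box.update(42.0)
out = [scan(box, max_age_scans=3, default=0.0) for _ in range(6)]
# The first three scans use the fresh value; later scans fall back to 0.0.
```

The deterministic layer never waits on the asynchronous one: every scan produces a bounded answer whether or not the AI has reported in.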

What standards say about deterministic control

IEC 61508 establishes the functional safety framework for electrical, electronic, and programmable electronic safety-related systems. A central implication for this discussion is straightforward: safety-related behavior requires demonstrable, bounded, and validated execution characteristics.

That does not mean every PLC program is automatically safe. It means safety functions require deterministic design, verification, and lifecycle discipline that asynchronous AI systems do not inherently provide. exida and related functional safety guidance make the same practical point in industry terms: if a function is safety-critical, timing uncertainty and unverifiable behavior are not minor inconveniences.

A concise correction is useful here: AI can assist a safety-related system, but it should not be treated as the primary deterministic safety layer. You cannot wire a light curtain to an LLM and call it architecture.

What is the “Medulla Oblongata” architecture in process control?

The “Medulla Oblongata” architecture is a layered control model in which AI proposes and the PLC disposes. The AI performs high-level perception or optimization; the PLC remains the hard real-time authority that validates, clamps, sequences, or rejects those requests before hardware acts.

The biological analogy is memorable because it is structurally accurate enough to be useful. The cortex interprets and plans. The medulla handles autonomic survival functions. In automation terms, AI may estimate product quality, detect objects, or suggest a feed-rate adjustment; the PLC still owns interlocks, stop chains, permissives, and bounded actuation.

The hierarchy of control

  1. Perception Layer (AI)
      • Interprets images, trends, or multivariable process context
      • Generates a classification, advisory, or requested setpoint
      • Operates asynchronously and may be wrong, delayed, or stale
  2. Validation Layer (PLC)
      • Checks the request against hard limits
      • Verifies machine state, permissives, and interlocks
      • Confirms freshness through heartbeat or watchdog logic
      • Applies rate-of-change limits, clamps, and fallback behavior
  3. Execution Layer (Hardware)
      • Receives only PLC-approved commands
      • Includes drives, valves, contactors, actuators, and alarms
      • Moves to a safe or bounded state if validation fails

This is not an anti-AI position. It is a pro-boundary position. Good architecture is mostly disciplined refusal.

What the PLC must always retain

The PLC must retain absolute authority over functions that require deterministic response and bounded failure behavior.

That typically includes:

  • Emergency stop processing
  • Safety interlock evaluation
  • Motion permissives
  • Collision avoidance logic
  • Hard travel or speed limits
  • Safe-state fallback on comms loss
  • Sequence gating for hazardous transitions
  • Final command arbitration to actuators

If an AI requests a speed increase, the PLC may allow it, reduce it, delay it, or reject it. The final answer belongs to the deterministic layer.
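That allow-reduce-delay-reject decision can be sketched as a single arbitration function. This Python fragment is a hedged illustration (parameter names and limits are assumptions, not a real controller interface):

```python
# Illustrative sketch: final command arbitration in the deterministic layer.
# The AI's requested speed may be allowed, reduced to a hard limit, ramped
# instead of jumped (a form of delay), or rejected outright.
def arbitrate_speed(requested: float, current: float,
                    permissive_ok: bool, max_speed: float,
                    max_step: float) -> float:
    if not permissive_ok:
        return current                    # reject: hold the last safe command
    bounded = min(requested, max_speed)   # reduce: clamp to the hard limit
    step = bounded - current
    if abs(step) > max_step:              # delay: ramp instead of jumping
        bounded = current + max_step * (1 if step > 0 else -1)
    return bounded
```

With a 45.0 ceiling and a 5.0 per-update step, a request for 60.0 from a current speed of 30.0 is first reduced to 45.0 and then ramped to 35.0 on this update; with the permissive false, the command simply holds at 30.0.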

How do you program hard real-time safety limits in Ladder Logic?

You program hard real-time safety limits by treating AI outputs as untrusted external inputs. That means every AI-originated value must pass through explicit defensive logic before it can influence a machine or process.

This is where ladder logic stops being syntax practice and becomes commissioning judgment. A rung that merely “works” is not enough. The rung must also fail in a controlled way.

Essential defensive rungs for AI-to-PLC handshaking

#### 1. Watchdog timer for heartbeat supervision

A watchdog timer verifies that the AI or upstream compute system is still communicating within an acceptable interval.

  • The AI sends a heartbeat bit or incrementing value
  • The PLC resets a TON or equivalent timer when the heartbeat changes

If the timer expires, the PLC:

  • invalidates the AI request,
  • forces a safe default,
  • raises a fault or alarm,
  • and prevents further execution until recovery conditions are met

A dead upstream service should not leave a live output path behind it. That is not intelligence; that is residue.
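The supervision pattern above can be sketched in Python for clarity. This is an illustrative model of the rung behavior, not a vendor instruction; the class name, timeout units, and latching policy are all assumptions:

```python
# Illustrative sketch of heartbeat supervision with a latched fault:
# the timer restarts on any heartbeat change and, once expired, the fault
# holds until an explicit recovery step, like a TON-plus-latch rung pair.
class HeartbeatWatchdog:
    def __init__(self, timeout_scans: int):
        self.timeout_scans = timeout_scans
        self.last_heartbeat = None
        self.scans_since_change = 0
        self.faulted = False

    def scan(self, heartbeat: int) -> bool:
        """Return True while the upstream link is considered alive."""
        if heartbeat != self.last_heartbeat:
            self.last_heartbeat = heartbeat
            self.scans_since_change = 0       # timer reset on heartbeat change
        else:
            self.scans_since_change += 1
        if self.scans_since_change >= self.timeout_scans:
            self.faulted = True               # latch: no silent self-recovery
        return not self.faulted

    def reset(self) -> None:
        """Explicit recovery step: faults should not clear themselves."""
        self.faulted = False
        self.scans_since_change = 0
```

Note the deliberate latch: even if the heartbeat resumes, the fault stays asserted until `reset()` is called, mirroring the rule that execution stays blocked until recovery conditions are met.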

#### 2. Limit block for hard clamping

A limit instruction constrains requested values to a physically safe operating band.

Example use cases:

  • Clamp requested motor speed between minimum cooling speed and maximum safe RPM
  • Clamp a valve demand to a range that avoids hydraulic shock
  • Clamp a temperature setpoint to equipment design limits

Example ladder logic structure:

  • Instruction: LIM (Limit Test)
  • Test value: `AI_Requested_Speed`
  • Low Limit: 15.0 Hz
  • High Limit: 45.0 Hz
  • Output: `Safe_Speed_Permissive`

The point is not elegance. The point is that no upstream optimism can command overspeed.
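The same logic can be sketched in Python. Note that LIM as used above is a *test* that drives a permissive bit; bounding the value actually applied is a separate clamp. Both are shown below with illustrative names and limits:

```python
# Illustrative sketch: a limit *test* produces a permissive bit, while a
# separate clamp bounds the value that is actually applied downstream.
def limit_test(value: float, low: float, high: float) -> bool:
    """True only when the request sits inside [low, high]."""
    return low <= value <= high

def clamp(value: float, low: float, high: float) -> float:
    """Bound the value itself so no upstream request can command overspeed."""
    return max(low, min(value, high))

safe_speed_permissive = limit_test(52.0, 15.0, 45.0)   # out of band: False
applied_speed = clamp(52.0, 15.0, 45.0)                # hard ceiling: 45.0
```

Whether the control philosophy rejects the out-of-band request or clamps it is a design decision; either way, the decision lives in the deterministic layer.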

#### 3. Rate-of-change limiting

A rate-of-change limiter prevents abrupt command changes that are technically valid in magnitude but unsafe in transition.

Examples:

  • Prevent a valve from moving from 10% to 100% in one update
  • Restrict VFD speed increases to a defined ramp
  • Limit setpoint movement per scan or per second

This matters because many failures occur in the transition, not the endpoint. The final number may be legal while the path to it is reckless.
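A per-update rate limiter can be sketched in a few lines. This Python fragment is illustrative (step sizes and names are assumptions); it moves the command at most `max_delta` toward the request on each update:

```python
# Illustrative sketch: per-update rate-of-change limiting. The endpoint may
# be legal while the transition is not, so each update moves the command at
# most max_delta toward the requested value.
def rate_limit(current: float, requested: float, max_delta: float) -> float:
    delta = requested - current
    if delta > max_delta:
        return current + max_delta
    if delta < -max_delta:
        return current - max_delta
    return requested

# A valve asked to jump from 10% to 100% instead ramps in bounded steps:
pos = 10.0
path = []
for _ in range(5):
    pos = rate_limit(pos, 100.0, 20.0)
    path.append(pos)
# path is [30.0, 50.0, 70.0, 90.0, 100.0]: the endpoint is reached, but
# only through transitions the equipment can tolerate.
```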

#### 4. Staleness detection and sequence validity

A value can be numerically plausible and still operationally invalid.

The PLC should verify:

  • timestamp freshness or sequence count
  • machine mode compatibility
  • current phase of operation
  • permissive state before applying the request
  • whether the request belongs to the active recipe, batch step, or sequence state

A stale setpoint during the wrong sequence phase is just a polite form of bad data.
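Those freshness and state checks can be combined into one admissibility test. The sketch below is illustrative: the field names, the sequence-counter convention, and the mode strings are all assumptions for the example:

```python
# Illustrative sketch: a request is admissible only when it is fresher than
# the last accepted one, matches the machine mode, and belongs to the
# active sequence step. A plausible number can still fail all three.
from dataclasses import dataclass

@dataclass
class AiRequest:
    setpoint: float
    sequence_id: int     # incrementing counter from the AI side
    mode: str            # mode the request was generated for
    step: str            # recipe/batch step the request targets

def request_is_valid(req: AiRequest, last_sequence_id: int,
                     machine_mode: str, active_step: str) -> bool:
    fresh = req.sequence_id > last_sequence_id      # reject stale or replayed data
    mode_ok = req.mode == machine_mode              # mode compatibility
    step_ok = req.step == active_step               # belongs to the active step
    return fresh and mode_ok and step_ok

req = AiRequest(setpoint=42.0, sequence_id=7, mode="AUTO", step="FILL")
ok = request_is_valid(req, last_sequence_id=6,
                      machine_mode="AUTO", active_step="FILL")     # admissible
stale = request_is_valid(req, last_sequence_id=7,
                         machine_mode="AUTO", active_step="FILL")  # rejected
```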

What “correct” means in this architecture

Correctness must be defined operationally, not cosmetically. In this context, a correct AI-to-PLC integration means:

  • the PLC accepts only fresh and bounded requests,
  • the machine moves only when permissives are true,
  • unsafe transitions are blocked,
  • comms loss produces a defined fallback state,
  • and every rejection path is observable in tags, alarms, or event logic.

That definition is more useful than “the rung compiled.” Compilation is a low bar. Equipment damage clears it regularly.

How do you validate AI-to-PLC handshaking before live commissioning?

You validate AI-to-PLC handshaking by simulating abnormal behavior deliberately, then proving that the PLC logic rejects or contains it. Validation is not showing that the happy path works. Validation is showing that bad inputs fail safely and observably.

This is where “Simulation-Ready” needs a strict definition. A Simulation-Ready engineer is one who can prove, observe, diagnose, and harden control logic against realistic process behavior before it reaches a live process. That is a higher standard than knowing ladder syntax. It is the difference between drawing logic and commissioning it.

What should be tested before hardware exposure

At minimum, engineers should test:

  • lost heartbeat from the AI service
  • delayed updates and stale values
  • out-of-range setpoints
  • implausible but in-range values
  • rapid oscillating requests
  • mode mismatch between AI request and machine state
  • bad startup sequence timing
  • fallback behavior after comms restoration

If those cases have not been exercised, the architecture is not validated. It is merely hopeful.
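Part of that checklist can be expressed as executable checks. The validator below is a toy stand-in (its signature, limits, and mode strings are assumptions, not the article's ladder logic); the point is that each abnormal case gets an explicit, repeatable test rather than a hopeful demo:

```python
# Illustrative sketch: abnormal cases from the checklist written as
# executable checks against a toy validator. Accept only fresh, in-range,
# mode-matched requests; everything else must fail in a defined way.
def validate(setpoint, low, high, heartbeat_age, max_age, mode, expected_mode):
    """Return (accepted, value) for an AI-originated setpoint request."""
    if heartbeat_age > max_age:
        return False, None                 # lost or stale heartbeat: reject
    if mode != expected_mode:
        return False, None                 # mode mismatch: reject
    if not (low <= setpoint <= high):
        return False, None                 # out-of-range: reject
    return True, setpoint

# Lost heartbeat must fail safely:
assert validate(30.0, 15.0, 45.0, 10, 3, "AUTO", "AUTO") == (False, None)
# Out-of-range setpoint must be rejected:
assert validate(90.0, 15.0, 45.0, 1, 3, "AUTO", "AUTO") == (False, None)
# Mode mismatch must be rejected even when the number is plausible:
assert validate(30.0, 15.0, 45.0, 1, 3, "MANUAL", "AUTO") == (False, None)
# Only the nominal case passes:
assert validate(30.0, 15.0, 45.0, 1, 3, "AUTO", "AUTO") == (True, 30.0)
```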

How can engineers validate AI-to-PLC handshaking in OLLA Lab?

OLLA Lab is useful here as a bounded validation environment for rehearsing the PLC side of AI integration risk. It is not an AI safety certifier, and it is not a substitute for formal site acceptance, hazard review, or functional safety assessment. It is a place to practice the exact control-hardening tasks that are too risky or too expensive to improvise on live equipment.

The practical advantage is simple: engineers can inject bad behavior safely and repeatedly.

What OLLA Lab allows engineers to rehearse

Using the web-based ladder logic editor, simulation mode, and variables panel, engineers can:

  • build ladder logic that supervises an external requested value,
  • toggle inputs and internal tags in real time,
  • observe outputs and intermediate permissives,
  • test timers, counters, comparators, math, and PID-related behavior,
  • compare ladder state against simulated equipment response,
  • and revise the logic after observing a fault path.

That workflow matters because AI integration failures often show up as timing and state problems, not just wrong numbers. OLLA Lab gives those problems somewhere to become visible.

A practical validation workflow in OLLA Lab

A credible rehearsal sequence looks like this:

  1. Create the control path
      • Build a rung sequence that receives an external requested speed, flow, or position value
      • Add permissives, mode checks, and a final execution condition
  2. Add defensive logic
      • Insert a watchdog timer
      • Add a clamp using limit logic
      • Add a rate limiter or stepwise ramp
      • Define fallback behavior on timeout or invalid data
  3. Inject abnormal cases
      • Use the variables panel to force out-of-range values
      • Pause or stop the heartbeat signal
      • Introduce abrupt command changes
      • Switch process state mid-command
  4. Observe cause and effect
      • Confirm that outputs remain bounded
      • Verify that timeout logic trips correctly
      • Check that alarms or fault bits become visible
      • Compare simulated machine behavior to expected control philosophy
  5. Revise and re-run
      • Adjust timer values, limits, and sequence conditions
      • Re-test the same faults until the rejection path is deterministic and legible

This is where OLLA Lab becomes operationally useful. It lets engineers rehearse veto logic, not merely admire it.

What engineering evidence should you produce to show this skill?

You should produce a compact body of engineering evidence that demonstrates control reasoning under fault, not a gallery of editor screenshots. Screenshots prove that a screen existed. They do not prove that the logic held under stress.

Use this structure:

  1. System Description: Describe the machine or process cell, the control objective, the AI-provided input, and the PLC-owned safety or validation role.
  2. Operational definition of “correct”: State the exact acceptance criteria: permitted operating range, timeout interval, permissive conditions, fallback state, and expected alarm behavior.
  3. Ladder logic and simulated equipment state: Show the relevant rungs, tags, and the simulated machine or process state that those rungs govern.
  4. The injected fault case: Document the abnormal condition introduced: stale heartbeat, overspeed request, mode mismatch, oscillating command, or invalid sequence timing.
  5. The revision made: Explain what changed in the logic: added clamp, changed watchdog interval, inserted interlock, added rate-of-change limit, or revised sequence gating.
  6. Lessons learned: State what the failure revealed about the control philosophy, machine assumptions, or data validity path.

That evidence structure is far more persuasive than a polished demo clip. Commissioning risk responds to proof, not aesthetics.

Where does AI actually belong in an industrial automation stack?

AI belongs upstream of deterministic control, not in place of it. Its useful role is to generate advisory or candidate values from complex data that conventional logic handles poorly.

Examples include:

  • vision-based classification
  • anomaly detection
  • quality estimation
  • predictive maintenance scoring
  • multivariable optimization recommendations
  • adaptive setpoint suggestions

The PLC then decides whether those outputs are admissible in the current machine context.

A clean architectural rule

A practical rule is this: AI may recommend; the PLC must authorize.

That rule preserves the strengths of both layers:

  • AI handles ambiguity, pattern recognition, and optimization
  • PLC logic handles timing, permissives, limits, and deterministic execution

The result is not glamorous, which is often a good sign in controls. Good plant architecture usually looks slightly boring right up until it prevents an expensive mistake.

What are the common design mistakes when combining AI and PLC control?

The most common mistake is allowing AI outputs to bypass explicit validation logic. Once that happens, the architecture has already lost its boundary discipline.

Other recurring mistakes include:

  • treating a network timestamp as equivalent to deterministic freshness
  • assuming in-range values are automatically safe
  • forgetting sequence-state validation
  • omitting fallback behavior on heartbeat loss
  • allowing advisory outputs to become direct commands over time
  • testing only nominal behavior and not abnormal transitions
  • confusing simulator success with site readiness

That last one deserves emphasis. Simulation is a validation environment, not a declaration of field competence. It is where engineers learn to observe, diagnose, and harden logic before live exposure. Useful, necessary, and still not the same as standing beside a running line at 2:00 a.m. with maintenance waiting.

Conclusion

The safe way to integrate AI into industrial automation is to separate perception from authority. AI can classify, estimate, and recommend. The PLC must remain the hard real-time core that validates, clamps, sequences, and vetoes.

That is the “Medulla Oblongata” architecture in one line: the AI thinks, the PLC keeps the organism alive.

For engineers, the practical task is not to celebrate AI outputs but to harden the control path around them. Watchdogs, limits, permissives, rate-of-change checks, stale-data detection, and safe-state fallbacks are not optional details. They are the architecture.

And if that sounds less futuristic than the marketing decks, good. Machines generally prefer adults.

Editorial transparency

This blog post was written by a human, with all core structure, content, and original ideas created by the author. However, this post includes text refined with the assistance of ChatGPT and Gemini. AI support was used exclusively for correcting grammar and syntax, and for translating the original English text into Spanish, French, Estonian, Chinese, Russian, Portuguese, German, and Italian. The final content was critically reviewed, edited, and validated by the author, who retains full responsibility for its accuracy.

About the Author: Jose NERI, PhD, Lead Engineer at Ampergon Vallis

Fact-Check: Technical validity confirmed on 2026-03-23 by the Ampergon Vallis Lab QA Team.

Ready for implementation

Use simulation-backed workflows to turn these insights into measurable plant outcomes.

© 2026 Ampergon Vallis. All rights reserved.