AI Industrial Automation

How to Diagnose Double-Coil Syndrome in PLC Logic and Why AI Misses Scan Cycles

Double-coil syndrome happens when multiple rungs write to the same PLC output, causing deterministic overwrites during the scan cycle. This article explains the fault, why generic AI often produces it, and how to validate logic in OLLA Lab.

Direct answer

Double-coil syndrome occurs when a PLC program writes to the same output address in multiple rungs, causing the last evaluated rung to overwrite earlier logic during the scan cycle. Because generic AI assistants often ignore PLC execution order and deferred output updates, simulation-based validation is needed to detect and correct these deterministic overwrite faults.

A common misconception is that double-coil behavior is a race condition. In most PLCs, it is not. It is a deterministic overwrite caused by writing the same addressed bit in multiple places and then forgetting that the controller resolves state in scan order, not by programmer intention.

In a recent Ampergon Vallis Lab benchmark, 14% of 500 AI-generated ladder logic scripts for a standard conveyor sorting task contained duplicate output-coil addressing that produced destructive overwrites [Methodology: n=500 generated scripts for one conveyor sorting scenario, compared against a human-reviewed single-coil baseline pattern, collected during internal testing in Q1 2026]. This supports a narrow claim: generic AI frequently emits scan-cycle-invalid output patterns in bounded ladder tasks. It does not support a claim about all AI tools, all PLC dialects, or all automation workloads.

What is the PLC scan cycle, and why does it break AI logic?

The PLC scan cycle is a deterministic execution model in which physical I/O updates are separated from logic evaluation. That separation is the core issue.

Under the IEC 61131-3 programming model, ladder logic is evaluated in a repeatable sequence. Exact timing varies by controller and task configuration, but the core pattern is stable enough to matter operationally:

  • Read inputs: the controller copies physical input states into memory.
  • Execute logic: rungs are solved in order, typically top-to-bottom and left-to-right.
  • Write outputs: the final memory image is pushed to physical output terminals.

The important distinction is simple: internal state may change during logic execution, but physical outputs are generally updated after execution completes. If two rungs write to the same output bit, the later rung wins for that scan.
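The deferred-update behavior above can be modeled in a few lines of Python. This is an illustrative sketch, not a real PLC API: the `scan` function and `rungs` structure are invented here to show why the last write to an output bit wins within one scan.

```python
# Minimal model of a PLC scan: solve rungs in order against an output
# image, then publish the image. All names here are illustrative.

def scan(inputs, rungs):
    """One scan cycle: rungs are solved top to bottom into a memory
    image; physical outputs would be updated only after this returns."""
    image = {}
    for condition, coil in rungs:
        image[coil] = condition(inputs)  # each coil write overwrites prior writes
    return image

# Two rungs both write Motor_Run: the later rung overwrites the earlier one.
rungs = [
    (lambda i: i["Sensor_A"], "Motor_Run"),   # rung 1: evaluates True
    (lambda i: i["Sensor_B"], "Motor_Run"),   # rung 2: evaluates False, runs last
]

result = scan({"Sensor_A": True, "Sensor_B": False}, rungs)
print(result["Motor_Run"])   # False: rung 1 was true, but rung 2 wrote last
```

Rung 1 is true and appears to command the motor, yet the published output is false, because only the final memory image reaches the terminals.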

The 3-phase execution sequence is not a style preference

The scan cycle is not merely a teaching model. It is the basis for deterministic control behavior, troubleshooting, and commissioning judgment.

That matters because generic LLMs are trained largely on asynchronous or event-driven software patterns. In those environments, a statement often implies immediate effect, callbacks may fire independently, and concurrency bugs arise from timing interactions between threads, tasks, or processes. PLC ladder logic does not usually work that way in the ordinary single-task case.

Why generic AI defaults to the wrong mental model

Generic AI assistants tend to treat ladder instructions as if they were event handlers. They infer: if this condition becomes true, energize that output now. That is an IT-shaped assumption applied to an OT-shaped machine.

In a PLC, the output coil is usually better understood as a write to a memory location that will later participate in the output image update. Once that distinction is missed, duplicate coils can look harmless to the model. They are not harmless. They are deferred contradictions.

Is double-coil syndrome actually a race condition?

No. In standard PLC ladder execution, double-coil syndrome is usually a deterministic overwrite, not a race condition.

A race condition in IT refers to a timing-dependent fault between concurrent operations, where the result depends on which thread or process reaches shared state first. That definition is useful in software engineering, but it is often misapplied in controls.

A double-coil fault in a typical PLC scan is different:

  • The controller executes in a defined order.
  • The same addressed bit is written more than once.
  • The last write in execution order determines the final state for that scan.
  • The outcome is repeatable unless tasking, jumps, or asynchronous services complicate the program structure.
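The repeatability point can be demonstrated directly. In this hedged sketch (variable names are illustrative), the same inputs produce the same resolved state on every scan; there is no timing window to win or lose:

```python
# The overwrite is deterministic: repeated scans with identical inputs
# always resolve to the same final state. Names are illustrative.
def resolve(sensor_a, sensor_b):
    motor_run = sensor_a      # earlier write (rung 1)
    motor_run = sensor_b      # later write wins, every single scan
    return motor_run

results = {resolve(True, False) for _ in range(1000)}
print(results)   # {False} - one outcome, never timing-dependent
```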

The correct contrast is overwrite versus concurrency

Use this distinction when reviewing AI-generated ladder logic:

  • IT race condition: timing anomaly between concurrent operations.
  • OT overwrite fault: deterministic last-rung-wins state resolution.

That distinction matters because the fix is different. Deterministic overwrite faults are solved by consolidating output authority, not by applying thread-safety concepts.

How does double-coil syndrome manifest on real equipment?

Double-coil syndrome appears as a mismatch between visible logic conditions and actual machine behavior. Part of the program can look correct while the plant does something else.

Common symptoms include:

  • The “dead” rung: a rung is true, highlighted, and apparently commanding a motor or valve, but the device does not actuate because a later rung writes the same output false.
  • State divergence between logic and equipment: an HMI command is accepted, an internal permissive appears satisfied, but the physical output remains off after the scan resolves.
  • Intermittent chatter or stutter: in more complex structures involving jumps, sequencers, or poorly managed intermediate bits, repeated overwrites can contribute to unstable device behavior or relay wear.
  • Commissioning confusion: a technician proves the field wiring, the I/O card, and the contactor path, yet the load still refuses to respond consistently. The fault is in state ownership, not copper.

Why this matters more during commissioning than during coding

Commissioning exposes state conflicts quickly because the process answers back. A duplicate coil in a code review is a bad pattern. A duplicate coil on a live pump permissive can become a false no-start, a nuisance trip, or a wasted shift while the panel is blamed first.

This is why syntax and deployability are not the same thing.

What causes AI-generated double-coil errors so often?

AI-generated double-coil errors usually come from local pattern completion rather than whole-program state design. The model sees two legitimate conditions that should energize the same actuator and emits two separate output coils instead of one consolidated command path.

Typical causes include:

  • Condition-by-condition code generation: the model writes one rung per requirement without reconciling shared output ownership.
  • Weak scan-cycle awareness: the model does not reason reliably about deferred output updates.
  • Transfer of IT idioms into OT logic: it treats outputs as immediate actions rather than memory-backed state assignments.
  • Poor distinction between command bits and physical outputs: AI will often write directly to a hardware output where a human engineer would first build an internal command bit, then apply interlocks and final output mapping in one place.

The deeper issue is architectural, not grammatical

A ladder diagram can look syntactically valid and still be operationally wrong. AI is often competent at instruction vocabulary and weaker at authority structure.

A useful review question is: who owns this output bit? If the answer is several rungs, depending on what the prompt asked for, the program is already drifting toward trouble.

How do you fix AI-generated double-coil errors correctly?

The correct fix is to ensure that each physical output has a single, deliberate point of state resolution, or an explicitly managed latch/unlatch pattern where that behavior is justified and reviewable.

### Pattern 1: Consolidate conditions into one output rung

This is the simplest and usually the best correction.

AI error: duplicate output coil

Rung 1:  [ Sensor_A ] --------------------( Motor_Run )
Rung 2:  [ Sensor_B ] --------------------( Motor_Run )   // Overwrites Rung 1

Human fix: parallel branch to one output coil

Rung 1:  [ Sensor_A ] --+-----------------( Motor_Run )
                        |
         [ Sensor_B ] --+

This preserves one point of authority for `Motor_Run`. Multiple valid start conditions are combined before the write occurs.
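The difference between the two structures can be sketched in Python. This is an illustrative model, not ladder code: the two functions stand in for the duplicate-coil version and the consolidated parallel-branch version of the same scan.

```python
# Pattern 1 sketch: combine start conditions before the single coil
# write, instead of writing the coil twice. Names are illustrative.

def scan_duplicate(sensor_a, sensor_b):
    motor_run = sensor_a        # rung 1 writes the coil
    motor_run = sensor_b        # rung 2 silently discards rung 1's result
    return motor_run

def scan_consolidated(sensor_a, sensor_b):
    motor_run = sensor_a or sensor_b  # parallel branch: one point of authority
    return motor_run

print(scan_duplicate(True, False))     # False - rung 1's intent is lost
print(scan_consolidated(True, False))  # True - either condition starts the motor
```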

### Pattern 2: Use an internal command bit, then map to the output once

This pattern is often cleaner in real projects because it separates process logic from hardware actuation.

Rung 1:  [ Sensor_A ] --+-----------------( Motor_Run_Cmd )
                        |
         [ Sensor_B ] --+

Rung 2: [ Motor_Run_Cmd ] [ All_Permissives_OK ] ----( Motor_Run_Output )

This structure improves reviewability and fault tracing:

  • command intent is visible,
  • interlocks are centralized,
  • physical output mapping is singular,
  • simulation becomes easier to interpret.
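The same two-rung structure, modeled as a sketch in Python (names such as `permissives_ok` are illustrative, not tags from any real project): process logic sets the internal command bit, and a single final rung applies permissives and maps to the hardware output.

```python
# Pattern 2 sketch: intent is computed as an internal command bit;
# the physical output is written exactly once, with interlocks applied.

def scan(sensor_a, sensor_b, permissives_ok):
    motor_run_cmd = sensor_a or sensor_b                  # rung 1: command intent
    motor_run_output = motor_run_cmd and permissives_ok   # rung 2: single physical write
    return motor_run_cmd, motor_run_output

cmd, out = scan(True, False, False)
print(cmd, out)   # True False: command is visible, output blocked by permissives
```

Separating the two bits is what makes fault tracing easy: when the machine does not start, you can see at a glance whether intent was absent or an interlock blocked it.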

### Pattern 3: Use latch/unlatch only when the state model requires it

An `OTL/OTU` pair or equivalent set/reset structure can be correct when the process requires retained command state, sequence memory, or operator acknowledgement behavior. It is not a generic patch for poor rung structure.

Use latch/unlatch when you can answer all of the following:

  • What event sets the state?
  • What event clears it?
  • What abnormal condition must force reset?
  • How will the retained state be validated during startup and restart?

If those answers are vague, the latch is probably not justified.
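When the four questions do have firm answers, the retained-state structure can be made explicit. This Python sketch is illustrative only (the `MotorLatch` class and its event names are invented here); it mirrors a set/reset pattern with a defined setting event, clearing event, and a forced reset for the abnormal condition.

```python
# Latch/unlatch sketch: explicit set, clear, and forced-reset events.
# All names are illustrative, not a real PLC instruction set.

class MotorLatch:
    def __init__(self):
        self.run = False          # retained command state across scans

    def scan(self, start, stop, estop):
        if estop:                 # abnormal condition forces reset first
            self.run = False
        elif stop:                # defined clearing event
            self.run = False
        elif start:               # defined setting event
            self.run = True
        return self.run

m = MotorLatch()
m.scan(start=True, stop=False, estop=False)            # set
state = m.scan(start=False, stop=False, estop=False)   # state retained
print(state)                                           # True
print(m.scan(start=True, stop=False, estop=True))      # False: e-stop wins
```

Note the priority ordering: the abnormal condition is evaluated before the setting event, so a start request can never override a forced reset within the same scan.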

How do you diagnose double-coil syndrome step by step?

The fastest diagnosis is to trace output authority through one complete scan and verify whether the final bit state matches the earlier rung indications.

Use this sequence:

  1. Find every write to the addressed output bit. Search the program for all instances of the same coil address or mapped tag.
  2. Identify rung order. Determine which write executes last in the relevant task or routine.
  3. Check whether the output is physical or internal. Duplicate writes to internal bits are also dangerous, but duplicate writes to physical outputs are usually more urgent.
  4. Test the true/false combinations. Force or simulate each input condition and observe whether earlier true logic is later negated.
  5. Verify final output image behavior. Do not stop at rung highlighting. Confirm the resolved tag state after logic execution.
  6. Refactor to a single point of authority. Consolidate branches, use command bits, or redesign sequence ownership.
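Steps 1 and 2 can be partially automated. This is a hedged sketch, assuming the program can be exported or modeled as an ordered list of (rung number, coil address) pairs; real projects would extract this from the vendor's project file or tag cross-reference instead.

```python
# Sketch of steps 1-2: find every coil address written by more than
# one rung and report which write executes last. Rung model is illustrative.
from collections import defaultdict

rungs = [                     # (rung number, coil address written)
    (1, "Motor_Run"),
    (2, "Valve_Open"),
    (3, "Motor_Run"),         # duplicate write: rung 3 wins each scan
]

writes = defaultdict(list)
for rung_no, coil in rungs:   # rungs listed in execution order
    writes[coil].append(rung_no)

for coil, rung_list in writes.items():
    if len(rung_list) > 1:
        print(f"{coil}: written in rungs {rung_list}, rung {rung_list[-1]} wins")
# Motor_Run: written in rungs [1, 3], rung 3 wins
```

Most PLC programming environments offer a cross-reference view that serves the same purpose; the point is to enumerate every write before reasoning about any single rung.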

What to document as engineering evidence

If you are reviewing generated logic, build a compact body of evidence rather than a screenshot gallery. A practical structure is:

  1. System description
  2. Operational definition of correct behavior
  3. Ladder logic and simulated equipment state
  4. The injected fault case
  5. The revision made
  6. Lessons learned

This format shows reasoning, not just tool usage.

How can OLLA Lab catch destructive overwrites before commissioning?

OLLA Lab is useful here as a diagnostic sandbox because it lets engineers observe ladder state, I/O behavior, and simulated equipment response before any live hardware is involved.

In Simulation Mode, you can run the logic, toggle inputs, and watch outputs and variables change in real time. In the Variables Panel, you can inspect tag states, I/O values, analog behavior, and scenario conditions while the logic executes. That visibility is what exposes double-coil faults: one rung may appear valid, but the final bit state shows the later overwrite.

What “Simulation-Ready” means in this context

Simulation-Ready means the engineer can prove, observe, diagnose, and harden control logic against realistic process behavior before it reaches a live process.

Operationally, that includes the ability to:

  • trace cause and effect through the scan,
  • compare ladder state to simulated equipment state,
  • test abnormal conditions and permissive failures,
  • revise logic after a fault,
  • verify that final output authority is singular and intentional.

That definition is narrower than knowing ladder syntax and more useful than relying on confidence alone.

A practical OLLA Lab workflow for double-coil diagnosis

Use OLLA Lab this way:

  1. Load or build the ladder logic in the editor.
  2. Enter Simulation Mode.
  3. Toggle the relevant inputs one at a time and in combination.
  4. Watch the addressed output tag in the Variables Panel.
  5. Compare rung truth with final tag state.
  6. Observe simulated machine behavior against the resolved output.
  7. Refactor the logic to one output authority path.
  8. Retest the same fault case and document the correction.

OLLA Lab does not fix the program automatically. It provides a controlled place to catch state divergence before a real actuator, pump, conveyor, or valve is involved.

Where GeniAI fits, and where it does not

GeniAI, OLLA Lab’s AI lab guide, can support onboarding, corrective guidance, and ladder-logic assistance inside the platform. In this context, its value is bounded: it can help point the learner toward reviewable logic issues and platform-specific validation steps.

It should not be treated as a substitute for engineering judgment, functional review, or site-specific approval. Generic AI can generate the fault; guided AI in a constrained validation environment can help surface it. Those are not the same thing.

Why is digital twin validation relevant to a scan-cycle error?

Digital twin validation matters because scan-cycle faults are not merely symbolic errors; they create mismatches between intended control state and observed machine behavior.

When ladder logic is tested against realistic machine models or process scenarios, the engineer can compare:

  • commanded state,
  • actual simulated equipment response,
  • alarm and permissive behavior,
  • fault handling under abnormal inputs.

That is the practical bridge from code correctness to commissioning judgment. A rung can be legal and still be wrong for the process. Digital twin validation helps expose that difference before the field does.

This aligns with a broader engineering literature base suggesting that simulation and digital-twin-assisted validation can improve fault discovery, operator understanding, and pre-commissioning verification when used with clear model boundaries and realistic test cases. The literature does not justify broad claims, but it supports the narrower proposition that realistic simulation is useful when the failure mode depends on dynamic system behavior rather than static syntax alone.

What should engineers review in AI-generated ladder logic before trusting it?

Engineers should review AI-generated ladder logic for state ownership, scan-order effects, permissive structure, and fault behavior before considering it deployable.

A practical review checklist:

  • Does each physical output have one clear point of authority?
  • Are command bits separated from hardware outputs where appropriate?
  • Are interlocks and trips centralized and reviewable?
  • Does the logic behave correctly across one full scan, not just one rung?
  • Can abnormal conditions be simulated and observed safely?
  • Is correct behavior defined in process terms, not only code terms?

That last point is often neglected. “The rung goes true” is not an operational definition of success. “The pump starts only when permissives are satisfied, stops on trip, alarms on failed proof, and remains stable through restart” is closer.

Conclusion

Double-coil syndrome is a deterministic PLC state-overwrite fault, not usually a race condition. Generic AI tends to produce it because it completes local logic patterns without reliably modeling scan-cycle state resolution and deferred output updates.

The fix is straightforward in principle and disciplined in practice: consolidate output authority, validate final tag state, and test the logic against realistic machine behavior before commissioning. OLLA Lab fits that workflow as a web-based ladder logic and digital twin simulator where engineers can observe, diagnose, and revise these faults safely. That is a credible role for a simulation environment and a practical way to distinguish plausible-looking code from logic that can survive contact with a process.

Editorial transparency

This blog post was written by a human, with all core structure, content, and original ideas created by the author. However, this post includes text refined with the assistance of ChatGPT and Gemini. AI support was used exclusively for correcting grammar and syntax, and for translating the original English text into Spanish, French, Estonian, Chinese, Russian, Portuguese, German, and Italian. The final content was critically reviewed, edited, and validated by the author, who retains full responsibility for its accuracy.

About the Author: Jose NERI, PhD, Lead Engineer at Ampergon Vallis

Fact-Check: Technical validity confirmed on 2026-03-23 by the Ampergon Vallis Lab QA Team.

Ready for implementation

Use simulation-backed workflows to turn these insights into measurable plant outcomes.

© 2026 Ampergon Vallis. All rights reserved.