Article summary
A PLC scan cycle is a deterministic loop in which the controller reads inputs, executes logic sequentially, writes outputs, and performs system tasks. OLLA Lab mimics this behavior in a browser-based simulation environment so engineers can observe scan-dependent faults, test logic revisions, and validate cause-and-effect before live commissioning.
A common misconception is that ladder logic behaves like ordinary software reacting instantly to events. It does not. A PLC usually polls reality on a repeating scan, evaluates logic in a defined order, and updates outputs after that evaluation is complete. That distinction is the difference between code that looks correct and code that survives a machine.
During a recent internal evaluation of OLLA Lab’s high-speed sorting scenario, 68% of junior users failed to capture a 10 ms optical sensor pulse when the simulated scan time was set to 20 ms [Methodology: n=41 users; task = detect and latch a transient photoeye pulse in a conveyor reject scenario; baseline comparator = successful pulse capture at 5 ms simulated scan; time window = Jan 15–Mar 10, 2026]. This is an internal Ampergon Vallis benchmark, not an industry prevalence claim. It supports one narrow point: scan timing is often poorly understood even when the rung logic appears syntactically correct.
That is exactly where OLLA Lab is useful. It provides a bounded software-in-the-loop environment for observing deterministic execution, I/O visibility, and scan-dependent failure modes before anyone learns the lesson on a live machine, where the cost of error is much higher.
What are the four phases of a standard PLC scan cycle?
A standard PLC scan cycle is a repeating, deterministic sequence with four functional phases. The exact implementation varies by vendor and task model, but the core pattern is stable across conventional cyclic execution.
The key engineering point is simple: the program usually does not read a fresh physical input at every instruction. It works from a memory image during execution, then updates outputs afterward.
- Input Scan (Read) The controller reads the current state of physical inputs and copies those states into memory, often called an input image table or process image.
- Program Execution (Logic) The controller executes the user program using the stored input states. In ladder logic, this is typically evaluated top-to-bottom and left-to-right within the active task or routine structure.
- Output Scan (Write) The controller writes the final calculated output states from memory to the physical output terminals.
- Housekeeping (Comms/Diagnostics) The controller services internal diagnostics, communications, timer updates, messaging, and other system tasks.
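The four phases above can be sketched as a plain Python loop body. This is an illustrative model only; `read_physical_inputs`, `user_program`, and `write_physical_outputs` are hypothetical stand-ins for controller internals, not any vendor API.

```python
# Minimal sketch of one cyclic PLC scan (illustrative, not a real runtime).
def scan_cycle(read_physical_inputs, user_program, write_physical_outputs):
    """Run one scan: read -> execute -> write -> housekeeping."""
    input_image = read_physical_inputs()        # 1. Input scan: snapshot to memory
    output_image = user_program(input_image)    # 2. Logic solves against the snapshot
    write_physical_outputs(output_image)        # 3. Output scan: write terminals once
    # 4. Housekeeping (comms, diagnostics, timers) would run here

# Example: a rung "Start_PB -> Motor_Run" solved against a frozen input image
inputs = {"Start_PB": True}
logic = lambda img: {"Motor_Run": img["Start_PB"]}
written = {}
scan_cycle(lambda: dict(inputs), logic, written.update)
print(written)  # {'Motor_Run': True}
```

The key detail the sketch preserves is that the logic never touches the physical inputs directly; it only sees the snapshot taken at the start of the scan.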
Why this matters in practice
Scan-based execution creates predictable but non-obvious behaviors:
- A short pulse can be missed if it occurs between scans.
- A coil written twice in one scan can be silently overwritten.
- An output that appears true in one rung may never energize physically if a later rung resets it before the output update phase.
- Timing assumptions that seem harmless on screen can fail against real process dynamics.
This is why knowing ladder syntax is not the same as being simulation-ready. Operationally, a simulation-ready engineer can prove, observe, diagnose, and harden control logic against realistic process behavior before it reaches a live process.
Why do asynchronous IT languages fail at deterministic control?
General-purpose IT languages are not inherently wrong for software. They are wrong for explaining PLC execution if the control model is ignored. The issue is not language prestige; it is execution semantics.
IT execution versus OT execution
| Feature | IT Languages (Python/JavaScript, broadly) | OT / PLC Execution (IEC 61131-3 context) |
|---|---|---|
| Primary trigger model | Event-driven, callback-driven, or scheduler-driven | Cyclic polling and deterministic task execution |
| Memory relationship | Dynamic allocation is common | Predefined tags, structured memory, direct process mapping |
| Hardware interaction | Usually abstracted through drivers/APIs | Direct relationship to I/O images, field states, and scan timing |
| Execution timing | Often non-deterministic at application level | Designed for repeatable, bounded control execution |
| Failure mode | Latency, race conditions, callback order issues | Missed pulses, overwrite logic, stale image assumptions |
| Engineering priority | Throughput, flexibility, user interaction | Determinism, repeatability, safe machine behavior |
IEC 61131-3 defines the programming languages and execution concepts used in industrial control, including Ladder Diagram, Function Block Diagram, Structured Text, and Sequential Function Chart (IEC, 2013). In practice, PLC control depends on deterministic task behavior, explicit state handling, and predictable scan order. Web software often assumes the world can wait for the next event. Pumps, cylinders, and conveyors usually cannot.
The important contrast
The clean contrast is this: event response versus cyclic control.
That difference matters because physical automation is not just about computing a result. It is about computing the result at the right time, in the right order, against changing plant conditions.
How does sequential ladder execution actually work?
Sequential ladder execution means the controller evaluates logic in a defined order, not all at once. In a conventional scan, the program is solved rung by rung, from the top of the routine downward, and within each rung from left to right according to the platform’s execution rules.
Observable consequences of sequential execution
- Earlier logic can set an internal bit that later logic immediately uses.
- Later logic can overwrite a result established earlier in the same scan.
- Intermediate states may exist in memory during execution even though the physical output has not yet changed.
- Troubleshooting must distinguish between:
  - tag state in memory
  - physical output state at the terminal
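The memory-versus-terminal distinction can be made concrete with a short Python sketch. The two dictionaries standing in for the process image and the output terminals are illustrative assumptions, not any controller's actual data model.

```python
# Sketch: tag state in memory versus physical output at the terminal.
# During logic execution the memory image changes, but the terminal only
# changes at the output-scan phase.

memory = {"Motor_Run": False}
terminal = {"Motor_Run": False}

# --- logic execution phase ---
memory["Motor_Run"] = True                     # a rung energizes the coil in memory
mid_scan = (memory["Motor_Run"], terminal["Motor_Run"])

# --- output scan phase ---
terminal.update(memory)                        # terminal follows memory only now

print(mid_scan)               # (True, False): TRUE in memory, still OFF at the terminal
print(terminal["Motor_Run"])  # True: energized after the output update
```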
That distinction is easy to miss in a classroom and difficult to ignore during commissioning.
IEC grounding
IEC 61131-3 provides the language framework, but vendor documentation and runtime architecture determine the exact task scheduling and update details. The safe statement is this: sequential evaluation and cyclic execution are foundational behaviors in mainstream PLC control systems, even though implementation details differ by controller family.
How does the “Double-Coil Syndrome” expose scan cycle logic errors?
Double-coil syndrome occurs when the same output or memory coil is written in more than one place, allowing a later instruction to override an earlier one during the same scan. The final state written in logic execution is the one that survives to the output update stage.
Here is the classic pattern:
[Language: Ladder Diagram]
RUNG 1: Start command sets Motor_Run true in memory.

```
|---[ Start_PB ]-------------------------------------( Motor_Run )---|
```

RUNG 2: A later condition writes to the same coil. If this rung evaluates false in a way that de-energizes the same target, the earlier state is effectively overwritten before outputs are written.

```
|---[ Some_Other_Condition ]-------------------------( Motor_Run )---|
```
What actually happens
- The first rung writes `Motor_Run = TRUE` in memory.
- A later rung writes to the same target.
- The last write determines the final memory state at the end of execution.
- The physical output update occurs afterward.
- Result: the motor may never energize even though an earlier rung looked correct.
This is deterministic execution doing exactly what the logic specifies, regardless of intent.
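The last-write-wins behavior can be reproduced in a few lines of Python. The function name and tag names are illustrative; the two assignments mirror the two rungs writing the same coil.

```python
# Sketch of double-coil overwrite: two rungs write the same coil in one scan.
# The memory image holds only the last value written; that value is what the
# output scan sends to the terminal.

def solve_scan(start_pb, some_other_condition):
    image = {}
    image["Motor_Run"] = start_pb               # rung 1 writes the coil
    image["Motor_Run"] = some_other_condition   # rung 2 writes the SAME coil
    return image["Motor_Run"]                   # value that reaches the terminal

# Start is pressed, but the later rung is false: the motor never energizes.
print(solve_scan(start_pb=True, some_other_condition=False))  # False
```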
Why this fault is useful for training
Double-coil faults expose three core ideas quickly:
- order matters
- memory state is not the same as terminal state
- visual rung correctness is not enough
In OLLA Lab, this becomes observable rather than theoretical because users can run the logic, inspect variables, and compare ladder state against simulated equipment behavior.
How can a short input pulse be missed by the scan cycle?
A short pulse can be missed when its duration is shorter than the controller’s effective opportunity to sample it. If an input turns on and off between input scans, the CPU may never record the event in the input image.
### Example: pulse width versus scan time
If:
- a photoeye pulse lasts 10 ms, and
- the controller samples inputs every 20 ms,
then the pulse can occur entirely between scans and disappear from the program’s perspective.
This is a sampling problem. In control work it often appears as “the sensor definitely fired, but the PLC never saw it.” Both statements can be true.
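The sampling arithmetic can be checked with a small Python sketch. The function name, the 200 ms horizon, and the phase parameter are illustrative assumptions; real scan timing also jitters, which this toy model ignores.

```python
# Sketch: a 10 ms pulse sampled by 20 ms input scans can vanish entirely.
# Times are in whole milliseconds; `phase` offsets the sampling instants.

def pulse_seen(pulse_start, pulse_len, scan_period, phase=0, horizon=200):
    samples = [t for t in range(horizon) if (t - phase) % scan_period == 0]
    return any(pulse_start <= t < pulse_start + pulse_len for t in samples)

print(pulse_seen(pulse_start=25, pulse_len=10, scan_period=20))  # False: missed
print(pulse_seen(pulse_start=25, pulse_len=10, scan_period=5))   # True: captured
```

Whether the 20 ms case misses the pulse depends on where the sampling instants fall, which is exactly why this class of fault is intermittent in the field.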
Why engineers should care
Missed pulses affect:
- high-speed sorting
- encoder-adjacent logic
- reject confirmation
- bottle or carton counting
- intermittent proof signals
- edge-triggered alarms
The fix may involve faster tasks, hardware latching, pulse stretching, one-shots, high-speed counters, or revised sequence design. The correct answer depends on the process and the controller architecture.
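One of the in-logic options listed above, a rising-edge one-shot, can be sketched as follows. The class name is illustrative; note that a one-shot only helps when the input image actually captures the pulse at least once.

```python
# Sketch of a rising-edge one-shot: fires for exactly one scan when the
# sampled input transitions 0 -> 1, so downstream logic acts once per pulse
# instead of once per scan while the input stays true.

class OneShot:
    def __init__(self):
        self._prev = False

    def __call__(self, sampled_input):
        fired = sampled_input and not self._prev
        self._prev = sampled_input
        return fired

ons = OneShot()
samples = [False, True, True, True, False, True]      # input image per scan
print([ons(s) for s in samples])  # [False, True, False, False, False, True]
```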
How does OLLA Lab mimic physical CPU scanning in a browser?
OLLA Lab mimics physical CPU scanning by providing a structured simulation environment in which ladder logic is executed as a deterministic loop rather than as loose browser event reactions. More simply, it is designed to let users observe scan-dependent control behavior, not merely draw rungs.
What OLLA Lab does in bounded terms
Within the platform, users can:
- build ladder logic in a web-based editor
- run logic in simulation mode
- toggle and inspect inputs, outputs, and variables
- work through realistic industrial scenarios
- compare ladder behavior against 3D/WebXR/VR equipment views where available
- use GeniAI, the AI lab guide, for guided support
For this article’s scope, the important product fact is narrower: OLLA Lab provides a software-based environment for rehearsing deterministic logic execution and observing how scan timing affects machine behavior.
Observable behaviors the platform is suited to demonstrate
- Double-coil overwrite behavior
- Missed transient pulses
- Cause-and-effect tracing across I/O
- Sequence faults caused by stale assumptions about state
- Differences between ladder state and simulated equipment state
That makes it useful as a software-in-the-loop rehearsal environment for high-risk commissioning tasks. It does not replace hardware acceptance testing, safety validation, or site-specific commissioning. Those still belong to the real system, with the real controller, under the real constraints.
Why the browser delivery matters
A browser-based environment lowers setup friction for learning and review. More importantly, it allows repeated, low-risk fault injection without tying up a physical trainer or production-adjacent hardware.
What does “digital twin validation” mean in this context?
In this context, digital twin validation means testing ladder logic against a simulated machine or process model and checking whether the expected equipment behavior matches the control philosophy under normal and abnormal conditions.
That definition needs to stay grounded. It does not mean the simulation is a legally sufficient substitute for site acceptance, SIL verification, or plant commissioning.
Operationally, digital twin validation includes:
- comparing command states to simulated equipment response
- verifying sequence order and permissives
- checking alarm and trip behavior
- observing analog or PID-driven state changes where modeled
- injecting faults and confirming the logic response
- revising the program and retesting
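The compare-and-retest loop above can be sketched generically in Python. Everything here is a hypothetical stand-in: `model_step`, `expected`, and the toy valve model do not represent OLLA Lab internals.

```python
# Hedged sketch of the compare step: commanded states versus simulated response.
def validate(commands, model_step, expected):
    """Drive a simple process model with commands and check each response."""
    state, mismatches = {}, []
    for i, cmd in enumerate(commands):
        state = model_step(state, cmd)        # simulated equipment reacts
        if not expected(i, cmd, state):       # control-philosophy check
            mismatches.append((i, cmd, state))
    return mismatches

# Toy model: a valve that tracks its command each step
model = lambda st, cmd: {"valve_open": cmd.get("open_valve", False)}
ok = lambda i, cmd, st: st["valve_open"] == cmd.get("open_valve", False)
print(validate([{"open_valve": True}, {"open_valve": False}], model, ok))  # []
```

An empty mismatch list means the sequence behaved as the control philosophy predicted; a non-empty list is the starting point for revision.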
This is where OLLA Lab becomes operationally useful. It lets engineers test whether a sequence works on a realistic model before they test it on equipment that can jam, flood, overtravel, or otherwise fail in costly ways.
How can engineers practice scan-dependent fault handling in OLLA Lab?
Engineers should practice scan-dependent fault handling by building scenarios where the logic is forced to fail for timing reasons, then revising the design until the failure mode is controlled and explainable.
### A practical exercise: pulse catching in a conveyor scenario
Use a conveyor or sorting-style scenario and define a transient sensor event that is shorter than the simulated scan interval.
#### Step 1: Build the initial logic
Create logic that depends directly on a photoeye pulse to trigger an action such as reject, count, or divert.
#### Step 2: Set the simulated conditions
Use a scan interval that is longer than the pulse duration. Then run the scenario and observe whether the event is reliably captured.
#### Step 3: Inspect variables and equipment state
Use the variables panel to compare:
- input state
- internal memory bits
- output commands
- simulated equipment response
#### Step 4: Inject the fault case
Force a pulse that occurs too quickly for the current scan assumptions. Confirm that the logic misses it.
#### Step 5: Revise the logic
Possible revisions include:
- adding a latch or seal-in path
- using edge detection where appropriate
- stretching the pulse in logic
- redesigning the sequence to avoid dependence on a narrow transient
- documenting when hardware features would be required in a real controller
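The first revision in that list, a latch or seal-in path, can be sketched in Python. The function and tag names are illustrative; the Boolean expression mirrors the classic seal-in rung.

```python
# Sketch of a latch / seal-in: once the transient pulse is captured, the latch
# holds it until an explicit reset, so the sequence no longer depends on
# re-sampling a narrow pulse on every scan.

def seal_in(latched, pulse_seen_this_scan, reset):
    """Classic rung: (pulse OR latched) AND NOT reset -> latched."""
    return (pulse_seen_this_scan or latched) and not reset

latched = False
for pulse, reset in [(False, False), (True, False), (False, False), (False, True)]:
    latched = seal_in(latched, pulse, reset)
print(latched)  # False: held through the middle scans, cleared by the reset
```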
#### Step 6: Re-run and verify
Validate that the revised logic captures the event under the defined conditions and does not create false positives or unsafe persistence.
What engineering evidence should a learner produce instead of screenshots?
A screenshot gallery proves that software was opened. It does not prove engineering judgment. If a learner wants to demonstrate real control understanding, the artifact should show reasoning, fault handling, and revision discipline.
Use this structure:
- System Description. Define the machine or process segment, the control objective, and the relevant I/O.
- Operational definition of correct behavior. State exactly what correct behavior means. Example: “A 10 ms product-detect pulse must be captured and latched so the reject sequence executes once and only once.”
- Ladder logic and simulated equipment state. Show the relevant rungs, tags, and the corresponding simulated machine behavior.
- The injected fault case. Document the scan-dependent failure. Example: “Pulse was missed when scan time exceeded pulse width.”
- The revision made. Explain what changed in the logic and why.
- Lessons learned. Summarize the control principle, such as image-table timing, overwrite behavior, or why pulse capture requires explicit design.
That is the kind of evidence a reviewer can interrogate. It is also the kind that tends to hold up better in technical review than a screenshot alone.
What are the limits of scan-cycle simulation?
Scan-cycle simulation is valuable because it makes invisible controller behavior observable. Its limits matter just as much.
A bounded statement of what simulation can and cannot do
Simulation can help engineers:
- understand deterministic execution
- test sequence logic
- observe timing-related faults
- rehearse troubleshooting
- compare expected and simulated equipment behavior
Simulation cannot by itself:
- certify site competence
- replace hardware-specific validation
- establish functional safety compliance
- prove final fieldbus timing behavior
- substitute for FAT, SAT, loop checks, or live commissioning
This boundary is not a weakness. It is part of what keeps the tool credible.
How do standards and literature support simulation-based control training?
Simulation-based training and digital models are well established in industrial engineering, especially where direct experimentation on live assets is costly or unsafe. The literature generally supports simulation as a way to improve understanding of dynamic behavior, fault response, and operator or engineer preparedness, while also emphasizing that model fidelity and task design determine usefulness.
Relevant standards and technical grounding
- IEC 61131-3 defines PLC programming language frameworks and execution concepts relevant to ladder logic behavior (IEC, 2013).
- IEC 61508 establishes the broader functional safety framework and reinforces that safety claims require disciplined lifecycle evidence, not informal simulation confidence alone (IEC, 2010).
- exida and related functional safety guidance emphasize verification, validation, and lifecycle rigor in safety-related automation work.
- Research in industrial simulation, digital twins, and training environments has shown value in fault rehearsal, commissioning preparation, and human understanding of dynamic systems, particularly when the simulation is tied to observable process behavior rather than abstract instruction alone.
The careful conclusion is this: simulation is strongest when it is used to expose behavior, test assumptions, and improve pre-deployment judgment. It is weakest when treated as a shortcut around engineering evidence.
Conclusion: why scan-cycle literacy matters before commissioning
Scan-cycle literacy matters because deterministic control is not intuitive to people trained on event-driven software or static ladder examples. A PLC does not notice everything the moment it happens. It samples, solves, writes, and repeats.
That is why OLLA Lab has a legitimate place in the workflow. It gives engineers a bounded environment to observe scan order, inspect I/O state, inject faults, and revise logic before those same mistakes reach a live process. This is not about making simulation look impressive. It is about making failure visible while the cost of being wrong is still tolerable.
In practical terms, that is the move from syntax to deployability.
Keep exploring
Book a consultation with Ampergon Vallis →