
How to Validate PLC Logic Using Software-in-the-Loop (SITL) and OLLA Lab Digital Twins

Learn how SITL testing with OLLA Lab digital twins can help validate PLC sequencing, timing, interlocks, and fault handling before physical commissioning, while keeping safety and commissioning limits clear.

Direct answer

Software-in-the-Loop (SITL) testing in industrial automation is the execution of PLC control logic against a software model of equipment behavior rather than physical hardware. In OLLA Lab, ladder logic can be exercised against 3D digital twins to verify sequence timing, interlocks, abnormal-state behavior, and fault recovery before live commissioning.


Syntactically correct ladder logic is not the same thing as deployable control logic. A compiler can confirm instruction validity, tag consistency, and basic execution order; it cannot tell you whether a conveyor indexes into an extended cylinder, whether a restart sequence re-energizes hazardous motion, or whether a sensor arrives too late to protect the mechanism. Syntax is cheap. Commissioning mistakes are not.

A useful definition of simulation-ready is operational, not aspirational: an engineer is simulation-ready when they can prove, observe, diagnose, and harden control logic against realistic process behavior before it reaches a live process.

In an internal Ampergon Vallis analysis of 1,200 simulated commissioning scenarios run in OLLA Lab, users who validated logic against a 3D digital twin identified 84% of modeled mechanical race conditions before first physical execution. Methodology: sample size = 1,200 scenario runs across preset and custom labs; task definition = detection of modeled race conditions such as overlapping actuation and indexing conflicts; baseline comparator = logic review and compile-valid state without digital twin execution; time window = January 2025 to February 2026. This supports the claim that simulation can expose fault classes ordinary syntax checks miss. It does not prove field reliability, operator competence, or safety certification.

What is the difference between SITL and physical PLC commissioning?

SITL, HIL, and physical commissioning answer different validation questions. Treating them as interchangeable is a reliable way to miss risk.

Under the virtual commissioning framework described in VDI 3693, Software-in-the-Loop (SITL) means the controller logic and plant behavior are both represented in software, with no requirement for the physical PLC, field wiring, or machine hardware to be present. The point is to validate control behavior against simulated process response in a risk-contained environment.

Hardware-in-the-Loop (HIL) moves one layer closer to reality. The plant remains simulated, but the actual controller hardware is introduced. This tests hardware timing, I/O handling, and some platform-specific behavior that SITL cannot fully reproduce.

Physical commissioning is the full stack: control logic, physical PLC, wiring, instrumentation, actuators, machine dynamics, and the surprises that appear when all of those meet during startup.

Comparison: SITL vs HIL vs physical commissioning

| Validation Mode | What is Real | What is Simulated | Primary Purpose | Risk Level |
|---|---|---|---|---|
| SITL | Control logic execution environment | Plant/equipment behavior | Validate sequence logic, interlocks, timing assumptions, state transitions, fault handling | Low |
| HIL | Physical PLC/controller hardware | Plant/equipment behavior | Validate controller-specific execution, I/O behavior, hardware timing, integration assumptions | Medium |
| Physical Commissioning | PLC, wiring, sensors, actuators, machine/process | Little or none | Validate the deployed system under actual operating conditions | High |

What SITL is good at

  • Verifying sequence order and permissive logic
  • Testing alarm and trip behavior
  • Exercising restart and recovery logic
  • Exposing race conditions between commands and feedbacks
  • Rehearsing abnormal states without risking equipment

What SITL does not replace

  • Site acceptance testing
  • Loop checks and wiring verification
  • Functional safety validation
  • SIL determination or compliance demonstration
  • Operator training on the exact installed asset unless the model scope supports it

That boundary matters. A digital twin is useful because it narrows uncertainty, not because it removes it.

Why does syntactically correct ladder logic fail in the field?

Ladder logic fails in the field because physical systems are not Boolean diagrams. They have delay, inertia, friction, drift, and failure modes that a compiler does not model.

A compile-valid rung can still command an impossible sequence. It can also command a possible sequence at the wrong time, which is often worse because it fails intermittently.

The three physical realities compilers ignore

  1. Mechanical inertia. A stop command does not produce an instantaneous stop. Motors coast, conveyors overrun, and suspended loads keep moving. The logic may be correct at scan level and still wrong at machine level.
  2. Sensor latency. Real sensors have response time, mounting tolerance, bounce, and filtering. A photoeye or limit switch that changes state a few milliseconds later than expected can invalidate an otherwise elegant sequence.
  3. Actuator stiction and process delay. Pneumatic cylinders need pressure buildup. Valves may stick before moving. Pumps do not create stable flow the instant a motor bit turns on. The ladder diagram does not care; the process does.

The “looks right” fallacy

“Looks right” usually means “passes a visual review under ideal assumptions.” That is not the same as proving the sequence survives realistic timing and fault conditions.

Consider a sorting conveyor with a pusher cylinder:

  • The logic commands conveyor stop.
  • The logic commands cylinder extend.
  • The logic waits for extended confirmation.
  • The logic restarts conveyor after product diversion.

On paper, this is tidy. In a simulated machine, the conveyor may still be coasting when the cylinder enters the lane. If the sequence depends on instantaneous stop, the mechanism collides even though every rung is legal and every tag name is correct. The compiler will not object. Physics usually will.
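The coast-time failure above can be made concrete with a small Python sketch. The timing numbers (1.8 s of coast, 0.4 s of cylinder travel) are illustrative assumptions, not measured values, and the two functions are toy stand-ins for the naive and guarded sequences, not OLLA Lab code.

```python
# Assumed, illustrative timings for a sorting conveyor and pusher cylinder.
CONVEYOR_COAST_TIME_S = 1.8   # assumed coast-down after the stop command
CYLINDER_ENTRY_DELAY_S = 0.4  # assumed time for the cylinder to reach the lane

def naive_sequence() -> bool:
    """Stop conveyor, then immediately command extend (no zero-speed proof).
    Returns True when the cylinder enters the lane while the belt still moves."""
    stop_cmd_at = 0.0
    extend_cmd_at = stop_cmd_at                      # no permissive
    cylinder_in_lane_at = extend_cmd_at + CYLINDER_ENTRY_DELAY_S
    conveyor_stopped_at = stop_cmd_at + CONVEYOR_COAST_TIME_S
    return cylinder_in_lane_at < conveyor_stopped_at

def guarded_sequence() -> bool:
    """Extend only after a simulated zero-speed permissive is satisfied."""
    stop_cmd_at = 0.0
    conveyor_stopped_at = stop_cmd_at + CONVEYOR_COAST_TIME_S
    extend_cmd_at = conveyor_stopped_at              # wait for zero speed
    cylinder_in_lane_at = extend_cmd_at + CYLINDER_ENTRY_DELAY_S
    return cylinder_in_lane_at < conveyor_stopped_at

print(naive_sequence())    # True: collision risk, every rung still "legal"
print(guarded_sequence())  # False: the permissive absorbs the coast time
```

Both sequences compile; only one survives the physics. That is exactly the class of defect SITL is built to surface.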

How should “digital twin” be defined for PLC validation?

In this article, a digital twin is not a branding synonym for 3D graphics. It is a software model that exchanges state with control logic in a deterministic validation loop.

Operationally, a PLC validation digital twin is:

> A kinematic and discrete-event software model that consumes PLC outputs, applies simulated physical constraints such as motion delay, gravity, friction, and state-dependent timing, and returns deterministic sensor and process inputs back to the control logic.

That definition is intentionally narrow. It excludes decorative visualization that does not participate in control-state exchange.

A useful digital twin for controls work must do four things:

  • Consume controller outputs. Example: motor run commands, valve open commands, cylinder extend bits, analog setpoints.
  • Apply modeled equipment behavior. Example: acceleration, deceleration, dwell time, travel time, pressure delay, level change, or process lag.
  • Return simulated inputs to the logic. Example: prox switches, photoeyes, limit switches, analog PVs, alarm states, proof feedbacks.
  • Preserve deterministic test conditions. The same test case should be reproducible enough to diagnose logic changes and compare revisions.

This is the difference between a video and a validation environment. One is illustrative. The other can veto bad control logic.
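The four requirements can be sketched as a minimal discrete-event model. The class below is a generic illustration of the consume/model/return loop under assumed timing, with delay expressed in logic scans to keep the test deterministic; it is not an OLLA Lab API.

```python
class CylinderTwin:
    """Toy discrete-event model of a cylinder extension with travel delay."""
    TRAVEL_SCANS = 6  # assumed travel time, expressed in logic scans

    def __init__(self):
        self.scans_extending = 0  # how long extension has been commanded

    def step(self, extend_cmd: bool) -> dict:
        # 1. Consume the controller output (the extend command bit)
        self.scans_extending = self.scans_extending + 1 if extend_cmd else 0
        # 2. Apply modeled behavior (travel delay), 3. return a simulated input
        return {"Cylinder_Extended_LS": self.scans_extending >= self.TRAVEL_SCANS}

twin = CylinderTwin()
states = [twin.step(extend_cmd=True)["Cylinder_Extended_LS"] for _ in range(10)]
print(states.index(True))  # 5: the proof switch makes on the sixth scan
```

Because the delay is counted in scans rather than wall-clock time, the same test case produces the same result on every run, which satisfies the fourth requirement (deterministic test conditions).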

How does OLLA Lab bind PLC tags to a 3D digital twin?

OLLA Lab becomes operationally useful when the ladder program and the simulated equipment share observable state. The platform is not just a ladder editor with a scene beside it; the value is in binding logic variables to machine behavior and then watching the loop close.

In OLLA Lab, users build ladder logic in a web-based editor, execute it in simulation mode, and inspect or manipulate variables through the variables panel. The platform supports Boolean, analog, timer, comparator, math, and PID-oriented learning workflows, along with 3D/WebXR simulation scenarios. Within that workflow, tags can be associated with simulated equipment states so that command bits drive the model and model events return feedback into the logic.

Practical tag-binding workflow in OLLA Lab

A typical validation setup looks like this:

  1. Define the command tags in ladder logic
     • `Conveyor_Run_CMD`
     • `Cylinder_Extend_CMD`
     • `Reset_Fault_CMD`
  2. Define the feedback and sensor tags
     • `Conveyor_At_Speed`
     • `Cylinder_Extended_LS`
     • `Photoeye_PE1`
     • `Jam_Fault`
  3. Bind command tags to digital twin behaviors
     • `Conveyor_Run_CMD` drives conveyor motion state
     • `Cylinder_Extend_CMD` drives actuator extension sequence
  4. Bind simulated equipment responses back to tags
     • Conveyor motion updates `Conveyor_At_Speed`
     • Virtual limit switch updates `Cylinder_Extended_LS`
     • Virtual raycast or object detection updates `Photoeye_PE1`
  5. Run the sequence and inspect state transitions
     • Toggle inputs
     • Pause, run, or stop simulation
     • Observe tag changes, timers, analog values, and fault states
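The closed loop that this binding creates can be illustrated in a few lines of Python. The tag names follow the workflow above; the ramp model and scan counts are invented for illustration and do not reflect how OLLA Lab implements bindings internally.

```python
RAMP_SCANS = 4  # assumed scans for the conveyor to reach speed

# Shared tag table: the ladder logic writes commands, the twin writes feedback
tags = {"Conveyor_Run_CMD": False, "Conveyor_At_Speed": False}
ramp = 0  # internal twin state: scans spent accelerating

def twin_step(tags: dict) -> None:
    """Consume the run command, model the ramp, return the at-speed proof."""
    global ramp
    ramp = ramp + 1 if tags["Conveyor_Run_CMD"] else 0
    tags["Conveyor_At_Speed"] = ramp >= RAMP_SCANS

history = []
tags["Conveyor_Run_CMD"] = True          # ladder logic energizes the command
for _ in range(6):                       # six simulation scans
    twin_step(tags)
    history.append(tags["Conveyor_At_Speed"])

print(history)  # [False, False, False, True, True, True]
```

The important property is that `Conveyor_At_Speed` lags `Conveyor_Run_CMD` by a modeled ramp, so any rung that treats the command as instantaneous proof becomes visibly wrong in simulation.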

What this gives the engineer

  • A visible cause-and-effect chain between rung logic and machine response
  • A place to test whether interlocks are actually sufficient
  • A way to inspect timing mismatches between command and proof
  • A safe environment to inject faults that would be expensive or dangerous on live equipment

This is where OLLA Lab fits credibly: as a risk-contained rehearsal environment for validation and troubleshooting practice. It does not replace field commissioning, but it can let engineers rehearse parts of commissioning that are too destructive, too expensive, or too disruptive to learn on a live line.

What are the most critical fault scenarios to simulate before deployment?

The most valuable SITL tests are not nominal sequences. They are abnormal-state tests. Almost any control strategy looks competent when every sensor behaves and every actuator arrives on time.

Mandatory SITL test cases

  1. Asynchronous E-stop under load. Trigger an emergency stop while motion is active and the mechanism is carrying or pushing material. Verify:
  • hazardous motion de-energizes as intended,
  • state memory behaves predictably,
  • restart requires deliberate operator action,
  • no hidden auto-resume path exists.
  2. Sensor failure and failsafe verification. Force a normally closed or normally open limit device into the failed state during motion. Verify:
  • fault detection occurs within the expected window,
  • motion is inhibited or stopped safely,
  • alarm text and fault bits are unambiguous,
  • reset conditions are deliberate and bounded.
  3. Power-cycle recovery. Simulate loss of control power or execution interruption. Verify:
  • outputs return to safe defaults,
  • startup logic does not auto-restart hazardous motion,
  • retained states do not create impossible sequence assumptions,
  • operator acknowledgement is required where appropriate.
  4. Mechanical timeout and no-proof conditions. Command a movement and withhold expected feedback. Verify:
  • timeout logic trips,
  • fault latches correctly,
  • downstream motion is blocked,
  • recovery path is explicit.
  5. Sequence race conditions. Introduce timing overlap between adjacent machine states. Verify:
  • mutually exclusive actions remain exclusive,
  • one state cannot pre-empt another without the required proof,
  • scan-order assumptions are not masking a sequencing defect.
  6. Analog excursion and PID disturbance. Inject process disturbances or unrealistic sensor values. Verify:
  • alarms activate at defined thresholds,
  • control output behaves within expected bounds,
  • bumpless transfer or mode changes are handled cleanly,
  • trips and permissives remain coherent under analog upset.

A practical misconception worth correcting

Testing only the happy path is not validation. It is demonstration. Real commissioning risk sits in transitions, delays, and failures.

What ladder logic pattern helps catch mechanical timeout faults?

A timeout pattern is one of the simplest defensive structures that gains real value in SITL. It converts “the actuator should have moved by now” into an observable fault condition.

Below is a compact example for a cylinder extend timeout. The exact syntax varies by platform, but the control intent is standard.

Language: Ladder Diagram

// Cylinder Actuation Timeout Fault Logic

|---[ ]-----------[/]-----------[/]-----------------(TON)---|
   CMD_Extend   Limit_Retract  Limit_Extend      Fault_Delay

|---[ ]---------------------------------------------( )-----|
  Fault_Delay.DN                       Fault_Cyl_Ext_Timeout

What this rung is doing

  • `CMD_Extend` starts the timing condition when extension is commanded.
  • `Limit_Retract` not made indicates the cylinder is no longer safely home, depending on device philosophy.
  • `Limit_Extend` not made means extension proof has not yet arrived.
  • `Fault_Delay` times the allowed travel window.
  • When the timer completes without extension proof, `Fault_Cyl_Ext_Timeout` is set.
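For readers more comfortable tracing behavior in procedural code, here is a scan-based Python equivalent of the rung. The tag names mirror the ladder example; the timer preset in scans and the latch-on-done behavior are illustrative choices, not a platform-specific TON implementation.

```python
FAULT_DELAY_SCANS = 20  # assumed allowed travel window, expressed in scans

def timeout_scan(st: dict) -> dict:
    """One logic scan of the cylinder extend timeout pattern."""
    # Timer enable: extension commanded, cylinder off the retract switch,
    # extension proof not yet made (mirrors [ ] CMD, [/] Retract, [/] Extend)
    timing = (st["CMD_Extend"]
              and not st["Limit_Retract"]
              and not st["Limit_Extend"])
    # TON-style accumulator: counts while enabled, resets when the rung drops
    st["Fault_Delay_ACC"] = st["Fault_Delay_ACC"] + 1 if timing else 0
    if st["Fault_Delay_ACC"] >= FAULT_DELAY_SCANS:   # Fault_Delay.DN
        st["Fault_Cyl_Ext_Timeout"] = True           # latch the fault bit
    return st

# No-proof condition: extension commanded, feedback withheld for 25 scans
st = {"CMD_Extend": True, "Limit_Retract": False, "Limit_Extend": False,
      "Fault_Delay_ACC": 0, "Fault_Cyl_Ext_Timeout": False}
for _ in range(25):
    st = timeout_scan(st)
print(st["Fault_Cyl_Ext_Timeout"])  # True: the timeout trips the fault
```

Running the same scan loop with `Limit_Extend` arriving at, say, scan 10 would show the timer resetting before done, which is the nominal path.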

Why SITL matters here

In a static logic review, this rung appears straightforward. In a digital twin, you can test whether the timeout is:

  • too short for realistic actuator travel,
  • too long to protect the mechanism,
  • incorrectly reset by sequence transitions,
  • blind to partial motion or jam conditions.

That is the difference between writing a timeout and validating one.

How should an engineer document simulation evidence instead of posting screenshots?

Engineering evidence should show reasoning, not just interface familiarity. A screenshot gallery proves that software was opened. It proves very little else.

If the goal is to demonstrate serious control work, document each simulated exercise using this structure:

Required evidence structure

  1. System description. Define the machine or process cell, major actuators, sensors, and operating objective.
  2. Operational definition of “correct”. State what must be true for the sequence to be considered correct. Example: “The conveyor must not restart until the diverter cylinder is fully retracted and product presence is cleared.”
  3. Ladder logic and simulated equipment state. Show the relevant rungs, tag definitions, and corresponding digital twin states or feedbacks.
  4. The injected fault case. State exactly what was forced or disturbed. Example: “Cylinder extend command issued while conveyor coast time remained 1.8 s.”
  5. The revision made. Document the logic change. Example: “Added conveyor zero-speed permissive and extension timeout fault.”
  6. Lessons learned. Explain what assumption failed and how the revised logic hardens the sequence.
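One lightweight way to keep this structure consistent across exercises is a typed record. The field names below paraphrase the six headings and the example values are hypothetical; any team would adapt both to its own reporting conventions.

```python
from dataclasses import dataclass, asdict

@dataclass
class SimulationEvidence:
    """One record per simulated exercise; all six fields are mandatory."""
    system_description: str
    definition_of_correct: str
    logic_and_twin_state: str
    injected_fault: str
    revision_made: str
    lessons_learned: str

record = SimulationEvidence(
    system_description="Sorting conveyor with diverter cylinder",
    definition_of_correct="No restart until cylinder retracted and lane clear",
    logic_and_twin_state="Diverter rungs; zero-speed and retract proof tags",
    injected_fault="Extend commanded with 1.8 s of conveyor coast remaining",
    revision_made="Added zero-speed permissive and extension timeout fault",
    lessons_learned="Stop command is not stop; the sequence needed motion proof",
)
print(len(asdict(record)))  # 6 auditable fields per exercise
```

A folder of such records is interrogable in a way a screenshot gallery is not: each entry names the fault model and the corrective reasoning explicitly.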

This structure is useful because it captures control intent, fault model, and corrective reasoning. That is the material employers and reviewers can actually interrogate. Screenshots alone are mostly decorative.

What does OLLA Lab contribute to a simulation-ready workflow?

OLLA Lab supports a simulation-ready workflow by combining ladder authoring, simulation, variable inspection, scenario context, and digital twin interaction in one web-based environment. The benefit is not convenience for its own sake; it is reduced context switching during validation.

Within the bounded product facts provided, OLLA Lab offers:

  • A web-based ladder logic editor with contacts, coils, timers, counters, comparators, math functions, logic operations, and PID instructions
  • Simulation mode for running logic, toggling inputs, and observing outputs without physical hardware
  • A variables panel for monitoring tags, I/O, analog values, PID-related variables, and scenario state
  • 3D/WebXR/VR simulations that connect control logic to equipment behavior
  • Digital twin validation workflows for testing logic against realistic machine models
  • Scenario-based labs across manufacturing, water/wastewater, HVAC, chemical, pharma, warehousing, food and beverage, and utilities
  • Guided build instructions with objectives, I/O mapping, control philosophy, interlocks, and verification steps
  • AI guidance through GeniAI, positioned as lab coaching and corrective support inside the learning workflow

The bounded claim

OLLA Lab can help engineers rehearse validation tasks that are difficult to stage safely on live systems:

  • tracing I/O cause-and-effect,
  • testing interlocks,
  • observing fault behavior,
  • revising logic after abnormal-state failure,
  • comparing ladder state against simulated equipment state.

It should not be framed as a substitute for field experience, formal functional safety work, or site-specific commissioning authority. A simulator can expose bad assumptions early. It cannot sign off a plant.

How does SITL relate to standards, safety, and commissioning risk?

SITL can improve commissioning quality by shifting defect discovery earlier, but it does not by itself establish safety compliance. That distinction is central.

What SITL can support

  • Earlier discovery of sequencing defects
  • Better test coverage of abnormal states
  • Safer rehearsal of fault handling
  • More disciplined commissioning preparation
  • Improved communication between controls, mechanical, and process teams

What still requires separate treatment

  • Functional safety lifecycle activities under IEC 61508
  • Safety instrumented function design and verification
  • Site-specific risk assessment
  • Hardware fault tolerance analysis
  • Proof testing and installed-system validation

Industry literature on virtual commissioning and cyber-physical simulation generally supports the value of earlier behavior validation, especially for sequence-heavy and mechatronic systems. The recurring result is not that simulation removes commissioning risk. It is that simulation moves a meaningful portion of defect discovery into a cheaper and safer phase of the project. That is a more modest claim, and also the more credible one.

What should a good first SITL validation exercise look like?

Start with a compact sequence that contains motion, feedback, and one abnormal-state branch. If the first exercise is too simple, it teaches syntax but not judgment.

A good starter case in OLLA Lab is a conveyor-and-diverter or pump lead/lag scenario with:

  • one command path,
  • one proof feedback,
  • one timeout,
  • one alarm,
  • one restart condition,
  • one injected fault.

That gives enough structure to test causality without disappearing into architecture. The point is to learn whether the logic survives contact with a modeled process, not to build a large system on day one.


Editorial transparency

This blog post was written by a human, with all core structure, content, and original ideas created by the author. However, this post includes text refined with the assistance of ChatGPT and Gemini. AI support was used exclusively for correcting grammar and syntax, and for translating the original English text into Spanish, French, Estonian, Chinese, Russian, Portuguese, German, and Italian. The final content was critically reviewed, edited, and validated by the author, who retains full responsibility for its accuracy.

About the Author: Jose NERI, PhD, Lead Engineer at Ampergon Vallis

Fact-Check: Technical validity confirmed on 2026-03-23 by the Ampergon Vallis Lab QA Team.

Ready for implementation

Use simulation-backed workflows to turn these insights into measurable plant outcomes.

© 2026 Ampergon Vallis. All rights reserved.